Audio Fingerprinting
Recently Published Documents

Total documents: 144 (five years: 6)
H-index: 15 (five years: 0)
2021
Author(s): Neetish Borkar, Shubhra Patre, Raunak Singh Khalsa, Rohanshhi Kawale, Priti Chakurkar

Author(s): Annapurna P Patil, Lakshmi J Itagi, Ashika CS, Ambika G, Mallika Ravi

Signals, 2021, Vol. 2 (2), pp. 245-285
Author(s): Meinard Müller

This paper provides a guide through the FMP notebooks, a comprehensive collection of educational material for teaching and learning fundamentals of music processing (FMP), with a particular focus on the audio domain. Organized in nine parts consisting of more than 100 individual notebooks, the collection discusses well-established topics in music information retrieval (MIR) such as beat tracking, chord recognition, music synchronization, audio fingerprinting, music segmentation, and source separation, to name a few. These MIR tasks provide motivating and tangible examples that students can hold onto when studying technical aspects of signal processing, information retrieval, or pattern analysis. The FMP notebooks comprise detailed textbook-like explanations of central techniques and algorithms, combined with Python code examples that illustrate how to implement the methods. All components, including the introductions of MIR scenarios, illustrations, sound examples, technical concepts, mathematical details, and code examples, are integrated into a unified framework based on Jupyter notebooks. Providing a platform with many baseline implementations, the FMP notebooks are suited for conducting experiments and generating educational material for lectures, thus serving students, teachers, and researchers alike. While giving a guide through the notebooks, this paper's objective is to provide concrete examples of how to use the FMP notebooks to create an enriching, interactive, and interdisciplinary supplement for studies in science, technology, engineering, and mathematics. The FMP notebooks (including HTML exports) are publicly accessible under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
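In the spirit of the from-scratch baseline implementations the FMP notebooks are described as providing, a minimal chroma feature (one of the representations underlying tasks like chord recognition and music synchronization) can be computed with plain NumPy. This is an illustrative sketch, not code from the notebooks themselves; the function names and the simple nearest-pitch-class binning are assumptions for demonstration.

```python
import numpy as np

def stft_magnitude(x, n_fft=2048, hop=512):
    """Magnitude STFT with a Hann window (no external dependencies)."""
    win = np.hanning(n_fft)
    frames = []
    for start in range(0, len(x) - n_fft + 1, hop):
        frames.append(np.abs(np.fft.rfft(x[start:start + n_fft] * win)))
    return np.array(frames).T  # shape: (n_fft // 2 + 1, n_frames)

def chromagram(x, sr, n_fft=2048, hop=512):
    """Map STFT bins to 12 pitch classes by summing magnitudes per class.

    Each frequency bin is assigned to the pitch class of its nearest
    equal-tempered pitch (A4 = 440 Hz); class 0 corresponds to A.
    """
    S = stft_magnitude(x, n_fft, hop)
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    chroma = np.zeros((12, S.shape[1]))
    for k, f in enumerate(freqs):
        if f < 27.5:  # ignore bins below A0
            continue
        pitch_class = int(round(12 * np.log2(f / 440.0))) % 12
        chroma[pitch_class] += S[k]
    return chroma
```

Feeding in a pure 440 Hz tone concentrates the energy in pitch class A, which is a quick sanity check before moving on to real music recordings.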


2021, pp. 1-1
Author(s): Juan Zhao, Tianrui Zong, Yong Xiang, Longxiang Gao, Guang Hua

Electronics, 2020, Vol. 9 (9), pp. 1483
Author(s): Maoshen Jia, Tianhao Li, Jing Wang

With the growth of available audio data, there is increasing demand for audio retrieval that can quickly and accurately find the required information. Audio fingerprint retrieval is a popular choice because of its excellent performance. However, existing audio fingerprint retrieval methods generate a large amount of fingerprint data, which takes up considerable storage space and slows retrieval. To address this problem, this paper presents a novel audio fingerprinting method based on locally linear embedding (LLE) that produces smaller fingerprints and retrieves more efficiently. The proposed fingerprint extraction divides the bands around each peak in the frequency domain into four groups of sub-regions and computes the energy of every sub-region. LLE is then performed on each group separately, and the audio fingerprint is encoded by comparing adjacent energies. To handle the distortion introduced by linear speed changes, the retrieval stage adopts a matching strategy based on dynamic time warping (DTW), which can compare two audio segments of different lengths. To evaluate the retrieval performance of the proposed method, experiments are carried out under conditions of both single-group and multi-group dimensionality reduction. Both settings achieve high recall and precision rates and better retrieval efficiency with less data than several state-of-the-art methods.
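The extraction-and-matching pipeline sketched in the abstract can be illustrated roughly as follows. This is a simplified illustration under stated assumptions, not the authors' implementation: the band layout around each peak, the number of sub-regions, and the omitted LLE dimensionality-reduction step follow the paper only loosely, and `subband_energies`, `encode_fingerprint`, and `dtw_distance` are hypothetical helper names.

```python
import numpy as np

def subband_energies(spectrum, peak_idx, width=4, n_bands=8):
    """Energies of n_bands sub-bands of `width` bins centered on one spectral peak.

    Illustrative layout only; the paper's exact grouping of bands around
    each peak is not reproduced here. Assumes the bands fit in `spectrum`.
    """
    lo = max(peak_idx - (n_bands // 2) * width, 0)
    energies = []
    for b in range(n_bands):
        seg = spectrum[lo + b * width : lo + (b + 1) * width]
        energies.append(float(np.sum(seg ** 2)))
    return np.array(energies)

def encode_fingerprint(energies):
    """Binary fingerprint: bit is 1 where energy increases between
    adjacent sub-bands, following the 'compare adjacent energies' idea."""
    return (np.diff(energies) > 0).astype(np.uint8)

def dtw_distance(a, b):
    """Plain DTW between two fingerprint sequences (lists of bit vectors),
    using Hamming distance as the local cost; handles unequal lengths,
    which is what makes speed-changed segments comparable."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.count_nonzero(a[i - 1] != b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

In the paper's full method, the sub-band energies of each group would additionally be projected to a lower dimension with LLE before encoding, which is where the fingerprint-size reduction comes from; the DTW matcher is unchanged by that step.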

