Two papers accepted for EvoMUSART

(1) I. Vatolkin, M. Gotham, N. Nápoles López, and F. Ostermann: Musical Genre Recognition based on Deep Descriptors of Harmony, Instrumentation, and Segments. Accepted for the Proceedings of the 12th International Conference on Artificial Intelligence in Music, Sound, Art and Design (EvoMUSART).

Abstract: Deep learning has recently established itself as the method of choice for almost all classification tasks in music information retrieval. However, despite very good classification performance, it comes with disadvantages: long training times and high energy costs, low interpretability of the resulting models, and an increased risk of overfitting on small training sets due to the very large number of trainable parameters. In this paper, we investigate the combination of deep and shallow algorithms for the recognition of musical genres using a transfer learning approach. We train deep classification models once to predict harmonic, instrumental, and segment properties from datasets with the respective annotations. Their predictions for another dataset with annotated genres then serve as features for shallow classification methods, which can be retrained again and again for different categories. This is particularly useful when training sets are small, as in the real-world scenario where listeners define their own musical categories by selecting only a few prototype tracks. The experiments show the potential of the proposed approach for genre recognition. In particular, when combined with evolutionary feature selection, which identifies the most relevant deep feature dimensions, classification errors became significantly lower in almost all cases compared to an MFCC-based baseline and to results reported in previous work.
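
To make the two-stage idea concrete, here is a minimal Python sketch (not the paper's code; the dataset, feature dimensionality, classifier choice, and the simple (1+1) evolutionary loop are invented stand-ins): precomputed deep descriptor predictions act as features for a shallow classifier, and evolutionary feature selection keeps only the dimensions that reduce cross-validated error.

```python
# Minimal sketch, assuming precomputed deep descriptor outputs.
# All data below is synthetic; the real pipeline would use the deep
# models' harmony/instrument/segment predictions per track.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in for concatenated deep descriptor predictions:
# 200 tracks x 64 feature dimensions, one binary genre category.
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)

def cv_error(mask):
    """Cross-validated error of a shallow model on the selected dims."""
    if not mask.any():
        return 1.0
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return 1.0 - cross_val_score(clf, X[:, mask], y, cv=5).mean()

# (1+1) evolutionary feature selection: flip a few mask bits per
# generation and keep the child if the error does not increase.
mask = rng.random(64) < 0.5
best = cv_error(mask)
for _ in range(30):
    child = mask.copy()
    flips = rng.integers(0, 64, size=3)
    child[flips] = ~child[flips]
    err = cv_error(child)
    if err <= best:
        mask, best = child, err

print(f"selected {mask.sum()} of 64 dims, CV error {best:.3f}")
```

Because the shallow classifier retrains in seconds, this second stage can be repeated for every new listener-defined category without touching the deep models again.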

(2) L. Fricke, I. Vatolkin, and F. Ostermann: Application of Neural Architecture Search to Instrument Recognition in Polyphonic Audio. Accepted for the Proceedings of the 12th International Conference on Artificial Intelligence in Music, Sound, Art and Design (EvoMUSART).

Abstract: Instrument recognition in polyphonic audio signals is a very challenging classification task. It supports related application scenarios such as music transcription and recommendation, the organization of large music collections, and the analysis of historical trends and properties of musical styles. Recently, classification performance has been improved by the integration of deep convolutional neural networks. However, in studies published to date, the network architectures and parameter settings were usually adopted from image recognition tasks and adjusted manually, without systematic optimization. In this paper, we show how two different neural architecture search strategies can be successfully applied to improve the prediction of nine instrument classes, significantly outperforming three fixed baseline architectures from previous works. Although model optimization requires considerable computing effort, the final architecture is trained only once and can then predict instruments in a practically unlimited number of musical tracks.
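
As a rough illustration of the search idea (not the paper's actual setup; the spectrogram shape, the search space, and the short proxy training are assumptions, and random search stands in for the strategies studied in the paper), this Python sketch samples small CNN architectures for the nine-class multi-label task and keeps the best one.

```python
# Minimal sketch: random-search NAS over a tiny CNN space for
# multi-label instrument recognition. All data is synthetic.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for mel-spectrogram excerpts; 9 instruments may sound
# simultaneously (polyphonic audio -> multi-label targets).
X = torch.randn(128, 1, 64, 64)
Y = (torch.rand(128, 9) > 0.7).float()

def build(n_blocks, n_filters):
    """Assemble a candidate architecture from sampled choices."""
    layers, ch = [], 1
    for _ in range(n_blocks):
        layers += [nn.Conv2d(ch, n_filters, 3, padding=1),
                   nn.ReLU(), nn.MaxPool2d(2)]
        ch = n_filters
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(),
               nn.Linear(n_filters, 9)]  # one logit per instrument
    return nn.Sequential(*layers)

def evaluate(model, epochs=3):
    """Short proxy training; returns the final BCE loss as fitness."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), Y)
        loss.backward()
        opt.step()
    return loss.item()

# Random search: sample architecture choices, keep the best candidate.
best, best_loss = None, float("inf")
for _ in range(5):
    cfg = (int(torch.randint(1, 4, (1,))), int(torch.randint(8, 33, (1,))))
    loss = evaluate(build(*cfg))
    if loss < best_loss:
        best, best_loss = cfg, loss

print(f"best config (blocks, filters) = {best}, loss {best_loss:.3f}")
```

The expensive part is the search loop; once the winning configuration is found, only that one architecture needs to be trained to completion and deployed for prediction.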
