Paper describing the Artificial Audio Multitracks (AAM) dataset published in EURASIP Journal on Audio, Speech, and Music Processing

F. Ostermann, I. Vatolkin, and M. Ebeling: AAM: a Dataset of Artificial Audio Multitracks for Diverse Music Information Retrieval Tasks. EURASIP Journal on Audio, Speech, and Music Processing, 13, 2023.

Zenodo link: https://doi.org/10.5281/zenodo.5794629

Abstract: We present a new dataset of 3000 artificial music tracks with rich annotations, based on real instrument samples and generated by algorithmic composition with respect to music theory. Our collection provides ground truth onset information and has several advantages compared to many available datasets. It can be used to compare and optimize algorithms for various music information retrieval tasks such as music segmentation, instrument recognition, source separation, onset detection, key and chord recognition, or tempo estimation. As the audio is perfectly aligned to the original MIDI files, all annotations (onsets, pitches, instruments, keys, tempos, chords, beats, and segment boundaries) are absolutely precise. Because of that, specific scenarios can be addressed, for instance, detection of segment boundaries with instrument and key change only, or onset detection only in tracks with drums and slow tempo. This allows for the exhaustive evaluation and identification of individual weak points of algorithms. In contrast to datasets with commercial music, all audio tracks are freely available, allowing for the extraction of one's own audio features. All music pieces are stored as single-instrument audio tracks and a mix track, so that different augmentations and DSP effects can be applied to extend training sets and create individual mixes, e.g., for deep neural networks. In three case studies, we show how different algorithms and neural network models can be analyzed and compared for music segmentation, instrument recognition, and onset detection. In the future, the dataset can easily be extended by adapting the composition process to specific demands.
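
Since each piece ships as single-instrument stems plus a mix, custom mixes for augmentation are easy to build. The following sketch shows the idea in Python; the file names and stem layout are hypothetical placeholders rather than the actual archive structure, so please consult the Zenodo record for the real paths.

```python
# Minimal sketch: build a custom mix from per-instrument stems.
# File names below are hypothetical; check the Zenodo archive for
# the actual layout of audio stems and annotation files.
import numpy as np
import soundfile as sf  # pip install soundfile

def load_custom_mix(stem_paths, gains):
    """Load single-instrument stems and mix them with per-stem gains."""
    stems, rate = [], None
    for path in stem_paths:
        audio, sr = sf.read(path)
        if audio.ndim == 2:  # fold stereo stems down to mono
            audio = audio.mean(axis=1)
        if rate is None:
            rate = sr
        assert sr == rate, "all stems of one track share a sample rate"
        stems.append(audio)
    mix = np.zeros(max(len(s) for s in stems))
    for audio, gain in zip(stems, gains):
        mix[: len(audio)] += gain * audio
    return mix / max(1.0, np.abs(mix).max()), rate  # simple peak normalization

# E.g., emphasize the drum stem for onset detection experiments:
mix, sr = load_custom_mix(
    ["0001_drums.wav", "0001_bass.wav", "0001_guitar.wav"],
    gains=[1.0, 0.5, 0.5],
)
sf.write("0001_custom_mix.wav", mix, sr)
```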

Posted in MIR Research, Publications

Slides from the MDA lecture by Meinard Müller

On 03.02.2023, the last unit of the interdisciplinary lecture on Music Data Analysis was given by Prof. Meinard Müller, AudioLabs Erlangen. The slides on the FMP Notebooks are available here (handout version).

Posted in Teaching Activities

Two papers accepted for EvoMUSART

(1) I. Vatolkin, M. Gotham, N. Nápoles López, and F. Ostermann: Musical Genre Recognition based on Deep Descriptors of Harmony, Instrumentation, and Segments. Accepted for Proceedings of the 12th International Conference on Artificial Intelligence in Music, Sound, Art and Design (EvoMUSART).

Abstract: Deep learning has recently established itself as the cluster of methods of choice for almost all classification tasks in music information retrieval. However, despite very good classification performance, it sometimes brings disadvantages, including long training times and higher energy costs, lower interpretability of classification models, and an increased risk of overfitting when applied to small training sets due to a very large number of trainable parameters. In this paper, we investigate the combination of deep and shallow algorithms for the recognition of musical genres using a transfer learning approach. We train deep classification models once to predict harmonic, instrumental, and segment properties from datasets with respective annotations. Their predictions for another dataset with annotated genres are then used as features for shallow classification methods. These can be retrained again and again for different categories and are particularly useful when training sets are small, as in a real-world scenario where listeners define various musical categories by selecting only a few prototype tracks. The experiments show the potential of the proposed approach for genre recognition. In particular, when combined with evolutionary feature selection, which identifies the most relevant deep feature dimensions, the classification errors become significantly lower in almost all cases compared to an MFCC-based baseline and to results reported in previous work.
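
To make the transfer-learning pipeline concrete, here is a minimal sketch in which deep-model outputs (random placeholder arrays standing in for harmonic, instrumental, and segment descriptors) are concatenated and fed to a shallow classifier that is cheap to retrain per category. The dimensionalities and the random forest are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: predictions of pretrained deep models become features for a
# shallow classifier trained on a small genre-annotated set. The arrays
# are random placeholders standing in for real deep descriptors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_tracks = 60                            # small training set scenario
harmony = rng.random((n_tracks, 24))     # e.g., chord/key posteriors per track
instruments = rng.random((n_tracks, 9))  # e.g., instrument activation estimates
segments = rng.random((n_tracks, 4))     # e.g., segment statistics
X = np.hstack([harmony, instruments, segments])
y = rng.integers(0, 2, n_tracks)         # binary genre category

# Shallow models can be retrained cheaply for each listener-defined category.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```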

(2) L. Fricke, I. Vatolkin, and F. Ostermann: Application of Neural Architecture Search to Instrument Recognition in Polyphonic Audio. Accepted for Proceedings of the 12th International Conference on Artificial Intelligence in Music, Sound, Art and Design (EvoMUSART).

Abstract: Instrument recognition in polyphonic audio signals is a very challenging classification task. It helps to improve related application scenarios, such as music transcription and recommendation, the organization of large music collections, or the analysis of historical trends and properties of musical styles. Recently, classification performance has been improved by the integration of deep convolutional neural networks. However, in studies published to date, the network architectures and parameter settings were usually adopted from image recognition tasks and adjusted manually, without systematic optimization. In this paper, we show how two different neural architecture search strategies can be successfully applied to improve the prediction of nine instrument classes, significantly outperforming the classification performance of three fixed baseline architectures from previous works. Although model optimization requires high computing effort, the final architecture is trained only once for the later prediction of instruments in a potentially unlimited number of musical tracks.
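
As a rough illustration of what a neural architecture search involves, the sketch below runs a random search over a tiny CNN space. The search space, the one-batch placeholder evaluation (a real search would train every candidate), and the random spectrogram-like inputs are all assumptions for illustration, not the strategies evaluated in the paper.

```python
# Sketch: random-search NAS over small CNNs for multi-label
# instrument recognition with nine classes.
import random
import torch
import torch.nn as nn

def sample_architecture():
    """Draw a candidate: number of conv blocks, channels, kernel size."""
    return {
        "blocks": random.choice([2, 3, 4]),
        "channels": random.choice([16, 32, 64]),
        "kernel": random.choice([3, 5]),
    }

def build_model(arch, n_classes=9):
    layers, in_ch = [], 1
    for _ in range(arch["blocks"]):
        layers += [
            nn.Conv2d(in_ch, arch["channels"], arch["kernel"], padding="same"),
            nn.ReLU(),
            nn.MaxPool2d(2),
        ]
        in_ch = arch["channels"]
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, n_classes)]
    return nn.Sequential(*layers)

def evaluate(model, x, y):
    """Placeholder score: negative one-batch loss instead of full training."""
    return -nn.BCEWithLogitsLoss()(model(x), y).item()

x = torch.randn(8, 1, 64, 64)            # stands in for mel-spectrogram excerpts
y = torch.randint(0, 2, (8, 9)).float()  # multi-label instrument targets
best = max((sample_architecture() for _ in range(10)),
           key=lambda a: evaluate(build_model(a), x, y))
print("best candidate:", best)
```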

Posted in MIR Research, Publications

SIGMA #54

The program of the upcoming 54th SIGMA meeting on 16.01.2023, 14:00-16:00, which takes place at the Chair of Algorithm Engineering, Department of Computer Science, TU Dortmund, Otto-Hahn-Str. 14, room 202, and online (please send an email to igor.vatolkin [at] udo.edu if you wish to receive the Zoom link):

14:00-14:05 Welcome greetings

14:05-14:35 Master’s thesis (introduction)
Justin Dettmer: Expanding an Evolutionary Algorithm for the Synthesis of Polyphonic Music 

14:35-15:05 Conference study
Igor Vatolkin: Stability of Symbolic Feature Group Importance in the Context of Multi-Modal Music Classification

15:05-15:35 Research study
Leonard Fricke: Neural Architecture Search for Instrument Recognition

15:35-15:55 Conferences and calls, miscellaneous, next meeting

Posted in SIGMA

Lecture on music data analysis in winter term 2022/23

For the fifth time, the interdisciplinary lecture "Music Data Analysis" takes place at TU Dortmund (lectures on Fridays, 10-12, with exercises on Fridays, 12-13).

Posted in Teaching Activities

Call for Papers: EvoMUSART 2023

The 12th International Conference on Artificial Intelligence in Music, Sound, Art and Design (EvoMUSART) will take place in Brno (Czech Republic) from 12 to 14 April 2023, as part of the evo* event.

EvoMUSART webpage: https://www.evostar.org/2023/evomusart

Submission deadline: 1 November 2022
Conference: 12 – 14 April 2023

EvoMUSART is a multidisciplinary conference that brings together researchers who are working on the application of Artificial Neural Networks, Evolutionary Computation, Swarm Intelligence, Cellular Automata, Alife, and other Artificial Intelligence techniques in creative and artistic fields such as Visual Art, Music, Architecture, Video, Digital Games, Poetry, or Design. This conference gives researchers in the field the opportunity to promote, present, and discuss ongoing work in the area.

More information on the submission process and the topics of EvoMUSART: https://www.evostar.org/2023/evomusart

Flyer of EvoMUSART 2023: https://www.evostar.org/2023/flyers/evomusart
Papers published in EvoMUSART: https://evomusart-index.dei.uc.pt

Posted in Conferences & Calls, Events

Paper on multi-modal music classification accepted for ISMIR

I. Vatolkin and C. McKay: Stability of Symbolic Feature Group Importance in the Context of Multi-Modal Music Classification. Accepted for Proceedings of the 23rd International Society for Music Information Retrieval Conference (ISMIR).

Abstract: Multi-modal music classification creates supervised models trained on features from different sources (modalities): the audio signal, the score, lyrics, album covers, expert tags, etc. A concept of “multi-group feature importance” not only helps to measure the individual relevance of features belonging to a feature type under investigation (such as the instruments present in a piece), but also serves to quantify the potential for further improving classification quality by adding features from other feature types or extracted from different kinds of sources, based on a multi-objective analysis of feature sets after evolutionary feature selection. In this study, we investigate the stability of feature group importance when different classification methods and different measures of classification quality are applied. Since musical scores are particularly helpful in deriving semantically meaningful, robust genre characteristics, we focus on the feature groups analyzed by the jSymbolic feature extraction software, which describe properties associated with instrumentation, basic pitch statistics, melody, chords, tempo, and other rhythmic aspects. These symbolic features are analyzed in the context of musical information drawn from five other modalities, and experiments are conducted involving two datasets, one small and one large. The results show that, although some feature groups can remain similarly important compared to others, differences can also be evident in various application cases, and can depend on the particular classifier and evaluation measure being used. Insights drawn from this type of analysis can potentially be helpful in effectively matching specific features or feature groups to particular classifiers and evaluation measures in future feature-based MIR research.
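
The hypervolume-based multi-group importance cannot be reproduced in a few lines, but a cheap ablation-style proxy conveys the intuition: compare classification quality with and without a feature group. The group layout, data, and classifier below are invented placeholders, not the jSymbolic groups or the paper's actual measure.

```python
# Ablation-style proxy for feature group importance: quality with all
# features minus quality with one group removed. Random placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
groups = {  # column ranges of jSymbolic-like feature groups (illustrative)
    "instrumentation": slice(0, 20),
    "pitch_statistics": slice(20, 50),
    "rhythm": slice(50, 70),
}
X = rng.random((200, 70))
y = (X[:, 5] + X[:, 25] > 1.0).astype(int)  # toy labels tied to two groups

def quality(cols):
    return cross_val_score(LogisticRegression(max_iter=1000),
                           X[:, cols], y, cv=5).mean()

full = quality(np.arange(X.shape[1]))
for name, sl in groups.items():
    kept = np.setdiff1d(np.arange(X.shape[1]), np.arange(sl.start, sl.stop))
    print(f"{name}: importance ~ {full - quality(kept):.3f}")
```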

Posted in MIR Research, Publications

SIGMA #53

The program of the upcoming 53rd SIGMA meeting on 30.06.2022, 14:00-16:00, which takes place online (please send an email to igor.vatolkin [at] udo.edu if you wish to receive the Zoom link):

14:00-14:05 Welcome greetings

14:05-14:35 Conference study
Mark Gotham: What if the 'when' implies the 'what'?

14:35-15:10 Master’s thesis (introduction)
Alexander Ostrop: Generation of Orchestra Pieces with Transformer Models

15:10-15:40 Research study
Hauke Egermann: Predicting Listener Experience in Functional Music Settings: Research at the Intersection between Music Psychology and Computer Science 

15:40-16:00 Conferences and calls, miscellaneous, next meeting

Posted in SIGMA

SIGMA #52

The program of the upcoming 52nd SIGMA meeting on 31.03.2022, 14:00-16:10, which takes place online (please send an email to igor.vatolkin [at] udo.edu if you wish to receive the Zoom link):

14:00-14:05 Welcome greetings

14:05-14:35 Bachelor’s thesis (results)
Marcel Schrauder: Music genre classification with artificial neural networks

14:35-15:05 Master’s thesis (results)
Pia Eickhoff: Extended replication study on tone consolidation after Carl Stumpf

15:05-15:35 Master’s thesis (results)
Florian Scholz: Inclusion of different instrument bodies for robust training of neural networks for instrument recognition

15:35-16:05 Research plan for a PhD thesis
Fabian Ostermann: Artificial Intelligence as a tool for music composers

16:05-16:10 Conferences and calls, miscellaneous, next meeting

Posted in SIGMA

Paper on multi-modal music classification using six modalities published in TISMIR

I. Vatolkin and C. McKay: Multi-Objective Investigation of Six Feature Source Types for Multi-Modal Music Classification. Transactions of the International Society for Music Information Retrieval, 5(1), pp. 1–19, 2022.

Abstract: Every type of musical data (audio, symbolic, lyrics, etc.) has its limitations, and cannot always capture all relevant properties of a particular musical category. In contrast to more typical MIR setups where supervised classification models are trained on only one or two types of data, we propose a more diversified approach to music classification and analysis based on six modalities: audio signals, semantic tags inferred from the audio, symbolic MIDI representations, album cover images, playlist co-occurrences, and lyric texts. Some of the descriptors we extract from these data are low-level, while others encapsulate interpretable semantic knowledge that describes melodic, rhythmic, instrumental, and other properties of music. With the intent of measuring the individual impact of different feature groups on different categories, we propose two evaluation criteria based on “non-dominated hypervolumes”: multi-group feature “importance” and “redundancy”. Both of these are calculated after the application of a multi-objective feature selection strategy using evolutionary algorithms, with a novel approach to optimizing trade-offs between both “pure” and “mixed” feature subsets. These techniques permit an exploration of how different modalities and feature types contribute to class discrimination. We use genre classification as a sample research domain to which these techniques can be applied, and present exploratory experiments on two disjoint datasets of different sizes, involving three genre ontologies of varied class similarity. Our results highlight the potential of combining features extracted from different modalities, and can provide insight on the relative significance of different modalities and features in different contexts.
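
To make the multi-objective setup concrete, here is a deliberately simplified bi-objective evolutionary feature selection loop that minimizes classification error and the fraction of selected features while keeping the non-dominated front, from which hypervolume-based measures could then be computed. It is a toy sketch on random data, not the paper's actual algorithm.

```python
# Toy bi-objective evolutionary feature selection: minimize
# (classification error, fraction of selected features) and keep
# the non-dominated front. Random placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.random((150, 40))
y = (X[:, 3] + X[:, 17] > 1.0).astype(int)

def objectives(mask):
    if not mask.any():
        return (1.0, 0.0)
    err = 1.0 - cross_val_score(LogisticRegression(max_iter=1000),
                                X[:, mask], y, cv=3).mean()
    return (err, mask.mean())

def dominated(a, b):
    """True if score tuple a is dominated by b (b no worse, somewhere better)."""
    return all(bi <= ai for ai, bi in zip(a, b)) and b != a

population = [rng.random(40) < 0.5 for _ in range(10)]
for _ in range(20):  # a few generations of bit-flip mutation
    child = population[rng.integers(len(population))].copy()
    child ^= rng.random(40) < 0.05  # flip ~5% of the bits
    population.append(child)
    scores = [objectives(m) for m in population]
    population = [m for m, s in zip(population, scores)
                  if not any(dominated(s, t) for t in scores)]
print("non-dominated front size:", len(population))
```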

Posted in MIR Research, Publications