
Call for Papers: Deep Learning applied to Music Signal Processing

Deep learning has become a key technology in intelligent music, and it is now applied to a wide range of music signal processing tasks, including intelligent music generation and analysis, intelligent music coding, and immersive music analysis and creation. However, most existing approaches apply deep learning models to intelligent listening art directly, without accounting for the specific characteristics of the data and tasks involved. The massive movement of musical data across networks demands smarter and more advanced techniques for managing, searching, storing, and retrieving that data, as well as considerable intelligence and cognitive understanding. Moreover, the sheer volume of data and resources, their increasingly complex structures, and their diverse forms challenge traditional computational algorithms. Recent advances in artificial intelligence, big data, and deep learning have made analysis and computation feasible for large-scale listening art, and intelligent listening art is gradually reaching practical applications and becoming a widely recognized phenomenon. Through intelligent computing and deep learning methods, music emotion recognition and processing, as well as music perception and cognition, are becoming increasingly personalized, intelligent, and scenario-aware. To cope with these computational tasks in intelligent music, current deep neural networks, including their architectures, training procedures, and inference methods, must be adapted or even redesigned. In addition, new deep neural network models are needed for emerging applications of intelligent music algorithms, such as the analysis and understanding of music wireless control, the intelligent generation and creation of music, and cross-modal retrieval.

The topics of interest for the special issue include, but are not limited to:

  • Music Information Retrieval
  • Standards and Evaluation Methods for AI Music/Voice Generation
  • Cross-Modal Generation and Retrieval
  • Music/Audio Understanding
  • Music Emotion Recognition
  • Music Perception and Information Recovery
  • Sentiment-Conditioned Music Generation
  • Music Therapy for Mental Health
  • Intelligent Music Ensemble
  • Music Conditioned Dance Generation
  • Intelligent Music Recommendation System
  • Digital Musical Instruments
  • Music Wireless Control
  • Interactive Computer Music
  • Music/Audio Source Separation
  • Music Analysis and Transformation

Important Dates
Submission deadline: 31 January 2023

Lead Guest Editor
Xiaolei Zhang, Northwestern Polytechnical University

Guest Editors
Wenwu Wang, University of Surrey, UK
Ivan Lee, University of South Australia, Australia

Important: Authors should select "Deep Learning applied to Music Signal Processing" when they reach the Thematic Series section in the submission system.

Submission guidelines