Artificial Intelligence for Music

A workshop at the 2025 AAAI Annual Conference

Workshop Summary

This one-day workshop will explore the dynamic intersection of artificial intelligence and music: how AI is transforming music creation, recognition, and education, the ethical and legal implications, and the business opportunities that follow. We will investigate how AI is changing the music industry and music education, from composition to performance, production, collaboration, and audience experience. Participants will gain insights into the technological challenges of applying AI to music and into how AI can enhance creativity, enabling musicians and producers to push the boundaries of their art. The workshop will cover topics such as AI-driven music composition, where algorithms generate melodies, harmonies, and even full orchestral arrangements. We will discuss how AI tools assist in sound design, remixing, and mastering, opening new sonic possibilities and efficiencies in music production. We will also examine AI's impact on music education and the careers of musicians, exploring advanced learning tools and teaching methods. As AI technologies are increasingly adopted in the music and entertainment industry, the workshop will also address the legal and ethical implications of AI in music, including questions of authorship, originality, and the evolving role of human artists in an increasingly automated world. This workshop is designed for AI researchers, musicians, producers, and educators interested in the current status and future of AI in music.

Topics

  • Impacts of AI on music education and the careers of musicians.
  • AI-driven music composition.
  • AI-assisted sound design.
  • AI-generated audio and video.
  • Legal and ethical considerations of AI in music.

Schedule

Time       Topic
09:00 AM   Welcome by Organizers
09:10 AM   Invited Speech by Zhiyao Duan
09:50 AM   Invited Speech by Miguel Willis
10:30 AM   Break
11:00 AM   Paper Presentations (selected from submissions)
12:00 PM   Lunch Break
01:00 PM   Invited Speech by Hao-Wen Dong
01:40 PM   Invited Speech by Gus Xia
02:20 PM   Panel Discussion with the Invited Speakers
03:20 PM   Break
03:30 PM   Paper Presentations (selected from submissions)
04:30 PM   Open Discussion: Future of AI and Music
05:00 PM   Adjourn

Call for Papers

Submission Requirements

Submissions should be at most 6 pages. Work in progress is welcome. Authors are encouraged to describe their prototype implementations and to engage workshop attendees through posters or demonstrations at the end of the workshop. Conceptual designs without evidence of practical implementation are discouraged.

Topics of Interest

  • AI-Driven Music Composition and Generation
  • AI in Music Practice and Performance
  • AI-Based Music Recognition and Transcription
  • AI Applications in Sound Design
  • AI-Generated Videos to Accompany Music
  • AI-Generated Lyrics Based on Music
  • Legal and Ethical Implications of AI in Music
  • AI's Impacts on Musicians’ Careers
  • AI-Assisted Music Education
  • Business Opportunities in AI and Music
  • Music Datasets and Data Analysis

Paper Format

Please follow the paper format required by AAAI, as described in the AAAI 2025 Main Technical Call for Papers.

Submission Site Information

Please submit your papers through the CMT Submission Portal.

Important Dates

  • Submission Deadline: November 22, 2024
  • Notification of Acceptance: December 9, 2024
  • Final Version Due: December 31, 2024

Accepted papers will be posted on the workshop website.

Invited Speakers

Hao-Wen (Herman) Dong is an Assistant Professor in the Performing Arts Technology Department at the University of Michigan. Herman’s research aims to empower music and audio creation with machine learning. His long-term goal is to lower the barrier to entry for music composition and democratize audio content creation. He is broadly interested in music generation, audio synthesis, multimodal machine learning, and music information retrieval. Herman received his PhD in Computer Science from the University of California San Diego, where he worked with Julian McAuley and Taylor Berg-Kirkpatrick. Herman’s research has been recognized by the UCSD CSE Doctoral Award for Excellence in Research, KAUST Rising Stars in AI, UChicago and UCSD Rising Stars in Data Science, ICASSP Rising Stars in Signal Processing, and the UCSD GPSA Interdisciplinary Research Award.

Zhiyao Duan is an associate professor in Electrical and Computer Engineering, Computer Science, and Data Science at the University of Rochester. He is also a co-founder of Violy, a company aiming to improve music education through AI. His research focuses on computer audition and its connections with computer vision, natural language processing, and augmented and virtual reality. He received a best paper award at the Sound and Music Computing (SMC) Conference in 2017, a best paper nomination at the International Society for Music Information Retrieval (ISMIR) Conference in 2017, and a CAREER award from the National Science Foundation (NSF). His work has been funded by the NSF, the National Institutes of Health, the National Institute of Justice, the New York State Center of Excellence in Data Science, and University of Rochester internal awards on AR/VR, health analytics, and data science. He is a senior area editor of IEEE Signal Processing Letters, an associate editor for the IEEE Open Journal of Signal Processing, and a guest editor for Transactions of the International Society for Music Information Retrieval. He is the President of ISMIR.

Miguel Willis is the Innovator in Residence at the University of Pennsylvania Law School’s Future of the Profession Initiative (FPI). He concurrently serves as the Executive Director of Access to Justice Tech Fellows, a national nonprofit organization that develops summer fellowships for law students seeking to leverage technology to create equitable legal access for low-income and marginalized populations. Prior to joining FPI, Willis served as the Law School Admission Council's (LSAC) inaugural Presidential Innovation Fellow. Willis currently serves on the advisory board of the University of Arizona James E. Rogers College of Law’s Innovation for Justice (i4J) program and on the Legal Services Corporation’s Emerging Leaders Council.

Gus Xia is an assistant professor of Machine Learning at the Mohamed bin Zayed University of Artificial Intelligence in Masdar City, Abu Dhabi. His research centers on the design of interactive intelligent systems that extend human musical creation and expression, lying at the intersection of machine learning, human-computer interaction, robotics, and computer music. Representative works include interactive composition via style transfer, human-computer interactive performances, autonomous dancing robots, large-scale content-based music retrieval, haptic guidance for flute tutoring, and bio-music computing using slime mold.

Organizers

Meet the team behind the 2025 AAAI Workshop on Artificial Intelligence for Music.

Yung-Hsiang Lu

Professor of Electrical and Computer Engineering

Yung-Hsiang Lu is a professor in the Elmore Family School of Electrical and Computer Engineering at Purdue University. He is a fellow of the IEEE and a distinguished scientist of the ACM. Yung-Hsiang has published papers on computer vision and machine learning in venues such as AI Magazine, Nature Machine Intelligence, and Computer. He is one of the editors of the book "Low-Power Computer Vision: Improve the Efficiency of Artificial Intelligence" (ISBN 9780367744700, Chapman & Hall, 2022).

Kristen Yeon-Ji Yun

Clinical Associate Professor of Music

Kristen Yeon-Ji Yun is a clinical associate professor in the Department of Music at the Patti and Rusty Rueff School of Design, Art, and Performance at Purdue University. She is the Principal Investigator of the research project "Artificial Intelligence Technology for Future Music Performers" (US National Science Foundation, IIS 2326198). Kristen is an active soloist, chamber musician, musical scholar, and clinician. She has toured many countries, including Malaysia, Thailand, Germany, Mexico, Japan, China, Hong Kong, Spain, France, Italy, Taiwan, and South Korea, giving a series of successful concerts and master classes.

George K. Thiruvathukal

Professor and Chairperson of Computer Science

George K. Thiruvathukal is a professor and chairperson of Computer Science at Loyola University Chicago and a visiting computer scientist at Argonne National Laboratory. His research interests include high-performance computing and distributed systems, programming languages, software engineering, machine learning, digital humanities, and the arts (primarily music). George has published multiple books, including "Software Engineering for Science" (ISBN 9780367574277, Chapman & Hall/CRC, 2016), "Web Programming: Techniques for Integrating Python, Linux, Apache, and MySQL" (ISBN 9780130410658, Prentice Hall, 2001), and "High-Performance Java Platform Computing: Multithreaded and Networked Programming" (ISBN 9780130161642, Prentice Hall, 2000).

Benjamin Shiue-Hal Chou

PhD Student and Lab Graduate Mentor

Benjamin Shiue-Hal Chou is a PhD student in Electrical and Computer Engineering at Purdue University, supervised by Dr. Yung-Hsiang Lu. His research focuses on artificial intelligence applications in music technology, particularly on detecting errors in music performances. Benjamin has co-authored “Token Turing Machines are Efficient Vision Models” (arXiv preprint arXiv:2409.07613, 2024). He earned his Bachelor of Science in Electrical Engineering from National Cheng Kung University (NCKU) in Taiwan, receiving awards such as the Outstanding Student Scholarship, Transnational Research Scholarship Grant, and the Tainan City Digital Governance Talent Award.