Welcome to AIM.

AIM (Artificial Intelligence for Musicians) is a music technology research group at Purdue whose aim is to create reactive, human-like systems that support musicians during their practice sessions and performances.

Some of AIM's projects are supported by a National Science Foundation grant.

We are looking for motivated graduate or undergraduate students to join our team. Click here for info.

News & Upcoming Events

2025 Transcription Competition

December 1, 2024 - March 31, 2025

Challenge yourself to create the best transcription model for classical music. Learn more

Workshop: AI for Music at AAAI 2025

Date: March 3, 2025 (Monday)

Discover the latest advancements in AI for music at AAAI 2025. Learn more

Workshop: Robotic Musician

Date: 2025

Join us for an exciting workshop where we'll explore the use of robotics in music. Learn more

Automatic Music Transcription

Fall 2024 - present

Automatic Music Transcription is a project focused on streamlining audio-to-MIDI transcription for musicians and educators, with applications in isolating sounds in noisy environments. We are conducting a systematic review of AMT models, examining their strengths and limitations on complex, multi-instrument music. To spur innovation, we are hosting a competition in April 2025, challenging participants to build accurate transcription models for classical music. The project aims to advance AMT and refine current transcription methods.
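
To make the task concrete, here is a minimal monophonic audio-to-MIDI baseline, a sketch only, using the librosa and pretty_midi libraries. The file paths are placeholders, and this is not one of the AMT models under review, which must handle polyphonic, multi-instrument audio.

```python
# Naive monophonic audio-to-MIDI baseline: pitch-track with pYIN,
# then merge consecutive frames of the same pitch into notes.
import librosa
import pretty_midi

y, sr = librosa.load("input.wav")  # placeholder path
f0, voiced, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
times = librosa.times_like(f0, sr=sr)

pm = pretty_midi.PrettyMIDI()
inst = pretty_midi.Instrument(program=0)
start = pitch = None
for t, hz, v in zip(times, f0, voiced):
    p = int(round(librosa.hz_to_midi(hz))) if v else None
    if p != pitch:  # note boundary
        if pitch is not None:
            inst.notes.append(pretty_midi.Note(100, pitch, start, t))
        start, pitch = t, p
if pitch is not None:
    inst.notes.append(pretty_midi.Note(100, pitch, start, float(times[-1])))
pm.instruments.append(inst)
pm.write("output.mid")
```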

Evaluator

Fall 2023 - present

Evaluator is an app that aims to help musicians practice more effectively. It uses computer vision with YOLO-based localization to help musicians track, analyze, and improve their posture, and it uses spectrogram analysis with multi-modal transformers to help them identify and correct mistakes in their playing.

Drawing by Cecilia Ines Sanchez.
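
As a rough illustration of the posture-tracking idea, the sketch below pulls body keypoints from a practice video with a pretrained YOLO pose model (via the ultralytics package) and flags a simple head-tilt heuristic. The model file, video path, and threshold are assumptions for the demo, not Evaluator's actual pipeline.

```python
# Sketch: extract COCO-format keypoints with a YOLO pose model and
# compute a head-over-shoulders tilt angle as a toy posture metric.
import numpy as np
from ultralytics import YOLO

model = YOLO("yolov8n-pose.pt")  # small pretrained pose checkpoint

for result in model("practice_session.mp4", stream=True):  # placeholder path
    if result.keypoints is None or result.keypoints.xy.shape[0] == 0:
        continue
    kpts = result.keypoints.xy[0].cpu().numpy()  # (17, 2): nose=0, shoulders=5,6
    nose, mid_shoulders = kpts[0], (kpts[5] + kpts[6]) / 2
    dx, dy = nose - mid_shoulders
    tilt = np.degrees(np.arctan2(dx, -dy))  # 0 deg = head directly above shoulders
    if abs(tilt) > 15:  # illustrative threshold
        print(f"possible lean detected: head tilt {tilt:.1f} deg")
```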

Companion

Fall 2023 - present

Companion is an app that not only plays along with a human player during a chamber music piece, but also actively responds to their playing habits and voice commands, much as a human partner would. The project uses machine learning and filtering/DSP algorithms to analyze and edit sound quickly and accurately, and small NLP language models to implement voice commands.

Drawing by Cecilia Ines Sanchez.
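
One building block for this kind of low-latency audio analysis is block-wise filtering with persistent state, so the filter can run inside an audio callback. The sketch below, using scipy, is illustrative only; the band edges and block size are assumptions, not Companion's actual DSP chain.

```python
# Sketch: streaming band-pass filter. Carrying the filter state (zi)
# across blocks is the standard pattern for real-time DSP.
import numpy as np
from scipy.signal import butter, sosfilt, sosfilt_zi

SR = 44100
sos = butter(4, [196, 3500], btype="bandpass", fs=SR, output="sos")
zi = sosfilt_zi(sos) * 0.0  # start from silence

def process_block(block: np.ndarray) -> np.ndarray:
    """Filter one audio block, preserving state between calls."""
    global zi
    out, zi = sosfilt(sos, block, zi=zi)
    return out

# Feed consecutive blocks exactly as an audio callback would:
signal = np.random.randn(4 * 512)  # stand-in for microphone input
filtered = np.concatenate([process_block(b) for b in np.split(signal, 4)])
```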

Mus2Vid

Spring 2022 - present

Mus2Vid is a real-time art project that uses diffusion models to generate video depictions in response to classical music. Recurrent and transformer networks analyze the input audio and estimate its emotion and genre, which are converted into a text prompt and fed to a text-to-image diffusion model to generate images.
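
The generation step might look like the sketch below, which uses the diffusers library with a Stable Diffusion checkpoint; the prompt-building function is a placeholder standing in for Mus2Vid's learned emotion and genre estimators.

```python
# Sketch of the prompt-then-generate step. The audio-analysis models
# are Mus2Vid's own; describe() is a placeholder for their output.
import torch
from diffusers import StableDiffusionPipeline

def describe(emotion: str, genre: str) -> str:
    # Stand-in for the learned audio -> (emotion, genre) pipeline.
    return f"a {emotion} abstract landscape evoking {genre} music, cinematic"

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = describe("serene", "romantic-era classical")
image = pipe(prompt, num_inference_steps=20).images[0]  # fewer steps for lower latency
image.save("frame.png")
```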

Robot Cello

Spring 2024 - present

As the name suggests, Robot Cello is a project about using reinforcement learning to teach a robot arm to play the cello. The project is in its survey phase and is investigating motion-capture technology as a source of training data for an RL model.

We partner with the Purdue Envision Center to collect motion data for our robot arm to train on. The accompanying video shows Prof. Yun playing cello while wearing a motion-capture rig.
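
A common first step with mocap data is behavior cloning, i.e., supervised imitation of the captured trajectories, before moving to full RL. The sketch below, in PyTorch, assumes the capture has already been preprocessed into (state, next-pose) pairs; the file name and array layout are hypothetical.

```python
# Sketch: behavior cloning on mocap trajectories as an RL warm start.
import numpy as np
import torch
import torch.nn as nn

data = np.load("mocap_bowing.npz")  # hypothetical preprocessed capture
states = torch.tensor(data["states"], dtype=torch.float32)    # (N, joint_dims)
targets = torch.tensor(data["targets"], dtype=torch.float32)  # (N, joint_dims)

policy = nn.Sequential(
    nn.Linear(states.shape[1], 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, targets.shape[1]),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for epoch in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(policy(states), targets)
    loss.backward()
    opt.step()
```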

Research Areas + Questions

Our projects often span most or all of these areas, as they are all important to making effective, human-like music technology.

Generative Audio and DSP

How can Companion utilize machine learning and filtering to resynthesize string-instrument articulations on the fly?

Beat Detection and Tempo Tracking

What are the most effective methods for Companion, Evaluator, and Mus2Vid to follow a musician's playing in reference to a score, and play along to match?
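
As a point of reference, the sketch below runs an off-the-shelf, offline beat tracker from librosa. Following a live player against a score would need an online method (e.g., online time warping), but this shows the standard baseline; the file path is a placeholder.

```python
# Sketch: offline beat tracking and tempo estimation with librosa.
import librosa

y, sr = librosa.load("performance.wav")  # placeholder path
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

print(f"estimated tempo: {float(tempo):.1f} BPM")
print("first beats (s):", beat_times[:8].round(2))
```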

Emotion and Perception

How can Mus2Vid analyze the emotion of classical music in real time and use it to generate video accompaniment as the music plays?

Music Classification/Information Retrieval

Which musical features, such as tempo, key, genre, and notes, are useful to music performance technology, and how can we extract them from various media?
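
For instance, a few of the features named above can be pulled with librosa, as in this sketch; the crude key guess by chroma argmax is illustrative only, and the file path is a placeholder.

```python
# Sketch: simple feature extraction — pitch-class profile, a crude
# tonic guess, and MFCCs as a timbre summary.
import librosa
import numpy as np

y, sr = librosa.load("clip.wav")  # placeholder path

chroma = librosa.feature.chroma_cqt(y=y, sr=sr)          # (12, frames)
pitch_classes = ["C", "C#", "D", "D#", "E", "F",
                 "F#", "G", "G#", "A", "A#", "B"]
tonic = pitch_classes[int(np.argmax(chroma.mean(axis=1)))]
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)       # (13, frames)

print("strongest pitch class:", tonic)
print("MFCC shape:", mfcc.shape)
```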

Human-Computer Interaction

How can our apps be designed to feel human-like and natural to interact with?

User Studies + Deployments

How can we ensure that our users actually utilize and enjoy the apps we develop?

Meet the team!

Our team comes from a wide variety of backgrounds, including the Purdue Colleges of Engineering, Science, Liberal Arts, Management, and more. We have a mix of Purdue professors, graduate students, and undergraduate students leading our projects and research efforts.

Learn more

Outreach

Here are some of the ways AIM stays integrated with both Purdue's local community and the global community.

Vertically Integrated Project

AIM has an associated Vertically Integrated Project, which enables research experiences for Purdue undergraduates.

Multidisciplinary Research

Music technology is a field with very multidisciplinary problems and solutions. As such, we draw from a wide variety of departments for our talent, including Music, Electrical and Computer Engineering, Computer Science, Technology, Art, Design, and Management.

User Studies

AIM actively runs user studies on the tools it develops to ensure that users find them easy to use and enjoyable.

Presentations/Concerts

AIM members have given speeches, presentations, and concerts around the world that showcase and use AIM technology.

Vertically Integrated Projects Team

AIM stands out among other music technology research groups because of its pedagogy. While other music technology groups may cater primarily to graduate students and professionals, our group is open to all Purdue students of any major and experience level.

We hope that by helping any student interested in music technology and machine learning learn to work with these tools, we can make a difference in students' lives while encouraging the adoption of music technology.

Get In Touch With Us!