EAAI Mentored Undergraduate Research Challenge:
Human-Aware AI in Sound and Music


This challenge was held for EAAI 2023. The new challenge for EAAI 2024 is AI for Accessibility in Communication.

Using machine learning, the Blob Opera can generate opera singing and harmony. People have been using Blob Opera to cover and/or arrange various popular songs. If you need some background music while you read, let the AI-generated opera-singing blobs serenade you. This playlist's songs are covered and/or arranged by Tom O'Connor.


Challenge Details

  • The purpose of the mentored undergraduate research challenge is to provide undergraduate students with exposure to the complete research life-cycle under the guidance of a mentor familiar with it. The research life-cycle includes all the steps from identifying a problem, to hypothesizing solutions, to implementation and experimentation, to ultimately reporting results in a written publication.
  • Participating teams will submit a manuscript of their research project for peer review at the EAAI-23 Symposium, which is co-located with AAAI-23. Teams with accepted papers will have their submissions published and presented at the EAAI-23 Symposium.
  • Research challenge teams must include:
    • At least one undergraduate (including community college) student,
    • At least one mentor (faculty or with a Ph.D.),
    • Anyone else; however, the undergraduate student(s) must be involved in the majority of the research, and the mentor must provide regular guidance to the team.
  • The objective of this year's challenge is to perform and publish research on human-aware AI in the application of sound and music. The project should be doable within one semester or summer; keep it simple, narrowing the scope to a single question if your problem is large. There are many possible projects in this understudied area of research; some examples include, but are not limited to:
    • Automated accompaniment (AI system plays music alongside person)
    • Understanding a person's interests/feelings from what they play/listen to (AI system thinks about a person who performs or listens to music)
    • Playing music based on what a person does (AI system decides what to play based on its observations of a person)
    • Intelligent performance tutor (AI system provides feedback to a human performer to help them learn and improve)
    • Communicating sound and music to persons who are hearing impaired (AI system adapts how it conveys audio information to the user)
    • And more! Check out Project Ideas below to get some inspiration.
  • An introduction to this year's challenge can be found in the AI Matters column: "2023 EAAI Mentored Undergraduate Research Challenge: Human-Aware AI in Sound and Music"
  • Timeline:
    • Submission deadlines and the peer review timeline will follow those of EAAI 2023.
    • Accepted papers will be presented at EAAI 2023 in Washington, D.C. on February 11 or 12, 2023.
    • Specific dates:
      • Submission Deadline: September 11, 2022 at 11:59 p.m. UTC-12 (anywhere on Earth)
      • Paper Notification: November 18, 2022 at 11:59 p.m. UTC-12 (anywhere on Earth)
      • EAAI 2023: February 11 and 12, 2023

Registration

  • If you have a team that is interested in participating, then please contact Rick Freedman (rfreedman at sift dot net) with:
    • Team member names,
    • Team member e-mail addresses, and
    • A note identifying the undergraduate(s) and mentor(s) on the team.
  • Why register your team?
    • Non-committal: registration is not a requirement to participate, but it lets the organizers know your team is considering participation.
    • "Customer service": if your team has any questions about the challenge, then we can do our best to answer them.
    • Updates: we can send teams updates about the challenge, including new resources, timeline changes, and deadline reminders.
    • Program committee: to provide peer reviews for all submissions, we need to form a program committee of researchers familiar with undergraduate research. If we can estimate the number of submissions, then we can make sure our program committee is large enough to avoid reviewing delays. It would be appreciated, but not required, if team mentors are also willing to serve on the program committee and review other teams' submissions; there is no conflict of interest because this is a challenge for undergraduates to experience the complete research life-cycle, not a competition for the best research.

Resources

We plan to share more resources as they become available. If you have any relevant resources that you recommend, then please send them to Rick Freedman (rfreedman at sift dot net) for consideration. Disclaimer: listing these resources is neither an endorsement nor an advertisement; the organizers identified them as useful materials and are sharing them for educational benefit.

Code-Related

  • We created an open-source digital modular synthesizer for this year's challenge, available on GitHub. Using this code is not required for the challenge, but it has a GUI for human interaction (whose input can be reported to an AI system) in addition to commands that let an AI system create unique sounds and play notes. The software is written in the Processing programming language, which is built on top of the Java programming language.
  • A GitHub project by FinFetChannel provides an open-source programmable sound generator in Python. With some modifications to the available code, one can change the waveforms (the sounds it plays) and have a computer use it to play notes (see the first sketch after this list).
  • Spotipy is a Python library that wraps around Spotify's Web API. Using this code is not required for the challenge, but it provides access to licensed music and its metadata, and it can interact with a user's Spotify account (if they have one). A short usage sketch appears after this list.
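
For illustration, here is a minimal sketch of the kind of programmable sound generation the FinFetChannel project enables. This is not that project's code; it is a self-contained example assuming only numpy and Python's standard-library wave module, and the note and duration choices are arbitrary.

    # Minimal programmable sound generator sketch (NOT the FinFetChannel code):
    # synthesize a short melody as sine waves and write it to a WAV file.
    import wave
    import numpy as np

    SAMPLE_RATE = 44100  # samples per second

    def sine_wave(freq_hz, duration_s, amplitude=0.5):
        """One note as a sine waveform; swap in square/saw here to change the timbre."""
        t = np.linspace(0, duration_s, int(SAMPLE_RATE * duration_s), endpoint=False)
        return amplitude * np.sin(2 * np.pi * freq_hz * t)

    def midi_to_hz(midi_note):
        """Equal-tempered pitch: MIDI note 69 is A4 = 440 Hz."""
        return 440.0 * 2 ** ((midi_note - 69) / 12)

    # A computer "performance": a C major arpeggio given as MIDI note numbers.
    melody = [60, 64, 67, 72]
    audio = np.concatenate([sine_wave(midi_to_hz(n), 0.4) for n in melody])

    # Write 16-bit mono PCM so any media player can play the result.
    samples = (audio * 32767).astype(np.int16)
    with wave.open("arpeggio.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SAMPLE_RATE)
        f.writeframes(samples.tobytes())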
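And here is a minimal Spotipy sketch: searching the licensed catalog for a track and fetching its audio features, which are handy signals for mood- or interest-modeling projects. It assumes you have registered a Spotify developer application and set its credentials in the SPOTIPY_CLIENT_ID and SPOTIPY_CLIENT_SECRET environment variables; the query string is just an example.

    # Search for a track and fetch its audio features with Spotipy.
    # Assumes SPOTIPY_CLIENT_ID and SPOTIPY_CLIENT_SECRET are set in the environment.
    import spotipy
    from spotipy.oauth2 import SpotifyClientCredentials

    sp = spotipy.Spotify(auth_manager=SpotifyClientCredentials())

    # Search the licensed catalog for a track by name.
    results = sp.search(q="Blob Opera", type="track", limit=1)
    track = results["tracks"]["items"][0]
    print(track["name"], "by", track["artists"][0]["name"])

    # Audio features include tempo, energy, valence, and more.
    features = sp.audio_features([track["id"]])[0]
    print("tempo:", features["tempo"], "valence:", features["valence"])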

Research-Related

Many references are listed in the AI Matters column for this year's challenge, but additional resources about both the topic and undergraduate research are listed below:

Modular Synthesizer-Related

The challenge this year provides optional code (see above) for a modular synthesizer that both humans and AI systems can use to generate music. However, teams are not required to know how to play and/or program synthesizers to participate. For those interested in learning about synthesizers to use them in the challenge, here are some basic resources to get started. You can learn a lot from playing around with the code as well.

Project Ideas

This is far from a complete list of things a team could research, but the first step in the research life-cycle is to observe the world and come up with some questions you want to answer. Check out the videos below for some related research projects and video-inspired questions to get started brainstorming. What will your team investigate?

  • Shimon the robot at Georgia Tech can play music alongside human performers. What does Shimon's AI system need to understand about its fellow performers?

  • Can an AI system automatically perform mickey-mousing (synchronizing music to a character's actions) based on someone's movements like this pianist?

  • What can an AI system conclude about someone's mood based on the music they play or listen to? (A toy starting point appears after this list.)

  • How would an AI system stay in sync with a human performer? When should an AI system join in during the duet?

  • How can an AI system effectively portray sound and music to individuals who have hearing impairments?
  • What can an AI system do to interpret rhythm, emotion, and other musical properties from a performance without sound?
  • In which ways could an AI system personalize and spice up karaoke night?

  • About what would an AI system provide feedback when teaching someone an instrument and/or song?
  • Watching a human act, what music should an AI system choose to play when? Flip that around: listening to a human play music, what video should an AI system choose to display when?
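
As one toy starting point for the mood-related idea above, the sketch below maps a track's valence and energy (features like those Spotify's API reports, each in [0, 1]) onto the quadrants of Russell's circumplex model of affect. The thresholds and labels are hypothetical placeholders for brainstorming, not a validated emotion model.

    # Toy mood inference from audio features; thresholds and labels are
    # hypothetical placeholders, not a validated emotion model.
    def mood_quadrant(valence, energy):
        """Map a (valence, energy) pair, each in [0, 1], to a coarse mood label."""
        if valence >= 0.5 and energy >= 0.5:
            return "happy/excited"
        if valence >= 0.5:
            return "calm/content"
        if energy >= 0.5:
            return "angry/tense"
        return "sad/depressed"

    # A slow, low-energy ballad versus a bright, driving dance track.
    print(mood_quadrant(valence=0.2, energy=0.3))  # sad/depressed
    print(mood_quadrant(valence=0.9, energy=0.8))  # happy/excited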

Results

The following papers were accepted for presentation at EAAI 2023 (links to the papers coming soon):
  • Emotion-Aware Music Recommendation
    Hieu Tran, Tuan Le, Anh Do, Tram Vu, Steven Bogaerts, and Brian Howard [link to pdf]
  • Music-to-Facial Expressions: Emotion-Based Music Visualization for the Hearing Impaired
    Yubo Wang, Fengzhou Pan, Danni Liu, and Jiaxiong Hu [link to pdf]
  • Predicting Perceived Music Emotions with Respect to Instrument Combinations
    Viet Dung Nguyen, Quan H. Nguyen, and Richard G. Freedman [link to pdf]
  • MoMusic: A Motion-Driven Human-AI Collaborative Music Composition and Performing System
    Weizhen Bian, Yijin Song, Nianzhen Gu, Tin Yan Chan, Tsz To Lo, Tsun Sun Li, King Chak Wong, Wei Xue, and Roberto Alonso Trillo [link to pdf]
  • Learning Adaptive Game Soundtrack Control
    Aaron Dorsey, Todd Neller, Hien Tran, and Veysel Yilmaz [link to pdf]
  • A Multi-User Virtual World With Music Recommendations And Mood-Based Virtual Effects
    Charats Burch, Robert Sprowl, and Mehmet Ergezer [link to pdf]

Organizers:

Program Committee:

Past EAAI Mentored Undergraduate Research Challenge Topics: