Presented by B.Earl, a Hollywood writer and producer who has been involved with AI since 2017, our three-part series will explore the polarizing conversation around using artificial intelligence as an artistic companion and creative entity. Over the past few years, AI has become the buzzword in every sector as it challenges how we view our roles as humans within society. It has threatened jobs and economic markets, promising future prosperity through unwavering faith in something that has yet to be proven, let alone understood.
In Part Three of the series we will explore sonic AI tools, an area that doesn't get as much attention as the text or visual components. Yet this area is quickly transforming the music landscape, with big names like Timbaland signing the first AI artist and calling this new genre A-Pop. Programs like Suno and ElevenLabs are reshaping the audio arena, giving young creatives ways to fully integrate these new systems into their creative process. That said, this is also quite controversial, as these programs are trained on massive amounts of existing musical data, much like other genAI systems. In this session, we will discuss both sides of this ongoing transformation of the creative process, as well as how actually playing music will factor into AI's sonic evolution.
Just as the phone camera transformed YouTube, AI's generative models will continue on this trajectory, albeit at an even more accelerated rate. These models give access to powerful creative suites, allowing for a much greater breadth of visual exploration at an expedited production rate. This comes as a double-edged sword, though, since style without substance has the potential to overwhelm the audience with meaningless slop. Art and music give us the ability to see life through a poetic lens, layering meaning within our human wants and needs. When this poetry is lost to a deluge of "content," it all becomes white noise, forcing us to work that much harder to find what is meaningful.
Drawing on B.Earl's role as Head of Technology at his Saudi-based studio, WeirdBunch Entertainment, we will examine the companies and technologies he is currently working with and integrating into the studio's future production workflows. Join us for this controversial series on AI and Music as we drift off into these mechanical dreams while navigating the recursive hallucinations!
Please note:
- The total duration of the event is 1 hour (~50 minutes lecture and 10 minutes Q&A)
- The talk doesn't require any prior training; anyone can join
- This event will be recorded; the video will be available 3-5 days after the talk
- Guests can access all videos for a small fee; videos are free of charge for members
- If you wish to become a member, please learn about our membership plans

RSVP
BY SUBMITTING YOUR INFORMATION, YOU’RE GIVING US PERMISSION TO EMAIL YOU. YOU MAY UNSUBSCRIBE AT ANY TIME.

SPEAKER – B. EARL
B.Earl is an American comic book writer and filmmaker who lives in Los Angeles. He has been working with Marvel Entertainment since 2017, co-writing Masters of the Sun, Werewolf by Night, Ghost Rider: Kushala, Deadly Neighborhood Spider-Man, and most recently Daredevil & Echo. Earl also contributed to the Marvel #1000 anthology. Currently, he is writing a sci-fi graphic novel for the video game Planetquest as well as a superhero series to launch his new Saudi creative studio, WeirdBunch Entertainment.
As a filmmaker, Earl is directing the upcoming hip-hop documentary Balistyx as well as developing several scripted projects with the production company Gaumont. B.Earl is also a well-known voice in the emerging technology space, speaking on digital collectibles and AI strategy and working alongside companies such as Sideshow Collectibles and Peer Music. When not working as a storyteller or tech consultant, he can be found teaching Tarot workshops and running esoteric game nights at LA's Philosophical Research Society.
IG: @b.earlwriter | LinkedIn | Marvel Books | Website