Is AI Killing Real Music? The End of Human Creativity with Jay Bartot | Your AI Injection

Has AI already replaced your favorite musician?

In this episode of Your AI Injection, host Deep Dhillon sits down with Jay Bartot, tech innovator and musician, to debate whether AI-generated music is empowering artists or erasing their relevance entirely. Jay reflects on how digital tools have transformed his musical creativity throughout his life, from his punk-rock roots to his later MIDI experiments. Now, as AI mimics iconic musicians and even completes unfinished Nirvana tracks, are we witnessing the end of authentic human musicianship? Join the conversation and discover how AI is rewriting the rules of music-making, and whether that's something to celebrate or fear.

Learn more about Jay here: https://www.linkedin.com/in/jay-bartot/

Check out the article on our website about this topic: How AI is Transforming Music Composition


[Automated Transcript]

Deep: [00:00:00] Welcome back to your AI Injection. This week's episode features Jay Bartot. Jay's a technology entrepreneur with a number of startups and a few nice exits as well. Jay's got, 20 plus years of experience developing data and machine learning apps for businesses and consumers.

Jay, thanks a ton for being here.

Jay: Hey, Deep. Thanks for having me.

Deep: I'd love to dig in with you about your tech background. And, you know, just for our listeners' benefit, the two of us have worked together on multiple projects over the years. And everyone's got that short list of people they just love to work with.

And Jay's certainly one of mine, if not at the very top of that list. But we're going to talk about music today. So Jay and I, we've nerded out on AI systems for years, but we've also had the privilege of being able to play some music together.

He's got one of the most amazing music studios that I've had the privilege of being a part of. Jay, I want you to go way back and tell me about how you got into music. I remember you having a pretty awesome story about how you [00:01:00] got into the punk scene. Take us back to those days.




Jay: Yeah, happy to. And I appreciate the sentiment on working together. You're at the top of my list as well. My music background and my technology and entrepreneurship career are actually pretty closely linked. That'll be obvious as I start to tell my story. When I was around nine years old, my family moved from rural Connecticut to inner-city Chicago. My mother was pursuing a PhD in sociology at the University of Chicago. She picked the family up in the late seventies and plunked us down in Hyde Park in Chicago, where the University of Chicago is.

And as you know, Deep, you and I have connected on this before. The South Side of Chicago is a little,

Deep: For our listeners' benefit, I went to school not too far from U of C, at the Illinois Institute of Technology. And if anyone's read Freakonomics, the very original version, that was pretty much our life. We were surrounded by the heart of the North American crack trade back then, so it was [00:02:00] rough.

Yeah.




Jay: Yeah. I had to build up survival skills and street skills and was plunged into that world pretty dramatically. I was a pretty nervous, anxious kid. That summer we moved from Connecticut to Chicago was, I think in retrospect, pretty anxiety-provoking, and I'd taken to tapping and beating and, as my siblings used to say, banging on any surface I could find, looking for an interesting sound or tone. And so not long after we arrived in Chicago, I had to pick an instrument at school, in fourth grade I think, so I picked the drums.

So I took up the drums in the school band and got into it pretty quickly. I had older siblings who were, you know, into the 70s rock music of the day. I was probably a more sophisticated listener to the rock and pop music of that era than some of my classmates, given I had older siblings.

It didn't take me long [00:03:00] to be playing my drums along to the radio and all the bands of the time. Of course, this is the seventies, so we were all listening to Led Zeppelin and Pink Floyd and all the 70s rock bands. And then in the early 80s, bands like The Police and Genesis were certainly things that I listened to. By the time I was in eighth grade, I had formed my first band. I gathered up a few other kids in school who were playing guitar and bass and singing, and we had our first band. And because we weren't very good, we thought we should play Ramones covers.

Deep: You got the one-string bass line, yeah, that makes sense.

Jay: Yeah, so we were playing Blitzkrieg Bop and a variety of other Ramones tunes, and starting to play some gigs at school at lunchtime and a few parties here and there.

By the time I was going into high school, I was convinced that I wanted to be a professional musician [00:04:00], a professional drummer. That was what I was going to do. I spent most of high school playing in bands, and I pretty quickly graduated from playing with kids my own age to playing with adults who were older than me. The main reason for doing that wasn't that I was so good or talented, certainly I had lots of schoolmates who were talented, but I just found that older people were more serious and focused than kids my age in high school. Sure, they were playing in a band, but that wasn't necessarily their top priority. So I was very passionate and focused,

spending a lot of time doing that. One of the things I noticed early in high school was that if you were into 70s rock and 80s rock, that's where the masses were, kids who could play, but there weren't really any gigs to be found playing Led Zeppelin covers. Yeah, you could play a school party or something here and there. But I got introduced to [00:05:00] hardcore punk in the early eighties in Chicago. And what struck me about it was that it was a scene. It wasn't a bunch of kids who were just trying to do their best to imitate Jimmy Page.

It was very raw music. It was playable music, much like the Ramones that I had played in my first band, but there were clubs, professional clubs, where these gigs were happening. And there was even a local Chicago hardcore punk fanzine. So there was a scene happening.

I remember telling people, once I joined one of these bands, that I was playing at such and such club. There was a club on the north side, I'm trying to remember exactly where it was, but in the near north side, that was a well-known music club. They were trying to make money like everyone else.

Jay: Once a week they would have hardcore punk night, and all the punks and the mohawks would come in. And so I was playing in real clubs by the time I was 15.

Deep: One of the things [00:06:00] that's intriguing, and I'm sure you're going to help us make this transition, is that punk is special for its spareness and rawness, and it feels like nothing could be less punk than computer music and all this digital hoo-ha. Punk's pretty raw.

Deep: And so when I was back in Chicago, I was totally obsessed with Ministry. These guys are, you know, kind of like maybe punk evolved into super-fast industrial, for those who don't know. One of the things that struck me about Ministry was they had these crazy drum machines that would play a lot faster and tighter than humans really did back then.

And one of the things that I noticed is that a few years later, drum and bass had a similar phenomenon, where people were working in this electronic world, putting together these drum lines that normal humans didn't really play at the time. And then you fast forward a few years, and even Ministry themselves had a live drummer who started playing this stuff.

So they started playing like the robots, like the machines were playing, and same thing with drum and [00:07:00] bass. It became this whole thing where it's the snare, it's the hi-hat and the kick, and you're copying the machine. So the question for you is: are you seeing this kind of thing even today?

This sort of thing where the machines are teaching the humans? Like it's this feedback loop where it's not just replacement, it's more like inspiration from the other side.

Jay: Yeah, great question. In all pop music, there's always an element or an edge of people doing things that are quasi-virtuoso or acrobatic. And so, you know, I think about metal, for example, and watching these bands come up with incredible chops, people practicing hours and hours a day.

Probably similar to what classical musicians were doing during their training, the long hours and putting the passion and effort in. And even though metal music was poo-pooed by more established musicians, the chops they were able to build were pretty [00:08:00] incredible.

I would say, honestly, that what computers have done is more level the playing field. And although I have my own stories of using computers to do things that I couldn't do myself on my own instruments, I think computers have mostly made things more standard, and probably more pop and more basic, than really inspired people.

Deep: When you say they've leveled the playing field, are you speaking to something like GarageBand, for example, which takes a recording studio from being an incredibly expensive proposition to being pretty cheap now, to put together some good stuff? Is that kind of where you're going with that?

Jay: Certainly PCs have democratized music production tremendously. I was just thinking more about your specific question: because a computer could do certain things so much more easily than a human, was that inspiring humans to actually be able to play that way?

I guess I'd generally [00:09:00] say that's not my impression. I remember using early MIDI programs, going into the editor with a little pencil tool, and being able to create a pattern that I couldn't play myself, either by slowing down the recording so I could play the pattern really slowly, or just hand-editing a pattern and then playing it back. And I was sort of impressed with what I produced, but at the same time felt a little alienated from it, because as an experienced musician I knew it was synthetic. That said, I've heard many metal guitarists and metal drummers

do acrobatics on their instruments that were astonishing and impressive, but I don't think that was inspired by computers.

Deep: Interesting. When we were chatting, you mentioned that you see your tech and music careers as being kind of in parallel.

What did you mean by that?

Jay: Yeah, so these two worlds slammed into each other when I was in college. Continuing a little bit with my story from high school, and my [00:10:00] determination to be a professional musician:

I went to Berklee College of Music for my freshman year in college. For those who don't know, it's a professional music school. It's really known for jazz, but also pop music session players and so forth. Really talented people go there and learn the trade, if you will, learn music from a bit more of a vocational perspective.

For example, my professors there were people who maybe played one night with a well-known jazz musician. The next night they played in a wedding band. The day after that, they were giving high school and grammar school kids private lessons.

Deep: When I lived in Cambridge, across the river from Boston, I remember there were a couple of clubs where the Berklee kids would show up, and you'd see the same set of folks one night playing rockabilly, all the way down to the costumes, and they looked like that was their thing.

And then the next night you'd see them playing a totally different genre, like out in a jazz group. And then the day after, they might be, well, Marilyn Manson was big, so you [00:11:00] had some of those heavy goth-rocker guys one night. That always cracked me up.

That was pretty much the antithesis of the grunge movement. So I remember thinking,


Deep: I remember being incredibly impressed with these guys' ability to not just pull off the music impeccably, but to pull off the personas and everything. They just looked like they came from that world.

Jay: And what was taught there was versatility, because versatility pays the bills. And so it was a great experience. I only stayed at Berklee for one year. It was a great experience in that it taught me that being a professional musician, making your living with your instrument, is really hard. And there are a lot of really good people out there.

And you have to do whatever you have to do to pay the bills. I was still feeling more altruistic about music at the time, and that really didn't sit with me very well. So I left Berklee and went first to the University of Illinois at Chicago for about 18 months, and then finally settled at the University of Iowa.

But all along, you know, going [00:12:00] to these state schools, I was very interested in just broadening my horizons. I was starting to take lots of social science classes and ended up switching my major to anthropology. But I was still very interested in music and music production.

And I was working when I was in school, paying a lot of my own bills, and any money I could scrape together went to buying gear. One of the pieces of equipment that took me a while to get hold of, but I finally did, was one of these products on the market in the late eighties and early nineties for doing multi-track analog recording on cassette tapes.

You could buy one of these little tabletop units from Tascam and other folks for about 500 bucks. They had four tracks and they were pretty shitty. So I'd come up with these compositions and record the bass and the drums and guitar and vocals. And they were frustrating, because you could work all day on a [00:13:00] composition and then, if the tape started to wear out, or there was an electrical burst on the power system or something,

it could ruin your composition. And just in general, the fidelity was really low, because the tape width was so thin. I was talking to a friend at school one day, and he said, if you go down to the computer center at the school, they'll lend you the money to buy a computer.

And he said, if you do that, you can do MIDI composition on your computer, and it's much better than trying to do stuff on tape. And so I heard a bunch of things in that statement that really changed my life. First of all, someone would lend me money. That was just a crazy idea in general.

But then, you know, I went and figured out what MIDI was, and really, MIDI in a lot of ways was my introduction to computing.
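For curious listeners, the "seven bit protocol" Jay mentions next is visible right in MIDI's wire format: a Note On message is one status byte followed by two data bytes, and data bytes may only carry values 0 to 127. A minimal sketch, with a helper name of our own invention rather than anything from a real MIDI library:

```python
# A MIDI Note On message: one status byte (0x9n = Note On, channel n)
# plus two 7-bit data bytes. Data bytes never exceed 127 because the
# high bit is reserved to mark status bytes.

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build the 3-byte MIDI Note On message for a channel (0-15)."""
    if not 0 <= channel <= 15:
        raise ValueError("channel must be 0-15")
    for value in (note, velocity):
        if not 0 <= value <= 127:  # only 7 usable bits per data byte
            raise ValueError("data bytes must fit in 7 bits (0-127)")
    status = 0x90 | channel
    return bytes([status, note, velocity])

msg = note_on(channel=0, note=60, velocity=100)  # note 60 = middle C
print(msg.hex())  # -> "903c64"
```

Everything a sequencer like the ones Jay describes records is streams of small messages like this one, which is part of why early PCs could handle it so easily.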

It's a digital medium, a seven-bit protocol, and for some reason I was just fascinated with it. So for my next piece of gear, I went [00:14:00] down to the computer center. The guy said, you can buy a PC, and he showed me a PC clone running DOS.

He said, that's 700 bucks. Or, he said, you can buy this other thing over here, this Macintosh. It's 2,500 bucks, but it has a graphical user interface and it'll be much easier for you to learn. And again, you've got to remember, I wasn't a math head in high school. My friends were, but I was probably more friends with them because they were punkers too.

And so I had friends with PCs, but, you know, I never entirely understood them. So I was intrigued with the idea of a personal computer.

Now I had one. Then I realized I needed a MIDI interface for my new Macintosh, and a multi-timbral synthesizer to be able to do the MIDI equivalent of multi-track recording. But, you know, once I'd gathered all those pieces and cobbled them together,

it was game-changing for me, the kinds of compositions I could create. And this, getting back into what we were [00:15:00] talking about before, was pretty astonishing. I got a copy, illegally, of a program called Master Tracks, which was a multi-track MIDI sequencing program.

Now I could record a measure's worth of a groove and cut, copy, and paste it. Or I could record a couple of measures of a groove, go in and edit out the mistakes, and then cut, copy, and paste those. And within moments I had this beautiful-sounding composition. And I always tended more towards

analog-type synthesizer sounds versus synthetic-type synthesizer sounds. You know, I would choose drum sounds that sounded more like real drums. And so that was really profound for me, that power. And of course, I was also writing my papers on this Macintosh.

I was drawing things with MacPaint. I was buying all kinds of programs, or copying all kinds of programs from friends, and to me, it was the ultimate instrument. [00:16:00]

Deep: The Mac itself.

Jay: Yeah.

Deep: Have you seen OpenAI's new Jukebox program?

Jay: I have.

Deep: Cool. So let's jump into the AI world, and then let's try to stitch these two worlds together a little bit. Do you have any thoughts on these generative pre-trained transformer-like technologies?

I just thought it was kind of a wild idea. We're both pretty familiar with how you train up these things in the text world: you basically train these systems to try to predict the next word or sequence of words based on this gigantic corpus of web documents.

Deep: But the first time I saw what the Jukebox guys were doing, reducing a whole music composition down into this heavily down-sampled space, I think 350 or so hertz, and then, within there, trying to take the same approach to predict the next samples in the model.

Jay: Yeah.

Deep: All right. I found it funky and just kind of wild. And you see the outputs of some of this stuff. I think it was Rolling Stone that did an article where they'd taken a bunch of Nirvana stuff, and it was finishing a whole [00:17:00] composition that, of course, Nirvana never made. What are your thoughts on that whole approach?
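Deep's description boils down to: compress the audio into a coarse sequence of discrete values, then model "predict the next one." Here is a deliberately tiny sketch of that idea, with a quantizer and a bigram counter standing in for Jukebox's actual VQ-VAE and transformer stack; everything below is illustrative, not OpenAI's pipeline:

```python
import math
from collections import Counter, defaultdict

def quantize(samples, levels=8):
    """Map samples in [-1, 1] to integer tokens 0..levels-1 (crude codebook)."""
    return [min(levels - 1, int((s + 1.0) / 2.0 * levels)) for s in samples]

def fit_bigrams(tokens):
    """Count which token tends to follow which (a stand-in for a real model)."""
    counts = defaultdict(Counter)
    for prev, cur in zip(tokens, tokens[1:]):
        counts[prev][cur] += 1
    return counts

def generate(counts, start, n):
    """Greedily 'continue the song' by always picking the likeliest next token."""
    out = [start]
    for _ in range(n):
        out.append(counts[out[-1]].most_common(1)[0][0])
    return out

wave = [math.sin(2 * math.pi * t / 16) for t in range(256)]  # a pure tone
tokens = quantize(wave)
model = fit_bigrams(tokens)
continuation = generate(model, start=tokens[0], n=32)
```

The real system swaps the bigram table for billions of transformer parameters and the 8-level quantizer for learned codebooks over raw audio, but the next-token framing Deep describes is the same.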

Jay: Yeah, I mean, it's very much a natural evolution of where things have been evolving for 30 years. You know, I was talking about MIDI earlier, and just how democratizing the personal computer and MIDI were. I remember the days, as late as the early nineties, when going into a recording studio and cutting an album could be a half-million-dollar endeavor or more, depending on what you were doing, even just to buy the two-inch tape

that most major artists were recording their albums on, where those were 500 bucks. So the computer and technology democratized access to music production. And again, it started with MIDI, but then, you mentioned GarageBand earlier, which I also think was a really key addition to the arsenal. GarageBand wasn't just MIDI; it was also audio [00:18:00] samples and loops. And certainly it doesn't take a rocket scientist to listen to most pop music and realize that it's composed of

repeating loops, hopefully arranged in interesting and creative ways. As of 10 or 15 years ago, you could fire up GarageBand, grab some pre-recorded loops from its library, and put together a pretty nifty composition with drag-and-drop ability.

So you didn't even really have to do much to create a really cool composition, assuming you had a bit of an ear for these things. So now, to make this leap towards these generative models, which can take some of these loops or base audio signals and twist them in different ways,

you know, I think that's a natural evolution of where things have been headed.

Deep: So if I'm reading you right, you're saying Jukebox today is not doing track-separated prediction. It's doing the whole thing: the guitars, the bass, the [00:19:00] vocals, all swirled together, just predicting the next sample points. So are you suggesting the future is track-separated, instrument-separated, maybe predictive from a compositional vantage, like authoring scores, and then leveraging that within a GarageBand context to add or render an instrument? How are you thinking about that? Bridge those two worlds for me a little bit.

Jay: Yeah, I mean, I think it could go in a number of different ways.

Think about the style-transfer stuff that we've seen on the visual side with convolutional neural nets. There you're taking the higher-frequency components of one image and hanging them on the lower-frequency bones of another image. And so you can certainly imagine that paradigm working in music.

It's a little hard to imagine how things would actually sound, but you might say, hey, I really like this Rihanna song, and I really like the music production values on it, and I really like [00:20:00] the sounds that are on it. But take all of that and put it on top of this other chord structure.

We know, of course, that in rock music and in pop music, chord progressions are fairly standard. One, four, and five is incredibly standard. One, six, two is another one that's really standard. Those, I think, could be the bones.
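For listeners unfamiliar with the numbering: "one, four, and five" means chords built on the first, fourth, and fifth degrees of a scale. A quick sketch of those bones in code, with note spelling and helper names of our own invention rather than any real tool's API:

```python
# Build triads on scale degrees of a major key: the skeletal "bones"
# a style-transfer system might hang new surface sounds on.

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the 7 degrees

def triad(key_root: int, degree: int) -> list:
    """Stack the 1st, 3rd, and 5th above the given degree (1-indexed)."""
    idx = [degree - 1, degree + 1, degree + 3]
    return [NOTES[(key_root + MAJOR_SCALE[i % 7]) % 12] for i in idx]

def progression(key: str, degrees: list) -> list:
    return [triad(NOTES.index(key), d) for d in degrees]

print(progression("C", [1, 4, 5]))
# I, IV, V in C major: [['C','E','G'], ['F','A','C'], ['G','B','D']]
```

Running `progression("C", [1, 6, 2])` gives the other skeleton Jay names, with the six and two coming out as minor triads (A-C-E, D-F-A), which falls out of stacking scale degrees rather than fixed intervals.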

You lay the higher-frequency elements of how the song sounds onto those bones. That's one way: I want a new song, and I want it to be a mix of things that sound like this and things that sound like that. I also think working with individual tracks is pretty interesting, including track separation,

which to me is a holy grail. There's a bunch of multi-track stuff floating around on the web right now, and I don't know how this stuff got out or where it came from, but there's a guy whose videos I watch quite a bit, Rick Beato,

who's a musician and producer about [00:21:00] my age and has a lot of really interesting content out there. He finds these multi-track recordings from the 70s and so forth and then does song analysis. And when you're able to isolate those tracks, that's when you really

have the most flexibility

Deep: Because now you can put all kinds of effects on the guitar or on the bass, or even just change it out.

Jay: I mean, what if you took a Led Zeppelin song, pulled out the bass track, and added your own vocal track or whatever? There's precedent for some of these things that goes back a surprising number of years.

It started with sampling: the rap artists in the eighties and nineties were sampling chunks of recognizable pieces from well-known tracks. Some of them got the pants sued off them for it. But it's an extension of that.

I'm going to come up with this whole new song based upon this really well-known bass line. And I've got the flexibility and freedom to lift that bass line right out of this recording, and now it's a canvas. [00:22:00] And I can do whatever I want with it. That's really powerful.

Deep: Yeah. I mean, this is kind of the crux of it: what is left of the musician in this world, and is it good or bad? If you rewind to the seventies, most musicians had their instrument and had their group of folks they played with. Maybe they swapped instruments.

They had a good time. Maybe they wrote it down, you know, but even then that might be a big step for those guys, and that might be quite high-tech. To be a composer during, like, Mozart's era was a whole other thing. They had to hear it in their own head, basically,

be intimate with the notation and be able to write it down, and then they had to have access to, you know, the ability to render a whole orchestral piece or something. Going back to your democratization point, it's like anyone can grab GarageBand and compose stuff now and then swap in other instruments.

It feels like it's a great thing, but most musicians I know love just playing music together live and communicating via their instruments. And then they [00:23:00] almost go into a different mental mode, more like the software-programmer kind of mindset, when they're mucking around in the electronic world. So is this good or is this bad? Especially if you follow the tooling along its evolutionary path. Track separation is one thing, but once we start talking about machine learning and AI techniques, downright straightforward authoring of drum beats, of loops, what is it?

Is it good, or is it bad?

Jay: Yeah, I mean, I think it's probably in the eye of the beholder. To some it's great; to others it's really bad. There are two elements of creating professional music: there's the compositional part, and then there's the production part.

And so some listeners may know that on most of their favorite recordings, the role of the producer is a really big role, and a silent role to the consumer of the music, but really that's the person kind of running the show in the recording studio. Historically, the producers are the ones really empowered by this technology.

You know, many [00:24:00] folks, especially from my generation, complain that most popular music today is a producer and their laptop, and a singer and an Auto-Tune program.

The feeling, I think, is that this is an art and it's lost something as well. I especially think that the Auto-Tune feature, which is surely leveraging state-of-the-art AI technology, is playing a big role there.

And so I think there are a lot of folks who are not aware of any of these dynamics and just turn on their music, listen to it, and enjoy it. And I think there are a lot of people who long for the days when the people producing pop music were actually talented and could play.

I watch a lot of videos on YouTube from the 70s and 80s, especially from when the only way to see your favorite bands perform was on TV late at night, whether in music videos or live, like on Don Kirshner's Rock Concert. That stuff's all on [00:25:00] YouTube now. And when I go down and read the comments, it's the same comment over and over again, probably from somebody from my generation, saying, man, I really long for the days when my favorite artists could actually play. Because you had to.

Deep: I feel like music is like art. I use this example a bit from the art world: a lot of times folks used to ask, what's actually the role of the artist? Artists are pretty amazing at sniffing out BS. Even back in the 80s, as soon as everyone found out that Milli Vanilli was lip-syncing, I mean, they were poo-pooed. A lot of these singers who are maybe auto-tuned in the recording studio can still sing live. Yeah, they might auto-tune them here and there, but they can still sing.

But then there are a lot of folks who really get these organic audiences. Rodrigo y Gabriela come to mind, a pair from Mexico City who play pretty heavy metal, but in a flamenco style. Largely, the musicality and the rendering ability, I'd say, would even trump the compositional aspects for those guys.

Jay: Yeah.

Deep: So it [00:26:00] feels like every time something gets automated, it's no longer okay to be the artist who does the thing before it got automated, right? Like, when photography evolved and we started getting photography all over the place, it was no longer okay to call yourself a painter simply because you painted realistic imagery.

If you were going to still dabble in realism, you had to be doing something different with the realism, something that can't just be captured by the lens. The optimist in me wants to say that to the extent things get automated, musicians are very creative people. They will bore quickly of anything that smells like simple-minded automation.

Jay: Yeah, I think real musicians will, for sure. There are producers, lawyers, record companies. A lot of those dynamics have greatly changed over the years, and yet they're still the same to some degree as well.

I do think you're seeing more [00:27:00] artists now who are producers as well. They're composing and producing, writing and creating. So I think that role has been elevated. But then there are a lot of people out there, too, who just want to play, just to play.

And I try to do both. I love doing composition in GarageBand, because it's simple enough that it doesn't get in my way creatively. Other software I've used over the years was probably more than I needed, and created problems.

But, you know, out of convenience and efficiency, I'll lay down two bars of something, quantize it, randomize it, and then copy and paste it. And that gets a composition together really quickly; it's super efficient. Maybe I feel a little bit guilty if I play it for someone

and they're like, wow, did you play this? But then, you know, I also try to jam with friends, and we actually play. And so I hope that the playing part [00:28:00] makes a comeback, and that learning guitar, or learning some instrument as a teenager, returns to being a rite of passage.

Deep: So I want to talk a little bit about this new world that opens up when you can leverage AI. And it's different from what we've pretty much been talking about so far: singer-songwriters, artists, bands playing in front of people or recording for the sale of records and stuff.

But there's this new thing that's enabled by a massive demand for low-cost music generation and computation. If you think about video games, you've got to create all of this audio to go with the game as somebody's moving through the game space. You know, there's a company like AIVA that has this dynamic music generation, totally personally tailored to the scene in the game.

And then there are other companies doing things like composing on the fly based on weather or emotions. I feel like these companies are trying to address this sort [00:29:00] of dearth of easily accessible music for all the content being generated out there.

Amper Music is an example of that. What is that like, when you put the traditional music world off to the side and put on your tech-future hat, and you start seeing everything from smart homes that you walk into? Because at the end of the day, the actual known public music catalog, I think there are 10 million songs published total that are, like, record-label equivalent.

And of those, maybe 50,000 get played over and over again, and probably a thousand of those drive us nuts. But there's this whole other world that could emerge where you could start to have very personally tailored or environment-tailored music. What are your thoughts on that,

And what role AI might play in that?

Jay: I think that, like news stories and other content, music has already gotten caught up in the recommendation-engine, collaborative-filtering bubbles. It's really a poster child for that.

And [00:30:00] is that a good thing or a bad thing? Well, it's great when I have Pandora or Spotify on and I want to play some reggae when I have friends over; we're enjoying music and having a few drinks. But the bad part about it is that I'm not being exposed to new music.

You know, the exploration-versus-exploitation formula gets out of whack. There may be a great song out there that I would really enjoy that I never get a chance to listen to, because the recommendation algorithm says, we don't think he's going to like it.
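The trade-off Jay describes is often handled with something as simple as an epsilon-greedy rule: usually play the known favorite, but occasionally gamble on something unheard. A toy sketch, with made-up genres and ratings for illustration:

```python
import random

def pick_track(avg_rating: dict, epsilon: float, rng: random.Random) -> str:
    """Epsilon-greedy: explore a random option with probability epsilon,
    otherwise exploit the best-rated option seen so far."""
    if rng.random() < epsilon:
        return rng.choice(list(avg_rating))      # explore: anything goes
    return max(avg_rating, key=avg_rating.get)   # exploit: known favorite

rng = random.Random(0)
ratings = {"reggae": 4.6, "drum and bass": 3.9, "ambient": 0.0}  # "ambient" never heard
picks = [pick_track(ratings, epsilon=0.2, rng=rng) for _ in range(1000)]
print(picks.count("reggae"), "of 1000 picks were the known favorite")
```

With epsilon at zero, the listener never hears the ambient track at all, which is exactly the bubble Jay is complaining about; a small epsilon keeps a trickle of new music flowing in so ratings can ever change.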

I think that Pandora's box is already open. And whether it's a good thing or a bad thing, like a lot of the other things we've talked about, it really depends. The other thing I would say is that I've always been curious about

dynamic music. A given song has a structure to it. And sometimes there are alternate mixes of songs, like an extended dance mix or something special-purpose like that. But, you know, a song is: you're going to start with a verse, go to a [00:31:00] chorus, then you're going to go to a bridge, and then back to the chorus.

But I've always wondered: what if that was randomized? Back when I got my first CD player in the early 90s, I wondered if we'd see stuff like this. Maybe it could learn a preference, so that you might like to hear a particular song

as intro, verse, chorus, verse, chorus, bridge, outro, but I would want to hear that same song with a slightly different structure. Maybe the bridge comes sooner. Or maybe I would want the mix to be different than you would, and it could dynamically adjust.

And that could happen if you could pull tracks apart on the fly, or if you had the individual tracks stored together but separately. That kind of personalization may be a frontier. I don't know. It's one thing whether the technology can support it; it's another thing whether people care.
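Jay's randomized-structure idea can be sketched as a constrained shuffle: pin the intro and outro, keep every verse followed by a chorus, and vary where the bridge lands per listener. The section names below are generic placeholders, not tied to any real song:

```python
import random

def arrange(rng: random.Random, n_verses: int = 3, bridge: bool = True) -> list:
    """Build a song structure with fixed endpoints and a movable bridge."""
    body = []
    for i in range(1, n_verses + 1):
        body.append(f"verse {i}")
        body.append("chorus")   # hard constraint: chorus follows every verse
    if bridge:
        # slot the bridge in right after any chorus except the last one
        spot = rng.randrange(1, n_verses) * 2
        body.insert(spot, "bridge")
    return ["intro"] + body + ["outro"]

rng = random.Random(42)
print(arrange(rng))  # e.g. bridge after the first or second chorus
```

Two listeners seeded differently would hear the same sections in slightly different orders, which is Jay's point: the pieces are fixed, the arrangement becomes a per-listener preference.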

Deep: Yeah. I mean, it feels like the personalization lens you're talking [00:32:00] about can play in with other emerging forces. One of the things that's always driven me nuts is Muzak. You get in the elevator and you hear this kind of garbage Muzak stuff, and then it feels like, you've got a smart elevator.

I'm the only one in the elevator. Why the heck are you playing me Muzak? But all of that is just track selection, and you're getting at maybe helping people think about what's underneath the surface that they really like. Like, I love music with really dramatic sound and volume differences, a lot of dynamics that way.

Whether it's classical or rock or heavy music, that to me is really important. All right, I'm sure we could talk about this forever. This has been an awesome conversation. Thanks a ton, Jay, for helping us just scratch the surface here.

Jay: Thanks for having me. Always fun to talk about this stuff.

Deep: That's all for this week, folks. Thank you to our listeners for tuning in. If you get a chance, we're a new podcast, and we'd love for you to jump out and give us a rating

if you like what you're hearing. We've also got an article on our [00:33:00] website that's all about this music composition topic we've been talking about here today. We go into a lot more depth there. You can find it at xyonix.com/articles, that's x-y-o-n-i-x dot com slash articles.