Sunday 4 January 2009

Marty O'Donnell in Interview - Part 3

Between the last post and this one, Marty suggested that I go to GDC (which I would love to do one day) before we return to talking about music. This time, Marty mentions how close he’s coming with his own musical approach to realising a high level of ‘granularity’ in the music. I ask whether you can get away with more if you’re aiming for an ‘electronic’ sound rather than orchestral, and Marty talks about Rez and the potential for more ‘synaesthesia’ in games.

M: We were talking about granular synthesis or granular approaches to generating music… I would say this: I have come closer with each one because technology has been getting better, technology being that I can have more simultaneous voices and more synchronization under my control.

I’ve had ambient tracks that have had lots of different random looping elements in them that I can overlap, which causes entirely different pieces to play… when I write it and disassemble it and put it in the game engine, I call it glue, because I know I can glue more traditionally linear pieces together on either end with these more ‘gluey’ kinds of pieces that tend to be less rhythmic and stuff like that…

So in Halo 3 I was calling it “micro glue”, just for myself. And these were just individual pitches, just individual sustained notes on strings, or just an individual string note, in all sorts of different pitches so I could just generate individual notes to overlay or connect or do what I needed to and it could come in based on what people were doing…

So it would change what the piece was doing. I could essentially compose on the fly by knowing, “well, I know this piece is in Dm, so I could use a high E string note or a high D or an F or something and it wouldn’t clash, and I could bring that in here if this event happens… and that could sustain, and then a new piece could start that’s maybe in F. So I could modulate from Dm to F because I’m also connecting it using just a single sustain or something.” There’s a lot of fun just playing around with that stuff, knowing I could overlap all sorts of stuff and have it actually cued to actions that either the player does or things that we know are scripted in the game – ‘If this happens, or the player gets here, this event will happen, and simultaneous to that event being triggered we trigger this new piece of music.’ And the new piece of music might be something as simple as just a sustained string note. But you’ll never hear that by itself, ’cause there will already be something going along with it…
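The harmonic trick Marty describes – sustaining a single note that fits both the current key and the next, so the music can modulate over it – can be sketched roughly like this. This is purely an illustrative sketch of the idea, not Bungie’s actual system; the note names and key sets are my own assumptions (D natural minor and its relative major, F, happen to share every pitch, which is exactly why that modulation works so smoothly):

```python
# "Micro glue" sketch: find notes that are consonant in both the
# current piece's key and the next piece's key, so a single sustained
# note can bridge the modulation between them.

D_MINOR = {"D", "E", "F", "G", "A", "Bb", "C"}   # D natural minor
F_MAJOR = {"F", "G", "A", "Bb", "C", "D", "E"}   # F major (relative major)

def glue_notes(current_key: set, next_key: set) -> set:
    """Notes safe to sustain across the modulation: the keys' overlap."""
    return current_key & next_key

def on_game_event(current_key: set, next_key: set) -> str:
    """When a scripted event fires, pick one sustained 'glue' note."""
    candidates = sorted(glue_notes(current_key, next_key))
    return candidates[0] if candidates else ""

# Dm and F share all seven notes, so any of them can act as glue.
print(sorted(glue_notes(D_MINOR, F_MAJOR)))
print(on_game_event(D_MINOR, F_MAJOR))
```

In a real engine the “event” would come from gameplay scripting and the note would be a streamed sample, but the selection logic is this simple: the overlap of the two keys is the set of notes that won’t clash on either side of the transition.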

B: It sounds like you’re actually getting quite a high level of control over instruments and notes. If you can sequence just a single instrument and sustained notes and things, it sounds like it’s getting quite close to that granular approach.

M: It is. I’m starting to get around to more granular approaches. I don’t want to go completely granular… it’s one thing to record a live violin section playing a nice high F sustained note and making sure that the loop itself, that the sample has a good attack and a good release and really good loops in between so I can have it sustain as long as I want – that’s a good sample on a sampler, right? But that’s just one note. I can’t play a really good string melody using just samples. We’re getting better at it, but it still doesn’t compare to actually having the string section play that melody. The difference between what I hear when I have a string section play the melody and what I have when I’m playing samples on the keyboard playing a string melody is still really, really [large]. When I use MIDI performances I have to use all sorts of parameter controls just to get it even close to what a live group sounds like when they play, or even just a live soloist. At this point I don’t have that kind of processing power to have that great of a MIDI playback engine inside a game engine. It could happen eventually, I’m not saying it couldn’t, and I’d love to be there when it starts happening, but I still think we’re a ways off on that one. It’s not the number one priority for game engines. *laughs*

B: I was going to ask in relation to the performance aspect – a lot of the Covenant and alien sounds use a lot of synths and electronic sounds. Can you get away with a more random, more granular approach with synths if you’re after that sort of alien sound?

M: Yeah, actually I think there would be nothing… you could have an entire game sorta like, have you played Rez?

B: No I’ve seen it, but never played it. I really want to play it.

M: Yeah, it’s a wonderful game. It’s essentially a shooter game with all vector and Tron-like graphics – like the old Tron movie. What’s really cool is that all the sound effects and everything you do is all synchronized… basically it’s all trance music that’s being played. It’s trance music that only has certain elements that play back, and every move that you make is in sync with the basic beat of whatever is happening, and all the effects that you hear are musical pitches. And because they’re always in sync, while you’re playing you’re creating a piece of music on the fly that is completely in sync with what you are doing. It’s very, very effective. And because it’s this otherworldly Tron-like universe that you’re in, it works great.
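The beat-locking Marty describes in Rez is usually done with quantization: the sound for a player action isn’t played immediately, but delayed until the next beat boundary, so everything lands in time with the underlying track. A minimal sketch of the idea (my own illustration, not Rez’s actual implementation; the tempo is an assumption):

```python
import math

BPM = 120
BEAT = 60.0 / BPM  # seconds per beat at 120 BPM = 0.5s

def next_beat(action_time: float) -> float:
    """Return the time at which an action's sound should actually fire:
    the first beat boundary at or after the moment of the action."""
    return math.ceil(action_time / BEAT) * BEAT

# An action at 1.30s is held until the beat at 1.5s;
# an action exactly on a beat plays immediately.
print(next_beat(1.30))
print(next_beat(1.0))
```

Combined with restricting every triggered sound to pitches from the track’s scale, this is enough to guarantee that whatever the player does, the result sounds like part of the music rather than noise over it.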

B: That whole synaesthesia game genre is really interesting, and I think a lot of the ideas from that inspired my own thinking in my thesis, like what would it take to get that relationship between the music and the visuals in an FPS.

M: Yeah, actually I think that’s a great place to explore… what I find is that… it works great for electronica and that sort of genre. It’s like it’s just tailor-made for that kind of thing, but as soon as you’re doing something like ‘this needs to have a more orchestrally scored, epic feel to it’, you struggle with that. And as soon as you say, hey, you know what, I love trance or I love electronica, this is perfect – but if you want to move away from that, then you find it’s a lot harder to implement things that way and move into a different sort of genre of music. At least that’s my experience.

Next, Marty explains the relationship between music and location as well as how much of the music he composes in response to the level. Then, in perhaps my favourite part of the interview, Marty answers my question about why he didn’t (and still doesn’t) allow players to change the volume of the music and sound in the Halo games.


Kirk Battle said...

I'm really enjoying these. Emergent music games are an idea that has been rattling around in my head ever since Gaynor posted that big piece on immersive narrative.

An example of someone doing something without techno in this medium is 'Everyday Shooter', which used Steve Reich's 'Electric Counterpoint' as a foundation. The game's potential only starts to show once you beat it and unlock travel mode. That makes you invulnerable, so you literally just cruise around making music however you like.

The creator did a lot of interesting interviews with Gamasutra about background noise becoming music, worth checking out if you have the time.

Ben Abraham said...

Well I'm glad someone is enjoying these!

Yeah I've done my research on Everyday Shooter. I can't recall if it made it into the final version, but certainly early on I referenced your Everyday Shooter post from PopMatters in my thesis. And I read the interview with Jon Mak on Gamasutra where he talked about the influence of soundscape - brilliant stuff.

And yet I haven't unlocked travel mode yet, even though I've had it on PC for months. Oops.