Showing posts with label sound. Show all posts

Monday, 27 July 2009

Permanent Death, Interstitial - Music on Mute

While chatting to some of my blogger friends the other day, one of them happened to mention that they liked to play Far Cry 2 without the music – either for added realism or some other reason. Since I’m already experimenting with permanent death, I thought I’d give it a shot and see how it changes the experience. I can now confirm that it is… different.

I had always enjoyed the music for Far Cry 2, but even I must admit that after hundreds of hours of exposure to it my ear had grown weary and stopped noticing it. However, as soon as I turned it off the lack was entirely apparent and I realised just how much I had relied on the music to provide much of the game’s pacing.

When you get close to enemies, even if you don’t know they are there, the music often adds a layer of tension to the mix and if you’re experienced at playing the game you will probably pick up on this subtle change and prepare yourself. With the music gone, being ambushed can become truly a surprise…

The first time this happened to me, it was indeed a surprise - but it was also less tense than it would have been with the addition of music. I was unconsciously waiting for it to ramp up and provide me with an emotional guide as to how I should be acting and responding. Lacking this musical signpost, I spent a lot of the early time in an emotional plateau – never feeling particularly bothered by the enemy soldiers.

I became more brazen – I found myself charging through the middle of a series of checkpoints in pursuit of a weapons convoy and was hit by a sudden fear. I was apprehended by a feeling that I had dangerously over-stretched myself and that I could be only seconds from death.

I came away from the encounter unscathed, but the tempo of the Far Cry 2 experience had changed. I started ramping my concentration levels up and down much faster – snapping from the baseline ‘pseudo-boredom’ of driving through beautiful scenery to the full-on attention demanded by the effort of staying alive. Or perhaps that was what I was already doing when I had the music on, and it just became more transparent without it.

I suspect not – for without music to tell me that the fight wasn’t over, I had to judge for myself, and it subsequently felt like I entered a great many more discrete conflicts. In one instance I had just finished clearing a camp of soldiers and was searching around for some diamonds when more turned up and I had to break off the search, bare seconds passing between the first and second encounter. If the music were there, it might well have stayed at the same level across the whole period, because it often takes a reasonable amount of time to ‘calm down’.

Whatever the case may be, I’m still convinced that the people responsible for the music of Far Cry 2 have done a singularly excellent job in creating a score, and implementing a system, that has held up for a hundred hours or more, in my case. If I keep it off for a little bit longer I hope Jeff Wesevich won’t take it personally.

Monday, 18 May 2009

Audio Vision, Part 1

A big part of my research thesis – the core conceit, if you will – was equal parts a belief in the power of the relationship between audio and visuals, and a belief that videogames present a new direction in which that relationship can be explored.

Images and music (even just sound generally) have a particular way of working together – when combined in certain ways they inform and affect each other, changing the meaning we derive from them. Viewed together, they have the potential to mean something different to you than either would if examined independently. Think of any popular song you have seen a music video for – after viewing the visuals created to accompany that song, you may have trouble thinking of the song without taking into account what those visuals add to the music.

In any serious theory-based discussion of how and why music and visuals go together, and of the effects the combination has, one quickly runs into audiovisual theorist Michel Chion, as he had quite a bit to say about the subject. Chion focused specifically on the music of cinema, and on how music and visuals inform each other in that medium. He notes that, because of the inherent difficulty in pinning down descriptions and categories of sounds,

“…there remains the risk of seeing the audio-visual relationship as a repertoire of illusions, even tricks – all the more contemptible for being so. Audiovisual analysis does not involve clear entities or essences like the shot, but only “effects”, something considerably less noble.”

Does that mean, then, that we should give up trying to better our understanding and ability to control these audio-visual relationships? Certainly not, and Chion advocates a number of activities we can do to improve our understanding.

He recommends one activity in particular which he says reveals the power of something he calls “forced marriage” – one aspect of potential audio-visual relationships. Forced marriage demonstrates that even music composed with no regard for the visual elements can present a synchronisation with the image onscreen. As we shall see, in some cases no effort even needs to be made to match the audio to the visuals; they can just ‘work’, almost as if it were intended.

He describes an experiment we can do to look at the effect of “forced marriage”, saying…

“Take a sequence of film and also gather together a selection of diverse kinds of music that will serve as accompaniment. Taking care to cut out the original sound (which your participants must not hear at first or know from prior experience) show them the sequence several times, accompanied by these various musical pieces played over the images in an aleatory [random] manner. Success assured: in ten or so versions there will always be a few that create amazing points of synchronisations and moving or comical juxtapositions, which always come as a surprise.”

So that is exactly what I have done in the following seven videos. I have removed the music from a piece of footage that (hopefully) you will never have seen before (or will have forgotten the specifics of if you have) and overlaid some other, randomly chosen pieces of music over it. The footage is taken from a pre-launch trailer for the videogame Army of Two, and it is arranged in a playlist with all seven ‘forced marriage’ examples, A to G, followed by the original. Each video is approximately one minute in length and the only thing that differs between them is the sound.

A video playlist with all seven videos plus the original at the end.

Chion designed this experiment to highlight how, even accidentally, music and visuals can be made to work together. He says,

“Changing music over the same image drastically illustrates the phenomena of added value, synchresis, sound-image association and so forth. By observing the kinds of music the image “resists” and the kinds of music cues it yields to, we begin to see the image in all its potential signification and expression…”

He says that the effect of revealing the original video’s sound after all the other alternatives,

“…never fails to be staggering. Whatever it is, no one would have ever imagined it that way beforehand; we conceived of it differently, and we always discover some sound element that never would have occurred to us. For a few seconds, then, we become aware of the fundamental strangeness of the audio-visual relationship.”

What did you notice about the sound and music in the videos? Was the "real" music and sound surprising? Did you notice "the fundamental strangeness" of the relationship between sound and image? Even having heard the final clip several times, after watching all the videos again followed by the original, I can't help but feel certain things - particularly some of the weapon sounds - sounded strange or mismatched.

Understanding and utilising audio-visual relationships is particularly applicable to games since, as Marty O'Donnell noted in the very first part of my interview with him, there is no "real" sound to go with a particular image - it's all virtual. It is entirely created, designed, sculpted, engineered and controlled.

I hope that I’ve gone some way towards convincing you that audio-visual relationships are worth examining. As we have seen, even accidental synchronisation can prove startlingly interesting. In the second part of this series I will examine the issue further by asking the question 'What would be the result of a more deliberate synchronisation of sound and image?' and hopefully have some interesting points to make.

In the meantime - I would like to encourage you to discuss the different effects you felt particular pieces of music had on the video, as well as any salient points of synchronisation you noticed. I'd love to have a conversation about which pieces of music people felt were "better" and which ones they felt were "resisted" by the images, as well as why.

Friday, 8 May 2009

Stay a while and listen

One of the ways that Valve tells a story in their games is by scattering little ‘micro stories’ around the game world, often in slightly out-of-the-way places. If you’ve played any of their games you’ll know what I’m talking about – the graffiti on the walls in Left 4 Dead hinting at survivors who have gone before is a great example. In Half-Life 2 there is a small corner in the level “Water Hazard” that I wish to highlight as an excellent example of using sound in a particularly videogame way.

This place, which has no name as far as I know, occurs a few hours into the game and appears at first glance to have once been another outpost on the Underground Railroad. Looking every bit an overnight camp, it holds a few scattered supply boxes, a mattress, resistance graffiti and a clutter of other detritus: obvious evidence of human habitation. There is no-one there when you encounter it, but a curious feature remains, giving the location a unique feel.

When I encountered it the other day, I greedily lapped up the atmosphere. I actually sat and listened for a number of minutes, becoming utterly absorbed in the sound of the place – and more than that, in the sense of place that the unique sonic signature gave to this tiny corner of City 17.

There are a few things I want to note about the sound in this scene. Most obviously, anyone who comes across this place in the game will probably be struck first by the ‘wind chimes’. They stand out from the sound of the rest of the game because of their volume in the mix and their distinct timbre. The sound contains a variety of both pitched and unpitched elements that work to blur the line between music and sound effects. Obviously, a sound doesn’t have to be pitched to be ‘musical’ or to have beauty – the timbre of a sound plays as much a part in our response as does its fundamental frequency.

Another feature that adds beauty and a unique sonic signature to this place is the background sound of a constant strong wind. What we call the sound of the wind doesn’t really have a fundamental note that we can identify as a pitch, as it is really a complex combination of sounds. Perhaps most importantly, wind doesn’t even have a sound until it blows past an object, and so what we associate with a “wind” sound is actually a series of sounds that change and fluctuate in unison in accordance with the wind speed. However, even this ‘sound’ of the wind blowing has a musicality to it – that is, it has musical properties that I find incredibly attractive. In this corner of Half-Life 2, it fluctuates seemingly at random, increasing and decreasing in volume and pitch as the speed of the wind changes. There is, of course, no real wind in the game, but the sound is so convincing a recording that I half-expected to feel the breeze on my face as I stood listening to it.
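As an aside, that gusting quality is straightforward to approximate in synthesis: the classic trick is noise whose amplitude is modulated by a slowly drifting ‘gust’ value. Here is a toy sketch of the general idea (my own, and nothing to do with how Valve actually produced the sound):

```python
import random

def wind_sound(num_samples, seed=1):
    """Toy wind synthesis: white noise whose loudness drifts slowly,
    mimicking the way wind sounds swell and die with gusts.
    (A sketch of the general technique only.)"""
    rng = random.Random(seed)
    gust = 0.5                                   # slowly varying 'wind speed'
    samples = []
    for _ in range(num_samples):
        gust += rng.uniform(-0.005, 0.005)       # random-walk the gust level
        gust = min(1.0, max(0.0, gust))          # keep it in [0, 1]
        samples.append(gust * rng.uniform(-1.0, 1.0))  # modulated noise
    return samples

s = wind_sound(44100)  # roughly one second of samples at 44.1 kHz
```

Because the gust level drifts rather than jumps, the result fluctuates "in unison" the way a real wind recording does: everything gets louder and softer together.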

The ‘aural scene’ of the location is reinforced by the visual; that is, the sounds make sense with what you see. The windmill above the player reinforces the notion that there is a strong wind blowing. The audible and the visual combine to reinforce a single impression, and it’s all the more powerful and affecting for it.

While the wind chimes arguably become the aural focus of the place because of their prominence in the mix (i.e. they are quite loud compared to the other sounds), they have no visible propagating source. This is an important point: sounds don’t have to have a visible source, and leaving their location hidden, or up to player interpretation, is a completely valid artistic technique. Cinema has used this trick for years, often using a sound to tell the viewer about something happening off-screen. With games, however, a developer can rarely tell where a player is looking – what is “off screen” at any one time is up to the player – making it difficult to use “off screen sound” in a deliberately artistic way.

What often happens is that the game will spawn an object behind the player or in a location above and out of view of the player and attach a sound to it to attract the player’s attention. For example, in Half-Life 2 there are many occasions in the area around this location where soldiers spawn on bridges above the player.
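A minimal sketch of that spawning trick might look like the following. The entity structure, coordinates and sound name here are entirely my own invention for illustration, not anything from the Source engine:

```python
import math

def spawn_audio_lure(player_pos, player_yaw_deg, distance=15.0):
    """Place a sound emitter behind the player: outside their current
    view, so the attached sound reaches them before the visual does."""
    # Angle pointing directly away from where the player is facing.
    behind = math.radians(player_yaw_deg + 180.0)
    x = player_pos[0] + distance * math.cos(behind)
    y = player_pos[1] + distance * math.sin(behind)
    # A hypothetical emitter entity with an attached attention-grabbing sound.
    return {"pos": (x, y), "sound": "soldier_radio_chatter"}

# A player at the origin facing 'north' (+y) gets an emitter ~15 units behind them.
lure = spawn_audio_lure(player_pos=(0.0, 0.0), player_yaw_deg=90.0)
```

The point of the sketch is only that the emitter's position is computed from where the player is looking, which is exactly the information a film director never has to negotiate with.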

You might argue that the sound of a gun cocking behind you in a game happens “off screen”, which is true, but strictly speaking, from the point of view of the game engine that place in game exists whether the player is looking at it or not. I’m not entirely sure about this analogy, but to me the equivalent of actual “off screen” in a game would be a sound coming from a place that doesn’t even exist in engine. That is kind of what is happening here, with the sound of the wind chimes and the wind coming from an invisible, virtual location.

The end result of all of the above points is that, while technically being sound effects, the wind chimes and the wind take on a role similar to that of music – giving the location a very real and specific atmosphere. This blurring of the lines between pure sound and music is something that I am very interested in, and was a core finding of my research thesis last year. It seems to me to be a somewhat unexplored area, ripe with potential.

I hope you’ve enjoyed taking a moment to listen to the sound in the one corner of Half-Life 2 as much as I have.

Friday, 27 March 2009

The Audio of Halo Wars – A victim of unambitious design


A couple of weeks before its official release I got the chance to visit Microsoft HQ in Sydney and play a preview build of Halo Wars, the Xbox RTS from the now-defunct Ensemble Studios. I wrote up a little of what I thought for Kotaku Australia as part of the competition I won to go and play it. You can read what I said via the link, but essentially what I most wanted to express was that this is a game that knows what it wants to be – and I’m going to tell you that’s not exactly a good thing.

It’s the Halo universe, right down to the splash of the plasma weapons. Edge Magazine noted in a review that appeared online briefly that the game is unashamedly a console one; to paraphrase the reviewer, “Halo Wars stares you down and says ‘This here’s console country, we do things a little differently’.” It’s strange, because I get the feeling most games fail because of the opposite problem – grafting in ancillary ideas from other games and genres in an effort to siphon off their success. In development circles it’s known as ‘feature creep’.

So Halo Wars knows what it wants to be, and we the audience already know what to expect. Kieron Gillen, in his review for Eurogamer, said that,

The biggest strength… is the fact that most people understand the Halo universe. It's not just the geek thrill of seeing a Scarab in action - it's that you understand what the Scarab means on the battlefield (trouble). We know which characters are best against tanks, and which are probably best in special vehicles.

Given that foreknowledge, Ensemble is (was?) hardly the studio to farm out a cheap sequel to – so where did they carve out a creative space of their own? Where did they leave their mark on the Halo universe? Unless you count inventing some new units, basically nowhere – and unless there’s another story behind the scenes, they seemed perfectly happy to do that. But, as we saw with the discussion surrounding FEAR 2, is “good enough” really good enough anymore?

Halo Wars is a nostalgia fest, make no mistake, and it would be easy to cynically view it as an attempt by Microsoft to cash in on one of their more successful pieces of videogame intellectual property. I’m giving them the benefit of the doubt, however, and I’m going to suggest that it was probably only partly an exercise in ensuring gamers maintain some interest in the Halo franchise. Ensemble’s efforts at channelling the Bungie vibe seem to fall well short of the mark, failing to spin as convincing and exciting a yarn as their predecessors (caveat: I haven’t finished the game yet). Ironically, one of the points Marty O’Donnell made in my interview with him was that some people are just better storytellers than others, and whatever else you can fault them for, Bungie do often manage to tell a good story.

So Halo Wars rests in that uncomfortable half-way place between a creative and original expansion of the universe and outsourced publisher cash-in. I admit it would be self-aggrandizing to try and account for this by blaming it on the misappropriation of ‘classic’ Halo sounds in the game, but at the very least it is a symptom of the larger problem at hand.

Indeed, we saw it happening from the first trailer which was itself a studied exercise in comforting the confused and afraid fans. Halo is changing?! But I like Halo the way it is! Change is scary and confusing. Did the marketing executives focus test that first trailer to see whether that perception was actually correct? It is obvious that is what they were thinking, even electing to preserve as much of the ‘classic’ Halo as they could.

The look of Halo was obviously always going to change – that much was evident from both the first trailer and the fact that the game itself would change into an RTS. But the sounds, ‘Oh no’, the executives said with a million dollar smile, ‘The classic sounds we can preserve!’ In actuality, it was probably closer to someone saying ‘Hey, can we get the Halo plasma rifle sounds for our trailer? Sweet!’ and then they never considered changing out these place holders.

In my thesis, which examined the sound and music of Halo 2, one of the main discoveries was an identifiable link between the music and the sound effects – a level of ‘musicality’ is imparted to the sound effects via common timbres and pitches. Many of the Covenant weapon sounds were in fact produced using similar synthetic techniques – even the same instruments – as the music tracks that ended up serving as musical themes for the Covenant. These same sound effects have now been re-used in Halo Wars in much the same way that ‘stock’ library sounds would be – and the result is an unnatural grafting, along with the loss of their musical context.

‘Stock’ audio from bought libraries of recordings is often used in films (and games) in order to keep down recording costs and, occasionally, to artistically draw upon their sonic history. The ‘Wilhelm Scream’ doesn’t just sound like a scream for western listeners, if you’ve had even a moderate exposure to late 20th Century films, you will have heard it enough times that it has become a scream. We hear the Wilhelm scream and know instantly what it means because we have heard it in context before. The sound itself has a history – we’ve heard it in Star Wars, we’ve heard it in Indiana Jones, and we’ve heard it in a million other movies since.

But stock sounds, like the Wilhelm scream, are designed to be memorable on their own, whereas the Halo sounds are made memorable by their connection to music and ‘level ambience’. The blurring of the lines between diegetic and non-diegetic sound and music in Halo 2 is a powerful technique that, in my mind, makes the sonic material all the more powerful. (See Chapter 4.4 of my thesis for a much more detailed discussion and an explanation of why it is such an effective technique.)

Halo Wars re-uses weapon sound effects from earlier Halo games with the aim of capitalising on our history and experiences with them. We are encouraged to think back to those halcyon days of Halo 3 and earlier, remembering our positive experiences with them. While it is a problem that these sounds don’t have nearly as much history as sounds such as the Wilhelm scream, lessening their historical impact, the more serious problem is that these sounds weren’t designed to exist without their musical and level-based context. Halo Wars carries the legacy sound effects of the Halo universe while unceremoniously dumping their musical counterparts. In losing their musical context, they lose much of their attractiveness.

The argument for sonic ‘continuity’ across the Halo series is largely redundant – with every Bungie-made Halo game, everything changes. From weapon and character designs to the visuals and the sounds, all are subtly changed and re-evaluated. Even the way that sounds are mixed together live (what Marty says is actually half of what makes good game sound) is constantly being refined, polished and improved. Bungie knows the value in re-imagining, redesigning and reconceptualising – even if we occasionally prefer aspects of the old to the new.

In the end, Ensemble opted for safety in their final game – and honestly, it’s a struggle to blame them. Halo Wars is still impressively polished, and it’s still Halo… but it’s not really Ensemble asking ‘If we did Halo, what would we do differently?’ Instead it’s Ensemble kowtowing to the marketers, and more’s the pity. The sound is really just a symptom of the wider problem – a disappointing lack of imagination.

Tuesday, 27 January 2009

Marty O'Donnell in Interview - Part 6


Sometimes the most valuable lessons are learnt from the things that just don’t work as expected. In this, part 6 of the seemingly never-ending saga that is my interview with Bungie’s Marty O’Donnell, he talks about the ‘misses’ in the music of Halo 2; working with music from outside Bungie, in particular the rock band Incubus; some of the reactions to the music; and whether he likes to use musical ‘leitmotif’.


Ben: So can I change gears for a second and ask you if there were any areas in the music for Halo that you weren’t happy with? I think you mentioned in one interview that you probably didn’t quite get the level of polish for the mix in Halo 2 that you wanted. Was there anything that, if you had another week or month, you’d change?


Marty: The mix is probably the biggest thing. It’s really hard because you never know how somebody’s going to play. And the mix isn’t something you set; it’s that you allow the game to mix based upon what’s happening. So, if dialogue happens to come in, then whatever music is playing will be ducked 6 dB, or whatever explosion is happening gets ducked… and playing around with all of those, attenuation and EQ and all the rest of it that we can do in real time, and actually really testing it all out in as many ways as possible, is just tediously hard to do, and we just never get enough time to do that… yeah, the mix is my biggest complaint.

One of my other complaints would probably be… just the amount of music that I would be able to compose after a level is completed and designed. I would rather be able to compose more music in response to what I’m playing rather than having to do it prior. My ideal would be that all the music is composed for what I’m seeing and I never use another piece twice. Especially in Halo 2, I think every piece of music probably is repeated somewhere in the game. It’s not going to be repeated verbatim, because the system won’t let it, but it’s still the same piece. That is simply because we only have so much time and room to shove things on the disc. I would love to have it be more original from beginning to end… I still think themes should return and be developed, but I would rather have them truly developed and new than repeated.

And then the other thing for Halo 2, I experimented with using some music from some outside bands. I think that it’s a sort of mixed bag – stylistically it wasn’t as successful as I would have hoped.


B: I quite liked them, especially Incubus’s instrumental ‘The Odyssey’ pieces.


M: Oh good, well I liked that one too, but it was tough. There were a couple of other things I wasn’t as happy with, and I actually wish I’d had more time to play with the Incubus piece and work it in better than where I ended up using it.


B: So that wasn’t where you would have ideally put it in the game?


M: No, it was OK, it’s just… in the long run, that particular level was a flying level and it was the only place I used it, and I think it was a little bit soft and it didn’t quite do everything I was hoping it would do for that level.

And then I’ve had enough push-back from other people who were like “What the heck was that doing in there, that stuck out like a sore thumb!”


B: *laughs* Really? Wow.


M: Yeah, I’ve had people say that. But, y’know a lot of people, other people really liked it, and I liked it too, but there were some other things scattered throughout the game that I wasn’t necessarily as happy with.

And there’s always little choices that I’ve made, like here’s a piece I wrote and put in this section and think it’s great, then a month later after we ship the game I listen to it, ‘Wow, what was I thinking that really didn’t work’. That’s always going to happen.


B: On that development of themes idea… do you deliberately aim for that Leitmotif style?


M: Yeah, I’ve had people say ‘Hey, here are the 5 main characters and here are the 6 locations, and we think it would be great to have a theme for each location and a theme for each character’, and I basically say, that’s not me, that’s not the composer that I want to be. I’m not writing Peter and the Wolf, as much as I love Peter and the Wolf… I’m not Wagner, but I think Wagner did some things that were kinda cool…

So it’s not like ‘Here’s a theme and it comes in every time this guy comes [on the screen]’. Sometimes by having certain themes that are kind of associated with some characters or even the emotion a character might feel, what happens is… this is just me, my personal opinion… if you say here’s the heroic Master Chief theme and every time Master Chief does something you play this theme, it’s like, well, how does that work when Master Chief is the player, and the player is really freaked out or scared or something – that theme just doesn’t work. I like ‘emotional leitmotifs’, so to speak. So it’s like, ‘Here’s the love theme, here’s the heroically successful theme, here’s the scary theme’…and then it doesn’t matter who the characters are if they’re going through that emotion I can bring that in.

These things tend to develop almost on their own – you realize… this music really worked well over here and this is the emotion that was happening, so I can do that again over here because that same kind of emotion is happening.

It seems like it ties together, and it seems like a thematic thing but it’s almost like, ‘Wow, isn’t that great that that works?’ Anyway that’s my approach if that makes sense.


In part 7, the final installment in the series, Marty talks a bit more about his ‘emotional leitmotif’ approach and the effect of instrumentation on his music, and finally he answers the question ‘What gets you out of bed in the morning?’

Saturday, 20 December 2008

Audiosurf - Breakfast of Champions

I’ve been playing Audiosurf recently, and it struck me that buried deep within my thesis was a nice little bit of theorising about the game. So I've chosen to reprint it here, slightly edited, for the convenience of anyone who can’t be arsed to wade through my multiple thousands-of-words thesis and pick out the good bits (probably most people).

____________________________________________________________________


Audiosurf was primarily the work of one person, Dylan Fitterer, and was released on the Steam digital distribution platform in February 2008. Audiosurf requires music to play – it takes your music collection and creates a 3D track based upon features of the music, which is then navigated by the player who, depending on the game mode, collects coloured blocks that visually correspond to the music. The game ostensibly provides a way to ‘ride your music’, as the game’s tag-line suggests[1] – a feat of musical gameplay operating on a rather different level to a game like Guitar Hero. It’s also a great step towards overcoming some of the widely acknowledged problems with games like Guitar Hero – many critics have noted that the strength of a music game is largely subject to how good its track listing is[2]. Alec Meer says,

…we were all playing Guitar Hero and wishing we could stick our favourite music into it. Audiosurf says “fuck it, why not?” and provides the scaffolding of a game around it[3]


Audiosurf’s particular implementation of representing and performing music in a game does however come with a number of its own disadvantages. Firstly, the way the three dimensional track is generated by the program is fixed and determined by a set algorithm[4]. In an interview with Ars Technica, the developer Dylan Fitterer commented on the way that the algorithm turns the song into a three dimensional track, saying;

…when the music is at its most intense, that's when you're on a really steep downward slope, like you're flying down a rollercoaster in a tunnel. When the music is calmer, that's when you're chugging your way up the hill, watching that peak in the distance you're going to reach.[5]




The experience of playing the game itself is where I personally find the major innovations of Audiosurf as well as its major problems. When surfing a song the game’s analysis algorithm has pre-determined the majority of the course’s parameters from the musical elements contained within the recording. Some aspects of the course are determined from relatively transparent musical parameters – the track’s length corresponds directly to the length of the song and the contours of the course are derived from reasonably straightforward aspects such as volume and dynamics. In music with a strong steady beat, the track will often appear to undulate along beneath the player’s ship character in time with the rhythm of the song. The comprehensible translation of the music into visuals, or lack thereof, is where I encounter the main problem of Audiosurf.
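Nothing about the real analysis algorithm is public, but the contour mapping described above – louder passages become downhill runs, quieter ones become climbs – can be sketched in miniature. This is my own toy illustration, with a windowed RMS standing in for ‘intensity’:

```python
def rms_envelope(samples, window=4):
    """Windowed RMS volume: a crude stand-in for musical 'intensity'."""
    env = []
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        env.append((sum(s * s for s in chunk) / len(chunk)) ** 0.5)
    return env

def track_slopes(envelope):
    """Louder than average = steeper descent, quieter = uphill,
    per Fitterer's rollercoaster description in the interview."""
    mean = sum(envelope) / len(envelope)
    return [-(e - mean) for e in envelope]

# A quiet window followed by a loud one: the track climbs, then dives.
slopes = track_slopes(rms_envelope([0.1, 0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 0.9]))
```

Even this crude version shows why the macro-scale mapping reads so transparently to players: slope tracks loudness directly, with no interpretive step in between.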


In the examples outlined above, the relationship between the music and the visuals (the track environment) is clear and direct, making sense to the player and allowing for a pleasurable and organic merging of knowledge of the song with knowledge of the corresponding Audiosurf track. This is a significant aspect of the appeal of the game, as much community discussion goes on about the suitability of tracks for surfing[6]. Indeed, the process works effectively at the macro, structural scale; however, a core component of Audiosurf is a ‘match 3’ type block-collection game, where the block placement – called ‘traffic’ by the game – is generated from the rather more musically ambiguous parameter of “volume spikes”. The developer, Dylan Fitterer, describes the process, saying

…whenever there's a spike in the music, the intensity of that spike determines the block's color. So the most distinct spikes, like a snare drum, that would tend to be a red block, a really hot block. If something is a little more subtle, like a quiet high hat, that would be a purple block, which is worth less points.[7]
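As a purely hypothetical sketch of the spike-to-colour idea Fitterer describes – stronger spikes map to ‘hotter’, higher-scoring blocks – the logic might look like the following. The thresholds and colour tiers are my own invention, not Audiosurf’s:

```python
# Hypothetical sketch of Fitterer's spike-to-color mapping.
# Thresholds and color names are invented for illustration; the real game's
# values are not public.

def block_color(spike_intensity):
    """Map a normalised volume-spike intensity (0.0-1.0) to a block color."""
    if spike_intensity > 0.75:
        return "red"      # distinct spike, e.g. a snare hit
    elif spike_intensity > 0.5:
        return "yellow"
    elif spike_intensity > 0.25:
        return "blue"
    return "purple"       # subtle spike, e.g. a quiet hi-hat

print(block_color(0.9))   # a snare-like spike -> red
print(block_color(0.1))   # a quiet hi-hat -> purple
```

Note what the sketch takes as input: a single intensity number. Any two sounds with the same loudness are indistinguishable to it, which is precisely the limitation discussed below.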

This kind of relationship between music and visuals or environment becomes, musically at least, increasingly murky at this micro level, as a sheer ‘spike’ in volume is no guarantee that a listener would make the corresponding connection to what they are hearing. Indeed, the question of what a listener actually perceives when listening to a song is far more complicated. Albert S. Bregman, author of the comprehensive text ‘Auditory Scene Analysis: The Perceptual Organisation of Sound’, coined the term “stream” for an auditory cognitive process that lacked adequate terminology. Bregman’s research noted a significant distinction between the cognitive grouping of sounds that ‘go together’[8] and what might be distinguished as pure ‘sounds’. He notes that ‘A series of footsteps, for instance, can form a single experienced event, despite the fact that each footstep is a separate sound.’ He also makes a musical comparison:

A soprano singing with a piano accompaniment is also heard as a coherent happening, despite being composed of distinct sounds (notes). Furthermore, the singer and piano together form a perceptual entity – the “performance” – that is distinct from other sounds that are occurring.[9]

Kieron Gillen, writing for Rock, Paper, Shotgun, says that

The problem with Audiosurf is that the concentration you take to really make the block game work is entirely the opposite of what you need to do to feel the music. The two parts of the game can tug at each other a little...On one hand, a zone game. On the other, a high-speed sorting puzzle.[10]

What I believe Gillen has identified here is the inherent disjunction between what the musical listener focuses on when listening to the song and what the game makes the player focus on. I suggest that this phenomenon is somewhat analogous to Ian Bogost’s term ‘simulation fever’. The concentration Gillen identifies as necessary for successful play means that the player is acutely aware of block placement, which is largely determined by the volume spikes mentioned earlier.


I would argue that simply focussing on volume spikes is not adequately representative of the music to withstand the scrutiny that a player applies to it. I propose that, in a situation of high concentration on music, a more complex system is needed – one which addresses how a listener actually perceives a song. Admittedly, this is a daunting prospect, and one inevitably encounters apparently insurmountable barriers to rendering onscreen what any one person is most likely to concentrate on within a song at any one time, needing as it would to take into account personal differences and background as well as individual musical training. However, the fact that this process is undertaken by human listeners themselves leads me to believe that a more accurate model is possible. When listening we can (and do) lock onto particular elements of a song – the melody, a catchy lead rhythm or hook – and these are not always represented visually on screen. While Audiosurf often wonderfully represents the underlying kick-drum rhythm, especially if it is prominent, it will rarely single out an element like the aforementioned melody or hook unless it stands out in a particular way – namely, through sheer volume.


Guitar Hero, in contrast, sidesteps some of these problems through both its position as a guitar game (the player’s concentration is largely focussed on the guitar) and by having a human pre-define the on-screen actions the player must undertake to ‘perform’ the song. However, it does not yet allow for any meaningful input of a player’s own music library, and for that I am continually thankful for Audiosurf’s existence – imperfect though it may be.

____________________________________________________________________

[1] Wikipedia contributors, "Audiosurf," Wikipedia, The Free Encyclopedia, http://en.wikipedia.org/w/index.php?title=Audiosurf&oldid=241996378, accessed October 7, 2008.

[2] See for example, Mitch Krpata, ‘Rock Band 2: Why now?’, Insult Swordfighting, http://insultswordfighting.blogspot.com/2008/07/rock-band-2-why-now.html, accessed October 7th, 2008.

[3] Alec Meer in ‘The RPS Verdict: Audiosurf’, Rock, Paper, Shotgun, http://www.rockpapershotgun.com/2008/03/03/the-rps-verdict-audiosurf/, accessed March 3, 2008.

[4] Thomas Wilburn, ‘Catching Waveforms: Audiosurf Creator Dylan Fitterer speaks’, Ars Technica, http://arstechnica.com/journals/thumbs.ars/2008/03/11/catching-waveforms-audiosurf-creator-dylan-speaks, accessed

[5] Dylan Fitterer in Thomas Wilburn, ‘Catching Waveforms: Audiosurf Creator Dylan Fitterer speaks’.

[6] See the comments section of any Rock, Paper, Shotgun Post tagged ‘Audiosurf’ – every single one involves readers suggesting songs that others should try: http://www.rockpapershotgun.com/tag/audiosurf/

[7] Dylan Fitterer in Wilburn, ‘Catching Waveforms: Audiosurf Creator Dylan Fitterer speaks’.

[8] Albert S Bregman, Auditory Scene Analysis : The Perceptual Organization of Sound, 2nd MIT Press paperback ed., Cambridge, Mass.: MIT Press, 1999, p.9

[9] Ibid, p.10

[10] Kieron Gillen in ‘The RPS Verdict: Audiosurf’, Rock, Paper, Shotgun, http://www.rockpapershotgun.com/2008/03/03/the-rps-verdict-audiosurf/, accessed March 3, 2008.

Tuesday, 16 December 2008

Marty O'Donnell in Interview - Part 1


Marty O’Donnell is a man on the bleeding edge, responsible for the music and audio vision of the stratospherically successful Halo trilogy from Bungie Studios. I wanted to find out what this leader in the field of music and sound for games had to say about the subject, and to get some input for my then-unfinished thesis. Around August of this year I embarked on a campaign of multiple emails to multiple addresses to somehow get in contact with Marty for an interview. After what seemed like a longer period than it probably was, in October – a scant week before I had to hand in my thesis – I finally spoke to the man himself by phone. Even though I approached him essentially out of the blue, Marty had no qualms about chewing my ear off for close to 90 minutes, to my delight of course, and it is clear that music for videogames is a subject about which he is fiercely passionate.


In this, the first of a series that will include nearly the full transcript of our interview, I ask Marty about a few things related to my thesis: namely, what I identify as the inherent musicality of the sound effects of Halo 2, and whether his unique role as audio director enabled that process to occur. Marty also elaborates on his personal philosophy for music in games.


Ben: I’m doing a music degree, in my 4th year (honours), and writing a thesis that combines my love of music with my passion for videogames. So I wanted to look into what makes videogames unique and specifically the new ways that videogames can use music.

So, coming to the music of Halo, I started with this hunch that Halo did a bunch of interesting things with music, but I had no idea what it was… And while doing my analysis I came to realize that if I think about the music of Halo 2 as broader than just the composed music… I started noticing all this musicality within the ambient sound and the composed sound effects. That relationship between the music and sound effects is really interesting. It seems to me like there’s so much cross-pollination between the music and the sound effects. The first thing that came to mind, or that inspired me to think about this, was the covenant carbine and its scope zoom. It’s got this sort of low, synthy, resonant note. And I thought, hang on, I can kinda pitch that, and it sounded like an interval of a perfect fourth going in and out. So do you think this is an accurate kind of assessment? Can you see the musicality in the sound effects?


Marty: The most general thing that we do is we try to make the basically human vehicles and weapons sounds actually sound close to what real world sounds might be and we tend to take the alien sounds and try to give them a little more… I guess you could say they were a bit more synthetic. We used more of our musical instruments to create the sounds of the aliens so that… that whole suite of sounds sits in a different place from the human sounds.


B: I think that’s really, really cool. So you just mentioned that was intentional, do you think that was helped by the fact that you were a composer and musician yourself, as well as the head guy in charge of sound? Do you think that would have happened if it was just some sound engineer in charge of the sound effects?


M: That’s an interesting question. I had a long career in music and sound design for movies and television & commercials so I was used to dealing with all those areas but when I was able to get into the game business it was much more wide open. I could be an audio director, which means I was able to say, ‘look, there should be nothing that ever comes out of speakers in these games that I didn’t approve or create’… that’s something you don’t necessarily get to do when you’re part of a team making a film or a commercial or a TV show or something… you’re a composer or sound designer or you’re a sound effects editor or a re-recording mixer – you have a lot of different roles and sometimes there’s no singular audio vision for the whole project and I was always hoping that was something I could do, and games gave me the opportunity to do that.


B: Is that a unique thing to games?


M: If you look for the term audio director in any movie you won’t find it… so it’s unique to games. And I am happy to say, I am probably a pioneer in that area.


B: I’d definitely agree with that assessment, yeah.


M: I mean I actually sorta insisted. I kinda made up the term and said ‘No I’m audio director’ and I told those young guys ‘here’s what that means: It means anything to do with audio comes through me’.


B: Sounds like a good approach, it’s definitely working.


M: Well thanks. *laughs*


B: If you were, say, trying to make a game like the Call of Duty games where they’re aiming for realism and that strict attention to detail do you think that you would be a bit more constricted?


M: In videogames and especially in the Halo audio engine…content is probably only half of what is important; the other half is how all the music and sound effects are manipulated in real time. So you can record the sound of a realistic engine but that doesn’t help make it seem real when someone’s actually driving it in a virtual world. You have to have a lot of different parameters you control to make it feel like the engine, or the sound of the suspension on the warthog, or the tyres or the gravel getting kicked up… all of those things can’t just be pre-rendered. You want to have good content to begin with but it has to be manipulated in real time. We’re using all sorts of real time parameter controls and digital signal processing that is controlled in real time to make it feel that it’s actually happening.

You have a little more latitude with weapons and vehicles that are alien because you’re not starting from something that you’re trying to recreate… like, the way a jeep sounds when it’s driving over sand. …The way an alien hovercraft sounds when it speeds past you – no one knows what that sounds like, you can make it up as you go along. And you can probably be a little more crazy with how the real time parameter controls actually manipulate the audio when it’s an alien weapon, or vehicle or sound effect. You can be a little more adventurous.


B: I found it really interesting reading a piece by Jim Rossignol that recently got reprinted on Rock, Paper, Shotgun. In it he mentioned the Multiwinia developers Introversion, and how they were working on some animations that at first seemed somewhat so-so, but when they put sounds to them, it made the actual animations look better as well.


M: Right. And that’s one of the other things that’s actually interesting about games, and for the most part especially the Halo games, everything is virtual so we have no source to begin with… there’s not live action that we shot [to] listen to what we recorded and implement, everything is virtual. Character models, weapons, vehicles and you name it are all completely virtual so they have absolutely no sound. It’s like animation, there’s no sound at all, so in order to bring these things to life we have to do it in real-time.


B: There’s a guy who wrote a book called The Acoustic Ecology of the First Person Shooter, and he talks about how sound effects and the in game sounds that the player hears form an ecology, but he kind of writes off music from that relationship. Do you think the music, the linear compositions of music, can contribute to the ecology of sounds?


M: It’s probably somewhat difficult to disassemble [the music of] Halo just by playing it, but if you play sections over and over again… I would suggest actually recording the music you hear when you’re playing, and then comparing the performances of the music back to each other, especially the in-game music. The reason I would suggest doing that is… I think you’ll be surprised at how much non-linear music is actually happening in Halo 2.


B: I guess the impression that you get if you just go straight through and listen to all the CDs is, “Oh yeah, it’s that song, it’s that song”, and it’s sorta hard to pick out where it does change more than you think.


M: Just so you know, that’s actually my ideal – my own philosophy of how music works in games. I want people to play through a level or a game or a section or whatever and actually think that somehow the music just happened to be scored for their experience. And what they had was a linear experience, because that is all you have – when you play a game you are having a linear experience; if you play for 10 minutes you have a 10-minute experience. But if you play those 10 minutes over again, believe it or not, the music will not play back the same way.

And that is all on purpose…I don’t want people to be aware that the music is actually adapting or interacting with their interaction, I want people to think that it’s just a linear piece of music that seemed to somehow fit what they were doing and they had a good experience. If they are aware that they are changing the music then I think I’ve failed.


B: Right. So you want to avoid what happens in some games, where you know you’re being attacked because the battle music has started.


M: Yeah, right. But it’s not just that… even when a piece of music starts, there are sections – the linear pieces of music that I basically reassemble in order to make a CD soundtrack out of them – but these things are not just disassembled into stems… you know, you have a rhythm track or a pad and these things cross-fade or whatever. I actually have different sections of music that randomly fit together with each other and give you a different linear experience based on some random chance elements that happen. And I can weight the chance so that section D has only a 10% chance of playing. So you have at least a random playback of what is happening; then there will be more intense sections, or less intense sections, or things that overlay that only happen because of actions you are taking. If you decide to speed through an area you’ll get a completely different piece of music that basically feels linear from beginning to end, but if you linger and do something different you’ll have a completely different musical experience. That’s actually something that I think is fun to do – to try to find those sections, and try to analyse how many different things you are hearing that are changing up.
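For the programmers reading, the weighted-chance playback Marty describes can be sketched roughly as follows. The section names and weights are purely illustrative – Bungie’s actual system and data are far more involved than this:

```python
# Hypothetical sketch of weighted random section selection, in the spirit of
# Marty's description (e.g. section D having only a 10% chance of playing).
# Section names and weights are invented for illustration.
import random

SECTION_WEIGHTS = {"A": 0.4, "B": 0.3, "C": 0.2, "D": 0.1}

def next_section(rng=random):
    """Pick the next musical section to play, weighted by its chance."""
    names = list(SECTION_WEIGHTS)
    weights = [SECTION_WEIGHTS[n] for n in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Each playthrough assembles a different, seemingly linear sequence:
sequence = [next_section() for _ in range(8)]
```

Run it twice and the two eight-section sequences will almost certainly differ, which is exactly the effect Marty is after: each playthrough hears what feels like a single linear score.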


In the next post our discussion moves on to granular approaches in videogame music, why Marty has not been an advocate of them, and what his own alternative approach is.