How video game music is created

From the primitive bloops and bleeps of early consoles to the cinematic soundtracks of the current generation, creating music for video games is an interesting and technically challenging aspect of the industry.

  • Old Game Systems

  • The sound hardware available on older systems actually resembled the hardware found in analog synthesizers.

    These sound chips usually consisted of 1 to 3 simple oscillators, capable of producing a handful of wave shapes. Sine waves, square waves, triangle waves and white noise generators were common.
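    As a rough sketch (not modeled on any particular chip), those classic wave shapes can be generated in a few lines of Python. The function names, the `render` helper, and the 8 kHz sample rate are all illustrative choices:

```python
import math
import random

def square(phase):
    """Square wave: +1 for the first half of the cycle, -1 for the second."""
    return 1.0 if phase % 1.0 < 0.5 else -1.0

def triangle(phase):
    """Triangle wave: ramps linearly between -1 and +1."""
    return 4.0 * abs((phase % 1.0) - 0.5) - 1.0

def sine(phase):
    return math.sin(2.0 * math.pi * phase)

def noise(phase):
    """White noise: a random sample, independent of phase."""
    return random.uniform(-1.0, 1.0)

def render(wave, freq_hz, duration_s, sample_rate=8000):
    """Sample one oscillator into a list of floats in [-1, 1]."""
    n = int(duration_s * sample_rate)
    return [wave(freq_hz * i / sample_rate) for i in range(n)]
```

    A real chip produced these shapes in hardware, of course; the point is only how little variety 1 to 3 such oscillators give you on their own.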

    The main technique used to shape these sounds was to use envelope generators. The envelope of a sound is like a graph of its amplitude or volume over time.

    The ADSR model for envelope generation was quite common and could actually vary the sound these simple chips could produce quite a bit.
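    A minimal sketch of the ADSR (attack, decay, sustain, release) model in Python. The function and its parameters are illustrative, and edge cases such as releasing a note mid-attack are ignored:

```python
def adsr(t, attack, decay, sustain_level, release, note_off):
    """Amplitude (0..1) at time t for one ADSR envelope.

    note_off is the time the note is released; the release phase starts there.
    """
    if t < attack:                      # Attack: ramp 0 -> 1
        return t / attack
    if t < attack + decay:              # Decay: ramp 1 -> sustain_level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain_level)
    if t < note_off:                    # Sustain: hold steady
        return sustain_level
    if t < note_off + release:          # Release: ramp sustain_level -> 0
        frac = (t - note_off) / release
        return sustain_level * (1.0 - frac)
    return 0.0
```

    Multiplying an oscillator's output by this envelope is what turns a raw buzz into something percussive (short attack, no sustain) or organ-like (long sustain) on the same chip.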

    Composing music with such limited hardware was, to say the least, challenging, especially when you consider that the music had to share the limited number of oscillators with the game's sound effects.

    Usually the composer would create a melody and a simple accompaniment, then assign the melody a higher priority. The game could then 'steal' the less important oscillators, or channels, for sound effects without harming the melody.
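    That priority scheme might be sketched like this in Python; `ChannelPool` and its method names are invented for illustration, not taken from any real sound driver:

```python
class ChannelPool:
    """Fixed pool of hardware channels. A new sound may steal the
    lowest-priority voice, so the high-priority melody survives."""

    def __init__(self, n_channels):
        self.channels = [None] * n_channels   # each slot: (priority, name) or None

    def play(self, name, priority):
        # Use a free channel if one exists.
        for i, slot in enumerate(self.channels):
            if slot is None:
                self.channels[i] = (priority, name)
                return i
        # Otherwise steal the lowest-priority channel, if we outrank it.
        victim = min(range(len(self.channels)),
                     key=lambda i: self.channels[i][0])
        if self.channels[victim][0] < priority:
            self.channels[victim] = (priority, name)
            return victim
        return None   # dropped: everything already playing matters more
```

    With the melody at the highest priority, an explosion effect evicts an accompaniment voice rather than the tune itself.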

  • Modern Game Systems

  • Fortunately for us, a few advances in sound chips paved the way for the much richer experience we now enjoy.

    The first advance was the introduction of DACs (digital-to-analog converters), which used digital sampling to reproduce recorded sounds. These chips had their own memory for storing the digital representation of each sound, and some were actually separate processors that ran their own programs concurrently with the game system's main processor.

    The second advance was the dramatic increase in the number of available oscillators, or hardware channels, as they are now called. While the SNES could hardly be called a 'modern' game system, it had 8 hardware channels available, and the original PlayStation had 24. That may seem like a lot, but the PlayStation 2 has 48 channels!

    The third advance was the increase in the amount of sound RAM available to the systems, together with the ability to play compressed sounds, which can take up a quarter to a tenth the space of an uncompressed sound.
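    As a quick back-of-the-envelope check (the 22,050 Hz mono 16-bit format here is an assumed example, not a figure from any particular console):

```python
def sample_bytes(seconds, sample_rate=22050, bytes_per_sample=2):
    """Uncompressed size of a mono sample at the given rate and depth."""
    return int(seconds * sample_rate * bytes_per_sample)

def compressed_bytes(seconds, ratio=4, **kw):
    """Size after compression at the given ratio (e.g. 4 means 4:1)."""
    return sample_bytes(seconds, **kw) // ratio
```

    One second of such audio is about 44 KB uncompressed; at 4:1 it drops to roughly 11 KB, which is why compression stretched those small sound-RAM budgets so far.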

So what did all this new technology do for the composer working on a game? It became possible to create realistic instrument sounds by sampling actual musical instruments, and the composer could create much more complex music in the MIDI environment that most modern musicians are very comfortable with. The final result, once the music is moved into the game, very closely matches the MIDI music the composer creates with a computer and MIDI synthesizers. The modern game system also has enough hardware channels for the music and the sound effects to coexist peacefully, without the sound effects stealing channels from the music.

One other technique exists for the ambitious composer, one that allows the ultimate flexibility. It's called streaming, and it involves taking a complete recording of a song and breaking it up into little chunks that can be loaded into small buffers, or areas in sound RAM, continuously while the game is being played. Since the music is a recording, the artist is not limited in any way. Using this technique, it's possible to produce a musical score and have a symphony orchestra play it, for a truly cinematic experience.
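A toy Python sketch of that chunking idea. A real streaming player would double-buffer in sound RAM and read from disc, but the slicing below is the heart of it:

```python
def stream_chunks(song, buffer_size):
    """Yield a recording in buffer-sized chunks, the way a streaming
    player loads them into sound RAM while the game runs."""
    for start in range(0, len(song), buffer_size):
        yield song[start:start + buffer_size]
```

While the hardware plays one chunk, the engine fetches the next; as long as each fetch finishes before the current buffer runs out, the recording plays seamlessly.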