
AES Berlin, May 2017 Round-up

AES 142nd International Convention

Last month the AES International Convention made its eagerly awaited return to Europe, in the wonderful city that is Berlin. For me it was a return trip, having travelled to the city just a month earlier for International Games Week.

Incidentally, my last AES convention was also in Berlin, while I was still studying in the city. I find the AES an exciting conference, showcasing the latest technologies, insightful presentations, and paper lectures, and in general a great place to meet people from our industry. This year was even more exciting for me, as I presented my first technical paper at the AES.

The paper was based on my bachelor’s thesis research, carried out in late 2016 with the virtual acoustics group at the Fraunhofer Institute for Digital Media Technology (IDMT). The thesis compared room simulations reproduced over a standard 5.0 channel-based surround system and over an object-based multi-channel system developed by the IDMT, known as the Spatial Sound Wave (SSW), which builds on Wave Field Synthesis (WFS) theory and additional perceptual algorithms. To give a very brief overview of object-based (or scene-based) audio and the SSW: in this production method, audio sources (recorded or synthesized) are rendered as virtual sources in a pre-defined virtual environment. As a result, mixing in such an environment becomes independent of the loudspeaker setup. Furthermore, this technology theoretically provides a more accurate reconstruction of the virtual source’s wave front (following WFS theory).
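
To make the idea more concrete, here is a minimal, hypothetical sketch of how an object-based renderer can derive loudspeaker gains from a source position at playback time. This is a simple distance-based panner for illustration only, not the SSW renderer (which is far more sophisticated), and the layout and positions are made up:

```python
import numpy as np

def render_object(source_pos, speaker_positions):
    """Distance-based amplitude panning: closer speakers get more gain."""
    dists = np.linalg.norm(speaker_positions - source_pos, axis=1)
    gains = 1.0 / np.maximum(dists, 1e-6)   # inverse-distance weighting
    return gains / np.linalg.norm(gains)    # normalise to unit overall power

# Hypothetical 5-speaker circular layout (radius 2 m, rough ITU-775 angles)
angles = np.deg2rad([0, 30, -30, 110, -110])
speakers = 2.0 * np.column_stack([np.cos(angles), np.sin(angles)])

# A virtual source placed roughly 1 m behind the centre loudspeaker
source = np.array([3.0, 0.0])
print(render_object(source, speakers))      # one gain per loudspeaker
```

The point of the sketch is that the source is described only by its position; swap in a different loudspeaker layout and the same virtual scene still renders, which is what makes the mix independent of the setup.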

The Fraunhofer Spatial Sound Wave (SSW) user interface, showing virtual sources within a virtual 3D environment

Back to my research: the methodology involved reproducing simulated reverberant sound fields within both setups (using the same loudspeaker layout) by means of a Lexicon 960L digital reverberation unit. Impulse response (IR) measurements were then taken for a selection of different parameter settings on the 960L, at several pre-defined locations within the circular loudspeaker array. For the 5.0 setup, the five discrete outputs of the 960L were routed directly to the loudspeakers according to the ITU-R BS.775 recommendation. In the object-based setup, the discrete outputs of the 960L were routed to the SSW system and rendered as virtual sources, which were then placed approximately 1 metre behind the loudspeakers used in the channel-based setup.
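
For readers unfamiliar with IR measurement, here is a hypothetical sketch of one common technique, the exponential sine sweep with inverse-filter deconvolution; the paper documents the actual measurement procedure used, and all parameters below are illustrative:

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 48000                       # sample rate in Hz
T = 5.0                          # sweep duration in seconds
f1, f2 = 20.0, 20000.0           # sweep frequency range in Hz
t = np.arange(int(T * fs)) / fs

# Exponential sine sweep (Farina method)
R = np.log(f2 / f1)
sweep = np.sin(2 * np.pi * f1 * T / R * (np.exp(t * R / T) - 1.0))

# Inverse filter: time-reversed sweep with a decaying amplitude envelope
# that compensates the sweep's pink (-3 dB/octave) energy distribution
inv_filter = sweep[::-1] * np.exp(-t * R / T)

# 'recorded' would normally be the microphone capture of the sweep played
# back over the system; reusing the dry sweep here measures the chain itself
recorded = sweep
ir = fftconvolve(recorded, inv_filter, mode="full")
ir /= np.abs(ir).max()           # normalised IR; its peak sits near len(sweep)
```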

More details can be found in the paper itself, which can be downloaded from the following link:

Bernard Camilleri_AES_142_paper_75 [Final Submission]

A hot topic at the AES 142 was spatial and 3D audio, with an emphasis on ambisonics, including recording, mixing, and delivery techniques for these relatively new formats (mostly for VR applications). MPEG-H was in fact a big topic as a new delivery format, as were binaural audio and the use of HRTFs for tailored end-user experiences.

Fraunhofer IIS presentation on ‘Capturing audio for 360 VR’

I also attended the first technical committee (TC) meeting for game audio at the AES 142, which included representatives from Dolby, Native Instruments, and several universities, among others. People like Nuno Fonseca, who was also present with his Sound Particles software, are providing interesting tools for sound designers to create spatial audio for interactive media.

AES 142 presentation by Prof. Nuno Fonseca on 3D audio

One point that came through in most of the lectures and presentations is that the technology is rushing forward (as it usually does), which has brought affordable ambisonic microphones to the market; at the same time, there is still a lot of confusion around delivery formats and methodologies, including how to capture and mix for VR applications.

Another presentation I enjoyed was ‘The Ins and Outs of Microphones’ by John Willett, which refreshed some fundamentals of microphone technology. The poster sessions also covered some interesting topics, including motion tracking for hand gestures, soundscape recording techniques, and others.

The exhibit area had some mouth-watering (or ear-watering?) technologies, including the Neve DFC3D console; the integration of Dolby Atmos within Pro Tools (which hopefully also comes to other DAWs); and the new professional-grade tape from Recording the Masters (RTM), which acquired the tape technology from BASF (a resurgence of affordable tape recording?), among others.

Neve booth, featuring the DFC3D
AVID booth

My only regret from this convention is that there were more presentations I wanted to attend but couldn’t, due to clashes in the schedule, such as the lecture by David Griesinger. However, I was glad to have a personal word with Mr Griesinger himself about his new publication.

What’s certain is that we are living in an exciting period for audio technology in general, and for audio in immersive applications in particular. It is encouraging to see so many people, companies, and institutes working in the 3D audio field, pushing the technology and methodologies forward.

Did you also attend the AES 142 Convention? If so, what was your experience?


Welcome!

I would like to welcome you all to the brand new website of Xekill-ton studio, a boutique sound design studio in Malta focused on providing audio services to game developers and film and animation studios.

The studio is also intended as a hub for all things audio and will function as a research and development lab, the first of its kind on the island. Check the blog page frequently to learn more about sound design processes in general, with theoretical and practical examples from past and current projects.

For the launch of the studio we are offering a 15% discount on all our services. If you have any audio-related questions, don’t hesitate to get in touch.

I look forward to working with you!

Designing sound – Part 1: Intro

In this first official blog post I would like to briefly (and not too technically) explain the process of designing sound for games and how it differs from films and animations in general. The target audience is game developers who are interested in these processes and in how professional sound design can benefit their game project, but anyone curious about sound design is welcome. In this first part I present a general theoretical overview; in the following parts I will explain each process that forms part of sound design in more detail, with theoretical and practical examples from projects I have worked on and other interesting cases.

Sound design is both technical and artistic in its purest form. It is very important to know the technicalities of capturing and creating sound in the digital domain, and how to engineer those processes to achieve a quality audio file. However, good technical quality is not the only factor, and this is where the artistic part comes in. It is just as important to be able to imagine, manipulate, and match sounds to whatever is on screen: the ambience and location of the scene, the required sound effects, voice, and also non-diegetic sounds (for off-screen events) that complement the project. Beyond that, sounds should follow the laws of physics and acoustics where a realistic experience is required; in very basic terms, for example, sound behaves very differently indoors than outdoors. This is why a dedicated, professional sound person is a requirement in a team. Sound is such a vast field, and one whose power many people still fail to realise, that you need someone who knows what he or she is doing when it comes to the audio for your project. Sound design is not simply dropping ready-made samples from the internet directly into your project. Although that might occasionally work, there is much more to it, and it is very difficult to achieve a professional, complete package that way.

The following list shows the typical basic categories that form a complete sound design package for a game or film project:

  • Foley
  • Sound effects (SFX)
  • Ambience/Atmosphere
  • Voice-over / Dialogue
  • Music

In future posts of this blog series I will go into more detail on each of these categories and explain their processes and technicalities: how they differ, why each is important, and how they are produced to fit into a project.

There are also differences in how sound design is produced for games versus films and animations. While films and animations follow a linear timeline with a defined start and end point, games do not follow these ‘traditional’ cue points. Games are non-linear and can be endless, and this is what makes them more exciting and challenging. They challenge sound designers to create randomised, interactive, and adaptive audio that depends on the player’s current position in the game. For example, how long will the player stay in a level, and should the music loop forever? One extreme example is No Man’s Sky, with its procedural audio driven by the world currently being generated as the player advances. Another example of interactive audio is Playdead’s Limbo, where the music is not separated from the ambiences and effects and is generated alongside the player’s in-game behaviour (notice how the music is linked to the sound design in the attached link). This is where middleware comes in very handy. Middleware software allows the sound designer to randomise sounds and create interactive audio more easily, and it takes some of the load off the programmer, who no longer has to implement sounds one by one and script their basic behaviours (see the sketch below). I will also talk more about this in future posts.
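
As a hypothetical illustration of the kind of randomisation that middleware (tools such as Wwise or FMOD) handles for you, here is a small sketch of a ‘random container’ that varies the sample, pitch, and volume on every trigger; the class, file names, and ranges are all made up for illustration:

```python
import random

class RandomContainer:
    """Picks a random variation and randomises pitch and volume on each
    trigger, so repeated events (e.g. footsteps) never sound identical."""

    def __init__(self, samples, pitch_range=(0.95, 1.05), volume_range=(0.8, 1.0)):
        self.samples = samples            # file names of the variations
        self.pitch_range = pitch_range    # playback-rate multiplier bounds
        self.volume_range = volume_range  # linear gain bounds

    def trigger(self):
        """Return one randomised playback event for the engine to play."""
        return {
            "sample": random.choice(self.samples),
            "pitch": random.uniform(*self.pitch_range),
            "volume": random.uniform(*self.volume_range),
        }

# A hypothetical footstep event with four recorded variations
footsteps = RandomContainer(
    ["step_01.wav", "step_02.wav", "step_03.wav", "step_04.wav"]
)
for _ in range(3):
    print(footsteps.trigger())
```

The programmer only calls the event; the variation logic lives with the sound designer, which is exactly the division of labour that middleware makes easy.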

So to conclude: sound design is the art and technique of creating either a realistic or an artificially exaggerated aural experience to complement the project at hand. Sounds are usually exaggerated in both games and films to make a greater impact, and we’ll look into this in more detail in future posts. It is important to make sure that the sound matches the style of the game and what you would like to achieve. Remember that sound can really make or break a game, and if you’re not up for the task, seek the help of a professional, dedicated sound designer.

Please feel free to comment and let me know what you think or what you would like to read more of in this blog series in the future.