AES Virtual Vienna 2020
June 2-5, 2020
Technology, workflow solutions and the latest research will all be discussed. The conference will also include streaming presentations of Keynote addresses, Papers, Workshops, Tech Tours and other technical program content, along with live- and forum-based dialogue with presenters.
Regardless of your level of expertise, Virtual Vienna will let you build your skills and enhance your career across the full range of audio specialities. The conference will cover recording and production, acoustics and psychoacoustics, sound reinforcement, archiving and preservation, networked audio, product development and audio education, along with student and career development sessions.
Our own Senior Technologist Thomas Lund will be participating in two workshops, as below.
Active Sensing and Slow Listening
Tuesday, June 2
11:30 AM – 12:30 PM CEST
Thomas and Professor Susan Rogers of the Berklee College of Music will be summarising new medical studies on human perception, and will discuss whether sensing in adults should primarily be regarded as a reach-out phenomenon.
They’ll also be exploring the term “slow listening”, and why it can be beneficial to allow more time when conducting subjective tests, evaluating content, equipment or rooms, and when preserving content for future generations to enjoy.
Wednesday, June 3
12:00 PM – 1:00 PM CEST
Thomas and Dr Hyunkook Lee of the University of Huddersfield will examine how new immersive audio formats allow excellent music performances to be preserved more sentiently. They’ll also discuss how immersive audio is forcing us to re-evaluate established mixing principles.
Our R&D director Aki Makivirta will be presenting the following paper:
Accuracy of photogrammetric extraction of the head and torso shape for personal acoustic HRTF modelling
Available on demand from 11:00 AM CEST on Tuesday, June 2
Photogrammetric computational methods are now able to acquire precise personal head, external ear, and upper torso shapes using video captured with a mobile phone. In this paper, Aki examines the accuracy and repeatability of generating such 3D shape information, and how it enables a realistic personal head-related transfer function to be calculated.
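To give a sense of what a personal HRTF is ultimately used for, here is a minimal sketch of binaural rendering: once per-ear head-related impulse responses (HRIRs, the time-domain form of the HRTF) have been derived, a mono source can be convolved with them to produce a two-channel binaural signal. This is purely illustrative, assuming placeholder impulse responses; it is not the modelling pipeline described in the paper.

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with per-ear HRIRs to get a stereo binaural signal."""
    left = np.convolve(mono, hrir_left)    # full convolution, length N + M - 1
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Toy example: hypothetical HRIRs standing in for measured/modelled ones
mono = np.random.default_rng(0).standard_normal(1024)
hrir_l = np.zeros(128); hrir_l[0] = 1.0   # identity-like left-ear response
hrir_r = np.zeros(128); hrir_r[8] = 0.8   # delayed, attenuated right-ear response
out = render_binaural(mono, hrir_l, hrir_r)
print(out.shape)  # (1151, 2)
```

In practice the HRIRs would come from the personalised HRTF model the paper describes, rather than the placeholder filters used here.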