Image: from Rhiannon May's 'Crash Landing' R&D as part of DaDa Holograms at Metal Liverpool
DaDa Holograms is a project exploring the potential use cases for augmented reality in audience access for live and digital theatre, with a specific focus on British Sign Language (BSL) interpretation.
Read on to find out about DaDa Holograms' second phase in a blog from DaDa's Digital Producer, Joe Strickland.
Co-funded by The Space, this project will examine three different use cases for this technology and draw conclusions about how effective each is as a tool for audience access, as well as how simple and affordable each might be for artists and organisations to implement.
Phase 2 of the project looks at the effectiveness of a digital, at-home AR experience in which an audience member can watch the BSL interpretation of an on-demand production in a variety of combinations of AR and screen-based presentation formats. The intention of these designs was to offer a range of at-home solutions for increasing the accessibility of pre-recorded theatre productions for Deaf audiences, going beyond the traditional small interpreter in the bottom right-hand corner of the screen. It also considers how these productions might be presented using near-future technology.
To create these applications, we had to overcome two main design hurdles:
What combination of screen-based and AR performances works best?
How do we best blend these different performances together, if at all?
To address the first hurdle, we needed to record a short section of performance in as many combinations of regular and 3D, or volumetric, footage as possible. We recorded an excerpt of Crash Landing by DaDa Fellow Rhiannon May, a production that places a speaking performer and a Deaf signing performer side by side, in multiple ways. We captured regular video of both performers together, of each performing alone, and of the empty set; we captured volumetric video of both performers together and of each performing alone.
Step 1, recording the performance:
We then used Unity, a game engine, to create different combinations of these pieces of footage. By overlaying the volumetric video on the regular video, we could digitally insert performers into the regular recordings. This worked remarkably well for inserting the volumetric pair of performers into the video of the empty set. It also worked reasonably well for inserting one volumetric performer alongside the recording of the other, although this surfaced several issues that would need addressing to refine this combination of media further.
A video demonstration of how performers were digitally inserted into regular video:
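For the technically minded, here is a minimal sketch of how this kind of compositing can be set up in Unity: the regular recording plays on a quad while a volumetric performer is placed in front of it in the same scene. The component, prefab, and position values are illustrative assumptions, not the project's actual code or assets.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Sketch: play the regular (2D) recording on a quad and place a volumetric
// performer prefab in front of it, so both recordings share one Unity scene.
public class HybridSceneComposer : MonoBehaviour
{
    [SerializeField] private VideoPlayer regularVideo;          // plays the flat set recording
    [SerializeField] private GameObject volumetricPerformerPrefab; // illustrative asset name
    [SerializeField] private Vector3 performerPosition = new Vector3(0.6f, 0f, 1.5f);

    private void Start()
    {
        // Render the flat recording onto the renderer this player targets
        // (e.g. a quad scaled to match the original camera's field of view).
        regularVideo.renderMode = VideoRenderMode.MaterialOverride;
        regularVideo.Play();

        // Instantiate the volumetric capture at the spot the performer
        // occupied on set, so the two recordings appear to share a space.
        Instantiate(volumetricPerformerPrefab, performerPosition, Quaternion.identity);
    }
}
```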
This brings us to our second hurdle: how do we best blend a regular video and a volumetric video together?
One way of presenting this would be not to blend them at all, keeping the regular video on a screen while a volumetric interpretation or accompaniment is viewed through a separate device, such as a phone. This is discussed in more detail in Phase 3, which is documented in another blog being released soon.
One factor that helped these media combine easily is that they were filmed with the same camera from the same viewpoint, so aligning the regular and volumetric video was relatively easy and the performances feel as though they share the same space. Likewise, the performances all had the same lighting, which further ties the volumetric performance into the regular video. I’d highly recommend recording these performances in the same location, under the same lighting states, whenever possible.
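On the alignment point, one way to reproduce the shoot's viewpoint is to match Unity's virtual camera to the physical camera used on set. A minimal sketch follows; the focal length and sensor size are placeholder values, not those from the actual production.

```csharp
using UnityEngine;

// Sketch: match Unity's virtual camera to the physical camera used on the
// shoot, so the volumetric capture lines up with the flat recording behind it.
public class ShootCameraMatcher : MonoBehaviour
{
    private void Start()
    {
        Camera cam = Camera.main;
        cam.usePhysicalProperties = true;
        cam.focalLength = 35f;                  // mm, from the shoot (placeholder)
        cam.sensorSize = new Vector2(36f, 24f); // mm, full-frame sensor assumed
    }
}
```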
Combining these performances did raise some issues. One was synchronization: making sure each separate performance runs at the same pace, so that they line up when assembled without having to tweak their timings. Another was the positioning of props across the performances, with performers lacking a reference point for interaction and so placing props, or aiming action, at a spot without knowing where the other performer would later be inserted. At one point this resulted in a volumetric performer holding a ring binder while apparently sitting on the same ring binder captured in the regular footage.
To overcome both problems, rehearsal is key, making sure performers can hold in their minds the actions of their volumetric scene partner so as not to interfere with them. Likewise, a set timing reference, such as a backing track, click track, or pre-recorded dialogue, can ensure that performances stay in sync with one another across different recordings.
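On the playback side, any residual drift can also be corrected in the engine. Below is a minimal sketch of locking one recording to a master clock in Unity; it assumes both recordings play through a VideoPlayer component (real volumetric playback components vary by capture software), and the drift threshold is an arbitrary choice.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Sketch: keep two recordings locked to one master clock during playback.
public class PlaybackSynchroniser : MonoBehaviour
{
    [SerializeField] private VideoPlayer masterPlayer;   // e.g. the regular video
    [SerializeField] private VideoPlayer followerPlayer; // e.g. the volumetric playback
    [SerializeField] private double maxDriftSeconds = 0.05; // arbitrary tolerance

    private void Update()
    {
        // If the two performances drift apart, snap the follower back to the
        // master's timeline rather than letting the error accumulate.
        double drift = followerPlayer.time - masterPlayer.time;
        if (System.Math.Abs(drift) > maxDriftSeconds)
        {
            followerPlayer.time = masterPlayer.time;
        }
    }
}
```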
Another issue with combining these media is occlusion: parts of the volumetric video overlapping and hiding parts of the regular video. Our recording setup captured the floor as part of the volumetric video, which, while making alignment of the images very simple, also obscured the performers recorded on regular video, particularly their legs and any movements they made around the performance space. More careful positioning and framing of the volumetric camera setup, or a volumetric capture software that lets you remove more of the unwanted parts of the image, would rectify this issue.
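If the capture software can't trim the floor itself, one possible workaround is to clip it at render time. The sketch below assumes the volumetric playback material exposes a clip-height property; the `_ClipBelowY` name is hypothetical, and whether such a property exists depends entirely on your capture software's shader.

```csharp
using UnityEngine;

// Sketch: hide the unwanted captured floor by asking the volumetric material
// to clip everything below a given height. "_ClipBelowY" is a hypothetical
// shader property standing in for whatever your playback shader provides.
public class FloorClipper : MonoBehaviour
{
    [SerializeField] private Renderer volumetricRenderer;
    [SerializeField] private float floorHeight = 0.02f; // just above the stage floor

    private void Start()
    {
        var block = new MaterialPropertyBlock();
        volumetricRenderer.GetPropertyBlock(block);
        block.SetFloat("_ClipBelowY", floorHeight); // hypothetical property name
        volumetricRenderer.SetPropertyBlock(block);
    }
}
```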
It's worth pointing out that we chose the section of Crash Landing we filmed because we knew it would highlight these issues and considerations around filming AR content for hybrid presentation. We could have filmed a volumetric video of a stationary Deaf performer or interpreter and inserted them after the fact without interfering with the regular recording of the performance, but this is a relatively uninspired and uncreative way of building access into a show. We already add in-person BSL interpretation to shows in this way, and best practice dictates that access should be creatively built into productions from an early stage whenever possible. In that spirit, we decided to tackle the much harder design problem rather than just make a more convoluted version of what is already common practice in accessibility for Deaf audiences. Having said that, a similar BSL presentation method is explored in Phase 3 of the R&D.
However, given that we had volumetric video of both performers, we decided to make a purely AR version of the performance, without any regular video. This could be viewed in the audience's home, on any flat surface, using an app on their smartphone or tablet, and allowed the 3D nature of the performances to be fully appreciated, something that is difficult to achieve on a 2D screen. We did this because if AR content is going to become more commonplace in the future of entertainment, something that is sure to happen, then we should experiment with building accessibility into this new medium from the get-go, and learning from creatively accessible theatre and live performance would be a really easy way to do this.
Footage demonstrating a purely AR version:
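For those curious how placing a performance on a flat surface works, here is a minimal sketch of a tap-to-place interaction using Unity's AR Foundation package. AR Foundation is an assumption here (any AR framework would do), and `performancePrefab` stands in for the volumetric performance asset.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

// Sketch: tap the screen to place (or move) the AR performance on a
// detected flat surface such as a floor or table.
public class PerformancePlacer : MonoBehaviour
{
    [SerializeField] private ARRaycastManager raycastManager;
    [SerializeField] private GameObject performancePrefab; // illustrative asset name

    private static readonly List<ARRaycastHit> hits = new List<ARRaycastHit>();
    private GameObject placedPerformance;

    private void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began) return;

        // Cast a ray from the tap into the planes AR Foundation has detected.
        if (raycastManager.Raycast(touch.position, hits, TrackableType.PlaneWithinPolygon))
        {
            Pose hitPose = hits[0].pose;
            if (placedPerformance == null)
                placedPerformance = Instantiate(performancePrefab, hitPose.position, hitPose.rotation);
            else
                placedPerformance.transform.SetPositionAndRotation(hitPose.position, hitPose.rotation);
        }
    }
}
```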
This phase of the R&D ended with a series of prototype productions that blend regular and volumetric video, as well as a prototype for a purely AR performance excerpt, all accessible to Deaf audiences on demand and at home.
--
Want to find out more?
If you would like further information about the results of this R&D process, or would like to help try out the augmented reality experiences when they are ready for audiences, please get in touch with DaDa's Digital Producer by emailing digital@dadafest.co.uk.