Image: DaDa Holograms prototypes included in Adam Fenton's //Tuning In//
DaDa Holograms is a project exploring potential use cases for augmented reality (AR) in audience access for live and digital theatre, specifically focusing on BSL interpretation. Co-funded by The Space, the project examines three different use cases for this technology and draws conclusions about how effective each one is as a tool for audience access, as well as how simple and affordable each might be for artists and organisations to implement.
Phase 3 of the project looks at the effectiveness of an AR experience in which an audience member can watch the BSL interpretation of a live, in-person production. The intention of this phase was to explore the possibilities of using AR and hologram interpretation from Deaf performers on stage and in live touring work. We created a prototype experience in which a pre-recorded Deaf performer could be presented alongside a live performer and kept in time with the live performance, solving the synchronisation issues that this method of presentation could otherwise cause.
In order to create this application, we had to overcome two main design hurdles:
- How do we keep the recording of the Deaf performer in sync with the in-person performer?
- What are the limitations of presenting an interpreted performance in this way?
Using a similar method to the previous stages of the R&D, we recorded a 3D, or volumetric, video of a Deaf performer interpreting alongside an excerpt of Adam Fenton's play //Tuning In//. This was then brought into Unity, a game development engine, for further tinkering, alongside the audio of the speaking performer synchronised to the BSL interpretation. We wrote some code that let us control this volumetric video in several ways. Most importantly, we gave ourselves control over the speed of video playback so that we could speed up or slow down the interpretation to match the in-person live performance. By listening to both the in-person performance and the recorded audio synchronised with the volumetric interpretation, an operator was able to make sure the presented BSL performance accurately matched the spoken in-person performance, speeding up or slowing down the recorded interpreter to re-synchronise the two. This speeding up or slowing down was done in 10% increments, and a change of speed of up to 20-30% wasn't actually that noticeable in the recorded interpretation, meaning that if it fell out of sync and needed its speed changed to re-synchronise it with the live performance, this could be done without affecting the quality of the volumetric Deaf performance.
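For anyone curious what that control code might look like, here is a minimal sketch of the idea, assuming the volumetric clip is driven through Unity's standard VideoPlayer component; the actual playback component will depend on the volumetric capture toolchain you use, and the key bindings and speed limits here are purely illustrative.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Illustrative operator control: nudge the recorded interpretation faster or
// slower in 10% steps so it stays in sync with the live performer.
[RequireComponent(typeof(VideoPlayer))]
public class InterpreterSpeedControl : MonoBehaviour
{
    VideoPlayer player;
    float speed = 1.0f;           // 1.0 = the speed at which the interpretation was recorded
    const float Step = 0.1f;      // 10% increments
    const float MinSpeed = 0.7f;  // beyond roughly +/-30% the change starts to show
    const float MaxSpeed = 1.3f;

    void Start()
    {
        player = GetComponent<VideoPlayer>();
    }

    void Update()
    {
        // The operator listens to both performances and nudges the speed with the arrow keys.
        if (Input.GetKeyDown(KeyCode.UpArrow))   speed += Step;
        if (Input.GetKeyDown(KeyCode.DownArrow)) speed -= Step;

        speed = Mathf.Clamp(speed, MinSpeed, MaxSpeed);
        player.playbackSpeed = speed;
    }
}
```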
We also added controls so that different sections of the Deaf performer's interpretation could be cued to begin playing at specific times. Deaf performers often want to reinterpret sections of a performance to make sure the interpretations are accurate, and this allowed us to ensure that the performer's favourite parts of their interpretation were present in the eventual in-person showing.
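As a rough sketch, and with the same assumption about the playback component, cueing can be as simple as jumping the video to a pre-marked start time; the cue times and number keys below are placeholders.

```csharp
using UnityEngine;
using UnityEngine.Video;

// Illustrative cue controller: number keys jump the recorded interpretation
// to the start of a pre-marked section and play from there.
[RequireComponent(typeof(VideoPlayer))]
public class InterpreterCues : MonoBehaviour
{
    // Start times (in seconds) of each cued section, set per production in the Inspector.
    [SerializeField] double[] cueTimes = { 0.0, 42.5, 118.0 };

    VideoPlayer player;

    void Start()
    {
        player = GetComponent<VideoPlayer>();
    }

    void Update()
    {
        // Keys 1-9 trigger the corresponding cue.
        for (int i = 0; i < cueTimes.Length && i < 9; i++)
        {
            if (Input.GetKeyDown(KeyCode.Alpha1 + i))
            {
                player.time = cueTimes[i];
                player.Play();
            }
        }
    }
}
```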
Also of note is that the volumetric video was placed in front of a black background. This was to make sure that, when projected, only the Deaf performer would be seen, as black can't really be projected as a colour. It makes the projection of the Deaf performer feel like they are in the room, rather than projected onto a screen with a visible outline. For the same reason, some of our performer's clothing was dark in colour and difficult to see in the final projection, so future recordings should either use brighter colours or more considered lighting to avoid this issue.
However, this might be the only real limitation of presenting a performance or interpretation in this way. Keeping the recording and the performer in sync was not particularly difficult and required no skills beyond those a theatre operator is likely to already possess. Plus, there is a host of other benefits to this way of working. The same Deaf performer can interpret for multiple characters at the same time, with multiple holograms of the same performer projected simultaneously to give this effect. By recording these different performances for each character in different costuming, it was even easier to tell which instance of the Deaf performer was interpreting for which character in the live production.
In a similar way, this also allows multiple instances of a Deaf performer to appear on stage at once, which could represent any number of features of a production: from crowd scenes, to representing how overwhelming a series of thoughts is, to picking out different features of a soundscape and communicating their presence in the mix of the sound design. Another great feature of presenting a Deaf performer in this way is that, if touring internationally, other sign languages can be included with very little effort on the part of the rest of the performance, such as ASL for American audiences.
Another benefit is that, for recordings of this live performance, the volumetric Deaf performer can be included as an interpreter in a number of different ways: from existing on stage with the performer, to being placed in the bottom right-hand corner of the screen, to existing as an AR interpreter in the world of the audience as they watch the performance on a screen. This gives Deaf audiences more control over how they position and receive the interpretation of the event. And who says that the Deaf performer has to be the hologram? An in-person Deaf performer with a spoken English, or any other language, translation could tour internationally in this same way.
It is important to stress again, as I have done in each of these blog posts, that access should be included in a creative work from the get-go and not just tacked on at the end. The creative inclusion of access in this way allows disabled, Deaf, and neurodivergent audiences to feel considered rather than an afterthought, and it can be a catalyst for creativity rather than a chore. This phase of the R&D highlights this perfectly: the live audience we presented it to agreed with us that what we had made was a phenomenal tool for artists of all levels to build Deaf interpretation into their work from first inception. This is very much what we had in mind when running this R&D process as a whole: making sure that whatever methods or prototypes we generated would be replicable by the average artist or venue, regardless of budget or technical literacy. We have used a lot of commercially available hardware and software, kept as simple as possible in its combinations, and have achieved all our prototypes in ways that are affordable for the budgets of most creative projects and understandable with a non-expert level of technical knowledge.
This phase of the R&D ended with a live performance of a section of //Tuning In// in which the AR Deaf performer was projected into the performance space next to the in-person performer, with choreographed interactions between the two of them to help sell the illusion of co-presence between the on-stage performers, regardless of whether they were live or pre-recorded. By building this Deaf performance into this section of the production from the get-go, we created a performance that was enjoyable to audiences whether they were Deaf or not.
Watch the playlist below demonstrating the Phase 3 prototypes, and find out how you can get a sneak peek and give feedback yourself here: