Virtual Production: creative solutions for all

By Dimension
09 Nov 2023

Perhaps it was a long time ago in a galaxy far, far away. Or maybe it was a dark desert highway, cool wind in the air, and the warm smell of colitas…

However the story is told, great storytelling typically begins with great scene-setting: building a world that hooks the audience, dictating or giving clues as to where we, the viewers, are, and establishing the rules within which characters and plot can unfold.

Approaches to world, scene, and set design and construction have evolved through the ages, from the early exploration of chroma key compositing in the 1930s to the CGI enhancement of the mid-1980s, which has been a mainstay ever since. In 2019 a new chapter opened in this art: real-time, CGI-driven approaches (most notably, virtual production) emerged via The Mandalorian and The Lion King, and later Black Widow, Avatar: The Way of Water, and Barbie.

So what powers does this new approach possess? Where is it going? This article focuses on virtual production (VP) methods, and on the collaboration, flexibility, efficiency, and creative opportunities they offer not just Hollywood studios, but brands and agencies exploring the possibilities of real-time technologies and the powerful combination of digital and physical.

Image caption: In-camera VFX (Dimension Studio, DNEG, Sky Studios)

We speak to two industry leaders pioneering virtual production. Read on for Territory Studio's rundown of the benefits of virtual production, while Dimension Studio's Callum Macmillan explores emerging virtual production techniques.

Territory: Speaking volumes

Primarily, VP has been adopted for in-camera VFX. It allows fully immersive, imaginary worlds to be played out on large canvases, reducing the reliance on extensive VFX post-production. This content can be dynamic and navigable, pre-rendered, or rendered in real time with a video game engine such as Unity or Unreal Engine. Setups can range from 360° wrap-around volumes to modular LED walls integrated into physical sets.

In its purest sense, by displaying content on screens backing a set, filmmakers can achieve realistic backgrounds, lighting, and reflections for actors to work in.

Now that we’ve ‘set the scene’, let’s delve deeper…

Image courtesy of Soho House

Customisation and Predictability

Filmmaking can be a complex and challenging pursuit. Imagining and framing shots and curating the action typically takes interaction between the director, cinematographer, production designer, actors, and the list goes on. Whilst working with real-time engines to create environments doesn't grant infinite opportunity to tinker on set, it does offer a level of customisation that other approaches to set building cannot match.

As well as this opportunity for customisation, virtual environments offer predictability: consistent (and highly customisable) lighting scenarios, no matter the time of day or the weather forecast. This can be a brilliant save for pick-up shots, for example, and may well be more cost-effective than returning to a physical location.

Image caption: Virtual production on set (Dimension Studio, DNEG, Sky Studios)

From Storyboard to Reality

Working with VP fundamentally changes the project approach. In many respects, VP inverts the traditional workflow, so the content, the shoot, and any VFX post all require consideration upfront.

Shots can be mocked up and tested in advance of a physical shoot day: practical issues can be foreseen and ironed out, and efficiencies can be found. For example, pre-visualisation can help dictate which areas of an environment require higher or lower levels of detail, as in the sketch below.
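
As a purely illustrative sketch (not any studio's actual pipeline; the thresholds and tier names are invented), previs data about how large an asset ever appears on screen, and how close cameras get to it, can drive a simple detail budget:

```python
# Toy heuristic: turn previs camera data into a per-asset detail budget.
# Thresholds and tier names are placeholders, not production values.
def detail_tier(peak_screen_coverage, min_camera_distance_m):
    """peak_screen_coverage: largest fraction of the frame the asset
    fills across all previs shots; min_camera_distance_m: the closest
    any previs camera gets to it."""
    if peak_screen_coverage > 0.25 or min_camera_distance_m < 3.0:
        return "hero"      # full-resolution geometry and textures
    if peak_screen_coverage > 0.05:
        return "mid"       # simplified geometry, baked detail
    return "backdrop"      # low-poly card or matte painting

print(detail_tier(0.40, 2.0))   # close-up prop     -> hero
print(detail_tier(0.02, 40.0))  # distant hillside  -> backdrop
```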

Meanwhile, modular and nimble technical setups are opening up further opportunities for hybrid approaches that blend green screen and LED. It has become a really useful approach to problem-solving when shooting for real is too difficult, too expensive, or simply impractical.

Image courtesy of Soho House

Maximising Resources

VP offers several financial benefits over traditional production methods by reducing the need for location shoots and physical sets. This may also reduce the carbon footprint associated with production and improve time-efficiency too.

It offers greater control, flexibility, and efficiency in the production process, making it an increasingly attractive option for filmmakers looking to optimise their budgets while maintaining creative freedom.

VP introduces a level of fluidity to the decision-making process. Teams can swiftly adapt or experiment with different visual elements and quickly gauge the impact of alterations. This not only enhances creative freedom but also expedites decision-making, ensuring that productions stay on track and within budget.

Dimension: Beyond the Wall

Virtual production has been somewhat narrowly understood as 'filming live-action scenes in front of an LED wall'. In reality, virtual production is an umbrella that encompasses many technologies and capabilities, from real-time visualisation and Simulcam to deep learning AI and neural reconstruction.

From conception to pre-production, virtual production alters the traditional creative process by playing multiple roles at once. Visualisation is proving to be one of the most powerful applications of the technology. Using a real-time engine, a 1:1 model of all set elements can be placed alongside lights and cameras as an exact replica. Directors can then move around the virtual set and find filming locations or camera angles well before ever stepping on set. Rather than waiting months to catch a glimpse of their vision, they get the immediate opportunity to play around with a comprehensive model in a real-time environment.
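
As a minimal sketch of what this can look like in practice (assuming Unreal Engine with its Python editor scripting plugin enabled; the camera positions and labels here are invented for illustration), a few candidate 'scout' cameras can be scripted into a level for a director to jump between:

```python
# Minimal sketch: place candidate scout cameras in an Unreal level via
# the editor's Python scripting plugin (the EditorLevelLibrary API).
# Locations (in cm) and labels are placeholders for illustration.
import unreal

candidate_angles = [
    ("ScoutCam_Wide", unreal.Vector(-800.0, 0.0, 180.0),
     unreal.Rotator(roll=0.0, pitch=-5.0, yaw=0.0)),
    ("ScoutCam_Low",  unreal.Vector(-300.0, 250.0, 60.0),
     unreal.Rotator(roll=0.0, pitch=10.0, yaw=-40.0)),
]

for label, location, rotation in candidate_angles:
    camera = unreal.EditorLevelLibrary.spawn_actor_from_class(
        unreal.CineCameraActor, location, rotation)
    camera.set_actor_label(label)  # readable name in the World Outliner
```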

Image caption: Production visualisation demo at Production Summit LA

“For filmmakers, the ability to visualise early concepts in a real-time engine can elevate the entire project. Not only can they view the world, but virtual production also enables a fairly cinematic visualisation of the world in a compact studio setting.”

The benefits of this process go well beyond shot selection. Visualisation and concept planning with real-time engines allow for more planning and therefore fewer revisions. In essence, it allows you to 'fail faster' in a safe environment. Concepts can be rapidly brought to life, giving you more than a pitch deck to keep stakeholders interested. We've had directors rewrite entire sequences after being shown how a scene would appear at night, something never before possible.

Even in film, we're working to make processes as efficient as possible, using kitbashing and library asset packs. This content can be easily licensed and is editable in real time, offering short-form productions such as advertising and marcomms a practical way into utilising and benefiting from virtual production.

Transmedia: One Input, Multiple Outputs

Beyond film and TV, virtual production can be a powerful tool in the hands of agencies and content creators. If you were to create a virtual human or environment for use in a short branded film, that same asset could be used for real-time interactions with the public, a branded XR experience, or influencer content on social media.

“Getting people comfortable with a free viewpoint and traversing a virtual space can only be a good thing for the future of the internet. Eventually, every piece of content will be 3D and interactive in some way.”
Callum Macmillan, Dimension

Deep Learning AI

At Dimension, we're really excited to be exploring 3D (still image) and 4D (moving image) neural reconstruction and view synthesis to generate realistic digital representations of real-world people, objects, and environments. Our goal is to make these AI assets editable and playable in real time inside a virtual production world, for whatever screen or medium the end content will be displayed on. Deep learning frameworks such as Neural Radiance Fields (NeRF) and differentiable 3D Gaussian Splatting (3DGS) have the potential to fix many issues that arise in the traditional mesh-based photogrammetry we've previously used (non-AI, algorithmic 3D/4D reconstruction from images), resulting in an ever-higher level of photorealism.

For context, traditional photogrammetry matches features across multiple photographs, builds 3D structures out of polygons, and applies textures (maps) over those polygonal shapes in an attempt to represent the photographed scene in three dimensions (see the sketch below). This classical approach has struggled to recreate crucial elements such as mouth cavities, eyes, teeth, fine hair, view-dependent specular reflections, and semi-transparent areas. In the context of creating a 'digital twin' of a person, one of Dimension's key specialisms, these issues can contribute to the 'uncanny valley' effect when viewing a virtual human likeness of somebody.
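
For readers who want to see that classical pipeline concretely, here is a minimal sketch driving COLMAP's open-source command-line tools from Python. The paths are placeholders, and this covers only the sparse-reconstruction stage: detect features in each photo, match them across photos, then solve camera poses and a sparse 3D point cloud.

```python
# Minimal sketch of a classical photogrammetry pass with COLMAP's CLI.
# Assumes COLMAP is installed and on PATH; paths are placeholders.
import os
import subprocess

IMAGES, WORK = "shoot/images", "shoot/colmap"
os.makedirs(f"{WORK}/sparse", exist_ok=True)

subprocess.run(["colmap", "feature_extractor",    # find features per photo
                "--database_path", f"{WORK}/db.db",
                "--image_path", IMAGES], check=True)
subprocess.run(["colmap", "exhaustive_matcher",   # match features across photos
                "--database_path", f"{WORK}/db.db"], check=True)
subprocess.run(["colmap", "mapper",               # solve cameras + sparse points
                "--database_path", f"{WORK}/db.db",
                "--image_path", IMAGES,
                "--output_path", f"{WORK}/sparse"], check=True)
```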

So what these neural representations offer is a learning-based alternative: rather than first defining the 'scanned' (recorded) person or object with polygons and triangles, they bypass explicit 3D reconstruction (meshing, texturing, and so on) and learn to map directly from source views (images) to target views (wherever you choose to place your virtual camera, or point of view, around the subject). The same data set of images that was traditionally fed into a photogrammetry workflow can instead be fed into something like 3D Gaussian Splatting. 3DGS represents a scene as a cloud of soft egg shapes (the 'eggs' being Gaussian splats); by varying each egg's position, orientation, colour, and transparency, with colour changing according to viewing direction, it can approximate a scene in 3D remarkably accurately. The system is trained on many 2D images of the scene, which teach it where the egg shapes should sit and how they should appear from each angle.
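
To make the rendering side of that concrete, here is a small self-contained sketch (our illustration, not code from any 3DGS implementation) of the front-to-back alpha compositing that blends those 'eggs' into a single pixel once they have been projected to 2D and sorted by depth:

```python
# Front-to-back alpha compositing, the per-pixel blending rule used by
# 3DGS-style renderers: each splat adds colour in proportion to its
# opacity and to how much light nearer splats have left unabsorbed.
import numpy as np

def composite_pixel(colors, alphas):
    """colors: (N, 3) RGB per splat, sorted nearest-first;
    alphas: (N,) per-splat opacity at this pixel after 2D projection."""
    pixel = np.zeros(3)
    transmittance = 1.0  # fraction of light not yet absorbed
    for color, alpha in zip(colors, alphas):
        pixel += transmittance * alpha * color
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:  # effectively opaque: stop early
            break
    return pixel

# A 60%-opaque red splat in front of a 90%-opaque blue one.
print(composite_pixel(np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]),
                      np.array([0.6, 0.9])))  # -> mostly red, some blue
```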

Scanning with 3DGS captures more detail, for example in hair, and gives better results for shiny and transparent materials, among other things. We get a better quality of render from 3DGS, but it's (currently) very heavy for moving-image data (video).

While there's no denying that AI and neural reconstruction will change the industry, their actual impact will be more disruptive than threatening. Some AI will help speed up the creative process, but in many areas it still struggles to match the quality of traditional methods. It will always be the people who use the tech in interesting and novel ways who will be the most successful, but everyone benefits from familiarising themselves with the tech ahead of time.

Looking Forward

Virtual production's ability to bring forward aspects that would traditionally happen in post-production is possibly its greatest selling point, regardless of whether an LED volume is used. Not only does it save time and cut costs, it also gets stakeholders involved earlier, enabling a better result. It avoids drastic and costly changes once the cameras are rolling, while allowing for open experimentation and decision-making without consequences.

Knowing when to use LED volumes and other virtual production methods is key to understanding their efficacy. As virtual production matures, it will prove to be a vital tool for anyone involved in content creation.

Image courtesy of Bedlam Film Productions

For more about virtual production:

Check out the Virtual Production Innovation video series

Dimension and Territory Studios are part of the Immersive Technology Council at BIMA.
