First came Connor, then Markus. Finally, with Quantic Dream revealing the third of its Detroit leads, Kara, we thought it the best time to look back on the tech demo that sparked the android revolution. Five years after its creation, we asked David Cage to revisit Kara, offering his commentary on the demo and, in this exclusive story, his insight into the evolving technology that has defined Quantic Dream’s games.
Quantic Dream is one of the few studios in the world to develop a new engine for each game. The objective of this ambitious endeavor is to push the envelope (and the hardware) as far as we can and give our fans the best-looking game possible.
We also try to improve the quality of acting performances game after game, which is strongly related to the quality of our technology.
Our evolution from Kara to Detroit illustrates the progress we’ve made in these areas.
Learning from Heavy Rain’s motion capture
One of the objectives in making the Kara short was to complete our first shoot in Performance Capture. Heavy Rain was shot entirely in “Body” Motion Capture, with facial movements and voices shot separately.
All our actors did an amazing job, but their performances were captured in two parts: first we filmed all body animations, then we recorded voice and facial animations in a sound booth, hoping everything would sync together.
As a result, the performances were disjointed – the eyes could never look in the right direction and it was very challenging (for both us and the actors) to get the level of performance we were looking for.
For the Kara short, we upgraded our Motion Capture system to be capable of recording body, face and voice all at once – what we call Performance Capture.
We really wanted a setup that wasn’t intrusive. This meant no helmet, no face camera, no backpack and no wires. We wanted the setup to be as invisible and light as possible, so our actors could quickly forget it. So we installed a wireless mic for the actors to wear and developed a system for tracking markers without a helmet or a projector, a system precise enough to track eye movements simultaneously.
Last but not least, we needed to capture data clean enough to minimize the need for animation work after the shoot. We shoot massive volumes of dialogue, so we couldn’t afford a system whose data would require a lot of cleanup to look good in the game.
In short, we wanted high quality data captured with a very light setup, which was a very interesting challenge…
How Kara helped refine the capture system for Beyond and Detroit
The Kara short is the result of this first iteration. When I saw the first captures, I realized that there was no going back. The gain in quality of acting performance was so high that we couldn’t understand how we were doing it before…
After Kara, we kept improving the precision of our capture system. We also greatly increased the area in which we could capture: working on Kara, we were capable of shooting one actor in performance capture in an area of 2 meters square; on Beyond it was four actors in 9 meters square, and on Detroit we were able to shoot six actors in 16 meters square.
The precision of the data we capture has improved dramatically. We now capture details on Detroit that we could only see on set before.
We have also continued to improve all the technologies linked to the quality of acting performances.
We have developed a muscle simulation system, a wrinkle simulation, a shot-by-shot lighting rig for soft and detailed shadows, real-time translucency (like how your ears turn red when there is a light behind you — yes, I know, it’s not a common situation) and many other technologies that you may not see but that play an important part in the impression you get playing the game.
Between the engine we created for Kara in 2012 and the one we use for Detroit today, our rendering technology has been through many iterations. The engine used for Kara was an evolution of Heavy Rain’s engine and the first version of Beyond: Two Souls’ engine.
After Heavy Rain, we wanted to improve the rendering of skin and eyes, and we wanted to have more subtle light and shadow on faces. We also worked on some improvements regarding image rendering, especially depth of field (the blurry area in the background when the camera is focused on the character).
We were quite satisfied with the progress compared to Heavy Rain, and I remember we all feared that the demo would over-promise compared to what we could deliver visually in our next game Beyond: Two Souls. Working on a short demo is always different to a full game, so we had many discussions about whether it would be fair to show this short. In the end, we decided to present it because we were confident that Beyond would look at least as good, if not better.
The evolution of engines from Beyond to The Dark Sorcerer to Detroit
Beyond used another iteration of the same engine, which improved every single aspect of the tech. To my mind, the game looks considerably better than the Kara short.
The Dark Sorcerer was a major step forward for the studio as it was our very first PS4 engine. It remains, for me, one of the best-looking demos we have created.
For Detroit, we’re using a brand new engine again. We invested a lot of time in making our camera optics physically correct. Typical virtual cameras have no limitations and can emulate optics that cannot exist in the real world, which sometimes results in visuals that are not very convincing.
For Detroit, we worked on aligning all parameters with real optics so we can use the rules that are commonly accepted by our audience. This little change had a massive impact on the visual quality of the game. We added many new features: bokeh, advanced lens flares, improved lighting, real-time motion blur, volumetric lights, higher resolution on PS4 Pro, and more.
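The article doesn’t detail the engine’s camera model, but the idea of aligning depth-of-field and bokeh with real optics can be sketched with the standard thin-lens blur formula, where the blur-circle (bokeh) size of an out-of-focus point is fixed by focal length, f-number and distances rather than chosen arbitrarily. This is a minimal illustration of the physics, not Quantic Dream’s implementation; all function and parameter names are mine.

```python
def circle_of_confusion(f_mm, n_stop, focus_m, subject_m):
    """Blur-circle diameter (in mm on the sensor) for a thin lens.

    f_mm      -- focal length in millimetres
    n_stop    -- f-number (physical aperture diameter = f / N)
    focus_m   -- distance the lens is focused at, in metres
    subject_m -- distance of the point being imaged, in metres
    """
    f = f_mm / 1000.0            # work in metres
    aperture = f / n_stop        # aperture diameter
    s1, s2 = focus_m, subject_m
    # Classic thin-lens result: blur grows with aperture and with
    # how far the subject sits from the focus plane.
    coc_m = aperture * (f / (s1 - f)) * abs(s2 - s1) / s2
    return coc_m * 1000.0        # back to millimetres

# A 50mm lens at f/1.8 focused on a character 2m away:
sharp = circle_of_confusion(50, 1.8, 2.0, 2.0)    # subject in focus -> 0
blurry = circle_of_confusion(50, 1.8, 2.0, 10.0)  # background at 10m
```

Because every term is a real lens parameter, stopping down the aperture (raising `n_stop`) or moving the background closer to the focus plane shrinks the blur exactly as a physical camera would, which is the “commonly accepted rules” effect the text describes.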
This new engine combined with our progress in performance capture makes Detroit the most advanced title ever produced by my studio. From Heavy Rain to Detroit, Quantic continues to seek new ways and create new technologies to better capture and inspire emotion.
Although technology will never create emotion, it opens new possibilities and gives creators access to nuances and subtleties that were impossible before.
from PlayStation.Blog http://ift.tt/2ic2PBs