Over the past few weeks, I have been spending my Wednesdays pulling together the planning for my emerging tech essay, and I decided it's about time I wrote a post about it. I plan to write about the uncanny valley: how it has affected game animation in the past, and how it might affect game visuals and animation in the future. Most of my research is therefore based around motion capture in the games I plan to discuss, and around photogrammetry and how it works.
Until Dawn: Behind the scenes bonus content – https://youtu.be/UR-IgPhsHls
I plan to talk about the uncanny valley in Until Dawn as part of my essay, because the facial animation in that game is noticeably uncanny, so I took to YouTube to watch the behind-the-scenes bonus content that shows how it was made. They marked the actors' faces and built the character models with the same topology as the actors who played the characters, to keep them accurate. For the facial animation, they placed markers where the muscles are on the actors' faces and used a small camera, attached to a helmet and positioned directly in front of the face, to accurately work out the muscle movement; the voice lines were recorded at the same time. The body animation was then filmed separately with a different system: suits covered in reflective beads, tracked by infrared cameras, with the captured motion driving the bone hierarchies in the character models.
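As I understand the body capture, the tracked marker data ultimately drives a bone hierarchy, where each joint's world position comes from accumulating its parents' rotations and offsets. Here's a minimal sketch of that idea — my own toy 2D example with made-up joint names and numbers, not Until Dawn's actual rig or data:

```python
import math

# Toy 2D bone hierarchy of the kind mocap data drives. Each bone stores its
# parent, its offset length from the parent joint, and a local rotation in
# radians (the value a mocap system would update every frame). The names and
# numbers are purely illustrative.
bones = {
    "hips":  {"parent": None,    "length": 0.0, "angle": 0.0},
    "spine": {"parent": "hips",  "length": 0.5, "angle": math.pi / 2},
    "head":  {"parent": "spine", "length": 0.3, "angle": 0.0},
}

def world_transform(name):
    """Forward kinematics: walk up the hierarchy, accumulating rotations
    and offsets to get a bone's world-space angle and position."""
    bone = bones[name]
    if bone["parent"] is None:
        return bone["angle"], (0.0, 0.0)
    parent_angle, (px, py) = world_transform(bone["parent"])
    angle = parent_angle + bone["angle"]
    x = px + bone["length"] * math.cos(angle)
    y = py + bone["length"] * math.sin(angle)
    return angle, (x, y)

# With the spine rotated 90 degrees, the head ends up at roughly (0.0, 0.8)
angle, head_pos = world_transform("head")
```

The point of the hierarchy is that rotating one joint (the spine here) automatically carries every child joint with it, which is why the capture system only needs joint rotations, not a position for every point on the body.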
SIGGRAPH 2017: Photogrammetry workflow talk – https://youtu.be/Ny9ZXt_2v2Y
Even though I had already checked what photogrammetry is through a general Google search, I figured I'd look around on YouTube for any GDC-style talks on it. There was a good one on Star Wars Battlefront, but I found this SIGGRAPH 2017 talk that walks through the workflow of building a 3D environment. It gave me a better general understanding of what photogrammetry is and the sheer amount of work the technique involves. Hundreds, maybe even thousands, of high-quality, well-lit photos are needed to digitally re-create something as a 3D model. Objects in digitally re-created environments are sometimes scanned individually, as with the acorn in this talk; other times the photographers painstakingly section off a square of ground and take photos working from the edges inwards to the middle in a spiral pattern, so that no footprints obscure the photos and change how the ground looks. They also have to get the lighting right, which sometimes means bringing in artificial lighting (for example, big fill lights like the ones used when photographing models) to soften shadows and make it easier for the software to recognise edges and work out where vertices should go.
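The spiral capture pattern is simple enough to sketch. This is my own illustration of the idea, not code from the talk: it lists the order in which to photograph an n × n grid of ground squares, edges first and spiralling inwards, so the photographer never has to stand on a square that still needs photographing.

```python
def spiral_capture_order(n):
    """Return (row, col) grid cells in the order they should be shot:
    around the outer edge first, then spiralling in towards the middle."""
    order = []
    top, bottom, left, right = 0, n - 1, 0, n - 1
    while top <= bottom and left <= right:
        # Top edge, left to right
        for c in range(left, right + 1):
            order.append((top, c))
        # Right edge, top to bottom
        for r in range(top + 1, bottom + 1):
            order.append((r, right))
        if top < bottom and left < right:
            # Bottom edge, right to left
            for c in range(right - 1, left - 1, -1):
                order.append((bottom, c))
            # Left edge, bottom to top
            for r in range(bottom - 1, top, -1):
                order.append((r, left))
        # Shrink the bounds and repeat on the inner ring
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
    return order

order = spiral_capture_order(3)
```

For a 3 × 3 patch this starts at a corner, walks the whole outer ring, and finishes on the untouched centre square — the same "work inwards so you never tread on your own shot" logic the talk describes.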
Extra Credits: The Uncanny Valley – https://youtu.be/9K1Kd9mZL8g
Extra Credits is one of my all-time favourite YouTube channels, and this video was my gateway introduction to the channel years ago when I first discovered it. It describes how Dr Masahiro Mori first encountered the uncanny valley while making and presenting robots, and it also talks about possible solutions to overcoming it. The video suggests either developing our technologies further to achieve better photo-realism in games (which links to photogrammetry for my essay), or making things stylised, because stylised characters are anthropomorphised versions of things that aren't supposed to look exactly human, such as the characters in Super Mario games. It's a couple of years old, but the content is still useful because it talks about photorealism and gives good insight into how the field was looking 5 years ago.
Update: I recently did more research to support my essay, however since it has been 2 weeks since I first posted this, I decided to make a new post for it, which you can see here!