I’m seeing the uncanny valley everywhere right now. It’s something I wrote about in the context of visual aesthetics 18 months ago, but since then I’ve realised it has a much wider scope. Watch the beginning of this video (ignore the Havok physics title; it’s Euphoria by NaturalMotion):
A self-righting Indy with gripping hands impresses me… but it doesn’t fool me.
Similarly, Image Metrics have been showing some astounding work recently. Looking at the CGI head on the latest Develop cover, I am once more impressed, but once again not fooled. All sorts of subtle mismatches in texture and light add up to something not quite human.
Likewise, when set against speech bubbles, the body language of these avatars in PSHome is reminiscent of stereotypical cult members.
Everything that might be wrong with a digital representation of a human is also potentially wrong with a digital representation of anything else. The difference is in us: a significant amount of the human brain is specialised for evaluating human appearance and behaviour.
As Scott McCloud pointed out in Understanding Comics, we can infer a face from a circle, two dots and a line. Put a circle with a big dot in it inside any random shape, and it will suddenly tend to resolve into a face. We can recognise faces made of fewer than 100 pixels.
We intrinsically hold representations of humans to a higher standard of realism than, say, rocks and trees. If we weren’t human, the level NaturalMotion and Image Metrics have reached might fool us. We are, though, and it doesn’t, even as it makes us go “oooh”.
It will all get better, of course. Simulating one component of the unconscious at a time, we’re building more realistic AI. At some point, though, these simulations will stop looking like good AI and begin to look like bad actors.
(Hello, I am TK422, otherwise known as David Hayward. I’m going to be posting here alongside Toby. There may be some crossover between this and my own blog, Functional Autonomy, but overall I’ll be putting different content up on each).