Unreal Engine for Non-Games Applications

15 02 2008

Dallas Cowboys

Via Develop comes news of the Unreal Engine being licensed to create walkthroughs of stadiums. Mark Rein’s February piece for Develop quotes the licensees:

“It renders 30 frames per second instead of one per hour – that’s 3,000 times faster than traditional animation methods. This makes it a first (and only) in the architectural industry.”

(The image above shows a non-realtime, non-UEngine render of the new Dallas Cowboys stadium).

Epic, as well as a startup or two, really started pushing this stuff around 2002/2003. I’ve seen architecture students dismiss earlier versions of the engine as completely impractical for anything related to professional architecture, but realtime rendering, interactivity and smoother integration with third-party 3D packages were bound to push engines in this direction. It’s going to be an interesting transition to watch.





Games Stories Should Remain Simple?

23 08 2007

Coverage of the storytelling panel at Leipzig is up on GamesIndustry. The overall conclusion seems to be that game stories shouldn’t become more complex, because games can’t deliver on that complexity:

For Rolston, when asked about whether games should try to be less linear and more complex with their storylines, it was “the worst idea I ever heard,” because “games aren’t any good at it.”

The reason for this is down to “our inability to pay off on all the choices that there should be available. It’s so difficult to make a genuinely complex dramatic choice.”

Bates echoed the sentiment, and compared the problem for videogames to that of a fiction novel.

“As an author of a story you have to push a character into doing things it wouldn’t want to do in order to grow the character. As a game designer it’s not fair to make the player have to do that.”

For Rolston the most effective method comes with the use of ambiguity – a history suggested by non-specific ruins or artefacts, for which the player is able to supply his or her own narrative.

But Bates was sceptical on that point, and was critical of most backstory execution today.

“What passes for story in most games is just revealed backstory, and that is really that. It can provide some context, but fundamentally it’s uninteresting. I want what’s in my mind to poison your mind, and that’s not going to happen with ambiguity.”

All interesting stuff, and perfectly valid criticism of the state of games right now, but what they’re pointing to with the idea of more advanced stories seems to be a very different type of game to anything we’ve so far known and loved as “games”. That’s not a bad thing at all, but when you look at just how poor we humans are at evaluating probabilities, as well as our tendency to post-rationalise bad but costly decisions, it points to a form of game that could be a baffling and introspective experience.

Certainly closer to “art” than “game”, and it’s probably not the kind of entertainment anyone would want to grab hold of for five minutes on the tube, or an hour after the kids have gone to bed.





Quantic Ambitions

10 08 2007

This interview with Quantic Dream passed us by in the run-up to the Develop Conference, but it’s worth reading. They have plenty to say about games’ maturity as a medium:

Commented Cage: “Our industry is now ready to reach a more grown-up and wider audience. We will continue to produce games for teenagers, but not all people want to shoot and drive. It is now time for our media to reach maturity. We need to become more ambitious, more creative; we need to dare to explore new directions and to take risks. I am convinced that great rewards will be found following that route.”

It also has an interesting detail about their business model:

Having a mocap service is an interesting way to offset the long-term development business with a short-term contract one working for other companies. Was that the intention when you started offering motion capture to third parties?

GDF: The only way to reach the level of quality we were targeting with mocap and to produce the massive volumes of data required by our format for a reasonable cost was to own a system. It gave us time to work on R&D, try different solutions, and most of all, to integrate mocap in our pipeline properly, which very few studios in the world can do. Very quickly, the expertise we obtained interested other companies, and we started working for movies, commercials or games, which in turn contributed to our R&D by supporting our financial effort.





Uncanny

8 03 2007

I’m seeing the uncanny valley everywhere right now. It’s something I wrote about in the context of visual aesthetics 18 months ago, but since then I’ve realised it has a much wider scope. Watch the beginning of this video (ignore the Havok physics title; it’s Euphoria by NaturalMotion):

A self-righting Indy with gripping hands impresses me… but it doesn’t fool me.

Similarly, Image Metrics have been showing some astounding work recently. Looking at the CGI head on the latest Develop cover, I am once more impressed, but once again not fooled. There are all sorts of subtle mismatches in texture and light that add up to it not quite being human.

Likewise, when set against speech bubbles, the body language of these avatars in PSHome is reminiscent of stereotypical cult members.

Everything that might be wrong with a digital representation of a human is also potentially wrong with a digital representation of anything else. The difference is in us: a significant amount of the human brain is specialised for evaluating human appearance and behaviour.

As Scott McCloud pointed out in Understanding Comics, we can infer a face from a circle, two dots and a line. Put a circle with a big dot in it inside any random shape, and it will suddenly tend to resolve into a face. We can recognise faces made of fewer than 100 pixels.

We intrinsically hold representations of humans to higher standards of realism than, say, rocks and trees. If we weren’t human, the level NaturalMotion and Image Metrics are at might fool us. We are though, and it doesn’t, even as it makes us go “oooh”.

It will all get better of course. Simulating one component of the unconscious at a time, we’re building more realistic AI. At some point though, they will stop looking like good AI and begin to look like bad actors.

(Hello, I am TK422, otherwise known as David Hayward. I’m going to be posting here alongside Toby. There may be some crossover between this and my own blog, Functional Autonomy, but overall I’ll be putting different content up on each).