Ubisoft: Game Campus

13 08 2007

Yves Guillemot speaking at the Edinburgh Interactive Festival today:

To make sure we can make blockbuster products we are recruiting about 500 people a year, and plan to go to 600 to 900 in the next few years. We can’t recruit this many people, so we have to develop them. We have a campus in Montreal, and will open campuses elsewhere (China, etc.) to allow us to develop people so that they can go straight to developing games. We need to have the possibility to take risks and to fail. The more risk you take, the more chance you have to develop better products.

We are creating a CGI studio in Montreal and elsewhere. [Talks about cross-pollination of CGI movie professionals and games industry]

Later on audiences:

We think people will learn five times faster with a Nintendo DS than with a book, if we ensure that the content is interesting and the right match for the user.

Liveblogged from EIF at Ascii Dreams.





Quantic Ambitions

10 08 2007

This interview with Quantic Dream passed us by in the run-up to the Develop Conference, but it’s worth reading. They have plenty to say about the maturity of games as a medium:

Commented Cage: “Our industry is now ready to reach a more grown-up and wider audience. We will continue to produce games for teenagers, but not all people want to shoot and drive. It is now time for our media to reach maturity. We need to become more ambitious, more creative; we need to dare to explore new directions and to take risks. I am convinced that great rewards will be found following that route.”

It also has an interesting detail about their business model:

Having a mocap service is an interesting way to offset the long-term development business with short-term contract work for other companies. Was that the intention when you started offering motion capture to third parties?

GDF: The only way to reach the level of quality we were targeting with mocap and to produce the massive volumes of data required by our format for a reasonable cost was to own a system. It gave us time to work on R&D, try different solutions, and most of all, to integrate mocap in our pipeline properly, which very few studios in the world can do. Very quickly, the expertise we obtained interested other companies, and we started working for movies, commercials or games, which in turn contributed to our R&D by supporting our financial effort.





Uncanny

8 03 2007

I’m seeing the uncanny valley everywhere right now. It’s something I wrote about in the context of visual aesthetics 18 months ago, but since then I’ve realised it has a much wider scope. Watch the beginning of this video (ignore the Havok physics title; it’s Euphoria by NaturalMotion):

A self-righting Indy with gripping hands impresses me… but it doesn’t fool me.

Similarly, Image Metrics have been showing some astounding work recently. Looking at the CGI head on the latest Develop cover, I am once more impressed, but once again not fooled. There are all sorts of subtle mismatches in texture and light that add up to it not quite being human.

Likewise, when set against speech bubbles, the body language of these avatars in PSHome is reminiscent of stereotypical cult members.

Everything that might be wrong with a digital representation of a human is also potentially wrong with a digital representation of anything else. The difference is in us: a significant amount of the human brain is specialised for evaluating human appearance and behaviour.

As Scott McCloud pointed out in Understanding Comics, we can infer a face from a circle, two dots and a line. Put a circle with a big dot in it inside any random shape, and it will suddenly tend to resolve into a face. We can recognise faces made of fewer than 100 pixels.

We intrinsically hold digital humans to higher standards of realism than, say, rocks and trees. If we weren’t human, the level NaturalMotion and Image Metrics are at might fool us. We are, though, and it doesn’t, even as it makes us go “oooh”.

It will all get better of course. Simulating one component of the unconscious at a time, we’re building more realistic AI. At some point though, they will stop looking like good AI and begin to look like bad actors.

(Hello, I am TK422, otherwise known as David Hayward. I’m going to be posting here alongside Toby. There may be some crossover between this and my own blog, Functional Autonomy, but overall I’ll be putting different content up on each).