Emote Get £600K

2 10 2008

Midlands games developers seem to be well ahead of the rest of the country in utilising funding offered by the Technology Strategy Board. Following Monumental's recent award, Emote have just announced £600K from the board as part of a £1.3M project to improve AI in networked environments.

They are also apparently partnering with Imperial College London for the project. Not only is this a more intelligent approach than most studios take; Imperial are also a heavyweight university for CS, with well-respected courses that make students work hard. Kudos to Emote, who are only about 25 people strong.

(via Develop)

(CC image for this post by Nadya Peek, of a rather interesting AI project involving networked Darth Vader helmets apparently composing music. YouTube links in the Flickr comments.)





Kynogon Head Autodesk Games Group

26 03 2008


This is quite an interesting tidbit on convergence and the games industry: Autodesk are buying AI middleware developers Kynogon, but it’s the latter who’ll be taking the lead on games technology. Quote:

He says this as he and fellow founder Pierre Pontevia will head up Autodesk’s new games technology group, with a remit ‘to bridge the gap between Autodesk’s expertise in 3D asset creation and real-time engines’.

This is part of a growing trend for gaming and middleware companies to be bought by technology companies. The opening paragraph of the full interview:

Intel (annual revenue $38 billion) has Havok and the Project Offset game engine. Nvidia (annual revenue $3.8 billion) has Ageia and, if you cast your mind back a couple of years, graphics optimisation specialist Hybrid too. It should be no surprise that big technology corporations are interested in game middleware.

As this Ars Technica article on Intel and Project Offset points out, acquiring gaming technology allows them to have a good old poke at the boundaries:

Intel has remained mum about its plans for Project Offset. It’s apparent, however, that the company is taking an interest in all of the separate technologies that drive 3D gaming and game development. This probably has less to do with gaming, per se, and more to do with the difficulty of extracting parallelism from one set of instructions while an entirely separate set of calculations is already taking place. With the number of cores-per-CPU continuing to grow, Intel faces the challenge of keeping all those cores busy—and that’s before we consider Larrabee, which is designed around its own multicore architecture. While we probably won’t see “The Adventures of Pentia” on the market anytime soon, the research driven by acquisitions like this could lead to significant advancements in parallel programming and, by extension, multicore efficiency.
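If the parallelism problem sounds abstract, here's a rough, hypothetical sketch (mine, not Intel's or Project Offset's) of the kind of task-level parallelism being described: two independent game subsystems updated on separate threads each frame, so more than one core has something to do. The subsystem names and workloads are invented for illustration.

```cpp
// Hypothetical sketch, not from Intel or Project Offset: two independent
// game subsystems updated on separate threads each frame. Names and
// numbers are invented for illustration.
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

struct World {
    std::vector<float> positions;  // touched only by the physics update
    std::vector<int>   aiStates;   // touched only by the AI update
};

void updatePhysics(World& w, float dt) {
    for (auto& p : w.positions) p += 1.0f * dt;  // stand-in for integration
}

void updateAI(World& w) {
    for (auto& s : w.aiStates) s = (s + 1) % 3;  // stand-in for a decision step
}

int main() {
    World world{std::vector<float>(1000, 0.0f), std::vector<int>(1000, 0)};
    const float dt = 1.0f / 60.0f;

    for (int frame = 0; frame < 3; ++frame) {
        // The two updates read and write disjoint data, so they can run
        // concurrently and keep more than one core busy.
        std::thread physics(updatePhysics, std::ref(world), dt);
        std::thread ai(updateAI, std::ref(world));
        physics.join();
        ai.join();
        std::printf("frame %d updated\n", frame);
    }
    return 0;
}
```

The hard part, as the quote says, isn't spinning up two threads like this; it's finding enough genuinely independent work to keep every core fed once you have dozens of them.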

(CC chess image by gabork)





Geomerics

13 02 2008


There they go, chipping away a little more at the uncanny valley. Not long now until we start climbing the other side, but all those photo-real, ersatz humans are going to act in a downright spooky way unless we can bootstrap AI to match the standards of their appearance.





AI in Games Network

22 01 2008


Great post from Mark Morris of Introversion about the inaugural meeting of the AI in Games Network. So far it involves three UK universities meeting a load of game developers. It’s difficult to pick out a highlight from the post (really, the whole thing is worth reading), but I especially like this part, where he doesn’t mince words about the current state of industry-academic collaboration:

So why aren’t we seeing all these wonderful techniques in games now? Is it the fault of the men in T-shirts or those in white coats and what do we need to do to ease the passage of research from the lab to production-level video games? Having listened to the arguments from both sides, I’m going to take the blame and say it is industry that is providing the barriers. We constantly criticise the academics and say that their techniques would not work on real games, yet when they ask for some source code we tell them that there are “IPR” issues or that we do not have time to work with them. The very nature of research demands that most attempts will fail (or will almost certainly take longer than expected) and so we tell them that there is too much risk to let them near our precious games.

I’d like to see this change and I have a plan. Once we have finished and shipped a game, we send the source code across to the boffins with a list of super-hard problems that we failed to solve. They then use science to fix those problems and come back to us with a demonstration of their technique working. We then compare the new version of the game with the old and see if the AI has genuinely made an improvement. If it has, then we consider implementing it in the next game. It sounds simple and of course there will be details to work out, but that’s why we have an AI in Games Network.

Oh, and if anyone says it’ll never work – tell them the DEFCON AI is currently being turned into HAL 3000 by separate teams at both Imperial and Bradford.

Kudos to Introversion for thinking beyond the product and letting academics tinker with their code. I look forward to seeing what comes out of the AI in Games Network.

(Header image is made from Chris Delay’s procedural content generation write up).





Artificial Intelligence in the Uncanny Valley

31 05 2007

Gamasutra just posted a feature I wrote for them: Uncanny AI: Artificial Intelligence In The Uncanny Valley

There are lots of things across all media that can already fool us. The crucial question, though, is how well do they do it? Distance and brevity obscure all manner of flaws, but at some point in a game, the player can always get closer or look for longer.





“Good AI is what you see, not how it works.”

27 04 2007

… quoth Peter Molyneux in this Guardian piece on AI:

That said, Molyneux doesn’t believe AI can be solely responsible for intense, dynamic emotional experiences; they need to be married with what he calls “smoke and mirrors”.

“You have to define what games developers call AI,” he says, “as opposed to academic AI. There’s actually very little true, academic AI in games. If I go along to universities and talk to professors of AI, they sort of laugh at us and our crude attempts at real-world AI. But my promise has always been, ‘Well, good AI is what you see, not how it works.’ Whether that’s a mixture of true AI and an illusion is neither here nor there, because it’s really about what it brings to the game.”

Steve Grand chips in:

“AI isn’t so much unappreciated as nonexistent,” he says. “Most of what counts as AI in the games industry is actually a bunch of ‘IF/THEN’ statements. If a computer character doesn’t learn something for itself then the programmer must have told it what to do, and anything that does exactly what it’s told and nothing else is not intelligent. This is changing, and neural networks and other learning systems are beginning to creep in. But games programmers tend to devalue the phrase ‘artificial intelligence’.

“This is mostly because the importance of AI in computer games is now widely recognised, and hence any attempt to implement it – including Creatures – gets hyped up pretty quickly. As graphics have improved, the behaviour of characters has got more and more embarrassing. When characters looked cartoon-like, any vaguely lifelike behaviour was impressive, but now that characters have fluid movements, realistic textures and complex facial expressions, they tend to engage different circuits in the players’ brains. The better the graphics become, the worse the behaviour looks. So the need for good AI is well-appreciated. The snag is that none of us knows how to make it work yet.”

Emphasis ours. The article is a good snapshot of where game AI is at right now, and many similar points were included in Adam Russel’s Skillset talk on procedural AI last month.
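For anyone wondering what “a bunch of ‘IF/THEN’ statements” looks like in practice, here's a minimal, made-up sketch of the kind of hand-authored rule set Grand is describing; the states and thresholds are invented, and nothing here comes from Creatures or any shipped game.

```cpp
// Made-up sketch of hand-authored IF/THEN game "AI": every reaction is
// spelled out by the programmer, nothing is learned. The states and
// thresholds are invented for illustration.
#include <cstdio>

enum class GuardState { Patrol, Chase, Attack, Flee };

GuardState decide(float distanceToPlayer, float health) {
    if (health < 20.0f)           return GuardState::Flee;    // too hurt to fight
    if (distanceToPlayer < 2.0f)  return GuardState::Attack;  // in melee range
    if (distanceToPlayer < 15.0f) return GuardState::Chase;   // player spotted
    return GuardState::Patrol;                                // nothing seen
}

int main() {
    std::printf("%d\n", static_cast<int>(decide(10.0f, 80.0f)));  // 1 = Chase
    std::printf("%d\n", static_cast<int>(decide(30.0f, 10.0f)));  // 3 = Flee
    return 0;
}
```

Every behaviour this guard will ever exhibit is already written down by the programmer, which is exactly Grand's point: something that does only what it's told, and nothing else, isn't intelligent.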