Distributed Computing & Neural Network Training Together

06-06-2006, 05:43 AM
You know, I've been thinking about an idea recently: mixing new Neural Net techniques with the Distributed Computing projects of the past few years.

If you don't know what Distributed Computing is, check out this article (http://en.wikipedia.org/wiki/Distributed_computing) on Wikipedia.
A great example of a distributed computing project done right is SETI@Home (http://setiathome.berkeley.edu/)!

I mean, if serious computer professionals will run a program in the background to search for alien signals on the slim chance they'll find something among all that noise, then why on Earth wouldn't gamers run a small app in the same way to train competitive enemy bots and opponents?

It seems like such a great fit. Take, for example, a first-person shooter like Quake or Unreal, and let's assume the game engine uses trained NNs to 'run' the enemy bots. It's going to take ages to make increasingly challenging bots, as you'll have to constantly train them in several scenarios over and over and over again...

Ever play a game enough that the enemy becomes so predictable in its behaviour that you don't want to play it anymore, just based on that? But you love the game's genre, so you look for other games like it, and the same thing happens... predictable after a point. What if there were some system or mechanism that kept the enemy learning and adapting to new scenarios?

You can even apply this last idea to bots on your own 'team'. This can be an exciting benefit: some bots may have more experience in certain trained areas than others, and sometimes even more than you, if the game engine tracks such stats about your own player.

Now, some of these ideas can be implemented without Distributed Computing, but imagine the increased variety from using all that extra processing time to mix things up a bit. Variety is the game developer's and game player's mutual best friend: for the player it adds more experiences, and more experiences mean more replay value. This is a great aspect of the concept, as it pumps more value into your game design.
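As a rough illustration of the 'trained NN runs the bot' idea, here's a minimal sketch in Python. The sensor inputs, action names, and network sizes are all invented for the example, not taken from any real engine; a real bot would have many more sensors and a bias term per neuron.

```python
import math
import random

# Hypothetical sketch: a tiny feedforward net maps a bot's sensor readings
# (distance to player, health, ammo) to a score per possible action.
# Training (by GA or otherwise) would adjust the weights; here they are random.

def make_network(n_in, n_hidden, n_out, rng):
    """Random weights for a one-hidden-layer net, stored as nested lists."""
    return {
        "w1": [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)],
        "w2": [[rng.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)],
    }

def forward(net, inputs):
    """Feed sensor inputs through the net; returns one score per action."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, inputs))) for row in net["w1"]]
    return [math.tanh(sum(w * h for w, h in zip(row, hidden))) for row in net["w2"]]

ACTIONS = ["attack", "flee", "patrol"]

rng = random.Random(42)
bot_brain = make_network(n_in=3, n_hidden=4, n_out=len(ACTIONS), rng=rng)

# Sensors: [distance to player, health fraction, ammo fraction]
scores = forward(bot_brain, [0.2, 0.9, 0.5])
action = ACTIONS[max(range(len(scores)), key=lambda i: scores[i])]
print(action)  # the bot's chosen behaviour for this game tick
```

The game engine would call `forward` each AI tick and act on the highest-scoring output; the distributed-training idea is about how those weights get good, not about this inference step.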

06-06-2006, 08:09 AM
I've been looking for the last 10 minutes for a few links that talk about some of the things you wrote, but wasn't able to find them :( I know they're somewhere in the NEAT forum (http://groups.yahoo.com/group/neat/)

For last year's PGD contest I wanted to create a game where you train fighter pilots to act differently => you get an enemy wing and you need to be able to adapt in order to win (the idea was a bit more complex, but that should have been the cool feature).

06-06-2006, 11:50 AM
Well what I'm talking about is nowhere to be found. :) It doesn't exist yet.

However, if you just want to find out about using Neural Nets in your AI systems, there are a whole bunch of places to look. It is a rather large topic in itself, never mind the level at which I'm talking about using it here.

One of the first places that made it 'click' for me was the AI Junkie site (http://www.ai-junkie.com/). Matt Buckland (aka Fupster) even wrote a couple of books (http://www.ai-junkie.com/books/index.html) on the subject called 'Programming Game AI by Example' and 'AI Techniques for Game Programming'.

Both are good reads, though the code is in C++. However, I believe the one you'd find most interesting is the second, which directly covers using Genetic Algorithms to train Neural Networks. It even has some information about NEAT in the last chapter, 'Evolving Neural Net Topologies'.
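The core technique that book covers, evolving a neural net's weight vector with a genetic algorithm instead of backpropagation, can be sketched roughly like this. The fitness function, population sizes, and genome length here are all made up for illustration; in a real game the fitness of a weight vector would come from running the bot it defines through a scenario.

```python
import random

# Toy sketch of GA-trained NN weights: the genome IS the flat weight vector.
# fitness() is a stand-in for "how well did a bot with these weights play?"

def fitness(weights):
    # Illustrative objective: reward weights close to a hidden target pattern.
    target = [0.5] * len(weights)
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def evolve(pop_size=30, genome_len=8, generations=50, rng=None):
    rng = rng or random.Random(0)
    pop = [[rng.uniform(-1, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)    # one-point crossover
            child = a[:cut] + b[cut:]
            # small Gaussian mutation on ~20% of the weights
            child = [w + rng.gauss(0, 0.1) if rng.random() < 0.2 else w
                     for w in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # approaches 0, the optimum, as generations pass
```

Note this evolves weights for a fixed topology; NEAT, which the last chapter touches on, also evolves the network structure itself.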

06-06-2006, 12:49 PM
I know about the AI Junkie site; it's even in my "(mini)Resource center" thread :wink:

Can you elaborate on your idea, focusing on a specific application?

06-06-2006, 07:49 PM
My professor at uni was looking into using Genetic Algorithms to breed neural nets that solved particular problems. The neural nets were delivered to distributed PCs and run against the problem; the results were then fed back to the "main" server, and the next population was generated until a fixed limit was reached.

From what I saw it worked really well. In one test it took over about 500 PCs; now that was a sight to see :D
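The master/worker loop described above can be sketched as follows. Everything here is a hypothetical stand-in: the workers are simulated in-process, and in a real system `evaluate_on_worker` would be a network call that ships the genome to a remote PC and waits for its score.

```python
import random

# Sketch of the server-side loop: send each genome out for evaluation,
# collect fitness scores, breed the next generation, repeat to a fixed limit.

def evaluate_on_worker(genome):
    """Stand-in for running a net against the problem on a remote PC."""
    return -sum((g - 1.0) ** 2 for g in genome)   # toy objective: all genes -> 1

def next_generation(scored, rng):
    """Breed a new population from (genome, fitness) pairs."""
    scored.sort(key=lambda gf: gf[1], reverse=True)
    parents = [g for g, _ in scored[: len(scored) // 2]]
    children = []
    while len(parents) + len(children) < len(scored):
        a, b = rng.sample(parents, 2)
        # averaging crossover plus small Gaussian mutation
        child = [(x + y) / 2 + rng.gauss(0, 0.1) for x, y in zip(a, b)]
        children.append(child)
    return parents + children

rng = random.Random(1)
population = [[rng.uniform(-2, 2) for _ in range(6)] for _ in range(20)]

for generation in range(40):                       # fixed limit, as in the post
    scored = [(g, evaluate_on_worker(g)) for g in population]  # farmed-out work
    population = next_generation(scored, rng)

best = max(population, key=evaluate_on_worker)
print(evaluate_on_worker(best))
```

With hundreds of PCs, the expensive part (evaluating each genome) runs in parallel, while the cheap breeding step stays on the main server; that split is exactly what makes GA training such a natural fit for distributed computing.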