# Thread: New ideas for AI in games?

1. Originally Posted by Eric
I'm a fan of PID controllers in AIs (http://en.wikipedia.org/wiki/PID_controller) for a whole lot of behavioral aspects.
Seems interesting. Though the math on the wiki page does not make it seem simple.

2. Hmm, those PIDs do look interesting indeed, especially for difficulty manipulation... The concept is a lot like my first 'AIs': a separate thread/timed loop that ran a similar sort of 'algorithm' every cycle.

3. Originally Posted by de_jean_7777
Seems interesting. Though the math on the wiki page does not make it seem simple.
Yeah, but don't be fooled by the sigmas and other fancy math symbols - there is a pseudo-code snippet that is probably a lot simpler to understand (for non-mathematicians), and if you restrict yourself to the PI variant (drop the "derivative" term and its use in the "output" line), it becomes trivial.

The pseudo-code doesn't contain it, but in practice you'll want to either clamp the integral term or dissipate it over time (dissipation works well IME for game AI purposes). Assuming your game loop already has some kind of fixed timestep and you compute the output only once per frame, you can use something like this:

Code:
```
function TControllerPI.Output(setPoint, feedBack : Float) : Float;
var
  error : Float;
begin
  error := setPoint - feedBack;
  // dissipate the accumulated integral, then add the new error
  FIntegral := FIntegral * FKdissipation + error;
  Result := FKp * error + FKi * FIntegral;
end;
```
setPoint is the value you desire (an angle, a speed, a position...)
feedBack is the current value (or delayed value) in the game/simulation
FKp & FKi are the proportional and integral gains (to be tuned)
FKdissipation is the integral dissipation gain (between 0 and 1, start from 0.9 and tune up or down)

And output is the command to send to the simulation (steering, pressure on accelerator pedal, etc.).
If things go in the wrong direction, just negate the output or your gains.

Alternatively you can also use

Code:
`Result := FKp * (error + FKi * FIntegral);`
Some people find that variant easier to tune.

For tuning, the wikipedia article gives good info in http://en.wikipedia.org/wiki/PID_con...#Manual_tuning and http://en.wikipedia.org/wiki/PID_con...Nichols_method

Though for game AI, you don't need (or even want) very precise tuning - if the control is too tight, it looks robotic.

You can cascade PID controllers, or have PID controllers feed into other PID controllers' gains. That allows them to handle more complex control situations (like when there is a lot of inertia), or can be used to simulate an AI that is getting "angry" over time (by ramping up the gains, you can get jerky or twitchy-looking movements).
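Cascading can be as simple as wiring one controller's output into another's setpoint. A minimal sketch in Python (the thread's code is Pascal, but the logic carries over directly; the class name, function names and gain values are mine, not from the post) - an outer position controller produces a desired velocity, and an inner velocity controller produces the actual command:

```python
class PIController:
    """Minimal PI controller with integral dissipation, as described above."""
    def __init__(self, kp, ki, k_dissipation=0.9):
        self.kp = kp
        self.ki = ki
        self.k_dissipation = k_dissipation  # 0..1, leaks the integral over time
        self.integral = 0.0

    def output(self, set_point, feedback):
        error = set_point - feedback
        self.integral = self.integral * self.k_dissipation + error
        return self.kp * error + self.ki * self.integral

# Cascade: the outer (position) controller's output becomes the inner
# (velocity) controller's setpoint.
position_pi = PIController(kp=0.5, ki=0.05)
velocity_pi = PIController(kp=1.0, ki=0.1)

def steering_command(target_pos, current_pos, current_vel):
    desired_vel = position_pi.output(target_pos, current_pos)
    return velocity_pi.output(desired_vel, current_vel)
```

Tuning the outer loop slower (smaller gains) than the inner loop is the usual starting point, since the inner loop has to settle before the outer one can.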

4. In a number of ways, I think the more complex you make an AI, often the less 'real' it feels. To borrow from Terry Goodkind, you often have to keep the "Wizard's First Rule" in mind: People are stupid. Sometimes the simpler your logic, the more intelligent it feels to the player.

A great example of this is the logic of the ghosts in Pac Man (something I studied a good deal in my clean-room implementation of it at 3/8ths scale) -- it's simple, but they often seem quite smart... when it's really a DUMB algorithm made up of a few simple rules.

1) ghosts cannot normally reverse direction.

2) ghosts cannot turn 'upwards' in four specific locations

3) ghosts have three modes: scatter, follow and flee.

4) When switching between most modes they immediately reverse course (the only exception to rule #1).

5) Exiting 'flee' does NOT change their direction.

6) in scatter, each ghost tries to go to its home corner.

7a) Red Ghost aims for the player. At 2/3 of pellets remaining, speed +5%; at 1/3, speed +10%, and he remains 'stuck' in pursuit. (A fast-pursuit Blinky is called a "Cruise Elroy" -- nobody knows why.)

7b) Pink Ghost aims 4 tiles in front of the player; an overflow bug in the code makes it so that if the player is facing up, it actually aims up 4 and left 4.

7c) Cyan Ghost is a bit more complex. Its target tile is calculated by taking a point 2 tiles in front of the player, making a vector from the red ghost's position to that point, and then doubling it -- the target ends up as far beyond that point as the red ghost is behind it.

7d) If the Orange Ghost is more than 8 tiles from the player, he aims for the player; within 8 tiles, he heads back to his home corner. He's fickle. This actually makes the bottom-left corner one of the hardest, because if you're within 8 tiles of that corner, 'Clyde' is gonna stay there.

... and that's really all there is to it. My own Pac Man ripoff mixes it up by adding a random 1-in-6 chance of ignoring rule 2, and a random 1-in-6 chance of ignoring its normal pursuit rules at intersections. (OK, I've got a thing for 1d6.)
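Those targeting rules really do fit in a few lines. A sketch in Python (tiles are (x, y) with y increasing downward, `facing` is a unit vector, and all the function names are mine, not from any Pac Man source):

```python
def red_target(player):
    # Blinky: aim directly at the player's tile.
    return player

def pink_target(player, facing):
    # Pinky: 4 tiles ahead of the player. The overflow bug means that
    # when the player faces up, the target is also shifted 4 left.
    px, py = player
    if facing == (0, -1):  # facing up
        return (px - 4, py - 4)
    dx, dy = facing
    return (px + 4 * dx, py + 4 * dy)

def cyan_target(player, facing, red_pos):
    # Inky: take the point 2 tiles ahead of the player, then double the
    # vector from the red ghost to that point.
    px, py = player
    if facing == (0, -1):  # the same overflow bug affects the pivot
        pivot = (px - 2, py - 2)
    else:
        dx, dy = facing
        pivot = (px + 2 * dx, py + 2 * dy)
    rx, ry = red_pos
    return (2 * pivot[0] - rx, 2 * pivot[1] - ry)

def orange_target(player, orange_pos, home_corner):
    # Clyde: chase when far away, retreat home when within 8 tiles.
    px, py = player
    ox, oy = orange_pos
    if (px - ox) ** 2 + (py - oy) ** 2 > 8 ** 2:
        return player
    return home_corner
```

Each ghost then simply picks, at every intersection, the legal turn that minimizes straight-line distance to its target tile -- that's the whole "pursuit algorithm".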

That's actually pretty simple... and so good that every time a new game uses the same basic logic, it's 'praised' for its AI. Take the original F.E.A.R. -- it uses 5 modes (pursue, lead, cover, flee and scatter) with remarkably similar logic in terms of following or getting ahead of the player.

Scripted logic feels intelligent -- because it has a plan -- throw in just a hair of randomness to make it less predictable, and you're golden.

You also have to work off people's perceptions -- the smartest "lag pursuit" will often feel dumb as a rock because it can't actually hit you, while the dumbest lead-pursuit AI can be really challenging because it's always aiming not for where you are, but where you're going. Learned a lot about that in ACM school at Eglin. If you were making a combat flight sim that actually had physics (sorry Airblast, but... no), having the AI ride the throttle to maintain airspeed in turns, keep its energy high while maintaining corner velocity, use altitude to trade kinetic energy for potential and vice versa, and turn not at the player but in lead pursuit -- these aren't 'complex' concepts; they're easy to implement and could easily make the AI a serious threat to the human player.

Though a lot of times it comes down to NOT trying to Hollywood it; blazing along as fast as possible in a dogfight is what keeps you alive -- not idiotic nonsense like "bang the brakes, he'll fly right by"...

5. In a number of ways, I think the more complex you make an AI, often the less 'real' it feels. To borrow from Terry Goodkind, you often have to keep the "Wizard's First Rule" in mind: People are stupid. Sometimes the simpler your logic, the more intelligent it feels to the player.
Actually, I think if you invest too much in AI, it will feel predictable or "wired". In real life, many things are based on very simple rules, or at least simple rules dominate what an external observer sees (e.g. flowers open during the day and close for the night), so if you keep the AI simple yet flexible, it will be quite joyful to play with.

This is why in some schools the entire "AI" theme was dropped and replaced by "Intelligent Systems", which lately has been displaced by database- and math-related courses, since in many cases "classical" AI is either an application of probability theory or of one or more search algorithms.

Scripted logic feels intelligent -- because it has a plan -- throw in just a hair of randomness to make it less predictable, and you're golden.
This also happens elsewhere: when you are reading a book, which you can think of as a "script", you imagine the entire scene, the characters and so on. I think this is because the typical situations we can think of can be scripted, and when a script is reproduced, it is quite easy to mistake the scripted part for actual reality.

6. In a number of ways, I think the more complex you make an AI, often the less 'real' it feels.
I'd say that one of the problems in having an AI that reacts to a lot of factors is prioritizing them and predicting all the exceptions. An AI that contains a clause "if you have less than X hp, flee, else attack" seems a good idea - but it is clearly absurd if the unit is totally surrounded: it will get killed anyway, and if it continued to attack instead of trying to run away, it would at least have dealt some damage (maybe even killed someone).

7. Originally Posted by Super Vegeta
I'd say that one of the problems in having an AI that reacts to a lot of factors is prioritizing them and predicting all the exceptions. An AI that contains a clause "if you have less than X hp, flee, else attack" seems a good idea - but it is clearly absurd if the unit is totally surrounded: it will get killed anyway, and if it continued to attack instead of trying to run away, it would at least have dealt some damage (maybe even killed someone).
That's where that bit of randomness thrown in can really help in terms of ignoring triggers. The majority of people, when it comes to fight or flight, pick flight -- to incorrectly quote Heraclitus (actual attribution unknown):

"Out of every one-hundred men, ten shouldn't even be there, eighty are just targets, nine are the real fighters, and we are lucky to have them for they make the battle. Ah, but the one, one is a warrior, and he will bring the others back"

Or the wisdom of Zathras:
Ah, but for the one. No, not The One. Draal gave Zathras list of things not to say. This was one. No. Um, not good. Not supposed to mention One or The One. Oh. Uh, you never heard that. The One leads us. The One tells us to go, we go. We live for The One. We would die for The One.

(you know it's bad when you can hear a character pronouncing an uppercase "The")

Taking cues from NPC systems in RPGs can also help -- it's part of why CRPGs are popular. A 'fear' scale, for example, that's weighted/rolled against a character's willpower... a critical success being the equivalent of Patton's "Bravery is just fear holding on a second longer". Normal successes on willpower could add to a 'rage' scale that enhances the next roll... basically, fail the roll and they flee (or 'cower' if cut off); pass the roll and they hold. Hold until relieved... hold... until relieved... On a critical success, they do a balls-to-the-wall charge, no matter how stupid it is. Critical failure... well... shell shock/battle fatigue/operational exhaustion/PTSD/Patton is gonna slap you silly back at the aid station. You do it on, say, a scale of 2..12 via 2d6, so that critical success (12) and critical failure (2) are 1-in-36 apiece odds-wise, and you have a very simple system that turns a handful of behavioral states into an unpredictable but believable one.
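That 2d6 scheme is only a few lines of code. A sketch in Python -- the state names and the exact pass condition (roll plus rage versus willpower) are my illustration of the idea above, not a standard system:

```python
import random

def morale_check(willpower, rage=0, roll=None):
    """Roll 2d6 (+ accumulated rage) against willpower and pick a state.

    A natural 2 is always a critical failure and a natural 12 always a
    critical success -- 1 in 36 apiece, as described above. Pass `roll`
    to override the dice (handy for testing).
    """
    if roll is None:
        roll = random.randint(1, 6) + random.randint(1, 6)
    if roll == 2:
        return "shell_shock"     # critical failure
    if roll == 12:
        return "berserk_charge"  # critical success: all-out charge
    if roll + rage >= willpower:
        return "hold"            # pass: hold; rage can ramp the next roll
    return "flee"                # fail: flee, or cower if cut off
```

Feeding each "hold" result back in as +1 rage gives you the escalation effect for free: a unit that keeps passing checks edges closer to the charge end of the scale.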

I think most anyone looking for inspiration on AI and player behaviors can learn a lot from a pen-and-paper RPG system, especially since a computer can make tracking all those numbers and 'rolling' for results simple -- often simpler than the convoluted systems I've seen in some folks' code for handling what enemy units do. Studying statistics is also a great idea; take that quote above about 'out of every 100 men' -- you could implement it quite easily, giving you 100 guys with the same basic stats but varying behaviors.

8. Originally Posted by Lifepower
AI has previously been a very popular topic, and on certain occasions there was even a degree you could obtain in institutions. However, it didn't live up to the expectations and was later renamed to "Intelligent Systems", and is now an area of science that has limited applications, being more like a meta-science itself.
I think the problem is that the concept over-hypes itself already in its name. But I consider "AI" and "game AI" to be very different things.

I wrote two whole chapters about game AI in "Tricks of the Mac Game Programming Gurus" in 1995, probably one of the longer texts about the subject at the time. But I didn't call it "game AI" because "AI" was such a hyped concept, I called it "behavior" and "environment". But it was about the usual stuff, basic behaviors (hunter, evader, patrol), path finding, FSMs, game state analysis...

Today I teach graphics and game programming and have similar material in my textbooks, but now I call it "game AI" -- and I actually consider the concept a nice contrast to AI. Game AI is well known to be limited, so we don't expect wonders, which is a good thing.

My favorite game AI techniques are flocking and influence maps. They can really produce interesting and convincing behaviors. But do spice it up with some randomness.
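An influence map, at its simplest, stamps each unit's strength onto a grid with a distance falloff; the sign of a cell then tells you who controls it and the magnitude how strongly. A toy sketch in Python (the falloff model and all the numbers are arbitrary choices of mine, not from the post):

```python
def influence_map(width, height, sources, falloff=0.5):
    """Build a grid of summed influences.

    `sources` is a list of ((x, y), strength) pairs; influence decays
    by `falloff` per tile of Chebyshev (chessboard) distance.
    Friendly units get positive strength, enemies negative.
    """
    grid = [[0.0] * width for _ in range(height)]
    for (sx, sy), strength in sources:
        for y in range(height):
            for x in range(width):
                d = max(abs(x - sx), abs(y - sy))  # Chebyshev distance
                grid[y][x] += strength * (falloff ** d)
    return grid
```

An agent can then, for instance, flee toward the neighboring cell with the highest friendly influence, or attack where enemy influence is weakest -- and, as the post says, a bit of randomness on top keeps it from looking mechanical.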

9. I'd love to have a PGD Challenge that is all about AI.

The thing I love about the whole topic: in some ways Ingemar is right (it's a bit overhyped) -- you can hack at it, and as long as you make it look like it's doing something intelligent, or behaving intelligently, then you have a pretty good game AI. It's something that can be easy to get into, but hard to master.

I think such a challenge would have players go up against the AI and try to "beat" it, OR create non-player AI characters/bots, etc., that have to work with the player in some way.

Designing the challenge rules might be as interesting as designing the games that go into the challenge itself.

10. Speaking of AI challenges, one just started on Monday. Sadly it's hosted in Finnish only. There have been many similar no-reward contests throughout the years -- lots of fun. They make up a new game and rules, and let people use whatever programming language they want. You send in source code only, for a console application, and the host builds a binary of it on a Linux test machine. Then they make the entries play against each other and count points.

Currently up is: http://www.ohjelmointiputka.net/kilp...nnus=valtapeli
Not sure if Google translates the page well, but I'll explain the idea briefly: a 16x16 board, where players fill 1 empty cell per turn. At the end, all solid areas are calculated, and you get exponentially more points the bigger an area is -- so 1 big connected area would give the most. The other AI/player is mostly harassing, trying to disconnect your areas.
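Scoring that kind of board boils down to flood-filling connected areas and weighting each one superlinearly. A sketch in Python -- the contest's exact formula isn't given above, so squaring the area size is my stand-in for "exponentially more points the bigger the area":

```python
def score(board, player):
    """Sum of (area size)^2 over 4-connected areas of `player`'s cells.

    `board` is a list of equal-length strings. Squaring makes one big
    connected area worth more than several small ones, matching the
    "bigger is exponentially better" idea described above.
    """
    h, w = len(board), len(board[0])
    seen = set()
    total = 0
    for y in range(h):
        for x in range(w):
            if board[y][x] != player or (x, y) in seen:
                continue
            # Flood-fill one connected area with an explicit stack.
            stack, size = [(x, y)], 0
            seen.add((x, y))
            while stack:
                cx, cy = stack.pop()
                size += 1
                for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                    if (0 <= nx < w and 0 <= ny < h
                            and (nx, ny) not in seen
                            and board[ny][nx] == player):
                        seen.add((nx, ny))
                        stack.append((nx, ny))
            total += size * size
    return total
```

With a scorer like this, a simple bot can already play greedily: for each empty cell, place, re-score both players, and keep the move with the best differential.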

