The problem is simply that when you maximize the window, a lot more drawing must take place. This is a bottleneck for the VCL (which is crappy at this), assuming that you're using it - if you are, think about using a better API such as DirectDraw, SDL, OpenGL, DelphiX... The extra drawing causes your game to slow down, which is noticeable because fewer frames get drawn.
What you want is time-based movement. If you use, for example, "move 3 pixels for each game frame drawn", then your game's speed depends directly on the frame rate, and the frame rate depends on how long each frame takes to draw - so the game runs at different speeds on different machines. Instead, base the movement on *time* rather than *frames*: for example, "move 20 pixels per second (not per frame)". This means your game runs at the same speed regardless of the frames per second. A machine that runs slowly will *move the characters at the same speed, only with more jerkiness*. The faster the computer, the smoother the animation - but both will end up in the same place at the same time!
Here's a quick example. Say you have a character that you want to move twenty pixels per second, and imagine two computers - one running at ten frames per second, the other at two frames per second (yes, they're very slow). Each update, we calculate how far to move the object based on the elapsed time. This is pretty simple, really: it's (time passed in milliseconds) / 1000, since there are 1000 milliseconds in a second. That gives you the fraction of a second that has passed, which you then multiply by the object's speed. On the fast machine, each update takes 100 milliseconds (that's the ten frames per second). The character therefore moves (100 / 1000) * 20 = 2 pixels per update, and after 10 updates it has moved 20 pixels! The same holds for the slower machine: each update moves it (500 / 1000) * 20 = 10 pixels, and after two updates it has moved 2 * 10 = 20 pixels - the same as the faster computer! You can see how this affects the display, too: the faster computer draws the character more often as it moves across the screen, which means smoother movement. The other computer shows only two frames, so the result is jerky but accurate.
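If you want to watch those numbers fall out of real code, here's a tiny console sketch (the program and its names, like SpeedPxPerSec and FrameMS, are made up purely for illustration):
[pascal]program TimeBasedDemo;
{$APPTYPE CONSOLE}

const
  SpeedPxPerSec = 20;  { desired speed: 20 pixels per second }

var
  FrameMS: Integer;
  Step, Total: Single;

begin
  { fast machine: 10 fps means 100 ms per frame, 10 frames in one second }
  FrameMS := 100;
  Step := (FrameMS / 1000) * SpeedPxPerSec;  { = 2 pixels per frame }
  Total := Step * 10;                        { 10 frames -> 20 pixels }
  WriteLn('Fast machine: ', Step:0:1, ' px/frame, ', Total:0:1, ' px total');

  { slow machine: 2 fps means 500 ms per frame, 2 frames in one second }
  FrameMS := 500;
  Step := (FrameMS / 1000) * SpeedPxPerSec;  { = 10 pixels per frame }
  Total := Step * 2;                         { 2 frames -> 20 pixels }
  WriteLn('Slow machine: ', Step:0:1, ' px/frame, ', Total:0:1, ' px total');
end.[/pascal]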
Remember to use Singles or Doubles instead of Integers for positions and speeds!
The basic idea:
Declare the types and variables you'll need:
[pascal]{ timeGetTime lives in the MMSystem unit }
uses MMSystem;

type
  TGameObject = record
    x, y: Single;            { Singles or Doubles, not Integers! }
    X_PixelsPerSec: Single;
    Y_PixelsPerSec: Single;
  end;

var
  AllGameObjects: array of TGameObject;
  OldTime: DWORD;  { or Int64 if you plan on using queryperf... }

{ do this once, before your main game loop starts
  (maybe at the end of FormCreate or whatever) }
OldTime := timeGetTime;  { or QueryPerformanceCounter, or whatever }

{ inside the timer }
procedure TForm1.YourTimer(Sender: TObject);
const
  OneMSec: Single = 1 / 1000;  { see notes below if you want queryperf... }
var
  NewTime: DWORD;
  UpdateSpeed: Single;
  i: Integer;
begin
  NewTime := timeGetTime;
  UpdateSpeed := (NewTime - OldTime) * OneMSec;  { seconds since last update }
  OldTime := NewTime;  { store this value for the next update }
  { update all your game objects: speed is in pixels per second,
    so each one moves speed * elapsed-seconds pixels }
  for i := 0 to High(AllGameObjects) do
  begin
    AllGameObjects[i].x := AllGameObjects[i].x + (UpdateSpeed * AllGameObjects[i].X_PixelsPerSec);
    AllGameObjects[i].y := AllGameObjects[i].y + (UpdateSpeed * AllGameObjects[i].Y_PixelsPerSec);
  end;
end;[/pascal]
Quick thing to note - does it matter whether the "OldTime := NewTime" line goes before or after updating the game objects? It actually doesn't: NewTime is read once at the top of the procedure, so the assignment stores the same value either way. The important thing is that every frame measures from the same point.
Now, a few comments about the above. First of all, it's pseudocode - obvious, but I'd better point that out in case something in it is wrong. Note that the objects are assumed to store their speed in pixels per second, while we're timing the updates in milliseconds. That means any NewTime - OldTime value has to be converted to seconds (divided by 1000) before it's used - which is exactly what the worked example above did. Note, by the way, that you'd have to adjust this if you wanted to use QueryPerformanceCounter for ultra-precise timing. In that case you'd get the counter's rate from QueryPerformanceFrequency and store 1 / (performance frequency) as your constant (again, in a real, not an integer!). That constant plays the same role as OneMSec in the code above: with timeGetTime there are 1000 milliseconds per second, giving 1/1000; with QueryPerformanceCounter there are QueryPerformanceFrequency ticks per second, giving 1/QueryPerformanceFrequency per tick.
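To make that concrete, here's roughly what the queryperf version might look like (a sketch only: InitTiming and TickTiming are names I've made up, and the two API calls come from the Windows unit):
[pascal]uses Windows;

var
  PerfFreq: Int64;
  OneTick: Single;   { seconds per counter tick - must be a real, not an integer! }
  OldTime, NewTime: Int64;
  UpdateSpeed: Single;

procedure InitTiming;  { call once at startup }
begin
  QueryPerformanceFrequency(PerfFreq);   { counter ticks per second }
  OneTick := 1 / PerfFreq;
  QueryPerformanceCounter(OldTime);
end;

procedure TickTiming;  { call at the start of every update }
begin
  QueryPerformanceCounter(NewTime);
  UpdateSpeed := (NewTime - OldTime) * OneTick;  { seconds since last update }
  OldTime := NewTime;
end;[/pascal]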
I think it might be a good idea to store the time difference for the last few frames (pick a number - 8, maybe 16?) and use the average, as in the sketch below. That way the per-frame delta won't jump around as quickly. Not sure if that's a good plan, but I think it probably is - again, this is something for you to investigate.
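A minimal sketch of that averaging idea (the buffer size and the SmoothedDelta name are just my choices):
[pascal]const
  NumSamples = 8;  { how many recent frame deltas to average }

var
  Samples: array[0..NumSamples - 1] of Single;
  SampleIndex: Integer = 0;

function SmoothedDelta(NewDelta: Single): Single;
var
  i: Integer;
  Sum: Single;
begin
  Samples[SampleIndex] := NewDelta;             { overwrite the oldest sample }
  SampleIndex := (SampleIndex + 1) mod NumSamples;
  Sum := 0;
  for i := 0 to NumSamples - 1 do
    Sum := Sum + Samples[i];
  Result := Sum / NumSamples;                   { average of the last N deltas }
end;[/pascal]
Feed each frame's UpdateSpeed through SmoothedDelta before using it. One caveat: the average reads low until the buffer has filled up once, since the unused slots start at zero.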
You shouldn't really have to resort to limiting the frame rate. You can, but it only means that faster machines will waste their time, while slower machines will *still* be slow. It can simplify things a bit, though...
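If you do decide to cap the frame rate anyway, the crude version is just sleeping off whatever is left of the frame budget. A sketch (CapFrameRate and the 60 fps target are made up for illustration; FrameStartTime is assumed to be recorded with timeGetTime at the top of your loop):
[pascal]uses Windows, MMSystem;

const
  TargetMS = 1000 div 60;  { roughly 16 ms per frame for a 60 fps cap }

procedure CapFrameRate(FrameStartTime: DWORD);
var
  Elapsed: DWORD;
begin
  Elapsed := timeGetTime - FrameStartTime;
  if Elapsed < TargetMS then
    Sleep(TargetMS - Elapsed);  { hand the leftover time back to the OS }
end;[/pascal]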
This dude has some nice articles (the code's in C++, though, so I don't know if it'll help much...)
http://www.mvps.org/directx/articles...ady_motion.htm
http://www.mvps.org/directx/articles..._functions.htm
I also found this *very* interesting article. Be sure to check it out:
http://www.flipcode.com/tfiles/steven03.shtml