My fellow board member MrShoor from gamedev.ru wrote an interesting article (in Russian): https://habr.com/post/308980/
It investigates input lag - a sore subject for first-person shooters and fighting games.
There he shows that, Vsync or no Vsync, a heavily loaded GPU can be rendering up to 4 frames' worth of queued commands at once, so a frame reaches the screen up to four 1/60 s intervals after you sent it to the video card.
He demonstrates this with detailed graphs and notes that even companies making AAA games often neglect the problem, and that no graphics API offers a built-in solution despite the problem being 10 years old.
He then provides 3 different hacks, all achieving the same goal: nipping the problem in the bud by forcing a CPU/GPU sync every frame. The one available in any graphics API, including OpenGL, is generating a 2x2 mip of the render buffer on the GPU and then reading it back on the CPU.
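Since the article is in Russian, here is roughly what that hack looks like in raw OpenGL. This is a minimal sketch of my own, not MrShoor's actual code: it assumes a GL 3.0+ context with function pointers loaded (glad/GLEW), a square power-of-two render target, and that the frame is drawn into an FBO whose color attachment is a mipmapped texture; the names force_cpu_gpu_sync, sceneTex and tinyMipLevel are all mine.

    /* Force a CPU/GPU sync by downsampling the finished frame to a tiny
       mip and reading it back. Call after unbinding the FBO. */
    static void force_cpu_gpu_sync(GLuint sceneTex, GLint tinyMipLevel)
    {
        GLubyte pixels[4 * 4 * 4];  /* room for up to a 4x4 RGBA8 mip */

        glBindTexture(GL_TEXTURE_2D, sceneTex);
        glGenerateMipmap(GL_TEXTURE_2D);  /* GPU downsamples the frame */

        /* The readback cannot return until the GPU has executed every
           command this mip depends on - i.e. the whole frame - so the
           CPU blocks here and the frame queue never gets a chance to
           grow. */
        glGetTexImage(GL_TEXTURE_2D, tinyMipLevel, GL_RGBA,
                      GL_UNSIGNED_BYTE, pixels);
    }

For an SxS target the 2x2 level is log2(S) - 1, e.g. level 9 for 1024x1024. Only 16 bytes cross the bus; the cost is the stall itself, which is exactly the point.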
The test executable (including sources for Lazarus) can be found at https://github.com/MrShoor/InputLagR...leases/tag/0.0
Usage: increase the GPU workload until the FPS drops to 15..20, then try rotating the cube with the right mouse button.
Compare the responsiveness with lag reduction off against each of the 3 methods provided.
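For those who would rather try the idea in their own code than in the test app: a fence gives the same CPU/GPU rendezvous without touching textures. To be clear, this is the standard GL 3.2+ sync mechanism, added here for illustration - I can't say whether it is one of the article's three methods.

    /* Block the CPU until the GPU has executed everything submitted so
       far. Call once per frame next to SwapBuffers. Requires GL 3.2+. */
    static void wait_for_gpu(void)
    {
        GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
        /* GL_SYNC_FLUSH_COMMANDS_BIT ensures the fence actually reaches
           the GPU; the timeout is 1 second, given in nanoseconds. */
        glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                         (GLuint64)1000000000);
        glDeleteSync(fence);
    }

Either way you trade a little throughput for a hard cap of one in-flight frame.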
I tried it and the difference was *radical* on my i5/HD3000 (a laptop with integrated graphics). Not only that: the chip temperature stayed at 68C with lag reduction on, but with it off it kept climbing and reached 72C before the cooling fan switched to a higher speed with a strained whine.
So by forcing a CPU/GPU sync each frame you are not only making your game more responsive, you are making it environmentally friendly too.
P.S. I understated the problem above. One of the graphs (an overloaded GPU) shows the true horror: the GPU takes 4 vsync intervals to render one frame while the command buffer holds data for more than 3 frames. What you send to the GPU now reaches the screen 16..20 vsync intervals later - at 60 Hz that is 16..20 x 1/60 s, roughly a third of a second of input lag! And that's horrible!