OpenGL Accelerated Pixel Formats - How to find one?



AthenaOfDelphi
01-06-2011, 08:55 PM
Hi guys,

I'm hoping someone will read this and go 'Athena you silly moo you... this is how you do it'... or words to that effect ;-)

The situation I have is that I'm now seriously looking at writing a game... I have the concept... I have the look... what I don't have, however, is the ability to select a hardware accelerated pixel format with OpenGL.

I've pulled together some code which basically switches display resolution and/or creates a window of that size and then uses 'DescribePixelFormat' to go through the list of available pixel formats. I'm looking for the PFD_GENERIC_ACCELERATED bit being set in the dwFlags field of the returned PIXELFORMATDESCRIPTOR record.
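For reference, the enumeration loop looks roughly like this (a trimmed-down sketch rather than my exact code; 'testDC' is just the DC of the window I create, and DescribePixelFormat is redeclared so its result gives the format count, since the stock Windows unit may map the result to BOOL):


// Trimmed-down sketch of the enumeration (not my exact code).
function DescribePF(DC: HDC; iFormat: Integer; nBytes: UINT;
  var ppfd: TPixelFormatDescriptor): Integer; stdcall;
  external 'gdi32.dll' name 'DescribePixelFormat';

procedure CountAcceleratedFormats(testDC: HDC);
var
  i, count, accelCount : Integer;
  pfd : TPixelFormatDescriptor;
begin
  FillChar(pfd, SizeOf(pfd), 0);
  pfd.nSize    := SizeOf(pfd);
  pfd.nVersion := 1;
  count := DescribePF(testDC, 1, SizeOf(pfd), pfd);  // total number of pixel formats
  accelCount := 0;
  for i := 1 to count do
  begin
    FillChar(pfd, SizeOf(pfd), 0);
    pfd.nSize    := SizeOf(pfd);
    pfd.nVersion := 1;
    DescribePF(testDC, i, SizeOf(pfd), pfd);
    // this is the check that never seems to succeed on my machine
    if (pfd.dwFlags and PFD_GENERIC_ACCELERATED) <> 0 then
      Inc(accelCount);
  end;
  // accelCount always ends up as 0 here
end;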

I have an nVidia 7600GT which I would have thought supported hardware acceleration with at least one combination of 640x480, 800x600 and 1024x768, with 16, 24 or 32bpp, running in windowed or full screen mode, but I cannot find a single accelerated pixel format.

As a consequence, my simple renderer (by that I mean, it's not doing a whole lot at the moment) runs a bit like a slug (and that's being kind).

Can anyone provide some example code which is guaranteed to force hardware acceleration, and/or offer advice as to how I might figure out why I can't use accelerated formats, and possibly provide some hints as to how I might go about fixing the problem?

I'm running RAD Studio 2009, Windows XP Professional SP3. My machine has an nVidia 7600GT. If you need more info, let me know and I'll post more.


Thanks

paul_nicholls
01-06-2011, 10:38 PM
Hi Christina :)

I found an explanation and some C code that you might be able to adapt to find a hardware accelerated OpenGL pixel format:

http://www.wischik.com/lu/programmer/wingl.html#accelerated


How do I choose an accelerated pixel format under Windows?

Note: many consumer graphics cards cannot accelerate when the display is 24bpp, and many cannot accelerate when the desktop is at 32bpp in high-resolution. I always change to 800x600 x16bpp for my full-screen games. That ensures that the graphics card will have enough memory.

Normally, you call ChoosePixelFormat to choose a pixel format. But it's hard to know whether this will give you an accelerated pixel format. For us gamers, acceleration is the most important thing: we'd be happy to settle for a 16bpp accelerated surface, rather than a 32bpp unaccelerated surface.

The following code uses a gamer's heuristics to choose a suitable pixel format. Call it like this:


int bpp=-1; // don't care. (or a positive integer)
int depth=-1; // don't care. (or a positive integer)
int dbl=1; // we want double-buffering. (or -1 for 'don't care', or 0 for 'none')
int acc=1; // we want acceleration. (or -1 or 0)
int pf=ChoosePixelFormatEx(hdc,&bpp,&depth,&dbl,&acc);
The function will return, in those variables, the properties of the pixel format that it chose.


int ChoosePixelFormatEx(HDC hdc, int *p_bpp, int *p_depth, int *p_dbl, int *p_acc)
{
  // Wanted values: -1 means "don't care"
  int wbpp   = (p_bpp==NULL)   ? -1 : *p_bpp;
  int wdepth = (p_depth==NULL) ? 16 : *p_depth;
  int wdbl   = (p_dbl==NULL)   ? -1 : *p_dbl;
  int wacc   = (p_acc==NULL)   ?  1 : *p_acc;

  PIXELFORMATDESCRIPTOR pfd;
  ZeroMemory(&pfd, sizeof(pfd));
  pfd.nSize = sizeof(pfd);
  pfd.nVersion = 1;

  // DescribePixelFormat returns the number of available pixel formats
  int num = DescribePixelFormat(hdc, 1, sizeof(pfd), &pfd);
  if (num == 0) return 0;

  unsigned int maxqual = 0;
  int maxindex = 0;
  int max_bpp = 0, max_depth = 0, max_dbl = 0, max_acc = 0;

  for (int i = 1; i <= num; i++)
  {
    ZeroMemory(&pfd, sizeof(pfd));
    pfd.nSize = sizeof(pfd);
    pfd.nVersion = 1;
    DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);

    int  bpp    = pfd.cColorBits;
    int  depth  = pfd.cDepthBits;
    bool pal    = (pfd.iPixelType == PFD_TYPE_COLORINDEX);
    // MCD: generic format, accelerated by a mini client driver
    bool mcd    = ((pfd.dwFlags & PFD_GENERIC_FORMAT) && (pfd.dwFlags & PFD_GENERIC_ACCELERATED));
    // software: generic format, no acceleration
    bool soft   = ((pfd.dwFlags & PFD_GENERIC_FORMAT) && !(pfd.dwFlags & PFD_GENERIC_ACCELERATED));
    // ICD: vendor's installable client driver -- neither generic flag set
    bool icd    = (!(pfd.dwFlags & PFD_GENERIC_FORMAT) && !(pfd.dwFlags & PFD_GENERIC_ACCELERATED));
    bool opengl = (pfd.dwFlags & PFD_SUPPORT_OPENGL);
    bool window = (pfd.dwFlags & PFD_DRAW_TO_WINDOW);
    bool bitmap = (pfd.dwFlags & PFD_DRAW_TO_BITMAP);
    bool dbuff  = (pfd.dwFlags & PFD_DOUBLEBUFFER);

    // Score this format: higher bits represent more important criteria
    unsigned int q = 0;
    if (opengl && window) q = q + 0x8000;
    if (wdepth == -1 || (wdepth > 0 && depth > 0)) q = q + 0x4000;
    if (wdbl == -1 || (wdbl == 0 && !dbuff) || (wdbl == 1 && dbuff)) q = q + 0x2000;
    if (wacc == -1 || (wacc == 0 && soft) || (wacc == 1 && (mcd || icd))) q = q + 0x1000;
    if (mcd || icd) q = q + 0x0040;
    if (icd) q = q + 0x0002;
    if (wbpp == -1 || (wbpp == bpp)) q = q + 0x0800;
    if (bpp >= 16) q = q + 0x0020;
    if (bpp == 16) q = q + 0x0008;
    if (wdepth == -1 || (wdepth == depth)) q = q + 0x0400;
    if (depth >= 16) q = q + 0x0010;
    if (depth == 16) q = q + 0x0004;
    if (!pal) q = q + 0x0080;
    if (bitmap) q = q + 0x0001;

    if (q > maxqual)
    {
      maxqual = q; maxindex = i;
      max_bpp = bpp; max_depth = depth;
      max_dbl = dbuff ? 1 : 0;
      max_acc = soft ? 0 : 1;
    }
  }

  if (maxindex == 0) return maxindex;
  if (p_bpp != NULL)   *p_bpp = max_bpp;
  if (p_depth != NULL) *p_depth = max_depth;
  if (p_dbl != NULL)   *p_dbl = max_dbl;
  if (p_acc != NULL)   *p_acc = max_acc;
  return maxindex;
}
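The part that matters most for your problem is how that code tells the three driver types apart: a fully hardware accelerated ICD format has NEITHER PFD_GENERIC_FORMAT nor PFD_GENERIC_ACCELERATED set, so checking only for PFD_GENERIC_ACCELERATED will miss it. In Pascal that classification part might look something like this (a quick, untested sketch):


type
  TPFAcceleration = ( pfaSoftware, pfaMCD, pfaICD );

// Classifies one pixel format the same way the C code above does.
function ClassifyFormat(const pfd: TPixelFormatDescriptor): TPFAcceleration;
var
  generic, genAccel : Boolean;
begin
  generic  := (pfd.dwFlags and PFD_GENERIC_FORMAT) <> 0;
  genAccel := (pfd.dwFlags and PFD_GENERIC_ACCELERATED) <> 0;
  if generic and genAccel then
    Result := pfaMCD        // generic format accelerated by a mini client driver
  else if generic then
    Result := pfaSoftware   // Microsoft's software implementation
  else
    Result := pfaICD;       // vendor's installable client driver - full hardware
end;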

cheers,
Paul

code_glitch
02-06-2011, 01:37 AM
The 7600GT, the 256MB version with the 560MHz clock, yes? Man, that's one nice card. I used to sport the 7600GS 512MB version with a slightly higher clock... good ol' days. Speaking of which, it's in a box now, because a driver (or two) decided not to play ball on the new motherboard setup and it ran like a GMA chip covered in jam...

What drivers are you running? I'd say try the latest ForceWare release; there's some nice code in there that gives a decent performance boost over the default drivers.

Oh, and cheers for that code Paul, I will definitely try and put that to use. Any ideas on how it will play with some GMA 4500/X3150 chipsets? My OpenGL code runs slow on those too. :(

Time for some OpenArena and my HD4330... ahhh... 4GB of DDR3 system RAM - my comfort zone :)

Andru
02-06-2011, 06:09 AM
First of all, one stupid question which I often ask - did you install the official NVIDIA drivers? :) Because with the standard Windows driver, OpenGL won't work in most cases.



I'm looking for the PFD_GENERIC_ACCELERATED bit being set in the dwFlags field of the returned PIXELFORMATDESCRIPTOR record.

I don't know why you need this; for example, ZenGL tries to initialize the context this way (before switching to another method with more options):


var
  pixelFormat   : Integer;
  oglContext    : HGLRC;
  oglFormatDesc : TPixelFormatDescriptor;
begin
  // wndDC and oglStencil are set up elsewhere in ZenGL
  FillChar( oglFormatDesc, SizeOf( TPixelFormatDescriptor ), 0 );
  with oglFormatDesc do
  begin
    nSize        := SizeOf( TPixelFormatDescriptor );
    nVersion     := 1;
    dwFlags      := PFD_DRAW_TO_WINDOW or PFD_SUPPORT_OPENGL or PFD_DOUBLEBUFFER;
    iPixelType   := PFD_TYPE_RGBA;
    cColorBits   := 24;
    cAlphaBits   := 8;
    cDepthBits   := 24;
    cStencilBits := oglStencil;
    iLayerType   := PFD_MAIN_PLANE;
  end;

  pixelFormat := ChoosePixelFormat( wndDC, @oglFormatDesc );
  if pixelFormat = 0 then
  begin
    u_Error( 'Cannot choose pixel format' );
    exit;
  end;

  if not SetPixelFormat( wndDC, pixelFormat, @oglFormatDesc ) then
  begin
    u_Error( 'Cannot set pixel format' );
    exit;
  end;

  oglContext := wglCreateContext( wndDC );
  if ( oglContext = 0 ) then
  begin
    u_Error( 'Cannot create OpenGL context' );
    exit;
  end;

  if not wglMakeCurrent( wndDC, oglContext ) then
  begin
    u_Error( 'Cannot set current OpenGL context' );
    exit;
  end;


So the scheme is simple: you just try ChoosePixelFormat with simple options, and if it doesn't work, then the video card is a piece of <something not good>, or it just doesn't have proper drivers :)
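If you want to know whether the format you got back is actually hardware accelerated, you can ask DescribePixelFormat about it afterwards, something like this (a rough sketch, reusing wndDC, pixelFormat and u_Error from the code above):


var
  chk : TPixelFormatDescriptor;
begin
  FillChar( chk, SizeOf( chk ), 0 );
  chk.nSize    := SizeOf( chk );
  chk.nVersion := 1;
  DescribePixelFormat( wndDC, pixelFormat, SizeOf( chk ), chk );
  // no PFD_GENERIC_FORMAT flag means the vendor's ICD is handling it (hardware)
  if ( chk.dwFlags and PFD_GENERIC_FORMAT ) <> 0 then
    u_Error( 'Pixel format is generic - probably software rendering' );
end;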

chronozphere
02-06-2011, 08:45 AM
I also use a simple approach similar to Andru's code. I believe I took it from some tutorial and it has always worked smoothly here. I suggest you look at the drivers. :)

I think your video card is alright. I guess it performs almost as well as my 8600GT, which still does a good job.

User137
02-06-2011, 10:05 AM
Lazarus has its own solution for an OpenGL window, the OpenGLContext package. It is a simple component you can drag onto the form and it should work on any platform :) It is reminiscent of DXDraw from DelphiX, except all it gives you is a fully customizable, resizable window.
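Using it looks roughly like this (a minimal sketch from memory, so check the LazOpenGLContext package for the exact unit and property names; GLBox is assumed to be a TOpenGLControl field on the form):


uses OpenGLContext, GL;

procedure TForm1.FormCreate(Sender: TObject);
begin
  GLBox := TOpenGLControl.Create(Self);
  GLBox.Parent := Self;
  GLBox.Align  := alClient;
  GLBox.OnPaint := @GLBoxPaint;
end;

procedure TForm1.GLBoxPaint(Sender: TObject);
begin
  glClearColor(0.0, 0.0, 0.0, 1.0);
  glClear(GL_COLOR_BUFFER_BIT);
  // ... draw here ...
  GLBox.SwapBuffers;
end;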

AthenaOfDelphi
02-06-2011, 01:22 PM
Wow, lots of questions.... to answer as many as I can remember from reading through the posts...

Yes, I'm using official drivers from NVIDIA. I could potentially update them, but I've always had a problem getting decent performance out of OpenGL with Delphi.

I used to use the simple solution suggested by Andru, but what I've found is that as I increase window size, it slows down... I'm only plotting about 400 lines, so there is no reason I can see that the software should slow down that much. Hell, it runs faster on my laptop than it does on my desktop machine :-/

Given that rough description, any other suggestions as to why this may be slow? I'm using display lists to create the rendered output.
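(The display list usage is nothing fancy, basically the standard build-once, call-every-frame pattern, something along these lines:)


var
  linesList : GLuint;

// built once, after the context is created
linesList := glGenLists(1);
glNewList(linesList, GL_COMPILE);
glBegin(GL_LINES);
  // ... the ~400 line segments get emitted here as glVertex2f pairs ...
glEnd;
glEndList;

// then each frame the renderer just does
glCallList(linesList);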

Why do I want to do this? I wrote a program to look through the available pixel formats trying to find accelerated formats, so I could be sure to set up the desktop/context etc. correctly to get maximum performance, and what I've found is that I don't appear to be able to get any accelerated formats. Hence my question.

Anyhow, I'm at work in a meeting with customers, so if I've not answered something I'll drop another message a little later :-)

code_glitch
02-06-2011, 09:06 PM
It runs faster on the laptop, hm... Now that's strange at best; I assume the laptop in question runs a GMA chip from Intel, and the 7600 series should demolish that. And 400 lines should be pie to draw. Sounds like something funny going on there. Although thanks for getting me off my lazy a** and looking through the ForceWare drivers again. The latest release affects me ^^ so I get another small boost hopefully. Wonder what the effect on that 8600M is. Come on Amazon... I need that new hard drive noooooooow. :)

I say drivers; Andru's approach should work, and if that's how you usually do it then there's not much more to do, I don't think. And IMHO, whether OpenGL code is written in Delphi or Pascal, I doubt there should be too much of a performance difference.

Other suggestions: how many FPS are you getting? Limited to exactly 60/70 by any chance? Then it's vsync.
Oh, and the larger window = slower relationship might point to memory. The card might be dumping into system RAM, which is a lot slower.
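If it does turn out to be vsync, you can normally switch it off through the WGL_EXT_swap_control extension, roughly like this (untested sketch; needs a current GL context and assumes the extension is present):


type
  TwglSwapIntervalEXT = function(interval: Integer): BOOL; stdcall;
var
  wglSwapIntervalEXT : TwglSwapIntervalEXT;
begin
  wglSwapIntervalEXT := TwglSwapIntervalEXT(wglGetProcAddress('wglSwapIntervalEXT'));
  if Assigned(wglSwapIntervalEXT) then
    wglSwapIntervalEXT(0);   // 0 = vsync off, 1 = lock to the refresh rate
end;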

AthenaOfDelphi
03-06-2011, 11:57 PM
Well, thanks for all the input guys. I'm not sure I've solved it, but I have reverted back to the simple approach outlined by Andru. This is the default in the NeHe tutorials I've been looking at. I did notice a marked improvement when I switched to 800x600 at 16 bits though, so for now I'm going to stick with that. I'm basically going to write the guts of my game and then worry about tarting it up after I've gotten it all coded up :)

Thanks for your help... doubtless I'll be asking more questions