Thread: OpenGL GLSL - Text rendering query

  1. #1 - AthenaOfDelphi (PGD Community Manager)
    And, in the end, I've just found out what I was doing wrong.

    So this code works:

    Code:
      glClearColor(0,0,0,1.0);
      glClear(GL_COLOR_BUFFER_BIT or GL_DEPTH_BUFFER_BIT);
      glViewport(0,0,fRenderWidth,fRenderHeight);
      glLoadIdentity;
      gluOrtho2D(0,fRenderWidth,fRenderHeight,0);
      glEnable(GL_TEXTURE_2D);
      glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
      glBindTexture(GL_TEXTURE_2D,fCharacterSets[0].texture);
    
      for y:=0 to 15 do
      begin
        ty:=y*24;
        for x:=0 to 15 do
        begin
          tx:=x*24;
    
          idx:=x+(y*16);
    
          // Pass 1: solid background quad, untextured and unblended
          glDisable(GL_BLEND);
    
          glBegin(GL_QUADS);
            glColor4f(fGLPaletteColors[0][idx,0],fGLPaletteColors[0][idx,1],fGLPaletteColors[0][idx,2],1.0);
            glVertex2i(tx,ty);
            glVertex2i(tx+16,ty);
            glVertex2i(tx+16,ty+16);
            glVertex2i(tx,ty+16);
          glEnd;
    
          // Pass 2: render the character glyph on top, blended by its alpha
          glColor4f(fGLPaletteColors[0][255-idx,0],fGLPaletteColors[0][255-idx,1],fGLPaletteColors[0][255-idx,2],1.0);
    
          glEnable(GL_BLEND);
    
          glBegin(GL_QUADS);
            // each character occupies a 1/16 x 1/16 cell of the font texture
            glTexCoord2f(x*ONE_16TH,y*ONE_16TH);
            glVertex2i(tx,ty);
    
            glTexCoord2f(x*ONE_16TH+ONE_16TH,y*ONE_16TH);
            glVertex2i(tx+16,ty);
    
            glTexCoord2f(x*ONE_16TH+ONE_16TH,y*ONE_16TH+ONE_16TH);
            glVertex2i(tx+16,ty+16);
    
            glTexCoord2f(x*ONE_16TH,y*ONE_16TH+ONE_16TH);
            glVertex2i(tx,ty+16);
          glEnd;
    
        end;
      end;
    And this is the output:

    [attachment: CorrectResults.PNG]

    So what I was doing wrong was binding the palette texture while drawing the background quads. Instead, this code simply uses glColor4f to set the colour and draws a quad for the background, then turns blending on, changes the colour, and draws a second quad with texture coordinates corresponding to the required character.

    Interestingly though, I had a really odd problem where the second column (from the left) and the bottom row were rendered darker. No matter what I tried I couldn't stop it, and it seemed to be something to do with one of the textures. Originally I was loading my textures and setting these:

    Code:
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    With those commented out, that problem went away.
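
    The likely explanation: under linear filtering, GL_CLAMP lets edge samples blend in the texture border colour, which shows up as darkened texels along the affected rows and columns. If clamping is wanted at all, GL_CLAMP_TO_EDGE (core since OpenGL 1.2) clamps to the outermost texel instead - a sketch of the replacement parameters:

    Code:
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);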

    And that, as they say, is that... now I can crack on and write some bad code to get something working.

  2. #2 - AthenaOfDelphi (PGD Community Manager)
    Forgot to add... thanks for all the suggestions, chaps

  3. #3
    I'm glad you got it working, and welcome back to coding

    cheers,
    Paul

  4. #4 - Chebmaster
    Quote Originally Posted by AthenaOfDelphi View Post
    And this is the output....

    I'm glad you got it working!
    Try experimenting with glDisable(GL_BLEND) and see if it still works (unless your intention was to blend the background with what was already rendered there).
    I think my advice earlier was wrong and you only need the alpha test, without blending.
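
    In legacy GL the alpha-test route would look something like this (a sketch; the 0.5 threshold is an arbitrary cut-off, not something from your code):

    Code:
    glDisable(GL_BLEND);
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.5); // keep only fragments whose alpha exceeds 0.5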

    you didn't see the link I've posted
    Sorry. You posted a link to a page with one paragraph of title text and some embedded rectangle that doesn't work without JavaScript. Okay, I temporarily enabled JS for slideshare.net and slidesharecdn.com. And what do I see?

    A veritable wall of sites that suddenly want to run their JavaScript in my browser.
    How about NO?
    I always ignore such dumps of websites full of suspicious third-party scripts.

    Added this to my hosts file:
    Code:
    127.0.0.1 www.slideshare.net
    127.0.0.1 slideshare.net
    127.0.0.1 slidesharecdn.com
    127.0.0.1 public.slidesharecdn.com
    so that that dump is always blocked.

    Besides, judging by the title, that was something an Nvidia dev said 9 years ago. A lot has happened since then and I am more interested in the situation today.

    On most desktop hardware we've done testing on, it's actually the opposite - glBegin/glEnd is close in performance to glDrawArrays
    That's a very interesting result and I can't wait until I'm done with my overhaul to do some more benchmarking.
    There must be some interesting dependency on driver version, operating system, the way you initialize GL, or even whether your app is 32- or 64-bit.
    Because I got what I got, both on Win7/HD3000/i5 and Win10/GTX460/Phenom II.

  5. #5
    Quote Originally Posted by Chebmaster View Post

    Sorry. You posted a link to a page with one paragraph of title text and some embedded rectangle that doesn't work without JavaScript. Okay, I temporarily enabled JS for slideshare.net and slidesharecdn.com. And what do I see?
    A veritable wall of sites that suddenly want to run their JavaScript in my browser.
    How about NO?
    I always ignore such dumps of websites full of suspicious third-party scripts.
    It's a web site that hosts a PowerPoint presentation made by the aforementioned Nvidia person. I also use a JavaScript blocker in Firefox myself, but for occasional viewing I open links in Chromium (Edge on Windows), which is something you can do for such exceptional cases. It's too bad you didn't care to check the presentation, especially for yourself, as your attitude was a classic example of the Dunning-Kruger effect: you have barely learned shader basics (as you've said yourself), so as a beginner you are giving advice on whether to use a particular technology while not being an expert on the topic; and you are afraid to view a presentation without enabling JavaScript, so you can't actually learn something new. Please don't do that; such an attitude hinders the real talent you may have.

    Here's a quote from that Nvidia presentation:
    NVIDIA values OpenGL API backward compatibility
    - We don't take API functionality away from you
    - We aren't going to force you to re-write apps
    Does deprecated functionality "stay fast"?
    - Yes, of course - and stays fully tested
    - Bottom-line: Old & new features run fast
    Quote Originally Posted by Chebmaster View Post
    Besides, judging by the title, that was something an Nvidia dev said 9 years ago. A lot has happened since then and I am more interested in the situation today.
    "You have to know the past to understand the present." This Nvidia presentation talks about OpenGL 3.2, which is exactly when the whole deprecation thing happened. I obviously can't speak for graphics vendors, but I believe their commitment to keep supporting all OpenGL features in their drivers is driven both by the video games industry (there are a lot of old games that use legacy code; have you played Starcraft 1 recently? It still works, and that's DirectDraw with direct primary-surface access) and by the enterprise sector, where you may find large code bases dating back to the 90s.

    So it boils down to the same thing people said in Pascal vs C thread: use the tool that you find most appropriate for your current task.

  6. #6 - AthenaOfDelphi (PGD Community Manager)
    Well, whilst we're talking about OpenGL... would someone like to suggest a reason why glGetUniformLocation doesn't appear to be initialised in dglOpenGL? As best I can tell, when it loads the address of the routine it always gets NIL. I've tried the normal means of starting up (i.e. not specifying a library) and I've also tried forcing NVOGL32.DLL. In both cases, it doesn't appear to know about glGetUniformLocation. Either that, or it is somehow getting initialised properly and it can't find the uniform, which is a 2D sampler.

    Any thoughts?

  7. #7 - Chebmaster
    have you played Starcraft 1 recently?
    I have not, but Open Arena takes up to 60% of my gaming time.
    It runs perfectly.

    NVIDIA values OpenGL API backward compatibility
    That they do, and I believe glBegin is there to stay forever, BUT with a nasty catch: these technologies are only kept good enough to keep old games running. There is no need to make them efficient.

    It's a web site that hosts a PowerPoint presentation
    Oh!
    So it was working with only originating-domain scripts enabled! I was just expecting a video.

    On most desktop hardware we've done testing on, it's actually the opposite - glBegin/glEnd is close in performance to glDrawArrays
    Were you measuring overall frame time or just the time of calling the functions?

    You see, I found out experimentally that the driver sort of stores the commands you issue somewhere inside itself, and the real work (usually) begins only after you call SwapBuffers (e.g. wglSwapBuffers, glXSwapBuffers, eglSwapBuffers). So to see what is really going on, you have to measure how long the SwapBuffers call takes - with vSync disabled, of course.
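
    A minimal sketch of that measurement on Windows (assuming DC holds the window's device context; QueryPerformanceCounter is the usual high-resolution timer):

    Code:
    var
      freq, t0, t1: Int64;
      swapMs: Double;
    begin
      QueryPerformanceFrequency(freq);
      // ... issue all GL commands for the frame here ...
      QueryPerformanceCounter(t0);
      SwapBuffers(DC); // the driver flushes its queued work here, so this call blocks
      QueryPerformanceCounter(t1);
      swapMs := (t1 - t0) * 1000.0 / freq; // time actually spent rendering the frame
    end;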

    Preferably with Aero desktop composition disabled too, as any semi-transparent effect overlapping your window usually adds an extra 8..9 ms.

    I found that the *vast majority* of my thread's time is usually spent inside that call, the exceptions being FBO creation and GLSL program linking.
    And that is where the cost of glBegin is paid.
    This was true for all platforms I tried, including various Windows, Linux and Wine.

    My game engine has a built-in profiler and displays thread-time charts alongside the fps counter. I watch them like a hawk, and it draws the SwapBuffers time in bright red.

  8. #8 - Chebmaster
    ...or it is somehow getting initialised properly and it can't find the uniform, which is a 2D sampler.
    1. It was so NICE of them to declare GLcharARB = Char; PGLcharARB = ^GLcharARB; when Char could be UnicodeChar and OpenGL *only* understands 8-bit strings.

    2. It's capricious. Try
    Code:
    glGetUniformLocation(<program>,  PAnsiChar(RawByteString('my_uniform_name'#0)));
    3. It can return -1 if your uniform was not used and the GLSL compiler eliminated it.
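    4. Also make sure the entry point was loaded at all: dglOpenGL only fills in the GL 2.0+ function pointers after a rendering context has been created and activated. A minimal sketch of the startup order, assuming dglOpenGL's stock helpers and an existing window DC:

    Code:
    uses dglOpenGL;
    // ...
    InitOpenGL;                       // loads opengl32.dll and the base entry points
    RC := CreateRenderingContext(DC, [opDoubleBuffered], 32, 24, 0, 0, 0, 0);
    ActivateRenderingContext(DC, RC); // makes RC current and loads the GL 1.2+ and
                                      // extension entry points, incl. glGetUniformLocation
    if not Assigned(glGetUniformLocation) then
      WriteLn('still nil: context older than GL 2.0, or activation never ran');
    If the pointer is read before ActivateRenderingContext has run (or the created context is older than GL 2.0), it stays NIL, which matches what you are seeing.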

    P.S. I always play it overly safe using wrappers like this one:
    Code:
    class function TGAPI.SafeGetUniformLocation(prog: GLuint; name: RawByteString): GLint;
    var
      error: UnicodeString;
    begin
      try
        case  Mother^.GAPI.Mode of
         {$ifndef glesonly}
          gapi_GL21,
         {$endif glesonly}
          gapi_GLES2: begin
            Result:= glGetUniformLocation(prog, PAnsiChar(name + #0));
            CheckGLError(true);
          end;
        else
          DieUnsupportedGLMode;
        end;
      except
        Die(RuEn(
          'Не удалось получить расположение постоянной %1 для программы %0',
          'Failed to get uniform %1 location for program %0'
          ), [prog, name]);
      end;
    end;
    where

    Code:
    procedure CheckGlError(DieAnyway: boolean);
    var ec: GLenum;
    begin
      {$if defined(cpuarm) and not defined(debug)}
        //Raspberry Pi
        //some shit of a driver spams console with "glGetError 0x500"
        // thus bringing FPS to its knees
        if not DieAnyway then Exit;
      {$endif}
    
      ec:= glGetError();
    
      if ec <> 0 then
        if DieAnyway or Mother^.Debug.StopOnOpenGlErrors
          then Die(RuEn(
              'Ошибка OpenGL, %0',
              'OpenGL error, %0'
            ),
            [GlErrorCodeToString(ec)])
          else
            if Mother^.Debug.Verbose then AddLog(RuEn(
                'Ошибка OpenGL, %0',
                'OpenGL error, %0'
              ),
              [GlErrorCodeToString(ec)]);
    end;
    -- flying over paranoiac's nest

  9. #9
    Quote Originally Posted by Chebmaster View Post
    I have not, but Open Arena takes up to 60% of my gaming time.
    That they do, and I believe glBegin is there to stay forever, BUT with a nasty catch: these technologies are only kept good enough to keep old games running. There is no need to make them efficient.
    I would put it slightly differently: yes, they (Nvidia, AMD, etc.) need to keep the old API interface, and for that they are likely providing a fixed-function pipeline wrapper on top of the programmable pipeline. However, since they know their own architecture very well, it is very likely they use the most optimal approach to implement such a wrapper. In contrast, when you jump directly to the programmable pipeline and try to build an engine of your own, it is much more difficult to achieve even *the same* level of performance as the legacy fixed-function pipeline built as a wrapper, because you have to optimize for many architectures, vendors, drivers, OS versions, etc.

    Granted, if you use the programmable pipeline properly, you can do much more than you could with FFP: in a new framework that we're in the process of publishing, you can use Compute shaders to produce a 3D volume surface, which is then processed by Geometry/Tessellation shaders - the whole beautiful 3D scene is built and rendered without sending a single vertex to the GPU! Nevertheless, that doesn't mean you can't start learning with FFP; it is just as good as any other solution when what you need is to render something simple in 2D or 3D using the GPU.

    Besides, you never know: maybe some day the whole of OpenGL will be implemented as a wrapper on top of the Vulkan API...

    Quote Originally Posted by Chebmaster View Post
    Were you measuring overall frame time or just the time of calling the functions?
    Either use external profiling tools (Visual Studio has a GPU profiler, and GPU vendors provide their own tools) or at least measure the average frame latency (not the frame rate).

    Quote Originally Posted by Chebmaster View Post
    You see, I found the experimental way that driver sort of stores commands you issue somewhere inside itself and the real work begins (usually) only after you call SwapBuffers (e.g. wglSwapBuffers, glxSwapBuffers, eglSwapBuffers). So to see what is really going on you have to measure how long the SwapBuffers call takes, with vSync disabled, of course.
    This is driver-dependent, so they are free to choose whatever approach is more efficient, but commonly you may expect the work to begin immediately when you issue a draw call (the GPU works in parallel with the CPU). SwapBuffers has to wait for all outstanding GPU work to finish before swapping surfaces, which is why you see it taking most of the time.
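
    To separate CPU submission cost from actual GPU execution time, one option is an OpenGL timer query around the frame's draw calls (a sketch, assuming a context with GL 3.3 or ARB_timer_query and dglOpenGL-style headers):

    Code:
    var
      q: GLuint;
      elapsed: GLuint64;
    begin
      glGenQueries(1, @q);
      glBeginQuery(GL_TIME_ELAPSED, q);
      // ... issue the frame's draw calls ...
      glEndQuery(GL_TIME_ELAPSED);
      // blocks until the GPU has finished the enclosed work
      glGetQueryObjectui64v(q, GL_QUERY_RESULT, @elapsed);
      glDeleteQueries(1, @q);
      // elapsed is GPU time in nanoseconds, independent of driver queueing
    end;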
