
Thread: Cheb's project will be here.

  1. #1
    Quote Originally Posted by Chebmaster
    P.S. More elaboration on dropping Win98 support (after investing so much effort, sniff).

    Alas, said support became an unsustainable effort sink. All for a vanishingly small fraction of hardware that
    1. is compatible with Win98
    2. has a DirectX 9-class video card
    3. has OpenGL 2.0 drivers for Win98 for said video card

    I should have done this much, much earlier. It was OK while my target was OpenGL 1.2, or even when I updated that to 1.4. But when I made my absolute minimum GLES2 (with desktop GL 2.1 as its substitute), it was time to bury the venerated dead.

    The last nail in the coffin was my decision to drop support for video cards without NPOT (non-power-of-two texture) support (I have one such card, a GeForce FX5200). Incidentally, the FX5200 was the only video card I have that has Win98 OpenGL 2.0 drivers.

    As a result, I am free to drop support for Free Pascal 2.6.4 (hello, generics, I missed you so much).

    Rest assured, I am still supporting WinXP. I cannot find reasons not to.
    Curious: I'm doing exactly the opposite. Too soon to announce though...

  2. #2
    In the end it all depends on system requirements. When I learned GLSL and realized how much freedom it gives, I was firmly set on using it. I also implemented physics in a separate thread, so multi-core CPUs became a great bonus, if not an outright requirement.

    This resulted in my target hardware being a Core 2 Duo plus a DirectX 9c-class video card. That's circa 2007 hardware, while Windows 98 with the drivers available to it barely reaches 2004.

  3. #3
    It seems COVID did not simply damage my brain, making me tire out rapidly. It also dislodged thought processes long ossified in their infinite loops.
    I looked back at my code and was horrified. Those paths led me astray, into deep madness and unsustainable effort, feeding a downward spiral of disheartenment and procrastination.
    The best excuse I could come up with was "Tzeentch had beguiled me".

    So I am now performing a feature cut with all the energy of an industrial blender, returning the paradigm to the simplicity it had in the early 2010s while salvaging the few genuinely good things I have coded since.

    * switching modules in-engine has to go. Every module (game, tool) will have its own mother executable with the release version of game logic built in.
    * only one thread for logic, ever. No "clever hacks" to utilize more cores by loading several copies of the same DLL. My beard is half gray already and I am still nowhere. The creeping featuritis has to stop *now*.
    * the reloadable-on-the-fly DLL is for debugging only. No stripping. The complicated mechanism of storing and processing debug info for it has to go.
    * the GUI for handling logic crashes must go: no special support for the debugging DLL crashing and recovering. A simple console will do.
    * the mother GUI need not show DLL loading progress. A simple console will do.
    * the debugging DLL uses the mother executable's memory manager, GREATLY simplifying the mother API. No more converting arrays to pointers and strings to PChar on one side and back on the other.
    * the debugging DLL is always version-synced with the mother executable. No more need for complicated version checks and compatibility mechanisms.
    * all assets are owned by the mother executable (hello, TMap specialization), turning the asset-juggling phtagn of old into a complex but manageable mechanism. In the same vein, directories and pk3 files are also owned by the mother executable. Need to refresh assets? Restart the exe.

    And a major new feature:
    * all calls to the GAPI (GLES2, or GL 2.1 with extensions) are made via a new abstraction layer that borrows heavily from Vulkan.
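    A rough sketch of how such a Vulkan-style abstraction layer could record commands. All names here are invented for illustration; the real interface is surely different:

```pascal
type
  TRenderCommandKind = (rcSetPipeline, rcBindTexture, rcDrawMesh);

  TRenderCommand = record
    Kind: TRenderCommandKind;
    Handle: Pointer; // opaque asset reference, resolved by the renderer
    Count: Integer;
  end;

  { Recorded in the logic thread, executed later in the render thread. }
  TCommandList = class
  private
    FCommands: array of TRenderCommand;
  public
    procedure DrawMesh(Mesh: Pointer; VertexCount: Integer);
  end;

procedure TCommandList.DrawMesh(Mesh: Pointer; VertexCount: Integer);
var
  Cmd: TRenderCommand;
begin
  Cmd.Kind := rcDrawMesh;
  Cmd.Handle := Mesh;
  Cmd.Count := VertexCount;
  SetLength(FCommands, Length(FCommands) + 1);
  FCommands[High(FCommands)] := Cmd;
end;
```

    The key property, as in Vulkan, is that recording touches no GL state at all, so it is safe from the logic thread.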

    All in all, I hope to have the rotating cube back this autumn. And then, finally, MOVE FORWARD for the first time since 2014.

  4. #4
    Sounds like a good plan: simplify and move forward. I look forward to seeing the rotating cube.

  5. #5
    Me too. Can't wait...

    I cut one more unnecessary thing. My unconventional developer mode is revolutionary (for 2008, when it was conceived, it would have been world-changing), BUT it is also *horrifyingly* costly in man-hours to get up and running on each platform.

    By abandoning it for *all* platforms except win32, I make completing the current refactoring possible during my lifetime.

    I spent around a year of my life building foundations for the megahack that worked around exception handling not working in DLLs in FPC 2.6.4 -- but that bug was closed in 3.2, requiring NO such effort. Still no luck on Linux, but then my future Linux and Win64 versions won't have any DLLs at all: only one release build, in one executable, using one thread for logic.

    Worse, I *could* have moved forward with full RPi support as far back as 2016 -- if not for the fact that FPC 2.6.4 for ARM was unable to generate working DLLs, and I stalled waiting for 3.0.x, then procrastinated, slowing down... Had I made this reasonable decision back then, I'd probably have a working game by now (even if only a simple Asteroids test). And I would not have had that close brush with depression, either.

    The quote from my favorite writer applies: "a bullet wonderfully clears your brain even when it hits you in the ass". But why did I have to eat COVID to realize such simple things?
    Trying to learn to be more flexible.

    On a positive note, I finally wrangled the .BAT syntax into submission and redid my entire build.bat for the new paradigm. Short story: use SETLOCAL ENABLEDELAYEDEXPANSION and !MYVAR! instead of %MYVAR%, lest woe betide you.
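    For the record, a minimal illustration of the delayed-expansion pitfall (a hypothetical loop, not my actual build.bat):

```bat
@echo off
setlocal EnableDelayedExpansion
set ERRORS=0
for %%f in (*.pas) do (
    fpc %%f || set /a ERRORS+=1
    rem %ERRORS% would be expanded once, when the whole parenthesized
    rem block is parsed, so it would always show the pre-loop value "0".
    rem !ERRORS! is expanded on each iteration, as intended:
    echo errors so far: !ERRORS!
)
```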

  6. #6
    I haven't used DLLs in ages. In fact, DLL troubles were one of the reasons motivating me to switch from Visual Basic to Pascal (Delphi) back in the '90s. So far I haven't had to use any for Pascal. But I've only done simple stuff.

    Anyway, getting rid of those DLLs sounds good. And concentrating efforts on fewer and perhaps simpler areas sounds good too. Though the RPi is an interesting and promising platform. And from my point of view, win32 is the platform I use the least nowadays.

    Anyway, I look forward to seeing more Cheb stuff in the future. Keep up the good work. I'll try to get my act together and create some new programs too. Though I'm afraid that tends to get delayed.

  7. #7
    I am currently amidst an immense overhaul that changes the very architecture. Hopefully by the end of this year (2023) it will be over and I can move on with creating my first game.

    Previously:
    My "killer feature", as envisioned back in 2005, was what has since shrunk to "developer mode": all relevant code resides in a DLL that can be re-compiled and re-loaded without restarting the engine and re-loading assets.

    I invested about 4 years total into my database engine (2006-2008) and the asset management (2012-2013) that linked game code with assets stored in the "mother executable".

    The common parts of architecture that will remain as is:
    - the mother executable has an API: a monolithic record of fields and functions in procedural variables, serving as the game code's gateway into the engine. It includes configs, the window manager, and a dual-layer wrapper allowing the DLL to use the mother executable's streams as a TStream of its own.
    - the database works on a "save a snapshot to TStream" principle, with perfect reproduction of the logic on load.
    - assets are identified by a unique hash (was 256 bits, reduced to 128: either randomly generated or an MD5 of the file name).
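    A sketch of what such a monolithic API record might look like. Every name and field here is invented for illustration; only the general shape (plain fields plus procedural variables) is from the description above:

```pascal
type
  TAssetHash = array[0..15] of Byte; // 128-bit asset identifier

  TMotherApi = record
    Version: LongWord;
    // procedural variables: the game code's gateway into the engine
    GetAssetByHash: function(const Hash: TAssetHash): Pointer;
    // returns an opaque handle; the DLL side wraps it into a local
    // TStream descendant (the "dual-layer wrapper" mentioned above)
    OpenStream: function(const FileName: AnsiString): Pointer;
    Log: procedure(const Msg: AnsiString);
  end;
  PMotherApi = ^TMotherApi;
```

    The mother fills this record once and passes a pointer to it into the DLL, so the DLL never links against the engine directly.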

    The old architecture:
    the DLL had a logic thread for the object database (single-threaded by design), with assets being classes of it. The DLL managed background tasks; the logic classes had access to the graphics API (OpenGL/GL ES) and had methods for rendering in the main thread. Locking was employed to keep the database from crashing while the render routine was executing in a thread not its own. On unloading, all assets were counted and packed into a separate mother stream, which to the mother executable was a banal TMemoryStream. On loading, the logic had to retrieve that list (which could be empty on a first start, or contain mismatched assets from a different run when switching sessions). Each asset object then had to employ a convoluted algorithm of devouring its stored counterpart, absorbing properties and OpenGL handles or discarding them. Which, in the case of hierarchical multi-part assets like FBOs, turned into a nightmare.

    It's no surprise that my development stalled and my phtagn asset manager was plagued by bugs that were very hard to catch (as everything was split into inter-dependent tasks running in background threads).

    The new architecture (I'm cutting and cutting it down):
    - it's not 2005 anymore; I am developing from an SSD.
    - no more "universal" mother that can run any of the games/tools. There is one mother executable per game/tool (one release, one debug with assertions on).
    - the DLL is used only in "developer mode", which is only available for x86 Win32. In the normal mode of operation, and on all other platforms, the logic is built into the main executable. No more agony of building DLLs for Linux.
    - the DLL runs in the logic thread created by the mother, and that's all. The DLL never uses any other threads.
    - my new rendering architecture, Kakan: the logic fills a command list in its logic thread, abstracted from any API, then passes it off for execution and forgets it. Rendering in the main thread is done by Kakan. The logic loses all access to OpenGL.
    - assets are the mother executable's classes, accessible to the logic as untyped pointers. Any specific details are exposed via pointers to T<XXX>Innards records shared between the mother and the DLL. The mother API's ExposeInnards method returns an untyped pointer; the logic's job is to type-cast it to the correct P<XXX>Innards. Ugly, but I saw no other way to keep it simple enough.
    - logic has its own classes for linking to assets, derived from TAbstractAssetLink. All begin with a pair (pointer + hash), where the pointer (to the mother's asset class instance) is never saved with the snapshot and is always nil after de-serialization, and the hash duplicates the mother asset class's hash.
    - the mother manages assets *and* their lifetime, organized in a specialized fcl-stl map addressed by hashes. Assets are reference-counted; all refcounts are reset to zero after the logic unloads.
    - most assets' actualization is handled by the mother in the render phase, employing background tasks if necessary.
    - mother owns background threads and can run background tasks, including cpu-side animation.
    - the loading screen with its fancy progress indicator was dropped in its entirety. The logic remains frozen until the first successful render, but keeps sending render jobs. Render jobs with un-actualized assets fail, causing some assets to actualize each frame, and replace themselves with a console render job. So the "loading screen" is the console with, maybe, a low-res background image.
    - the error recovery screens were dropped; the application displays the console with a BSOD background and "Press Esc or Back to exit".
    - Kakan manages jobs opaquely to the logic. It sorts jobs by render target automatically, calculating their order from where each texture is used as a texture and where as a target.
    - depth/stencil are managed by Kakan opaquely; render targets can only be textures. Reason: targeting the Mali 400 as the minimum, where a depth/stencil buffer cannot be reused with another color attachment. Need a depth pass? Stuff its output into an RGBA8 texture. Preferably at 128x72 resolution.
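    To illustrate the ExposeInnards pattern above, a compilable sketch. The record fields and the ExposeInnards stub are invented for illustration; only the "untyped pointer, cast it yourself" shape is from the description:

```pascal
program InnardsSketch;
{$mode objfpc}

type
  TTextureInnards = record
    Width, Height: Integer;
  end;
  PTextureInnards = ^TTextureInnards;

var
  Backing: TTextureInnards; // stands in for a mother-side asset's innards

{ Stub standing in for the mother API's ExposeInnards. }
function ExposeInnards(Asset: Pointer): Pointer;
begin
  Result := @Backing;
end;

var
  Tex: PTextureInnards;
begin
  Backing.Width := 1024;
  Backing.Height := 512;
  { The logic receives an untyped pointer and type-casts it itself: }
  Tex := PTextureInnards(ExposeInnards(nil));
  WriteLn('texture is ', Tex^.Width, 'x', Tex^.Height);
end.
```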


    The design document for my first planned game has no English translation yet; also, my websites are down due to an unsuccessful hardware upgrade (the venerated SATA controller, dated 2006, finally gave up the ghost: it sees my Samsung HD204UI drives as "ASSMNU GDH02U4 I" and glitches with randomly generated capacity).


    P.S. See this nightmare:
    SNIPERS: A Nightmare for Developers and Players https://www.youtube.com/watch?v=lOebGm_jMLY
    - and that's why my planned game has no hitscan weapons at all.
    "Sniper" will be one of ninja's load-outs, heavily influenced by the TF2 "Lucksman" (sniper's bow that fires arrow projectiles).

    P.P.S. When playing competitive first-person shooters, no one wants "serious". What people want is slapstick rumble. So any foolish developers who try a "serious" style soon give up under player pressure, their artsy black-ops noir degenerating into slapstick comedy. Compare with the wisdom of Valve, who made TF2 slapstick from the start (and also reaped immense profit on cosmetics and taunts).
    So, the further from a mil-sim, the better. More. More distance. Make spells, not weapons. Use an in-universe reason for player avatars being something like shadow clones, so that they dispel or unravel with zero blood.

    P.P.P.S. My solution to the problem highlighted in the video above: make the sniper rifle shoot on release, like bows in Mount & Blade. Like a mini-game. The need to lead your vic-- ahem, target is already there. Combine that with firing within the appropriate time window, otherwise suffering outrageous penalties to accuracy: a zero-time instant shot goes wide most of the time, and holding LMB for too long adds increasing sway.
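    A sketch of how that shoot-on-release accuracy curve could be shaped. All constants here are invented for illustration:

```pascal
{ Aim error (in degrees, say) as a function of how long LMB was held.
  Instant release goes wide; a sweet-spot window is accurate;
  overholding adds growing sway. Constants are invented. }
function AimError(HoldSeconds: Single): Single;
const
  WindowStart = 0.6; // seconds: sweet spot opens
  WindowEnd   = 1.4; // seconds: sweet spot closes
begin
  if HoldSeconds < WindowStart then
    // snap shots: large error, shrinking as the window approaches
    AimError := 10.0 * (WindowStart - HoldSeconds) / WindowStart
  else if HoldSeconds <= WindowEnd then
    AimError := 0.0
  else
    // overhold: sway grows with time
    AimError := 2.0 * (HoldSeconds - WindowEnd);
end;
```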
    Last edited by Chebmaster; 19-02-2023 at 03:07 PM.
