
Thread: Gumberoo - Making a new learning language interpreter using FPC

  1. #1
    That's a rather interesting project. But speaking frankly: labels and jumps are considered bad programming practice, right? Should people nowadays go back to programming in the "old" style (GOTO)?
    I also think that functions are an elementary part of a learning language. Even Pascal was invented to teach programming; see the Wikipedia article on Pascal:
    Initially, Pascal was largely, but not exclusively, intended to teach students structured programming.
    What about Lua?
    Just my two cents.
    Best regards,
    Cybermonkey

  2. #2
    Sounds like a very interesting project! Looking forward to more

  3. #3
    Quote Originally Posted by Cybermonkey View Post
    That's a rather interesting project. But speaking frankly: labels and jumps are considered bad programming practice, right?
    Well, keep in mind I consider assembly/machine language to be simpler than C... and in ASM you only have jump or call. The difference between CALL and a function is... uhm... yeah. At most, the difference is passing a result in a register or on the stack. Just be glad I rejected the idea I had of making conditionals work like machine language.

    While I've loved Pascal from the day I first learned it on a DEC Rainbow, a lot of its concepts and methods of working -- like those of a number of more modern languages -- I feel require more time investment to do anything useful. To that end there's Python, but that has its own host of issues and isn't entirely suited to what I want the project's focus to be.

    A lot of this comes from my recent putterings on the Apple II and VIC 20... Pascal had a number of conceptual hurdles I never had to deal with in line-numbered BASIC... hurdles I didn't really make it past until I was a teenager. I'm trying to find a middle ground between the two, without treading into the noodle-doodle land that Python does with classes.

    Again, I'm hoping to make it simple enough that with a decent manual a ten-year-old/4th grader could use it without adult intervention, which is why I reject Python or even Pascal. They make assumptions of knowledge and require a certain amount of programming theory that, honestly, I glazed over at that age and gave up on before grasping... Hell, I know adults today who glaze over on endless pages of theory going "where's the beef?".

    It's a big bun. A big fluffy bun. It's a very big bun...
    WHERE'S THE BEEF?

    In a lot of ways, I want to make what LOGO wanted to be but never was.

  4. #4
    Quote Originally Posted by deathshadow View Post
    ...the noodle-doodle land that Python does with classes.
    hahaha!! nice description

  5. #5
    There is something that I can't seem to comprehend. Why would you limit numbers to 64 bit? Isn't this a waste of memory? I mean, how often do you use numbers greater than a 32 bit integer in your games? Also, won't using 64 bit integers considerably slow down all mathematical calculations when programs are running on 32 bit operating systems?
    Also, which type of string would you use (ANSI, UTF-8, Unicode)? Having only ANSI support will make your programming language less interesting for any programmer coming from Eastern Europe, Asia, or any other country which uses additional characters that are not present in the ANSI charset.

  6. #6
    Quote Originally Posted by SilverWarior View Post
    I mean, how often do you use numbers greater than a 32 bit integer in your games?
    Since I'm planning on it being OpenGL aware, 32 bit floating point is likely the bare minimum once rotations, gravity, momentum and drag enter the equation, much less the notion of resolution-independent rendering. I considered single precision since that's what GLfloat is guaranteed to be, but the max value limitation when dealing with non-integer values worried me, and, well... it plays to your second question:

    Quote Originally Posted by SilverWarior View Post
    Also, won't using 64 bit integers considerably slow down all mathematical calculations when programs are running on 32 bit operating systems?
    The speed concern did worry me, but given that interpreted languages were fast enough by the 386 era to make simple sprite-based games, I'm really not all that worried about the speed of floats when the minimum target is a 700MHz ARM8 or a multi-GHz machine with SSE operations available.

    ... and remember, even an 8087 math-co could handle 80 bit "extended" at a decent speed at clocks below 8MHz... I should know, I have one in my Tandy 1000SX next to the NEC V20 running at 7.16MHz.

    Though that could just be that I got used to targeting 4.77MHz on a 16 bit processor with an 8 bit data path that doesn't even have hardware sprites; I may be overestimating the capabilities of a 32 bit processor at almost 150 times that speed with blitting offloaded to the GPU.

    I don't want typecasting to get in the way of actually using the language, and if that means a bit of slowdown, so be it. It's something that pissed me off back on things like Apple Integer BASIC, where you didn't even have fractions... With 4th graders as the target, it's complex enough without confusing them on the difference between ordinal and real... much less the dozen or so integer widths or the half dozen real types. By the fourth grade they should be learning long division, so explaining why they get 5/2=2 is something to avoid until they put on the big boy pants and move to something like C, Pascal or Python.
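
    To show what I mean in FPC terms (not Gumberoo syntax, just an illustration of the trap a fourth grader would hit):

    program DivDemo;
    var
      r: Double;
    begin
      WriteLn(5 div 2);   { integer division: prints 2 }
      r := 5 / 2;         { '/' always produces a real result }
      WriteLn(r:0:1);     { prints 2.5 -- what a kid actually expects }
    end.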

    BASIC on the CoCo only had 'numbers' -- 32 bit values with another 8 bits for the floating point, on an 8 bit processor (admittedly the semi-16 bit floating point monster that is the 6809) -- and it handled things just fine. If the 'slowdown' of 64 bit numbers when you have a math coprocessor DESIGNED to handle numbers that size is an issue, you're probably doing something wrong.

    Or am I really off-base with that type of thinking?

    I'm still thinking I might switch to 80 bit extended, though I'm going to test 32 bit single precision too, which is why I've defined my own "tReal" type so I can change it program-wide as needed.
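
    Something along these lines -- a minimal sketch of the idea, not the actual unit:

    type
      { one alias controls what 'number' means everywhere in the interpreter;
        point it at Single or Extended instead and rebuild to retest precision }
      tReal = Double;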

    Quote Originally Posted by SilverWarior View Post
    Also, which type of string would you use (ANSI, UTF-8, Unicode)? Having only ANSI support will make your programming language less interesting for any programmer coming from Eastern Europe, Asia, or any other country which uses additional characters that are not present in the ANSI charset.
    I'm arguing with myself over that because, to be honest, I hate complex character sets; they are a needlessly complex mess that until recently (relatively speaking) wasn't even involved on computers. I often feel that if we just restricted ourselves to 7 bit ASCII we wouldn't have a lot of the headaches that crop up on websites... and since my bread and butter the past decade has been websites, it's only further made me HATE languages or even normal text that requires anything more complex than that.

    Honestly, I'm tempted to restrict it to the character set used by an Apple IIe.

    BUT -- you are right, such ethnocentric views could limit the potential audience, something I'd like to avoid. At the same time, I'd be far, far more worried about the overhead of string-processing UTF-8 into a raster-based font system like my GLKernedFont method, which is what it's going to run on since, being cross-platform, I can't rely on any specific engine being present, and FreeType looks like ass and kerns text horribly.

    Also, converting even one complete UTF-8 font set to raster and making a kerning table for it doesn't rank all that high up on my to-do list either... maybe if I got my auto-generator for the kerning table completed... THOUGH...

    I could at least start adding the code hooks to add extended codepage support; that way other people who want it elsewhere could add that support themselves once I go public with the first release and codebase. That might be the best approach since I'm not planning on this being a one man operation forever... just until I hit Beta.

    Partly to prove to myself I can still do this sort of thing. For the past six years I've been retired due to failing health, slowly weaning my clients off support... I'm starting to feel like the mind is slipping, and this project isn't just about helping others, but also about proving something to myself.

    Well, that and I have some really weird ideas on how an interpreter should work - and want to test said ideas without getting led off-track.

  7. #7
    Quote Originally Posted by deathshadow View Post
    Since I'm planning on it being OpenGL aware, 32 bit floating point is likely the bare minimum once rotations, gravity, momentum and drag enter the equation, much less the notion of resolution-independent rendering. I considered single precision since that's what GLfloat is guaranteed to be
    OK, I understand that when you are dealing with graphics and physics, having 32 bit or 64 bit numbers could come in handy. But what about other cases? For instance, if you are making some shooter game you will define your units' health with an integer, I presume. So what will be the max health for your units? Several million hitpoints? I don't think so. You will probably define your units' max health in the range of a few hundred hitpoints. So why do you need a 64 bit integer for this again?
    I know that having several different integer types can be quite confusing for beginners. In fact, the concept of the integer itself could be confusing.
    What I had in mind was support for 8 bit, 16 bit, 32 bit, etc. numbers, but with the programming language itself taking care of which of these is used (self-optimization).

    Quote Originally Posted by deathshadow View Post
    If the 'slowdown' of 64 bit numbers when you have a math coprocessor DESIGNED to handle numbers that size is an issue, you're probably doing something wrong.
    I was referring to using 64 bit integers on 32 bit processors (many platforms today still use 32 bit processing). Of course you can still do 64 bit math on 32 bit processors, but for that you need two calculations instead of one (first you calculate the low 32 bits, then the high 32 bits, and finally you join the results from those two operations into the final 64 bit result). So making 64 bit calculations on a 32 bit processor takes at least twice as long.
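
    In Pascal terms it looks roughly like this (just a sketch of what the compiler has to generate for you behind the scenes; the names are made up):

    { adding two 64 bit values using only 32 bit arithmetic }
    procedure Add64(ALo, AHi, BLo, BHi: LongWord; var RLo, RHi: LongWord);
    var
      Carry: LongWord;
    begin
      RLo := ALo + BLo;            { add the low halves (wraps on overflow) }
      if RLo < ALo then            { wrap-around means a carry was produced }
        Carry := 1
      else
        Carry := 0;
      RHi := AHi + BHi + Carry;    { add the high halves plus the carry }
    end;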

    Quote Originally Posted by deathshadow View Post
    I'm arguing with myself over that because, to be honest, I hate complex character sets; they are a needlessly complex mess that until recently (relatively speaking) wasn't even involved on computers. I often feel that if we just restricted ourselves to 7 bit ASCII we wouldn't have a lot of the headaches that crop up on websites... and since my bread and butter the past decade has been websites, it's only further made me HATE languages or even normal text that requires anything more complex than that.
    I do understand your point of view. More complex charsets do cause much more complexity. But how would you feel if you had a development tool or programming language which doesn't allow you to show some specific characters used in your language?

    Quote Originally Posted by deathshadow View Post
    I'd be far, far more worried about the overhead of string-processing UTF-8 into a raster-based font system like my GLKernedFont method, which is what it's going to run on since, being cross-platform, I can't rely on any specific engine being present, and FreeType looks like ass and kerns text horribly.
    I don't see how it would be difficult to render TrueType fonts. I will learn shortly, because I intend to make functions which will allow me to do that in the Aspyre graphics engine. If it works well, I do intend to share its source code.

    Quote Originally Posted by deathshadow View Post
    Well, that and I have some really weird ideas on how an interpreter should work - and want to test said ideas without getting led off-track.
    Testing weird ideas isn't bad at all. In fact, every invention was at first thought of as a weird idea.

  8. #8
    Quote Originally Posted by SilverWarior View Post
    OK, I understand that when you are dealing with graphics and physics, having 32 bit or 64 bit numbers could come in handy. But what about other cases? For instance, if you are making some shooter game you will define your units' health with an integer, I presume. So what will be the max health for your units? Several million hitpoints? I don't think so. You will probably define your units' max health in the range of a few hundred hitpoints. So why do you need a 64 bit integer for this again?
    Remember this is an interpreter, not a compiler; handling multiple data types could add as much overhead, if not more... PHP comes to mind with its total lack of strict typecasting while internally still having types; all those data conversions on every access/assignment can end up taking just as long as calling the FPU... be it ARM's VFP, legacy x87, or SSE.

    Spending time in an interpreter, even after tokenizing, on automatically selecting the optimal type for a value or range of values can end up just as big and slow as simply operating on a single fixed type. All those branches, calls and conditionals add up quickly... certainly to more than the 2 to 8 clock difference between a 32 bit integer operation on the CPU and a 64 bit one on the FPU (at least on x86... still learning ARM).
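
    To show what I mean -- a hypothetical sketch, not Gumberoo's actual internals -- the moment every value carries a type tag, every operation grows branches and conversions:

    {$mode objfpc}
    type
      TValueKind = (vkInt, vkReal);
      TValue = record                  { a tagged value, PHP/variant style }
        case Kind: TValueKind of
          vkInt:  (I: Int64);
          vkReal: (R: Double);
      end;

    function AsReal(const V: TValue): Double;
    begin
      if V.Kind = vkReal then
        Result := V.R
      else
        Result := V.I;                 { a conversion on every mixed-type access }
    end;

    function AddValues(const A, B: TValue): TValue;
    begin
      if (A.Kind = vkInt) and (B.Kind = vkInt) then
      begin                            { branch one: pure integer path }
        Result.Kind := vkInt;
        Result.I := A.I + B.I;
      end
      else
      begin                            { branch two: promote to real and add }
        Result.Kind := vkReal;
        Result.R := AsReal(A) + AsReal(B);
      end;
    end;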

    Quote Originally Posted by SilverWarior View Post
    I was referring to using 64 bit integers on 32 bit processors (many platforms today still use 32 bit processing). Of course you can still do 64 bit math on 32 bit processors, but for that you need two calculations instead of one (first you calculate the low 32 bits, then the high 32 bits, and finally you join the results from those two operations into the final 64 bit result). So making 64 bit calculations on a 32 bit processor takes at least twice as long.
    I get that, and in a way it's part of why I'm not bothering even having integer types.

    By going straight to the math-co, on x87 that's simply an FLD, the operation (FMUL, FADD, FSUB, FDIV), then FSTP -- not really any more or less code than mov eax,mem; mov ebx,mem; mul ebx; mov mem,eax

    At most an FPU double multiply (for example) on anything x87 Pentium/newer is 12 bus clocks memory, 12 bus clocks code fetch and 6 CPU clocks execution (including setting up the FPU memory pointer)... A 32 bit integer multiply on the same might be only 6 bus clocks memory, but it's 20 bus clocks code fetch and 4 CPU clocks... so they may look like they take the same amount of time, but remember the bus isn't as fast as the CPU; as such, on modern computers it is often FASTER to do a 64 bit floating point multiplication than a 32 bit integer one... just because 386 instructions are that extra byte longer, meaning a longer wait on the fetch.
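
    For anyone who hasn't poked at it, the whole x87 sequence really is just three instructions -- a rough FPC-flavoured sketch, assuming a 32 bit x86 target and the Intel assembler reader (not code from the project):

    {$ASMMODE INTEL}
    var
      x, y, r: Double;

    procedure MulViaFPU; assembler;
    asm
      fld   qword ptr [x]    { FLD  - push x onto the x87 stack }
      fmul  qword ptr [y]    { FMUL - multiply the top of stack by y }
      fstp  qword ptr [r]    { FSTP - pop the result into r }
    end;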

    Of course, if you can optimize the assembly to put everything into proper registers you can shift that back around, but that's more the type of thing for a compiler to do, not an interpreter.

    ... and while that's the Wintel way of doing things, you also have to remember that ARM lacks an integer divide, while the various VFP/VFE/SIMD/NEON extensions (whatever they want to optionally include this week) do tend to provide it. Of course, there is the issue of not being able to rely on which FPU extensions are even available (if any) on ARM, and whether FPC even bothers trying to include code for them -- that is a concern I'm going to have to play with in QEMU. I know the Cortex-A8 provides NEON, which uses 64 bit registers despite being hooked to a 32 bit CPU.

    After all, that's why SIMD and its kin exist, and why the x87 was a big deal back in the day... since an 8087 was basically a memory-oriented 80 bit FPU sitting next to a 16 bit processor.

    It is a good point though that I should 'check it'... I'll probably toss together a synthetic benchmark tomorrow to gauge the speed differences, if any... though constant looping will likely trigger the various caches, so I'll probably have to make a version that puts a few hundred K of NOPs in place to cache-flush between operations.
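
    Probably something along these lines to start with -- a rough sketch, loop count pulled out of thin air, and GetTickCount64 from SysUtils is coarse but good enough for an order-of-magnitude answer:

    program MathBench;
    {$mode objfpc}
    uses
      SysUtils;
    const
      Loops = 100000000;
    var
      i, a32, b32: LongInt;
      a64, b64: Double;
      t: QWord;
    begin
      a32 := 12345; b32 := 7;
      t := GetTickCount64;
      for i := 1 to Loops do
        b32 := (b32 * a32) and $7FFFFFFF;   { 32 bit integer multiply }
      WriteLn('32 bit integer: ', GetTickCount64 - t, ' ms (', b32, ')');

      a64 := 1.0000000001; b64 := 1.0;
      t := GetTickCount64;
      for i := 1 to Loops do
        b64 := b64 * a64;                   { 64 bit double multiply on the FPU }
      WriteLn('64 bit double : ', GetTickCount64 - t, ' ms (', b64:0:4, ')');
      { printing the results keeps the loops from being optimized away }
    end.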

    I'm also used to thinking in x86 terms, where the 'integer optimization' for coding hasn't really been true since the Pentium dropped... I've really got a lot of studying of ARM to do -- and of the code FPC makes for ARM. I mean, does it even try to use SIMD/VFP if available?

    Quote Originally Posted by SilverWarior View Post
    I don't see how it would be difficult to render TrueType fonts. I will learn shortly, because I intend to make functions which will allow me to do that in the Aspyre graphics engine. If it works well, I do intend to share its source code.
    Wait until you try using the train wreck known as FreeType -- it's rubbish, pure and simple... There's a reason so many SDL and OpenGL programs don't even bother and use raster fonts instead... The rendering is ugly, inconsistent, painfully slow, and the code interfaces are the worst type of tripe this side of trying to write a device driver for Linux.

    I was thinking I could use monospaced and/or monokerned fonts instead of true kerning; that would make it simpler/faster, and since it's going to have an editor, it will have a monospaced font anyway. Vector fonts are a high-resolution luxury, and I don't think they translate well to composite-scale resolutions in the first place; see the old vector fonts from the BGI at CGA resolutions.

    I may also keep the editor strictly SDL, leaving OpenGL for when a program is running in the interpreter. Still need to play with the idea. I'm nowhere near working on that part yet, as my first order of business is getting the tokenizer and bytecode interpreter complete to where it can at least make a console program. THEN I'll worry about the IDE, graphics, fonts, etc...
