
Thread: Dllama - Local LLM Inference

  1. #1

    Dllama - Local LLM Inference


    A simple, easy-to-use library for doing local LLM inference directly from your favorite programming language.

    Download
    tinyBigGAMES/Dllama: Local LLM inference Library (github.com)

    Simple example
    Code:
    uses
      System.SysUtils,
      Dllama;

    begin
      // init
      if not Dllama_Init('config.json', nil) then
        Exit;
      try
        // add messages
        Dllama_AddMessage(ROLE_SYSTEM, 'You are a helpful AI assistant');
        Dllama_AddMessage(ROLE_USER, 'What is AI?');

        // do inference
        if Dllama_Inference('phi3', 1024, nil, nil, nil) then
        begin
          // success
        end
        else
        begin
          // error
        end;
      finally
        Dllama_Quit();
      end;
    end.

  2. #2
    I just tried this project of yours and was not impressed. While it seemed that some text was being written by the AI, the output was an incoherent mess of various words. I don't have enough knowledge on this topic to even begin to guess what might be wrong.

  3. #3
    I'm afraid I didn't even try the project. So much to do, so little time. Besides, I'm not a GitHub user, so I can't really comment at all, other than to make the general point that it's good to see some Pascal activity. Keep up the good work.

  4. #4
    Quote Originally Posted by SilverWarior View Post
    I just tried this project of yours and was not impressed. While it seemed that some text was being written by the AI, the output was an incoherent mess of various words. I don't have enough knowledge on this topic to even begin to guess what might be wrong.
    Hmm, which test did you run? What type of output did you get? Which model did you use? Did you change the template format in config.json? If it's not correct for the model, for example, you will not get correct output. Are you running in GPU or CPU mode, etc.?
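    For example, Phi-3 instruct models expect roughly the following chat template (this is a generic sketch of the Phi-3 prompt format, not copied from Dllama's config.json); if the template configured for a model doesn't match the format the model was trained on, you typically get exactly this kind of incoherent output:
    Code:
    <|system|>
    You are a helpful AI assistant<|end|>
    <|user|>
    What is AI?<|end|>
    <|assistant|>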

    Considering that local LLM inference wasn't even possible on consumer hardware 2-3 years ago, the fact that it works at all is impressive in itself! Amazing how fast AI is advancing.

    If you can give some info about the problem, I'm sure we can get to the bottom of it. I want everyone to be able to enjoy AI.
    Last edited by drezgames; Today at 06:24 PM.

  5. #5
    Quote Originally Posted by SilverWarior View Post
    I just tried this project of yours and was not impressed. While it seemed that some text was being written by the AI, the output was an incoherent mess of various words. I don't have enough knowledge on this topic to even begin to guess what might be wrong.
    Thx!

  6. #6
    Quote Originally Posted by drezgames View Post
    Hmm, which test did you run? What type of output did you get? Which model did you use? Did you change the template format in config.json? Are you running in GPU or CPU mode, etc.?
    I ran the only non-commented test from the Example project. I used the exact same model the Example project seems to be set up to use. It is hard to describe the output. On one hand it seemed to contain only random numbers, but on the other hand it also looked as if it was outputting some commands, which doesn't seem to be the intended output. I used the existing prompts from the Example project by commenting and uncommenting the specific lines that store the prompt examples.
    The AI was supposed to run in AMD Vulkan mode.

    When the existing example didn't work, I tried changing the model to GPT2, which required me to modify config.json. Trying the same prompts defined in the constants resulted in empty responses, except for one (I don't remember which) that returned "What are you doing?" as a response.

    I will try again tomorrow if I find time. This time I intend to modify the example project so that it saves the console output into a text file, which I could then share with you for easier debugging.
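    Something along these lines should do for the saving part (a minimal Delphi sketch; the AResponse string is hypothetical, and the example would still need to collect the console output into it):
    Code:
    uses
      System.SysUtils, System.Classes;

    // Write captured output to a log file that can be shared for debugging.
    // AResponse is a hypothetical string holding the collected console output.
    procedure SaveInferenceLog(const AResponse: string);
    var
      Log: TStringList;
    begin
      Log := TStringList.Create;
      try
        Log.Text := AResponse;
        Log.SaveToFile('dllama_output.txt', TEncoding.UTF8);
      finally
        Log.Free;
      end;
    end;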

    PS: On the first run the program stated it had done some extracting of GGUF data. Where is this extracted data being stored? I haven't seen any new files created. I may have to delete it in order to repeat the extraction process in case something went wrong the first time. I did not get any error message.
