
Thread: Dllama - Local LLM Inference

  #7
    Quote:
        I just tried this project of yours and was not impressed. While it seemed that some text was being written by the AI, the output was an incoherent mess of various words. I don't have enough knowledge on this topic to even begin to guess what might be wrong.

    Hmm, which test did you run? What type of output did you get? Which model did you use? Did you change the template format in config.json? If the template doesn't match the model, for example, you will not get coherent output. Are you running in GPU or CPU mode, etc.?
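
    To show what I mean about the template, here is a minimal sketch of what a model entry with an explicit prompt template might look like. The key names ("filename", "name", "max_context", "template", "template_end") and the %s placeholders are assumptions for illustration only, not necessarily the exact schema Dllama uses in config.json; check the sample config in the repo for the real keys. The point is that the template string has to match the format the model was fine-tuned on (here, the Llama 3 Instruct format with its <|start_header_id|>/<|eot_id|> markers); feed an instruct model a plain or mismatched prompt and you get exactly the kind of incoherent word salad you describe.

        {
          "models": [
            {
              "filename": "Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",
              "name": "llama3",
              "max_context": 8192,
              "template": "<|start_header_id|>%s<|end_header_id|>\n\n%s<|eot_id|>",
              "template_end": "<|start_header_id|>assistant<|end_header_id|>\n\n"
            }
          ]
        }

    If you post the entry from your config.json along with the exact model file you downloaded, a mismatch like that should be easy to spot.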

    Local LLM inference on consumer hardware wasn't even possible 2-3 years ago, so the fact that it works at all is impressive in itself! It's amazing how fast AI is advancing.

    If you can give some info about the problem, I'm sure we can get to the bottom of it. I want everyone to be able to enjoy AI.
    Last edited by drezgames; 07-05-2024 at 06:24 PM.
