  1. ollama - Reddit

    r/ollama How good is Ollama on Windows? I have a 4070Ti 16GB card, Ryzen 5 5600X, 32GB RAM. I want to run Stable Diffusion (already installed and working), Ollama with some 7B models, maybe a …

  2. Ollama GPU Support : r/ollama - Reddit

    I've just installed Ollama in my system and chatted with it a little. Unfortunately, the response time is very slow even for lightweight models like…

  3. How to make Ollama faster with an integrated GPU? : r/ollama - Reddit

    Mar 8, 2024 · How to make Ollama faster with an integrated GPU? I decided to try out ollama after watching a YouTube video. The ability to run LLMs locally, which could give output faster, amused …

  4. Ollama is making entry into the LLM world so simple that even ... - Reddit

    I took time to write this post to thank ollama.ai for making entry into the world of LLMs this simple for non techies like me. Edit: A lot of kind users have pointed out that it is unsafe to execute the bash file to …

  5. Options for running LLMs on laptop - better than ollama - Reddit

    Jan 15, 2024 · I currently use ollama with ollama-webui (which has a look and feel like ChatGPT). It works really well for the most part though can be glitchy at times. There are a lot of features in the …

  6. Local Ollama Text to Speech? : r/robotics - Reddit

    Apr 8, 2024 · Yes, I was able to run it on a RPi. Ollama works great. Mistral, and some of the smaller models work. Llava takes a bit of time, but works. For text to speech, you’ll have to run an API from …

  7. Ollama running on Ubuntu 24.04 : r/ollama - Reddit

    Ollama running on Ubuntu 24.04 I have an NVIDIA 4060 Ti running on Ubuntu 24.04 and can’t get Ollama to leverage my GPU. I can confirm it because running nvidia-smi does not show the GPU. I’ve google …
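
    The kind of check described in that thread can be sketched as a few commands. This is a minimal troubleshooting sketch, assuming NVIDIA drivers and an Ollama install as a systemd service; the grep terms are only illustrative:

```shell
# Confirm the driver sees the card at all; if this fails,
# Ollama cannot use the GPU either.
nvidia-smi

# With a model loaded, check where Ollama placed it.
# The PROCESSOR column shows "100% GPU", "100% CPU", or a split.
ollama ps

# Inspect the server logs for GPU/CUDA detection messages
# (assumes Ollama runs as a systemd service named "ollama").
journalctl -u ollama --no-pager | grep -i -e cuda -e gpu
```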

  8. How does Ollama handle not having enough Vram? - Reddit

    How does Ollama handle not having enough Vram? I have been running phi3:3.8b on my GTX 1650 4GB and it's been great. I was just wondering if I were to use a more complex model, let's say …

  9. Ollama iOS mobile app (open source) : r/LocalLLaMA - Reddit

    Dec 22, 2023 · With OLLAMA_HOST=your.ip.address.here ollama serve, Ollama will run and bind to that IP instead of localhost, and the Ollama server can be accessed on your local network (ex: within your …
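
    The OLLAMA_HOST trick from that thread can be sketched as follows. The server address 192.168.1.20 and the model name "mistral" are placeholders, and 11434 is Ollama's default port:

```shell
# On the server: bind to all interfaces instead of localhost only.
# (0.0.0.0 exposes the API to the whole LAN, so use it on trusted networks.)
OLLAMA_HOST=0.0.0.0:11434 ollama serve

# From another machine on the network (replace 192.168.1.20 with
# the server's actual address):
curl http://192.168.1.20:11434/api/generate \
  -d '{"model": "mistral", "prompt": "Hello", "stream": false}'
```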

  10. How to manually install a model? : r/ollama - Reddit

    Apr 11, 2024 · I'm currently downloading Mixtral 8x22b via torrent. Until now, I've always run ollama run somemodel:xb (or pull). So once those >200GB of glorious…
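
    A manual install along the lines asked about there can be sketched with Ollama's Modelfile mechanism. This assumes the weights ended up as a local GGUF file; the filename and the model name "mixtral-local" are only examples:

```shell
# Point a Modelfile at the downloaded weights
# (the GGUF filename here is hypothetical):
cat > Modelfile <<'EOF'
FROM ./mixtral-8x22b.Q4_K_M.gguf
EOF

# Register it with the local Ollama instance under a chosen name,
# then run it like any pulled model.
ollama create mixtral-local -f Modelfile
ollama run mixtral-local
```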