Run the newest LLMs locally! No GPU needed, no configuration, fast and stable LLMs!

Tags: LLM, Large Language Models, Ollama, Homelab, Local LLMs, AI, AI Chatbot, Llama2, Orca2, Vicuna

This is crazy: Ollama can run LLMs without needing a GPU at all, and fast enough to be genuinely usable! Set up your own AI chatbot, AI coder, AI medical bot, AI creative writer, and more.

Install on Linux or Windows Subsystem for Linux 2:

curl https://ollama.ai/install.sh | sh

Install on Mac: https://ollama.ai/download/mac

Pull and run a model:

ollama run [modelname]

Pull and run the 13B variant of a model:

ollama run [modelname]:13b

Exit the chat prompt when you are done:

/bye

Ollama website: https://ollama.ai/
Ollama GitHub: https://github.com/jmorganca/ollama

Start Ollama if it is not running:

sudo systemctl start ollama

Stop Ollama if you need to for some reason:

sudo systemctl stop ollama

Stop Ollama from starting on system boot:

sudo systemctl disable ollama
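If you are ever unsure what state the service is in, systemctl can tell you, and enable undoes the disable above:

# Check whether the Ollama service is currently running
systemctl status ollama

# Re-enable starting on boot if you disabled it earlier
sudo systemctl enable ollama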
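You are not limited to the interactive chat, either: ollama run also accepts a prompt as an argument and exits once the reply is printed, which makes it easy to script. A quick sketch, using llama2 from the tag list above as the example model (any model you have pulled works the same way):

# Download the model ahead of time so the run doesn't stall on a pull
ollama pull llama2

# One-shot prompt: prints the reply to stdout and exits instead of opening the chat prompt
ollama run llama2 "Explain what a homelab is in one sentence."

# List the models you have pulled so far
ollama list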
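Ollama also serves a local HTTP API (on port 11434 by default), so scripts and other tools on your machine can talk to the same models; the endpoint below is the one documented in the GitHub repo linked above. A minimal sketch with curl, where llama2 is again just an example model and "stream": false asks for one complete JSON reply instead of a token-by-token stream:

# Ask the local Ollama API for a completion and get a single JSON response back
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'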