Install a local AI: why and how

There are several scenarios in which you might want to install a local AI engine: privacy, for one, but also customization and performance. Some references:

There are many AIs you can install. Some lists:

The AI engines mentioned in those lists include:

  • Mistral
  • Ollama
  • GPT4All
  • Qwen
  • Whisper Tiny
  • DeepSeek
  • LLaMa
  • LMStudio
  • Phi Mini

(I don’t include version numbers, since they would quickly go out of date and break the references.)

Besides those, there are also:

  • LocalAI – «The free, OpenAI, Anthropic alternative. Your All-in-One Complete AI Stack»
    localai.io/
  • LM Studio – Local AI on your computer – «Run local LLMs like gpt-oss, Qwen3, Gemma3, DeepSeek and many more on your computer, privately and for free.»
    lmstudio.ai/
  • Locally AI – Run AI models locally on your iPhone, iPad, and Mac. – «Run Llama, Gemma, Qwen, DeepSeek, and more locally on your iPhone, iPad, and Mac. Offline. Private. No login. Optimized for Apple Silicon.»
    locallyai.app/
  • Ollama – «Start building with open models»
    ollama.com/
  • Alibaba has just picked the pocket of every Big Tech company: it did so with promising pocket-sized AI models – «Running genuinely decent local AI models on your phone is now within (almost) everyone’s reach»
    www.xataka.com/robotica-e-ia/alibaba-acaba-robarle-cartera-a-todas-big-tech-ha-hecho-prometedores-modelos-ia-bolsillo
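To make the “how” concrete, here is a minimal sketch of getting started with one of the engines above, Ollama. The install script URL and the `pull`/`run` subcommands come from Ollama’s own quick start; the model name `llama3` is an assumption for illustration, and the available tags may change over time:

```shell
# Install Ollama on Linux (macOS and Windows have installers on ollama.com).
curl -fsSL https://ollama.com/install.sh | sh

# Download a model once; later runs reuse the local copy. No version is
# pinned here, in the same spirit as the note above about version numbers.
ollama pull llama3

# Chat with the model entirely on your own machine, offline.
ollama run llama3 "Why would I want a local AI engine?"
```

Once installed, Ollama also runs a local HTTP API (by default on port 11434) that your own scripts and tools can call, which is where the customization benefit mentioned at the top comes in.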