September 19, 2024

Easily Install and Run Ollama and Llama3.1 LLM on Ubuntu Linux from the Command Line

In this tutorial, we explain how to correctly install Ollama and Large Language Models (LLMs) on Ubuntu Linux from scratch. The YouTube tutorial accompanying this webpage is given below.

Ollama is an interface for running Large Language Models (LLMs) on Windows, Linux, and macOS computers. With Ollama you can run a number of LLMs, such as llama3.1, gemma2, mistral-nemo, and mistral-large. Furthermore, you can integrate Ollama with RAGFlow to create an advanced personal assistant or chat assistant that helps you with your work, research, or daily tasks.

In this tutorial, we explain

  1. How to correctly install Ollama on Ubuntu Linux.
  2. How to install and use LLMs with Ollama from the Ubuntu Linux command line.

First, we have to make sure that our computer allows inbound connections on port 11434, which is the port Ollama listens on. To do that, open a terminal and type

sudo ufw allow 11434/tcp
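To confirm that the rule has been added (assuming the ufw firewall is enabled on your system), you can list the current rules:

sudo ufw status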

To install Ollama on Ubuntu Linux, open a terminal and type

curl -fsSL https://ollama.com/install.sh | sh
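After the installation script finishes, you can quickly confirm that the Ollama command-line tool is available by printing its version:

ollama --version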

Once we have installed Ollama, we can verify that it is running by opening a web browser and typing the following in the address bar

localhost:11434
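If you prefer to stay in the terminal, the same check can be done with curl (assuming curl is installed):

curl http://localhost:11434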

You should get the message: “Ollama is running”. Next, we explain how to install and use LLMs in Ollama. To see a list of LLMs that can be used with Ollama, go to the Ollama model library at ollama.com/library and select a model. In our case, we will use Llama 3.1, an open-source LLM released by Meta. Click on the model to open its page, and then select the model size. In our case, we will use the 8B model, and consequently, we have to execute this command to download and run llama3.1:8b:

ollama run llama3.1:8b
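Once the model is loaded, Ollama opens an interactive prompt where you can chat with the model; type /bye to leave the session. You can also pass a prompt directly on the command line for a one-off answer, for example (the prompt text here is just an illustration):

ollama run llama3.1:8b "Explain in one sentence what a large language model is."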

The first time you run the ollama run command for a given model, the model is downloaded. On subsequent runs, the already downloaded model is simply started. That is, to start a downloaded model, you type the following

ollama run <model name>

Alternatively, you can download the model without starting it:

ollama pull llama3.1:8b

This command only downloads the model; it does not run it.
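Once a model has been downloaded, you can also query it through Ollama's local REST API instead of the interactive prompt. A minimal sketch using curl (the prompt is just an example, and "stream": false requests a single JSON response instead of a token stream):

curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'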

To display model information, you need to type

ollama show llama3.1

To remove the model, type

ollama rm llama3.1:8b

To list the models on the computer, type

ollama list
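In addition, recent versions of Ollama provide a command that shows which models are currently loaded in memory:

ollama ps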