How To Install Llama 3.3 Large Language Model Locally on Linux Ubuntu

In this tutorial, we explain how to install and run the Llama 3.3 70B Large Language Model (LLM) locally on Linux Ubuntu. To install Llama 3.3, we will use Ollama. Ollama is one of the simplest command-line tools and frameworks for running LLMs locally. Furthermore, Ollama is easy to install, and once installed, it lets us run different LLMs from the command line.

The YouTube tutorial is given below.

Installation Instructions

The first step is to install Ollama. Before installing, we have to make sure that our computer allows inbound connections on port 11434, which is the default port Ollama listens on. To do that, open a Linux Ubuntu terminal and type

sudo ufw allow 11434/tcp
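If you want to confirm that the rule was added, a quick check like the following can help (a sketch; it assumes ufw is active and reading the status may require sudo):

```shell
# Look for the 11434 rule in the ufw status output.
# sudo is typically required to read the firewall status.
RULE=$(sudo -n ufw status 2>/dev/null | grep 11434 || true)
if [ -n "${RULE}" ]; then
  echo "firewall rule present: ${RULE}"
else
  echo "no rule for port 11434 found (ufw may be inactive or sudo unavailable)"
fi
```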

Then, update the package index and install curl:

sudo apt update && sudo apt upgrade
sudo apt install curl
curl --version

Then, to install Ollama, type this:

curl -fsSL https://ollama.com/install.sh | sh
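Once the installer finishes, it is worth checking that the ollama binary ended up on the PATH (a minimal sketch):

```shell
# Confirm that the ollama binary is available after installation.
if command -v ollama >/dev/null 2>&1; then
  OLLAMA_PATH=$(command -v ollama)
  echo "ollama installed at ${OLLAMA_PATH}"
else
  OLLAMA_PATH=""
  echo "ollama not found on PATH; re-check the installer output"
fi
```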

Once we have installed Ollama, we can verify that it is running by opening a web browser and typing the following in the address bar

localhost:11434
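The same check can also be scripted from the terminal instead of a browser (a sketch, using the curl we installed earlier):

```shell
# Ask the Ollama server for its banner; it replies "Ollama is running" when up.
STATUS=$(curl -s --max-time 2 http://localhost:11434 || echo "server not reachable")
echo "${STATUS}"
```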

If Ollama is installed correctly, you should see the message “Ollama is running”. Then, to download the Llama 3.3 model, type this

ollama pull llama3.3

After the model is downloaded, we can run it. To start an interactive session with the model, type this

ollama run llama3.3
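Beyond the interactive session, the running model can also be queried programmatically through Ollama's local REST API (a sketch; it assumes the server is up and llama3.3 has been pulled):

```shell
# Build a request for Ollama's /api/generate endpoint.
# "stream": false asks for a single JSON response instead of a token stream.
REQUEST_BODY='{"model": "llama3.3", "prompt": "Why is the sky blue?", "stream": false}'
# Only send the request if the server answers on its default port.
if curl -s --max-time 2 http://localhost:11434 >/dev/null 2>&1; then
  curl -s http://localhost:11434/api/generate -d "${REQUEST_BODY}"
else
  echo "Ollama server is not reachable on localhost:11434"
fi
```

This is convenient for calling the model from scripts or other programs without going through the interactive prompt.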