
Tutorial on How to Run the DeepSeek-R1 Model Locally in Python and Linux

The YouTube tutorial is given below.

DeepSeek-R1 Locally in Python: Install and Run Locally DeepSeek-R1 Model in Python and Linux Ubuntu

In this tutorial, we explain how to install and run a distilled version of DeepSeek-R1. Consequently, it is instructive to first explain what distilled models are.

Distilled Models:

Distilled models are smaller models trained to reproduce the behavior of a larger model. The large model acts as a teacher and the small model as a student: the student is trained on outputs generated by the teacher. In the case of DeepSeek-R1, the distilled versions are smaller open models (based on the Qwen and Llama architectures) fine-tuned on reasoning outputs of the full DeepSeek-R1 model. They require significantly less memory and computational power, and consequently, they can run on consumer-grade hardware.

Installation Instructions

The first step is to install the Ollama framework for running large language models. Before that, we need to install curl and open the port that Ollama uses. To install curl, open a Linux terminal and execute these commands

sudo apt update && sudo apt upgrade
sudo apt install curl
curl --version

Then, before installing Ollama, we need to allow inbound connections on port 11434 (the default port used by the Ollama server). To do that, execute this command

sudo ufw allow 11434/tcp

Next, we need to install Ollama. To do that, go to the Ollama website

https://www.ollama.com

Then, click on the download button and select Linux. Copy the displayed installation command, and execute it in the terminal window

curl -fsSL https://ollama.com/install.sh | sh

This will install Ollama. Once the installation completes, execute the following in the terminal

ollama

If Ollama is installed correctly, a help message listing the available Ollama commands will appear.
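
Optionally, you can also verify from Python that the Ollama server is running. The short sketch below assumes the default server address http://localhost:11434; the server answers with a short status message.

import urllib.request

# query the root endpoint of the local Ollama server
# (assumes the default address http://localhost:11434)
with urllib.request.urlopen("http://localhost:11434") as reply:
    # the server is expected to answer with "Ollama is running"
    print(reply.read().decode("utf-8"))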

The next step is to install the DeepSeek-R1 model. To do that, go to the Ollama website and search for DeepSeek-R1. Click on the model, and then select the 14b version (the model with 14 billion parameters).

Then, copy and execute the installation command in the terminal

ollama run deepseek-r1:14b

This command will install and run the model. Test the model by asking a basic question. Next, exit the model by typing /bye.

The next step is to create a Python script for running the model. First, let us verify that Python is installed on the system. Open a terminal and type

which python3
python3 --version

The output should show the path to the Python executable and the installed Python version. In our case, the Python version is 3.12.3. Next, we need to create a workspace folder. To do that, type

cd ~
mkdir testModel
cd testModel

Next, we need to install the package for creating Python virtual environments. Since our Python version is 3.12, the installation command is

sudo apt install python3.12-venv

In the above command, instead of 3.12, insert your Python version (only the major and minor numbers separated by the period).

Next, let us create and activate the Python virtual environment

python3 -m venv env1
source env1/bin/activate
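
As an optional check that is not part of the original steps, you can verify that the virtual environment is active by asking Python which interpreter it is using; the printed path should point inside the env1 folder.

python3 -c "import sys; print(sys.executable)"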

Next, we need to install the Ollama Python library. To do that, type this

pip install ollama
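
To confirm that the library was installed inside the virtual environment, you can print its version with the Python standard library (an optional check):

python3 -c "from importlib.metadata import version; print(version('ollama'))"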

Finally, let us create a Python script. To create the script, we use a simple Linux editor called nano. Since the file is created in our own workspace folder, sudo is not necessary. Execute this

nano test.py

The test.py file should look like this

import ollama

# name of the installed model (check the exact name with "ollama list")
desiredModel = 'deepseek-r1:14b'
# question that is sent to the model
questionToAsk = 'How to solve a quadratic equation x^2+5*x+6=0'

# send the question to the model as a single user message
response = ollama.chat(model=desiredModel, messages=[
    {
        'role': 'user',
        'content': questionToAsk,
    },
])

# extract the text of the model reply
OllamaResponse = response['message']['content']

# print the reply on the screen
print(OllamaResponse)

# save the reply to a text file
with open("OutputOllama.txt", "w", encoding="utf-8") as text_file:
    text_file.write(OllamaResponse)
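
Note that DeepSeek-R1 is a reasoning model: its replies typically start with a chain-of-thought section enclosed in <think> and </think> tags. If you only want the final answer, you can strip that section with a regular expression. The lines below are an optional sketch that can be appended at the end of test.py; they are not part of the original script.

import re

# remove the chain-of-thought section enclosed in <think>...</think> tags
# (optional addition; assumes OllamaResponse is defined as in the script above)
finalAnswer = re.sub(r"<think>.*?</think>", "", OllamaResponse, flags=re.DOTALL).strip()
print(finalAnswer)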

The Python script given above loads the Ollama model called deepseek-r1:14b. The name of this model can be found by opening a new terminal and typing

ollama list

This command will list all the installed models and their names. You should insert the appropriate model name in the variable desiredModel in the Python code given above. The Python script will call the model and forward the question stored in the string variable questionToAsk. The output will be printed on the screen and stored in the text file OutputOllama.txt.
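
The installed models can also be listed programmatically with the Ollama Python library. The sketch below is a minimal, optional example; depending on the version of the library, each entry exposes the model name under the field 'model' or 'name'.

import ollama

# ask the local Ollama server for the list of installed models
models = ollama.list()

# print one entry per installed model; each entry contains the model name,
# size, and other metadata (exact field names depend on the library version)
for modelInfo in models['models']:
    print(modelInfo)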
