Fusion of Engineering, Control, Coding, Machine Learning, and Science

Install OpenThinker Large Language Model (LLM) on Linux Ubuntu- LLM Better than DeepSeek-R1 Distilled Models

In this Large Language Model (LLM) tutorial, we explain how to install and run the OpenThinker LLM locally on a Linux Ubuntu computer. OpenThinker is a family of LLMs obtained by fine-tuning Qwen2.5 on the OpenThoughts-114k dataset, which consists of around 114,000 coding and puzzle problems and their solutions. OpenThinker comes in two sizes, a 7B and a 32B model, and according to its authors, these models outperform the corresponding distilled DeepSeek-R1 7B and 32B models. In this tutorial, we explain how to install and run OpenThinker locally by using Ollama, and how to generate a Graphical User Interface (GUI) by using Open WebUI. The YouTube tutorial is given below.

Run OpenThinker LLM on Ubuntu - Better Model than Distilled DeepSeek-R1

How to install OpenThinker Large Language Model on Linux Ubuntu

The installation instructions given below work for Linux Ubuntu 24.04 and Python 3.12. If you are using another version of Python or Linux Ubuntu, you can easily modify the commands given below. First of all, we need to install curl. For that purpose, open a terminal and type

sudo apt update && sudo apt upgrade
sudo apt install curl
curl --version

Then, we need to install Ollama. To install Ollama, in the terminal type

curl -fsSL https://ollama.com/install.sh | sh
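Before pulling any models, it is a good idea to verify that Ollama installed correctly. A minimal sanity check is sketched below; the guard makes it safe to run even if the installation failed:

```shell
# Verify the Ollama installation; prints the version if it succeeded
if command -v ollama >/dev/null 2>&1; then
  OLLAMA_STATUS="installed"
  ollama --version
else
  OLLAMA_STATUS="missing"
  echo "Ollama not found on PATH"
fi
```

On Ubuntu, the install script also registers Ollama as a systemd service, so `systemctl status ollama` should show it running in the background.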

This will install Ollama. Next, we need to download and install the models. OpenThinker comes in two flavors, a smaller 7B model and a larger 32B model. Select the model depending on your hardware resources. In our case, we were able to run both models on an NVIDIA RTX 3090 GPU with 24 GB of VRAM. To download the smaller model, type this

ollama pull openthinker:7b

To download the larger model, type this

ollama pull openthinker:32b

You can test these models in a terminal by typing this:

ollama run openthinker:7b

or

ollama run openthinker:32b
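Besides the interactive terminal session, Ollama also exposes a REST API on port 11434, which is convenient for scripted, non-interactive tests. The sketch below assumes the 7B model has already been pulled; the prompt text is just an example:

```shell
# Query the locally running Ollama server through its REST API.
# "stream": false returns the whole answer as a single JSON object.
MODEL="openthinker:7b"
PAYLOAD=$(printf '{"model": "%s", "prompt": "Why is the sky blue?", "stream": false}' "$MODEL")
curl -s http://localhost:11434/api/generate -d "$PAYLOAD" \
  || echo "Ollama server not reachable on port 11434"
```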

The next step is to generate a GUI for running the models. To do that, open a terminal and type

cd ~
mkdir testOpen
cd testOpen

To check the Python version, type this

python3 --version
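If you need the version programmatically (for example, to pick the matching `python3.X-venv` package in a script), the major.minor number can be extracted like this; the snippet is a small convenience, not part of the required steps:

```shell
# Extract the major.minor Python version (e.g. 3.12) for use in scripts
PYVER=$(python3 -c 'import sys; print(f"{sys.version_info.major}.{sys.version_info.minor}")')
echo "Detected Python $PYVER"
```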

Then, we need to create and activate a Python virtual environment. To do that, type this

sudo apt install python3.12-venv
python3 -m venv env1
source env1/bin/activate

To install and run Open WebUI in the Python virtual environment, type this

pip install open-webui
open-webui serve

This will start Open WebUI. To open the GUI, type the following address in a web browser

http://localhost:8080
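To check from a second terminal whether the Open WebUI server is actually up before opening the browser, you can probe the port with curl (8080 is the default Open WebUI port; a status code of 200 indicates the GUI is being served):

```shell
# Probe the Open WebUI port; stores the HTTP status code (200 when up)
CODE=$(curl -s -o /dev/null -w "%{http_code}" http://localhost:8080 || true)
echo "HTTP status: $CODE"
```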

For more details see the YouTube tutorial given above.

Additional Resources
