
Install Locally DeepScaleR-1.5B Mathematics Problem Solving Large Language Model (LLM) – Trained on MATH Olympiad Problems  

The YouTube tutorial is given below.

Install and Run Locally Math Genius LLM DeepScaleR-1.5B - Model Trained on MATH Olympiad Problems

Background Information about DeepScaleR-1.5B

This model is fine-tuned on approximately 40,000 unique problem-answer pairs compiled from:

AIME problems (1984-2023)
AMC problems (prior to 2023)
The Omni-MATH dataset
The Still dataset

Installation Instructions for DeepScaleR-1.5B

Here, we present the main installation instructions for DeepScaleR-1.5B. For a thorough explanation of all the installation steps, see the YouTube tutorial.

First, make sure that you have the Microsoft Visual Studio C++ compilers installed on your system. To do that, go to the official Microsoft website and install Microsoft Visual Studio C++. Then, install the NVIDIA CUDA Toolkit by following the instructions given here. You also need to have Python installed on your system. The installation instructions on this website apply to Windows 10 and 11.

Next, create a workspace folder, and then create and activate a Python virtual environment:

cd\
mkdir testModel
cd testModel
python -m venv env1
env1\Scripts\activate.bat
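
As an optional check, you can verify that the virtual environment is active by starting Python and printing the interpreter paths; both should point inside the env1 folder:

import sys

# When env1 is active, these paths point inside the env1 folder
print(sys.prefix)
print(sys.executable)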

Then, install PyTorch by following the instructions on the official website given here. The selection table on the website will produce this installation command:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
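
After PyTorch installs, it is worth verifying that the CUDA build can actually see your GPU. A minimal check, run inside the activated environment:

import torch

# Should print True and the GPU name if the CUDA build works;
# False means PyTorch will fall back to the much slower CPU
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))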

Then, install the transformers, accelerate, and huggingface_hub Python packages:

pip install transformers 
pip install accelerate 
pip install huggingface_hub
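
To confirm that the packages installed correctly, you can print their versions from Python (an optional sanity check):

import transformers
import accelerate
import huggingface_hub

# An ImportError here means the corresponding pip install step failed
print(transformers.__version__)
print(accelerate.__version__)
print(huggingface_hub.__version__)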

Then, execute this Python script to download all the model files from the remote Hugging Face repository:

from huggingface_hub import snapshot_download

# Download all files of the model repository to the local workspace folder
snapshot_download(repo_id="agentica-org/DeepScaleR-1.5B-Preview",
                  local_dir="C:\\testModel\\")

This code will download all the model files into the local folder.
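
Once the download finishes, the local folder should contain the model configuration, tokenizer files, and weight files. As an optional sanity check, you can list the folder contents:

import os

# List the downloaded files; expect config.json, tokenizer files,
# and one or more *.safetensors weight files
print(os.listdir("C:\\testModel\\"))

With the model files in place, the next step is to write a short test script, given below.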

import torch
import transformers

# Path to the local folder containing the downloaded model files
model_id = "C:\\testModel\\"

# Create a text-generation pipeline; torch_dtype="auto" selects a
# suitable precision and device_map="auto" places the model on the GPU
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": "auto"},
    device_map="auto",
)

# Math problem to send to the model
problem = "How to solve the equation sin(2x)=0.1x-0.2?"
messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": problem},
]

# Generate the response and print the last (assistant) message
outputs = pipeline(messages, max_new_tokens=2024)
print(outputs[0]["generated_text"][-1])
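
The last entry of generated_text is the assistant's message, returned as a dictionary with role and content keys. If you only want the reply text itself, the final print statement can be replaced with this variant:

# Print only the text of the model's reply, without the role field
print(outputs[0]["generated_text"][-1]["content"])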