Deploy GGUF quants of Deepseek R1 671B on Serverless GPUs
Deploy GGUF quants of Deepseek R1 671B param model using Tensorfuse
Deepseek-R1 is an advanced large language model designed to handle a wide range of conversational and generative tasks. It has proven capabilities across various benchmarks and excels at complex reasoning. Deepseek R1 was trained in fp8 precision and has 671B parameters. At 8 bits per parameter, the memory required to run DeepSeek-R1 comes to ~700 GB.
700 GB of GPU memory on a single node is only achievable with 8xH200 GPUs; even 8xH100 nodes top out at a combined 640 GB. Hence, GPU memory becomes the bottleneck for deploying the entire model.
Our friends at Unsloth AI released GGUF quants of the DeepSeek model that require only 140 GB of GPU memory. Their quants average 1.58 bits per parameter (some parameters are 4-bit, some 8-bit, and some 1-bit). This allows us to deploy the model on a variety of GPU combinations that fit on a single node. In this guide we will walk you through deploying GGUF quants of DeepSeek R1 671B on your cloud account using Tensorfuse.
We will deploy the 140 GB model on a combination of 4xL40S GPUs. This combination provides 192 GB of GPU memory, enough to hold the model with headroom left over for the KV cache and a reasonably long context length.
You can deploy other GGUF quant models by modifying the entrypoint.sh script below. You can also tinker with the number of GPUs and the GPU type to deploy on other GPU combinations. If you have more than 192 GB of GPU memory, we also recommend experimenting with the --ctx-size parameter.
Prerequisites
Before you begin, ensure you have configured Tensorfuse on your AWS account. If you haven’t done that yet, follow the Getting Started guide.
Deploying Deepseek-R1-671B with Tensorfuse
Each Tensorkube deployment requires:
- Your environment (as a Dockerfile).
- Your code (in this example, the entrypoint.sh script).
- A deployment configuration (deployment.yaml).
Step 1: Prepare the Dockerfile
We will use the official llama.cpp image as our base image. It comes with all the necessary dependencies to run llama.cpp and is published on the GitHub Container Registry as ghcr.io/ggerganov/llama.cpp.
We will then set the environment variables required to run the model and install the huggingface_hub dependency needed to download it. Finally, we will copy our code and set execute permissions on the entrypoint script.
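Here is a minimal sketch of such a Dockerfile. The base-image tag and package installation are assumptions on our side; pick whichever CUDA-enabled tag the llama.cpp registry currently publishes for your GPU drivers.

```dockerfile
# Sketch only: the base-image tag is an assumption; choose a CUDA-enabled
# tag from ghcr.io/ggerganov/llama.cpp that matches your drivers.
FROM ghcr.io/ggerganov/llama.cpp:full-cuda

# Keep downloaded model shards in a predictable location
ENV HF_HOME=/root/.cache/huggingface

# huggingface_hub is needed by entrypoint.sh to download the GGUF shards
RUN pip3 install --no-cache-dir huggingface_hub

# Copy our code and make the entrypoint executable
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

# llama-server will listen on this port
EXPOSE 8080

ENTRYPOINT ["/entrypoint.sh"]
```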
Step 2: Prepare the entrypoint script
In this step, we first download the GGUF model using snapshot_download from huggingface_hub. We then start llama-server with the necessary flags to run the model.
You can deploy other GGUF quant models by modifying the entrypoint.sh script below. You will have to change the repo_id and local_dir arguments passed to snapshot_download and the --model flag in the llama-server command.
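The script below is a sketch of that flow. The repo_id, quant pattern, and shard filename follow Unsloth's 1.58-bit upload but should be verified against the unsloth/DeepSeek-R1-GGUF repository, and the llama-server binary path depends on the base image you chose.

```bash
#!/bin/bash
set -euo pipefail

# Download only the 1.58-bit (UD-IQ1_S) GGUF shards from Unsloth's repo.
# repo_id, local_dir and the allow_patterns filter are assumptions; adjust
# them if you want a different quant.
python3 - <<'PY'
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="unsloth/DeepSeek-R1-GGUF",
    local_dir="/models/DeepSeek-R1-GGUF",
    allow_patterns=["*UD-IQ1_S*"],
)
PY

# Start llama-server pointing at the first shard of the split GGUF.
# The flag values are a starting point: tune --ctx-size and --n-gpu-layers
# to your GPU memory budget.
/app/llama-server \
    --model /models/DeepSeek-R1-GGUF/DeepSeek-R1-UD-IQ1_S/DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf \
    --n-gpu-layers 999 \
    --ctx-size 8192 \
    --host 0.0.0.0 \
    --port 8080
```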
We’ve configured llama-server with numerous CLI flags tailored to our specific use case. A comprehensive list of all other llama-server flags is available for further reference, and if you have questions about selecting flags for production, the Tensorfuse Community is an excellent place to seek guidance.
Step 3: Deployment config
Although you can deploy Tensorfuse apps from the command line, we recommend keeping a config file so that you can follow a GitOps approach to deployment.
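The snippet below is a sketch of what such a config can look like for the 4xL40S setup in this guide. The key names are assumptions on our side; check the Tensorfuse deployment config reference for the exact schema supported by your version.

```yaml
# deployment.yaml (sketch: key names are assumptions, verify against the
# Tensorfuse deployment config reference)
gpus: 4
gpu_type: l40s
port: 8080
readiness:
  httpGet:
    path: /health
    port: 8080
```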
Don’t forget the readiness endpoint in your config. Tensorfuse uses this endpoint to ensure that your service is healthy. llama-server exposes readiness by default on the /health endpoint. Remember that we have set the port to 8080 in deployment.yaml, as llama-server runs on that port and we have exposed 8080 in the Dockerfile.
If no readiness endpoint is configured, Tensorfuse tries the /readiness path on port 80 by default, which can cause issues if your app is not listening on that path.
Now you can deploy your service using the following command:
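(A sketch assuming the tensorkube CLI from the prerequisites; the flag name may differ across versions, so check tensorkube deploy --help.)

```bash
tensorkube deploy --config-file ./deployment.yaml
```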
Step 4: Accessing the deployed app
Voila! Your autoscaling production llama.cpp service is ready.
Once the deployment is successful, you can see the status of your app by running:
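(The exact subcommand may differ across tensorkube versions; check tensorkube --help.)

```bash
tensorkube deployment list
```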
And that’s it! You have successfully deployed the world’s strongest Open Source Reasoning Model in a quantised format.
Remember to configure a TLS endpoint with a custom domain before going to production.
To test it out, replace YOUR_APP_URL with the endpoint shown in the output of the above command and run:
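(A sketch: the model name below is illustrative, since llama-server serves whatever model it has loaded.)

```bash
curl -X POST YOUR_APP_URL/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "deepseek-r1",
        "messages": [{"role": "user", "content": "Why is the sky blue?"}],
        "max_tokens": 256
      }'
```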
Because llama-server is compatible with the OpenAI API, you can use OpenAI’s client libraries as well. Here’s a sample snippet using Python:
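(A minimal sketch using the official openai package; YOUR_APP_URL is the placeholder from above, and the API key is a dummy value unless you started llama-server with --api-key.)

```python
from openai import OpenAI

# Point the OpenAI client at the deployed llama-server endpoint.
client = OpenAI(base_url="http://YOUR_APP_URL/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="deepseek-r1",  # illustrative; llama-server serves the loaded model
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    max_tokens=256,
)

print(response.choices[0].message.content)
```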
Click here to get started with Tensorfuse.
You can also directly use the Deepseek-R1 GitHub repository for more details and updates on these Dockerfiles.