Deepseek-R1 is an advanced large language model designed to handle a wide range of conversational and generative tasks. It has proven capabilities across various benchmarks and excels at complex reasoning. In this guide, we will walk you through deploying the Deepseek-R1 671B parameter model on your cloud account using Tensorfuse. We will use H100 GPUs for this example; however, it is straightforward to deploy on other GPUs as well (see the tip below).

Although this guide focuses on deploying the 671B param model, you can easily adapt the instructions to deploy any distilled version of Deepseek-R1, including the 1.5B, 7B, and 70B param models. See the table at the end of the guide or visit the GitHub repo.

Why Build with Deepseek-R1

Deepseek-R1 offers:

  • High Performance on Evaluations: Achieves strong results on industry-standard benchmarks.
  • Advanced Reasoning: Handles multi-step logical reasoning tasks with minimal context.
  • Multilingual Support: Pretrained on diverse linguistic data, making it adept at multilingual understanding.
  • Scalable Distilled Models: Smaller distilled variants (1.5B, 7B, 32B, 70B) offer cheaper deployment options without compromising much on quality.

Below is a quick snapshot of benchmark scores for Deepseek-R1:

| Benchmark | Deepseek-R1 (671B) | Remarks |
| --- | --- | --- |
| MMLU | 90.8% | Near state-of-the-art |
| AIME 2024 (Pass@1) | 79.8% | Mathematical and reasoning abilities |
| LiveCodeBench (Pass@1-COT) | 65.9% | Excels at multi-step reasoning |

The combination of these strengths makes Deepseek-R1 an excellent choice for production-ready applications, from chatbots to enterprise-level data analytics.


Prerequisites

Before you begin, ensure you have configured Tensorfuse on your AWS account. If you haven’t done that yet, follow the Getting Started guide.

Deploying Deepseek-R1-671B with Tensorfuse

Each Tensorkube deployment requires:

  1. Your code (in this example, vLLM API server code is used from the Docker image).
  2. Your environment (as a Dockerfile).
  3. A deployment configuration (deployment.yaml).

We will also add token-based authentication to our service, compatible with OpenAI client libraries. We will store the authentication token (VLLM_API_KEY) as a Tensorfuse secret. Unlike some other models, Deepseek-R1 671B does not require a separate Hugging Face token, so we can skip that step.

Step 1: Set your API authentication token

Generate a random string to use as your API authentication token and store it as a Tensorfuse secret using the command below. For the purposes of this demo, we will use vllm-key as the API key.

tensorkube secret create vllm-token VLLM_API_KEY=vllm-key --env default

In production, make sure to use a randomly generated token; you can quickly generate one using openssl rand -base64 32. Remember to keep it safe, as Tensorfuse secrets are opaque.
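For production, the two steps above can be combined as in the sketch below. This is illustrative: it assumes a POSIX shell and reuses the exact tensorkube secret create command shown earlier.

# Generate a strong random token and store it as the Tensorfuse secret.
VLLM_API_KEY="$(openssl rand -base64 32)"
tensorkube secret create vllm-token VLLM_API_KEY="${VLLM_API_KEY}" --env default

# Print the token once so you can save it in a password manager;
# Tensorfuse secrets are opaque, so the value cannot be read back later.
echo "${VLLM_API_KEY}"

Clients will then authenticate with Authorization: Bearer <your generated token> instead of vllm-key.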

Step 2: Prepare the Dockerfile

We will use the official vLLM OpenAI image as our base image. This image comes with all the necessary dependencies to run vLLM and is available on Docker Hub as vllm/vllm-openai.

Dockerfile

# Dockerfile for Deepseek-R1-671B

FROM vllm/vllm-openai:latest

# Enable HF Hub Transfer
ENV HF_HUB_ENABLE_HF_TRANSFER=1

# Expose port 80
EXPOSE 80

# Entrypoint for the OpenAI-compatible vLLM server
ENTRYPOINT ["python3", "-m", "vllm.entrypoints.openai.api_server", \
            # name of the model
            "--model", "deepseek-ai/DeepSeek-R1", \
            # set the data type to bfloat16 - requires ~1400GB of GPU memory
            "--dtype", "bfloat16", \
            "--trust-remote-code", \
            # shard the model across 8 GPUs with tensor parallelism
            "--tensor-parallel-size", "8", \
            # maximum context length; overestimating it can lead to OOM
            "--max-model-len", "4096", \
            # port on which to run the vLLM server
            "--port", "80", \
            # CPU offload in GB; needed because 8 H100s alone are not sufficient
            "--cpu-offload-gb", "80", \
            "--gpu-memory-utilization", "0.95"]

# Note on authentication: the --api-key flag is omitted deliberately.
# Docker's exec-form ENTRYPOINT does not expand environment variables,
# so passing "--api-key", "${VLLM_API_KEY}" would set the literal string
# "${VLLM_API_KEY}" as the key. Instead, vLLM reads the VLLM_API_KEY
# environment variable injected from the Tensorfuse secret and uses it
# to authenticate incoming requests.

We’ve configured the vLLM server with several CLI flags tailored to this use case. A comprehensive list of vLLM server flags is available in the vLLM documentation, and if you have questions about selecting flags for production, the Tensorfuse Community is a good place to ask.
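If you want to browse the full flag list yourself, one option is to print the server's --help output from the same base image. This is a sketch that assumes Docker is installed locally; it should not need a GPU just to render the help text, although the image itself is large.

# Print every CLI flag supported by the vLLM OpenAI-compatible server,
# using the same image and entrypoint module as the Dockerfile above.
docker run --rm --entrypoint python3 vllm/vllm-openai:latest \
    -m vllm.entrypoints.openai.api_server --help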

Step 3: Deployment config

Although you can deploy Tensorfuse apps from the command line, we recommend using a config file so that you can follow a GitOps approach to deployment.

deployment.yaml
# deployment.yaml for Deepseek-R1-671B

gpus: 8
gpu_type: h100
secret:
  - vllm-token
min-scale: 1
readiness:
  httpGet:
    path: /health
    port: 80

Don’t forget the readiness endpoint in your config. Tensorfuse uses this endpoint to ensure that your service is healthy.

If no readiness endpoint is configured, Tensorfuse tries the /readiness path on port 80 by default, which can cause issues if your app is not listening on that path.
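The vLLM server in our Dockerfile exposes a built-in /health route, which is what the readiness probe above points at. Once your service is live, you can run the same check manually; a minimal sketch, where YOUR_APP_URL is the endpoint you will obtain in Step 4:

# Probe the same /health route that Tensorfuse uses for readiness.
# A healthy vLLM server returns HTTP 200 with an empty body; this route
# does not require the API key.
curl --request GET \
  --url YOUR_APP_URL/health \
  --fail --silent --show-error && echo "healthy"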

Now you can deploy your service using the following command:

tensorkube deploy --config-file ./deployment.yaml

Step 4: Accessing the deployed app

Voila! Your autoscaling production LLM service is ready. Only authenticated requests will be served by your endpoint.

Once the deployment is successful, you can see the status of your app by running:

tensorkube deployment list

And that’s it! You have successfully deployed one of the strongest open-source reasoning models available.

Remember to configure a TLS endpoint with a custom domain before going to production.

To test it out, replace YOUR_APP_URL with the endpoint shown in the output of the above command and run:

curl --request POST \
  --url YOUR_APP_URL/v1/completions \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer vllm-key' \
  --data '{
    "model": "deepseek-ai/DeepSeek-R1",
    "prompt": "Earth to Robotland. What's up?",
    "max_tokens": 200
}'
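Deepseek-R1 is first and foremost a chat/reasoning model, so in most applications you will want the OpenAI-compatible chat endpoint, which applies the model's chat template for you. A sketch using the same endpoint and demo API key as above:

curl --request POST \
  --url YOUR_APP_URL/v1/chat/completions \
  --header 'Content-Type: application/json' \
  --header 'Authorization: Bearer vllm-key' \
  --data '{
    "model": "deepseek-ai/DeepSeek-R1",
    "messages": [
      {"role": "user", "content": "Explain, step by step, why the sky appears blue."}
    ],
    "max_tokens": 400
}'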

Because vLLM is compatible with the OpenAI API, you can use OpenAI’s client libraries as well. Here’s a sample snippet using Python (openai v1+):

from openai import OpenAI

# Replace with your actual URL and token
base_url = "YOUR_APP_URL/v1"
api_key = "vllm-key"

client = OpenAI(base_url=base_url, api_key=api_key)

response = client.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    prompt="Hello, Deepseek R1! How are you today?",
    max_tokens=200
)

print(response)

Deploying other versions of Deepseek-R1

Although this guide has focused on Deepseek-R1 671B, there are smaller distilled variants available. Each variant primarily differs in:

  • Model name in the Dockerfile (--model flag).
  • GPU resources in deployment.yaml.
  • (Optional) --tensor-parallel-size depending on your hardware.

A quick way to smoke-test a distilled variant locally is sketched after the table below.

Below is a table summarizing the key changes for each variant:

| Model Variant | Dockerfile Model Name | GPU Type | Num GPUs / Tensor parallel size |
| --- | --- | --- | --- |
| DeepSeek-R1 1.5B | deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B | A10G | 1 |
| DeepSeek-R1 7B | deepseek-ai/DeepSeek-R1-Distill-Qwen-7B | A10G | 1 |
| DeepSeek-R1 8B | deepseek-ai/DeepSeek-R1-Distill-Llama-8B | A10G | 1 |
| DeepSeek-R1 14B | deepseek-ai/DeepSeek-R1-Distill-Qwen-14B | L40S | 1 |
| DeepSeek-R1 32B | deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | L4 | 4 |
| DeepSeek-R1 70B | deepseek-ai/DeepSeek-R1-Distill-Llama-70B | L40S | 4 |
| DeepSeek-R1 671B | deepseek-ai/DeepSeek-R1 | H100 | 8 |
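Before wiring one of these variants into Tensorfuse, you can smoke-test it locally with the same base image. Below is a minimal sketch, assuming a workstation with Docker and a single 24 GB GPU and using the 7B distill; only the --model value and the GPU/parallelism settings differ from the 671B setup above.

# Run the 7B distilled variant locally on one GPU for a quick test.
# The vllm/vllm-openai image's default entrypoint is the OpenAI-compatible
# server, so the flags below are passed straight to it.
docker run --rm --gpus all --ipc=host -p 8000:8000 \
  --env VLLM_API_KEY=vllm-key \
  vllm/vllm-openai:latest \
  --model deepseek-ai/DeepSeek-R1-Distill-Qwen-7B \
  --dtype bfloat16 \
  --max-model-len 4096 \
  --port 8000

Once it responds on http://localhost:8000/v1, carry the same --model value, GPU type, GPU count, and --tensor-parallel-size over to the Dockerfile and deployment.yaml from this guide and deploy with tensorkube deploy as before.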

Click here to get started with Tensorfuse.

You can also directly use the Deepseek-R1 GitHub repository for more details and updates on these Dockerfiles.