Deploy GGUF quants of the DeepSeek R1 671B parameter model using Tensorfuse
The model is downloaded and served by the entrypoint.sh script below. You can also tinker with the number of GPUs and GPU type (in deployment.yaml) to deploy on other GPU combinations. If you have more GPU memory than 192 GB, we would also recommend playing around with the --ctx-size parameter.

In entrypoint.sh, we download the model weights using snapshot_download from huggingface_hub and then start the llama server with the necessary flags to run the model.
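For reference, here is a minimal sketch of that download-and-serve logic, written in Python for illustration rather than as a shell script. The repo, local directory, quant folder, shard filename, and flag values are assumptions; use the ones from the actual entrypoint.sh below.

```python
# Illustrative sketch only: repo, paths, and flag values are assumptions,
# not the exact ones used in entrypoint.sh.
import subprocess
from huggingface_hub import snapshot_download

# Download only the GGUF shards for the chosen quant.
snapshot_download(
    repo_id="unsloth/DeepSeek-R1-GGUF",   # assumed example repo hosting R1 GGUF quants
    local_dir="/data/DeepSeek-R1-GGUF",   # assumed download location
    allow_patterns=["*UD-Q2_K_XL*"],      # assumed pattern matching one quant's shards
)

# Start llama-server; point --model at the first shard of the split GGUF.
subprocess.run(
    [
        "llama-server",
        "--model", "/data/DeepSeek-R1-GGUF/DeepSeek-R1-UD-Q2_K_XL/DeepSeek-R1-UD-Q2_K_XL-00001-of-00005.gguf",
        "--ctx-size", "8192",      # increase if you have more than 192 GB of GPU memory
        "--n-gpu-layers", "62",    # offload as many layers as your GPUs can hold
        "--host", "0.0.0.0",
        "--port", "8080",          # matches the port exposed in the Dockerfile and deployment.yaml
    ],
    check=True,
)
```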
To deploy a different GGUF quant, modify the entrypoint.sh script below: change the repo_id and local_dir parameters in the snapshot_download call, and change the --model flag in the llama-server command. A complete list of llama-server flags is available for further reference, and if you have questions about selecting flags for production, the Tensorfuse Community is an excellent place to seek guidance.
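For example, switching to a different quant only touches those three places. The names below are placeholders, not values from this guide:

```python
# Placeholders only: substitute the repo, directory, and quant you actually want.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="some-org/Some-R1-GGUF",    # change: repo hosting your quant
    local_dir="/data/Some-R1-GGUF",     # change: matching download directory
    allow_patterns=["*SOME-QUANT*"],    # change: pattern for that quant's shards
)
# ...then point llama-server at the new shards, e.g.:
#   llama-server --model /data/Some-R1-GGUF/SOME-QUANT/SOME-QUANT-00001-of-0000N.gguf ...
```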
Note the readiness endpoint in your config. Tensorfuse uses this endpoint to ensure that your service is healthy. llama-server exposes readiness by default on the /health endpoint. Remember that we have set the port to 8080 in deployment.yaml, as llama-server runs on that port and we have exposed 8080 in the Dockerfile.
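If you want to sanity-check that endpoint before deploying, a quick probe against a locally running server looks like this (the host and port here assume a local llama-server started as above):

```python
# Quick readiness probe against a locally running llama-server (assumed on localhost:8080).
import requests

resp = requests.get("http://localhost:8080/health", timeout=5)
print(resp.status_code, resp.text)  # should be 200 once the model has finished loading
```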
If no readiness endpoint is configured, Tensorfuse tries the /readiness path on port 80 by default, which can cause issues if your app is not listening on that path.

Replace YOUR_APP_URL with the endpoint shown in the output of the above command and run:
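As a rough Python equivalent of that test request (the /v1/chat/completions path, the payload, and the absence of auth headers are assumptions about the deployment):

```python
# Rough test request; replace YOUR_APP_URL and add auth headers if your deployment needs them.
import requests

resp = requests.post(
    "https://YOUR_APP_URL/v1/chat/completions",
    json={
        "model": "deepseek-r1",  # illustrative; llama-server serves whichever model it loaded
        "messages": [{"role": "user", "content": "Why is the sky blue?"}],
    },
    timeout=600,
)
print(resp.json())
```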
Since llama-server is compatible with the OpenAI API, you can use OpenAI’s client libraries as well. Here’s a sample snippet using Python:
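A sketch of what that snippet could look like; the base_url form, the dummy API key, and the model name are assumptions:

```python
# Sketch using the OpenAI Python client against the deployed llama-server.
from openai import OpenAI

client = OpenAI(
    base_url="https://YOUR_APP_URL/v1",  # llama-server exposes the OpenAI-compatible API under /v1
    api_key="not-needed",                # assumed: the server does not validate the key
)

response = client.chat.completions.create(
    model="deepseek-r1",  # illustrative model name
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}],
)
print(response.choices[0].message.content)
```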