Built with developer experience in mind, Tensorkube simplifies deploying serverless GPU apps. For advanced debugging, however, you may still need the underlying details of a deployment. This cheatsheet equips you with the commands to get them.

Deploy command with all parameters

tensorkube deploy --gpus 1 --gpu-type a10g --cpu 2500 --memory 12000 --min-scale 1 --max-scale 10 --env staging --secret aws-secret --secret huggingface-secret

  • --cpu is in millicores; 1000 = 1 vCPU
  • --memory is in MB; 1024 = 1 GB

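As a concrete example using the conversions above (the scale and secret values here are illustrative, not recommendations), a deployment requesting 2 vCPUs and 8 GB of memory would look like:

```shell
# 2 vCPU -> 2 * 1000 = 2000 millicores
# 8 GB   -> 8 * 1024 = 8192 MB
tensorkube deploy --gpus 1 --gpu-type a10g \
  --cpu 2000 --memory 8192 \
  --min-scale 1 --max-scale 10 \
  --env staging --secret huggingface-secret
```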
List all deployments

Use tensorkube list deployments --all to get the details of all your deployed apps, including their names, status, and HTTP endpoints. Use tensorkube list deployments --env <env_name> to list only the apps in a specific environment.
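For example (the environment name "staging" is illustrative):

```shell
# All deployments across all environments
tensorkube list deployments --all

# Only the deployments in the staging environment
tensorkube list deployments --env staging
```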

Logs of a pod

Use tensorkube deployment logs <deployment-name> --env <env-name> to get the logs of a deployment's pod. Inspecting the logs is usually the quickest way to debug issues inside the pod.
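With the placeholders filled in (the deployment and environment names here are hypothetical):

```shell
# Fetch logs for the "whisper-api" deployment in the staging environment
tensorkube deployment logs whisper-api --env staging
```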

SSH into a deployment / pod

Use tensorkube deployment ssh <deployment-name> --env <env-name> to ssh into a pod and debug issues interactively from inside it.
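For example (names hypothetical; which diagnostic tools are available inside the pod depends on your container image):

```shell
tensorkube deployment ssh whisper-api --env staging
# Once inside, you can run the usual checks, e.g.:
#   nvidia-smi   # confirm the GPU is visible
#   df -h        # check disk usage
```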

Get details of a deployment

tensorkube deployment describe <deployment-name> --env <env-name>

List all running pods

Use kubectl get pods -n <env-name> to get the details of all running pods across your deployments in an environment.
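Since each environment maps to a Kubernetes namespace, standard kubectl flags apply. For example (environment name hypothetical):

```shell
# List all pods in the staging environment's namespace
kubectl get pods -n staging

# Watch pods being created and terminated in real time
kubectl get pods -n staging --watch
```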

Status of a pod before it starts (only use when necessary)

Use kubectl describe pod <pod-name> to check the status of a pod that has not started yet. The output lists the steps Tensorkube takes to create the pod, how long each step takes, and whether there is any scope for optimisation.
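For example (the pod name below is illustrative; get the real one from kubectl get pods first):

```shell
# The Events section at the bottom of the output shows scheduling and
# startup steps (image pull, volume mounts, etc.) with their timings
kubectl describe pod whisper-api-00001-deployment-abc12 -n staging
```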