tensorkube deploy

Deploy your containerized application on Tensorkube. This command requires a Dockerfile to be present in the current directory.

Usage:

tensorkube deploy [OPTIONS]
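
For example, a deployment requesting one GPU and scaling between one and five pods might look like this (the flag values below are illustrative, not recommendations):

```shell
# Deploy the app in the current directory (must contain a Dockerfile),
# requesting one A10G GPU, 2 CPUs (2000 millicores) and 4 GB of RAM,
# scaling between 1 and 5 pods.
tensorkube deploy \
  --gpus 1 \
  --gpu-type A10G \
  --cpu 2000 \
  --memory 4096 \
  --min-scale 1 \
  --max-scale 5
```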

Options:

  • --gpus INTEGER [default: 0]: Number of GPUs needed for the service.

  • --gpu-type Choice(['V100', 'A10G', 'T4', 'L4', 'L40S']): Type of GPU.

  • --cpu FLOAT [default: 100]: Amount of CPU to allocate, in millicores (1000 millicores = 1 CPU).

  • --memory FLOAT [default: 200]: Amount of RAM in megabytes.

  • --min-scale INTEGER [default: 0]: Minimum number of pods to run.

  • --max-scale INTEGER [default: 3]: Maximum number of pods to run.

  • --env TEXT: Environment to deploy the service to.

  • --github-actions BOOLEAN [default: False]: Set this flag when deploying from GitHub Actions.

  • --secret TEXT: Secret to use for the deployment.

  • --efs BOOLEAN [default: False]: Flag to use EFS for the deployment.

  • --concurrency INTEGER [default: 100]: Number of concurrent requests a single pod handles at a time. Pods scale between min-scale and max-scale based on load.

  • --config-file TEXT: Path to a config.yaml file that can be used instead of CLI flags. Keys in config.yaml take precedence over CLI flags.

  • --domain-name TEXT: Domain to use for the deployment.

  • --help: Show this message and exit.
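
As an alternative to passing flags on the command line, the same settings can be written to a config.yaml and supplied with --config-file. The sketch below mirrors the flag names as YAML keys; the exact key names and schema are an assumption, so check your generated config before relying on it:

```yaml
# Hypothetical config.yaml mirroring the CLI flags above.
# Keys set here take precedence over CLI flags.
gpus: 1
gpu_type: A10G
cpu: 2000        # millicores; 1000 = 1 CPU
memory: 4096     # megabytes
min_scale: 1
max_scale: 5
concurrency: 100
```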