Changes made to the Tensorkube CLI

You can pick up the latest changes by running `tensorkube upgrade`. We thank you for your understanding and patience as we progress towards a stable release of the CLI.
Added custom domain support. To enable it, run `tensorkube configure`:

- `tensorkube configure --domain-name <domain>`: Set up a wildcard SSL certificate for your domain.
- `tensorkube domain get-validation-records --domain-name <domain>`: Retrieve the DNS records needed for validation.
- `tensorkube domain attach --domain-name <domain>`: Bind the load balancer to the ACM certificate for TLS termination.
- `tensorkube deploy --domain-name <sub.domain>`: Deploy your application with the configured custom domain.
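For reference, ACM validates domain ownership through DNS CNAME records, so the output of `tensorkube domain get-validation-records` maps to entries of roughly this shape (the names and values below are illustrative placeholders, not real records):

```text
# Illustrative ACM validation record (placeholders, not real values)
Name:  _3f92ab1example0hash.yourdomain.com.
Type:  CNAME
Value: _8c1d27example0hash.acm-validations.aws.
```

Add the record at your DNS provider, then wait for the certificate status to change before attaching the domain.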
- `tensorkube give-cluster-access`: Grant access permissions for a specific user or team member to the Tensorkube cluster.
- `tensorkube get-principal-arn`: Obtain the ARN for a role that assumes access, enabling seamless integration for Identity Center users.
- `tensorkube datasets create --dataset-id my_dataset --path data.jsonl`: Creates a new dataset.
- `tensorkube datasets list`: Lists all datasets.
- `tensorkube datasets delete --dataset-id my_dataset`: Deletes a specified dataset.
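As a point of reference, a `data.jsonl` file is simply one JSON object per line. A hypothetical fine-tuning dataset might look like this (the field names are illustrative, not a documented Tensorkube schema):

```json
{"instruction": "Translate to French", "input": "Hello", "output": "Bonjour"}
{"instruction": "Translate to French", "input": "Good night", "output": "Bonne nuit"}
```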
Added the `tensorkube train list` command, allowing users to monitor ongoing and completed jobs easily, and the `tensorkube train logs --job-id <JOB_ID>` command, providing detailed insights into the training process for debugging and analysis.
You can now pass `--cloud lambda` when starting a dev container to utilize Lambda Labs for your development workflows. This enhancement offers more flexibility and potentially cost-effective solutions for GPU-enabled development.

`tensorkube dev --cloud lambda start --gpu-type v100 --port 8080`
Increased the `dshm` (dynamic shared memory) size in Lorax deployments. This change allows larger models to efficiently share memory space during execution, enhancing performance, especially in memory-intensive applications.

You can now deploy on L40S GPUs: `tensorkube deploy --gpu-type l40s --gpus 1`
- `tensorkube dev --cloud <provider> start --gpu-type <type> --port <port>`: Starts a Dev Container instance with hot-reloading enabled.
- `tensorkube dev --cloud <provider> stop`: Stops the running Dev Container instance while retaining its state.
- `tensorkube dev --cloud <provider> list`: Lists all running Dev Container instances.
- `tensorkube dev --cloud <provider> delete`: Completely deletes the Dev Container instance and associated resources.
Fine-tuning jobs are configured through `config.yaml` files. Use the `tensorkube train create` command with the `--axolotl` flag to initiate the fine-tuning process, for example to fine-tune `LLaMA-3.1-8B` on an SQL dataset.
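Since the `--axolotl` flag builds on the open-source axolotl trainer, the `config.yaml` presumably follows axolotl's schema. A minimal sketch with illustrative values (check the docs for the exact keys Tensorkube supports):

```yaml
# Minimal axolotl-style config (illustrative values, not an official template)
base_model: meta-llama/Llama-3.1-8B
datasets:
  - path: data.jsonl
    type: alpaca
sequence_len: 2048
num_epochs: 3
learning_rate: 0.0002
output_dir: ./outputs
```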
You can now run `tensorkube deploy --config-file <path-to-config-file>` to use config files for deployment. Their use, however, is completely optional.

Improved URL display in the `tensorkube list deployments` command. AWS ELB URLs can be quite long and hence can cause display issues in smaller terminals. The URLs are now displayed outside the status table so that they can be copied even when wrapped.
Moved away from `sslip.io` domains: sslip domains are notorious for misconfigurations and outages. We have finally modified our stack to directly support your deployments using AWS load balancer URLs. Running `tensorkube list deployments` will show the new URLs. However, the older way of using `sslip.io` URLs as host headers will continue to work. To view the older header URLs, use the `--old` flag: `tensorkube list deployments --old`
You now need to run `tensorkube login` just once.

You can SSH into your deployments with the `tensorkube deployment ssh <deployment-name>` command. This command will open an interactive shell in the pod and allow you to debug issues.
Use `tensorkube environment create --env-name <env-name>` to create a new environment, and then use the `--env <env-name>` flag with all other commands.

Added the `tensorkube deployment logs <deployment-name>` command and the `tensorkube deployment describe <deployment-name>` command.
Tensorfuse prerequisites are now installed via `snap`.

The `.dockerignore` file is now respected during the image build process.
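For example, a `.dockerignore` like the following keeps local clutter out of the build context, which shrinks and speeds up image builds (the entries are just common examples):

```text
# Keep local clutter out of the Docker build context
.git
__pycache__/
*.pyc
.venv/
node_modules/
.env
```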
Added the `tensorkube upgrade` command to bring your Tensorfuse runtime to the latest version.

Added `--min-scale` and `--max-scale` flags to control the number of pods running in your Tensorfuse app. By default, the minimum scale is 0 and the maximum scale is 3.
Added `tensorkube install-prerequisites` to install and check all the prerequisite packages required to run Tensorkube before configuring it with `tensorkube configure`.

`tensorkube configure` now resumes from the last installation checkpoint. You no longer have to remove and reset the cluster in case your configuration runs into errors.

Each time you run `tensorkube deploy`, a new version of your service is created and the older version is retired.
`tensorkube deploy` now accepts optional `--cpu` and `--memory` parameters, which allow you to specify the number of CPU millicores and the amount of RAM you want your servers to run on.
You can install the CLI with `pip install tensorkube`.

`tensorkube deploy` now supports `--gpus` and `--gpu-type` parameters. `--gpus` defines the number of GPUs each pod requires, and `--gpu-type` defines the type of GPU you want your system to run on.

You can view your deployments with the `tensorkube list deployments` command.