- Opening a cloud instance (e.g., an EC2 instance).
- Copying your code to the instance.
- Installing dependencies, Docker, and the NVIDIA container runtime.
- Starting the container manually.
- Repeating these steps whenever you make changes to your code.
## How dev containers work
Dev containers take as input the GPU type, your project files, and a `Dockerfile`. They then start an instance with the specified GPU type and launch a container on it. The container runs a hot-reloading server that watches for changes in your project files. When a change is detected, the server rebuilds the container and restarts it with the updated code.
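The watch-and-rebuild loop described above can be sketched in Python. This is an illustrative sketch only, not Tensorfuse's actual implementation; it simply polls file modification times and fires a callback when anything changes:

```python
# Minimal sketch of a hot-reload watcher: poll project files for
# modification-time changes and invoke a rebuild callback when they differ.
# (Illustrative only; the real server's mechanism is not shown in this doc.)
import os
import time


def snapshot(root):
    """Map each file under `root` to its last-modified time."""
    mtimes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mtimes[path] = os.path.getmtime(path)
    return mtimes


def watch(root, on_change, polls, interval=0.1):
    """Poll `root` a fixed number of times, calling on_change when files differ."""
    before = snapshot(root)
    for _ in range(polls):
        time.sleep(interval)
        after = snapshot(root)
        if after != before:
            on_change()  # e.g. rebuild the image and restart the container
            before = after
```

A production watcher would use filesystem notifications (e.g. inotify) rather than polling, but the rebuild-on-change contract is the same.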
## Getting started with dev containers
To get started with dev containers, you need to have the Tensorfuse CLI installed on your machine. You can install the CLI using the following command:

### Configuration for AWS
You can run the following commands to set up AWS credentials on your machine:

### Configuration for Lambda
You can run the following commands to set up Lambda credentials on your machine:

### Running the dev container
Let’s walk through the steps to start a dev container. You will need a `Dockerfile` and your project files in a directory. You can use the example below as a reference, where we test a simple Python server that returns the GPU type of the container.
### Step 1: Get your project files ready

Create a directory with your project files and a `Dockerfile`. For this example, we will create a simple FastAPI server that returns the GPU type of the container, and a `Dockerfile` to define its environment. You can create a `.dockerignore` file to exclude unnecessary files from the container.
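For instance, a `.dockerignore` for a small Python project might look like this (an illustrative fragment; adjust to your project):

```
# Exclude files not needed inside the image
__pycache__/
*.pyc
.git/
.venv/
```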
main.py
Dockerfile
Make sure that the port in the `Dockerfile` is set to `80`, as Tensorkube forwards this port by default. The `--port` argument specifies the port on which the dev container will be accessible from your local machine; it cannot be used to change the port in the `Dockerfile`.

### Step 2: Start the dev container
Navigate to the directory containing your project files and run the following command:

### Arguments
- `--cloud aws`: The cloud provider where the dev container will be started. Currently, only `aws` and `lambda` are supported.
- `--gpu-type A100`: The type of GPU to use for the dev container. You can choose from `H100`, `A100`, `V100`, `L40S`, `A10G`, `L4`, and `T4`. Specify `None` if you don’t need a GPU.
- `--port 8080`: The port on which you can access the dev container from your local machine. `8080` is the default port.
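Putting these arguments together, the launch command might look like the following. The `tensorkube dev` subcommand shown here is an assumption based on the CLI name; check the CLI help for the exact form:

```bash
# Hypothetical invocation: start a dev container on AWS with a V100 GPU,
# accessible locally on port 8080.
tensorkube dev --cloud aws --gpu-type V100 --port 8080
```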
Running the command will:

- Automatically start an instance with the specified GPU type (V100 in this case) in the cloud.
- Create necessary VPCs, security groups, and key pairs for secure access.
- Sync your local codebase from the folder to this instance.
- Start a Docker container with hot-reloading enabled and forwarded to your port (8080 in this case).
- Once started, any changes made locally will be reflected instantly in the running container without needing to restart it.
- You can access this container using `curl http://localhost:8080/gpus`.
The dev container is accessible at `http://localhost:<port>`. Try making changes to your `main.py` file and see the changes reflected in real time in the running container.
If you stop the process by pressing `Ctrl+C`, the dev container will be stopped, but the instance will continue to run. To avoid incurring charges, make sure to stop or delete the dev container when you’re done testing.