Prerequisites
Before you begin, ensure you have configured Tensorkube on your AWS account. If you haven't done that yet, follow the Getting Started guide.

Deploying ComfyUI with Tensorfuse
Each Tensorkube deployment requires two things: your code and your environment (as a Dockerfile). When deploying machine learning models, it is beneficial to make the model part of your container image as well, since this reduces cold-start times by a significant margin. ComfyUI stands out as one of the most flexible graphical user interfaces (GUIs) for Stable Diffusion, complete with an API and backend architecture. You can deploy any model supported by ComfyUI through Tensorkube, as shown in the examples.

Code files
We will use an nginx server to start our app. We will configure the /readiness endpoint to return a 200 status code. Remember that Tensorfuse uses this endpoint to check the health of your deployment. The ComfyUI server will run at port 8000, and all endpoints will be exposed through the nginx proxy at port 80.

nginx.conf
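A minimal sketch of what this nginx.conf could look like, following the port layout described above (proxy on 80, ComfyUI on 8000); treat the exact directives as assumptions and adapt them to your setup:

```nginx
# Sketch: proxy port 80 to the ComfyUI server on port 8000
# and serve a /readiness health endpoint.
events {}

http {
    server {
        listen 80;

        # Tensorfuse polls this endpoint to check deployment health.
        location /readiness {
            return 200 'OK';
        }

        # Forward all other traffic to the ComfyUI web server.
        location / {
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;

            # WebSocket support, which the ComfyUI frontend relies on.
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }
}
```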
Environment files (Dockerfile)
Next, create a Dockerfile. Given below is a simple Dockerfile that you can use:

Dockerfile
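A sketch of such a Dockerfile is shown below. It bakes ComfyUI and the model weights into the image, as recommended above, to cut cold-start times. The base image, package list, and launch flags are assumptions; adjust them to your CUDA version and project layout:

```dockerfile
# Sketch: bake ComfyUI and model weights into the image.
FROM nvidia/cuda:12.1.1-cudnn8-runtime-ubuntu22.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip git nginx && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Install ComfyUI from source.
RUN git clone https://github.com/comfyanonymous/ComfyUI.git && \
    pip3 install --no-cache-dir -r ComfyUI/requirements.txt

# Copy the nginx proxy config and the model-download script.
COPY nginx.conf /etc/nginx/nginx.conf
COPY download_model.py .

# Download model weights at build time so they ship with the image.
RUN python3 -c "from download_model import download_all; download_all()"

EXPOSE 80

# Start nginx, then run ComfyUI on port 8000 behind the proxy.
CMD nginx && python3 ComfyUI/main.py --listen 0.0.0.0 --port 8000
```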
Create a download_model.py script that downloads all the model files to the right place.

download_model.py
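A sketch of what download_model.py could look like, assuming the weights are fetched from the Hugging Face Hub into the directories where ComfyUI looks for them. The repo ids and filenames below are placeholders, not real model locations; substitute the models your workflow actually uses:

```python
# Hypothetical download_model.py sketch: fetch model weights into
# the subdirectories of ComfyUI/models where ComfyUI expects them.
import os

# (repo_id, filename, ComfyUI models subdirectory) -- placeholders only.
MODELS = [
    ("some-org/some-model-repo", "model_dit.safetensors", "diffusion_models"),
    ("some-org/some-model-repo", "model_vae.safetensors", "vae"),
]


def target_dir(base: str, subdir: str) -> str:
    """Return the ComfyUI models directory for a given model type."""
    return os.path.join(base, "ComfyUI", "models", subdir)


def download_all(base: str = "/app") -> None:
    """Download every entry in MODELS into its ComfyUI directory."""
    # Imported lazily so the module can be imported without the package.
    from huggingface_hub import hf_hub_download

    for repo_id, filename, subdir in MODELS:
        dest = target_dir(base, subdir)
        os.makedirs(dest, exist_ok=True)
        # local_dir places the file directly where ComfyUI expects it.
        hf_hub_download(repo_id=repo_id, filename=filename, local_dir=dest)
```

During the Docker build, the script can be invoked with `python3 -c "from download_model import download_all; download_all()"` so the weights land inside the image.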
Deploying the app
ComfyUI is now ready to be deployed on Tensorkube. Navigate to your project root and run the following command:

To use the app, you need a ComfyUI workflow that uses nodes from the MochiWrapper. You can use one from here and open it using the ComfyUI interface.
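The deploy step above might look like the following; the flag names and values are assumptions, so check `tensorkube deploy --help` for the options available in your version:

```shell
# Run from the project root; GPU flags below are assumptions.
tensorkube deploy --gpus 1 --gpu-type a10g
```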