Once you are satisfied with the channel that you have built in your local development environment, you can deploy the channel to the Rendered.ai Platform using the anatools SDK.
To obtain anatools, visit https://pypi.org/project/anatools/ where you’ll find instructions for pip install.
anatools can run from a local Python environment on your development machine or from a containerized Python environment, such as a Docker container, as long as that environment can access the channel content you intend to deploy.
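A typical installation, following the pip instructions on the PyPI page, looks like the following. Using a virtual environment is optional and shown only as a suggestion:

```shell
# Optional: isolate the install in a virtual environment
python -m venv anatools-env
source anatools-env/bin/activate

# Install anatools from PyPI
pip install anatools
```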
The process for deploying a channel is:
1. Build the Channel Docker Image
2. Create a Channel on the Rendered.ai Platform
3. Deploy the Docker Image to the Rendered.ai Platform
This document covers each of these steps and highlights additional resources.
Step 1: Build the Channel Docker Image
Rendered.ai provides a build script in the ana repository so you can build your channel with one command. The script is located at <ana-repo>/ana/buildchannel.sh and takes two options: -c specifies the channel to build, and -d includes the <ana-repo>/ana/data directory in the image. For example, to build the Rendered.ai Example channel with the data directory included, use:
./buildchannel.sh -c example -d
The channel name specified after -c must match your channel directory name in ana/channels/<channel-name>, or the script will fail.
After your channel's Docker image builds successfully, you are ready to deploy the channel to the platform.
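A hypothetical build session for the example channel might look like the following; the repository path is a placeholder, and the final command simply confirms that the image now exists in your local Docker registry:

```shell
# Run the build script from the ana directory of your checkout
cd <ana-repo>/ana
./buildchannel.sh -c example -d

# Verify the resulting image is present locally
docker images | grep example
```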
Step 2: Create a Channel
From within python, you will use the create_managed_channel() SDK call to create the channel on the platform:
create_managed_channel(name, organizationId=None, volumes=None, instanceType=None, timeout=None)
The parameters are:
name – the name of the channel as it will be displayed in the Rendered.ai web interface. Channel names must be unique per organization.
organizationId – the organization that owns the channel. Only members of this organization can deploy the channel and manage access to it.
volumes – the volumes to attach to the data directory on deployment; see Volumes for more information.
instanceType – the type of compute the channel uses; the default is p2.xlarge. Consult Rendered.ai support if you believe you need another type of compute.
timeout – the maximum runtime of an Ana run, in seconds, before the job manager kills the run. For example, if you don't want any of your runs to exceed 10 minutes, set this parameter to 600.
So this could look like:
import anatools
anaclient = anatools.AnaClient()
# The display name does not have to match the channel's python package name
channelId = anaclient.create_managed_channel('My Foo Channel')
Once a channel is created, a valid docker image will need to be deployed before the channel is usable on the Rendered.ai Platform.
Step 3: Deploy the Channel to the Platform
The deploy_managed_channel() SDK call pushes the Docker image to the Rendered.ai Platform repository.
deploy_managed_channel(channelId, image=None) # Omit 'image' if the channel name is going to be taken from the package
The channelId parameter identifies the channel created in Step 2, and the image parameter names the local Docker image to push; if it is omitted, the image name is taken from the channel's package name. Because the channel is identified by its channelId rather than by the image name, you can create a separate channel such as example-dev and deploy the same image to it, leaving the production Rendered.ai Example channel undisturbed. This way you can test changes before deploying them to a wider audience.
The deploy_managed_channel command can take a while, depending on the size of the Docker image and your internet speed.
For example, you might make the following call:
deploymentId = anaclient.deploy_managed_channel(channelId, image='foo') # In this case, 'foo' is the package name which is different from the published channel name 'My Foo Channel'
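Putting Steps 2 and 3 together, a complete deployment session might look like the following sketch. It assumes anatools is installed and that you have valid Rendered.ai Platform credentials; the display name 'My Foo Channel', the image name 'foo', and the timeout value are illustrative only:

```python
import anatools

# Log in to the Rendered.ai Platform
anaclient = anatools.AnaClient()

# Step 2: create the channel. Only the name is required; timeout is
# optional and shown here for illustration.
channelId = anaclient.create_managed_channel(
    'My Foo Channel',   # display name, unique per organization
    timeout=600,        # kill any run that exceeds 10 minutes
)

# Step 3: push the local Docker image to the platform repository.
# 'foo' is the local image name; omit `image` to use the package name.
deploymentId = anaclient.deploy_managed_channel(channelId, image='foo')
```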