Rendered.ai User Documentation

Who Uses Rendered.ai?

Rendered.ai was designed with two user groups in mind, who often have separate but overlapping job functions: Synthetic Data Engineers, and Data Scientists and Computer Vision Engineers.

Data Scientists and Computer Vision Engineers: These users approach their business problem with a specific algorithm and learning task in mind, focusing on how AI can gain a high-level understanding of how the real world behaves from digital images or video. This practitioner is solving a particular AI/ML problem, understands the limitations of the particular algorithm, and designs synthetic data to stretch those limits.

Synthetic Data Engineer: The synthetic data engineer is a practitioner who applies the principles of synthetic data engineering to the design, development, maintenance, testing, and evaluation of synthetic data for consumption by AI/ML algorithms. Competence in this art is gained through the creation of multiple datasets, with multiple variations, addressing multiple AI learning issues. The experience set tends to be horizontal across multiple engagements, and this person has gained domain expertise in how profound and nuanced changes to synthetic data are likely to affect generalized algorithms.

The Platform provides the following capabilities, targeted to each of these users:

Benefits for Synthetic Data Engineers

  • Secure, collaborative environment

  • Configuration Management

  • GPU acceleration, with Compute Management abstracted away

  • Easy containerization

  • User friendly web experience for testing configuration and job execution

  • Analytic tools to compare two datasets and their AI/ML outcomes

  • 3D asset generation and management

  • Example Channel/Application SDK

  • Cloud-based processing including asynchronous job configuration and execution

  • Easily integrated endpoints

Benefits for Data Scientists and Computer Vision Engineers

  • Secure, collaborative environment

  • No-code dataset generation that is easy to use and master

  • Dataset Library Management

  • Analytics and comparison tools for datasets

  • Domain matching (CycleGAN-based)

  • Analytic tools to compare two datasets and their AI/ML outcomes

  • Automatic, flexible annotation generation

  • Rapid “what if” dataset creation

  • Consistency across projects

  • Unlimited* content generation

Typical Rendered.ai workflows

Rendered.ai has been used for some of the following commercial and research applications:

  • Generating synthetic CV imagery to train detection algorithms for rare and unusual objects in satellite and aerial imagery

  • Generating simulated Synthetic Aperture Radar (SAR) datasets to test and evaluate SAR object detection algorithms

  • Embedding synthetic data as an OEM capability underneath a 3rd party x-ray detection system to allow end customers to test domain-specific detection of rare and unusual objects

  • Simulating microscopy videos of human oocyte development over time to train AI to recognize different developmental stages


*Unlimited refers to the Rendered.ai subscription licensing which allows a set number of users to generate datasets in a managed, hosted compute environment that does not limit the number of jobs run, pixels generated, or images produced.


Introduction to Rendered.ai

What is Synthetic Data?

Data is critical for training AI. Most computer vision users rely on real data captured by physical sensors, but real data can have issues including bias, cost, and inaccurate labeling.

Synthetic data is artificial or engineered content that AI interprets as if it were real data. Synthetic data is used for training and validating artificial intelligence (AI) and machine learning (ML) systems and workflows. Engineered content may be derived from sampling or bootstrapping techniques applied to real-world datasets, or it may be generated by simulating real-world scenarios in high-resolution 3D.

The entertainment and computer graphics industries have created 3D synthetic environments for years for movies and games, training simulators, and educational materials. The use of engineered data for training and validating AI and ML systems extends the concept from simulating one environment or scenario to simulating many environments or scenarios to create large datasets of thousands of images or data points that can be used in AI and ML workflows.

Synthetic datasets can be made using configurable pipelines that effectively offer an unlimited range of scenarios or environments to generate data with known diversity and distribution of labelled assets, a critical part of using synthetic data for training AI.

Existing datasets have limits

Data is an essential ingredient in training and testing AI and ML systems. The quality, distribution, and completeness of data directly impacts the effectiveness of these systems when used in real world scenarios. AI has often been seen to fail when used with real data that may differ from limited training data. Issues with a particular AI system, such as bias and poor performance, often reflected in Average Precision (AP) scores, are directly driven by the quality of data that is used to train and test AI.

Typical problems that AI-focused organizations encounter are:

  • Bias and low precision: Datasets from real world scenarios and sensors can only capture the frequency of assets or events as they occur in reality. Rare objects and unusual events that are difficult to capture in real data sets will often cause algorithms to be biased or to have high error (low precision) when classifying particular entities.

  • Expense of data labeling: When using real world datasets, users require labelled or annotated datasets in which the source content is paired with information that indicates what the content contains with respect to specific asset, entity, or event characteristics or types. Labelling existing datasets is an expensive and error-prone process that may also lead to bias and precision issues when done poorly.

  • Unavailable data: In the case of attempting to build models for sensors or scenarios that don’t yet exist or that are hard to access, it may simply not be possible to obtain datasets to train AI.

  • High risk data: Use of some datasets may incur risks that an organization is unwilling to support such as datasets that are restricted because of personally identifiable information or for security reasons.

Benefits of adding synthetic data to AI workflows

Synthetic data is one of the tools that organizations are using to overcome the limitations and costs of using real world datasets. Synthetic data is controlled by the author or data engineer, can be designed to model physical sensor-based characteristics and account for statistical distributions, and costs less because it is simulated.

Some of the opportunities of using synthetic data include:

  • Expanding and controlling the distribution of datasets: Engineered or synthetic data can be algorithmically controlled to produce datasets that match real world metadata but with distributions of entities and events that can be designed to both overcome and test for bias and precision issues in AI and ML systems.

  • Reducing labeling and data acquisition costs: Synthetic data is produced using techniques that enable data labels and image masks to be precisely determined for every piece of data without requiring any post-processing or human labeling effort.

  • Exploring and simulating new scenarios and sensors: In domains such as urban planning, future conditions may not exist to be captured in photogrammetry or lidar, but they can be simulated, presenting an opportunity for creation and use of synthetic data sets. Similarly, a hardware vendor who is planning new sensors or sensor platforms can use synthetic data to create representative content that is expected to be produced using digital models of proposed equipment.

  • Eliminating privacy and security concerns: Synthetic data can be produced using anonymized location and human models, removing security risks and any components of PII while providing plausible content that can be processed for AI workflows.

Read next

We recommend that you next read about the Rendered.ai platform and why we call it ‘a platform.’

The Rendered.ai Platform
Synthetic microscopy, x-ray, and aerial images generated by the Platform

Getting Started with the SDK

Follow the steps on PyPI to get the SDK installed on your machine using pip. You must have an active Rendered.ai account to use the SDK. Your Rendered.ai account credentials can be used both for the SDK and the Rendered.ai web interface.

Multiple ways to login

Login with Email

>>> import anatools
>>> ana = anatools.client()
'Enter your credentials for the Rendered.ai Platform.'
'email:' example@rendered.ai
'password:' ***************

Login with API Keys

You must first generate an API Key in order to log in with it. You can generate as many API keys as you desire with custom expiration dates in order to bypass the email login. The context for the login session via an API Key is pre-set to the Organization the key was created for.

Create keys via your email login

Run create_api_key with a name, an optional expiration date, and the Organization ID for the context. Make sure to save the resulting output, which is your new API Key; it will only be shown once.

>>> ana.create_api_key(name='name', expires='mm-dd-yyyy', organizationId='OrgId')
'apikey-12345...'

Now you can log in with the API key in one of two ways:

1. API Key Param

Set APIKey to your API Key on client instantiation:

>>> import anatools
>>> ana = anatools.client(APIKey='API KEY')
Using provided APIKey to login ....

2. Environment Variable: RENDEREDAI_API_KEY

Export the key string to the RENDEREDAI_API_KEY environment variable in your active command line or save it in your OS environment variables. If no key is detected, you will be prompted to enter your email/password.

export RENDEREDAI_API_KEY=API_KEY
python
>>> import anatools
>>> ana = anatools.client()
Using environment RENDEREDAI_API_KEY key to login ....
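Alternatively, a minimal sketch of setting the same variable from within Python before the client is created (equivalent to exporting it in the shell, since the client reads RENDEREDAI_API_KEY at login):

import os
import anatools

# Set the variable in-process before instantiating the client; if no key is
# found the client falls back to the email/password prompt.
os.environ['RENDEREDAI_API_KEY'] = 'apikey-12345...'  # substitute your own key
ana = anatools.client()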

How to use the anatools SDK

After logging into Rendered.ai with the SDK, you can

  • Browse and Manage your Organizations and Workspaces

  • Deploy and Manage your Organization Channel(s)

    • Upload and Manage Volume Data required for your Channels

    • Set Default Graph for a Managed Channel

  • Get information about the Channels that are available to you

  • Create, Manage, and Download Staged Graph

  • Generate, Manage, and Download Synthetic Datasets

  • Generate, Manage, and Download Annotations locally or in the Platform Cloud

  • Generate and View Analytics

  • Generate GAN-based datasets

  • Generate UMAP comparison between datasets

anatools is Rendered.ai’s SDK for connecting to the Rendered.ai Platform.

Execute the python command line, create a client, and log in to Rendered.ai. In this example we are instantiating a client with no workspace or environment variables, so it is setting our default workspace. To access the tool, you will need to use your email and password for https://deckard.rendered.ai.

Detailed documentation can be found in the SDK Developer Guide.
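As a rough sketch of what a scripted workflow can look like after login, the calls below are illustrative only; the method names and arguments are assumptions based on the capability list above, so check the SDK Developer Guide for the exact signatures before relying on them.

>>> import anatools
>>> ana = anatools.client()                    # prompts for credentials or uses RENDEREDAI_API_KEY
>>> workspaces = ana.get_workspaces()          # assumed call: list the workspaces you can access
>>> channels = ana.get_channels()              # assumed call: list the channels available to you
>>> datasets = ana.get_datasets(workspaceId=workspaces[0]['workspaceId'])   # assumed call and fields
>>> ana.download_dataset(datasetId=datasets[0]['datasetId'])                # assumed call and fields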

Overview

Step-by-Step Learning Path

Not sure where to start? Here’s a quick overview of where to go to learn more about synthetic data, get started with Rendered.ai, and learn how to add and build your own content.

What is Rendered.ai?

We have some great getting started content to help you learn about the basics of what we do, why, and what we offer.

What can I do with Rendered.ai?

Learn about Rendered.ai and why you need a PaaS for Synthetic Data: Introduction to Rendered.ai

Find out more about what comes with the Rendered.ai Platform: The Rendered.ai Platform

Learn about workflows for synthetic data engineers and data scientists: Who Uses Rendered.ai?

Use this guide for common terms: Terminology

Get started with a Content Code to explore a Synthetic Data application: Content Codes

Find out what you can do with Graphs and Datasets using our web-based interface: Quick Start Guide

Explore our Development Guides to get started coding your own channels and adding content to generate synthetic data: Development Guides

Organization and Workspace Resources

When managing content in Rendered.ai, it is important to keep in mind the distinction between Organization-level and Workspace-level resources. Resources are content that can be created, modified, deleted and even shared with other organizations. Examples of resources are:

  • Channels

  • Volumes

  • GAN Models

  • Annotation Maps

Organization-level resources are either owned and managed by the organization or the organization has been permitted to use the resource by another organization. Let’s take a look at the list of Channels that are available to a demo organization.

Notice that the list includes two channels:

  • satrgbv2cc - this channel is owned by the Content organization and has been shared with this organization. The organization cannot modify or delete the channel, they can only remove it from their organization.

  • example - this channel is owned by Rendered.ai and has been shared with this organization. The organization cannot modify or delete the channel, they can only remove it from their organization.

Both of these channels can be used to create graphs and synthetic datasets in this organization’s Workspaces. From this page you can also create a new channel by clicking on the New Channel button.

Fill in the fields as you’d like; the channel name and timeout are the only required fields. The channel name must be unique among channels that your organization owns. For our example, we will create a new channel named Demo with a timeout of 300 seconds.

After creating the channel, we can see that we have different options for channels we own: Edit and Delete.

Now that we’ve created a new channel, let’s see how it can be used.

Managing resources in a Workspace

Workspaces are like projects on the Rendered.ai platform, where you can gather the resources required to solve a problem or use-case. These resources address particular needs: channels for generating synthetic data, volumes for storing and managing content, GAN models for bridging the domain gap, and annotation maps for classifying objects correctly in annotations.

There are two ways to add and remove resources to a workspace:

The Workspaces table - From the landing page, select your organization’s Workspaces tab and click on the "list" icon to show the workspaces in a list format. In the list of workspaces, click on the three-button icon to the right of the workspace, then select Resources from the dropdown list.

The Workspace page - You can add and remove resources from a Workspace by clicking the three-button icon on top-left next to the workspace name, then selecting Resources from the dropdown list.

After selecting the Resources button, we land on the Workspace Resources dialog.

Once we have gotten to the Workspace Resources dialog, we can navigate between the different types of resources and modify the lists to either include or exclude a resource from that workspace. This is how Organization resources are exposed to a Workspace. Note that you must click the Save button for changes to take effect. After the workspace resources have been modified, the content will be available in the workspace to help generate synthetic data. In this example, we are adding the toybox-0.2 channel.

Now when we go to create a graph, the new Channel is available from the list to choose from.

Visit the following tutorials to learn more about other resources: Creating and Using Volumes, Understanding and Using Domain Adaptation, and Dataset Annotations.

Creating and Using Graphs

Graphs are a set of nodes and the links that interconnect them; nodes can have a set of inputs and outputs. The graph can be thought of as the parameters that define how a scene, or the objects in the scene, are modified and rendered. This page describes how to create a graph and how graphs are used to make datasets.

Create a Graph

A graph describes the processing steps necessary to generate a synthetic dataset. Graphs are stored in a workspace. To create a new graph, go to the main workspace page and click on the New Graph button. The new graph pop up will be displayed as shown below.

Enter a name for the graph, select a channel, and enter a description of the graph. Finally, click on the Create button. A new view will be opened with the default graph for that channel in it.

Canvas View

New graph views are pre-populated with a default graph composed of a few nodes and links. This graph can be edited in the canvas and ultimately staged, or saved for use in the jobs manager.

How to use the editor

The canvas can be panned by left clicking on an open area and dragging the canvas.

The canvas can be zoomed either by holding Ctrl and using the scroll wheel or by using the buttons on the lower right in the graph editor main viewport.

Multiple nodes can be selected by holding Shift while clicking the left mouse button and dragging a rectangle over the nodes of interest.

Nodes and links

Adding nodes is done from the menu of nodes which is accessed via the circled plus icon on the left portion of the screen. Also, new nodes can be created by duplicating existing nodes on the graph.

Nodes can be deleted via the trash can icon that appears when a group of nodes is selected, or by pressing Delete or Backspace.

Links are added by clicking the desired input/output ports. Links are deleted by first selecting the link and then pressing Delete or Backspace.

Note that, at this time, there is no ‘Undo’ capability. To back up graphs, it’s possible to duplicate a graph by clicking on the three-dot icon on the top-left of the Graph, then selecting Save As.

In-tool Help

Some channels provide in-tool help for the graph editor. This takes the form of thumbnails of node objects and help text that describes how a node will work.

Thumbnails show up in the node tray on the left of the GUI when you hover over a node name. Here is an example of a thumbnail image:

Node help text is displayed in a tray on the right side of the GUI when you click on the "i" icon of a node. Here is an example:

Preview

Channels can implement a preview capability that allows a sample image to be generated from the current graph state. Clicking the Preview button will bring up a “Generating preview…“ message; when the image is ready, it will be displayed in the middle of the graph.

If the same graph is run multiple times in the workspace, preview caching will allow you to see the image right away. Preview can also be used to catch invalid graphs before staging. If a graph cannot be interpreted, the preview will result in an error message informing the user of the issue.

Staging

When you are satisfied with your graph and are ready to create a dataset, the graph needs to be staged. This creates an immutable version of the graph that can be referenced later.

After clicking the Stage button in the upper right part of the screen, a new Staged Graph entry with the same name as the Graph will show up on the Jobs page. To see this we need to navigate to the Jobs page within the workspace. An example of this is captured below:

Jobs View

Once the graph is staged, a job can be run to generate a dataset. To do this, we will need to define the parameters for the job to create a dataset.

Staged Graphs

By selecting the dropdown arrow next to the staged graph entry, the run configuration can be set.

The parameters for creating a dataset are described below:

Parameter - Description

Dataset name - The name for the new dataset; this field is prepopulated with the Staged Graph name.

Description - A description for the dataset; this field is optional.

Priority - Low, Medium, or High. This field determines when a dataset job is run for an organization; if multiple jobs are queued, the highest-priority job will start first.

Runs - The number of simulation runs used to generate the synthetic dataset. This can equate to the total number of rendered images if the channel has a single sensor.

Seed - A seed used to feed the random generator for the channel.

Submitting Dataset Jobs

Once you have finished setting the parameters for the dataset job, pressing the Run button queues up the job.

There are several states a job can be in, and it is important to understand what you can do in each state.

Job Status - Description

Queued - The job is ready to be executed; usually this means it is waiting for compute to become available.

Running - Compute is available and the simulations are currently running.

Post-processing - All simulations are complete; thumbnails are being generated, a compressed dataset is being created, and databases are being updated.

Complete - The dataset has been created successfully.

Failed - The job has failed to create a dataset. This can happen for a number of reasons: configuration issues, channel issues, or infrastructure/compute issues.

Jobs that are in the Running state and have completed at least one simulation run can be stopped; their status will move to Post-processing, meaning that the dataset will be created with just the completed runs. Jobs in the Queued, Running, or Post-processing state can be deleted. Jobs in the Complete or Failed state can be cleared so they no longer show up in the Jobs Queue.

Stopping or Cancelling a Dataset Job

The stop icon in the job window will stop the job. A stopped job can still create a synthetic dataset, or be deleted using the trash icon.

The trash icon in the job window will cancel and remove the job. It can also be used to clear completed or failed jobs from the Job Queue.

Dataset Job More Info and Logs

Users can get more information about the status of a job by expanding the More Info section using the dropdown icon shown below.

The More Info section shares a few details about the Graph and Channel used to create the dataset, the User who kicked off the job and estimates around completion time. The More Info section also has details about the state of both Runs and Instances used to complete the job.

Clicking the View Logs button will bring up the Logs Manager. From here we can view the logs for runs of a Dataset Job that are either running or have already completed or failed.

We can specify a Run State or Run Number to filter by. These dataset logs can help us identify issues with failed runs and either fix the graph or channel code base. The logs can also be saved in a text file format for easier viewing or reporting.

Nodes can have inputs and outputs. Some inputs can appear as text fields or dropdown selectors, but all inputs have a port for linking a node. These links are how information is passed between processing steps. Node inputs can also be validated using a set of rules described by the channel developer in the schema; to learn more, read Graph Validation.


Content Codes

Content codes are the fastest way to get started on the Rendered.ai platform. By using a content code you will be given a new workspace that has predefined content that can include example graphs, datasets, and analysis. Using a content code is easy, and there are two ways to do it depending on whether or not you already have a Rendered.ai account.

Registering with a Content Code

After a brief demographics survey, you will receive an invitation email to join the platform. That email will redirect you to the Platform’s registration page that looks similar to the image below.

New users can specify a content code when filling in the registration information. In the example below we are using the TOYBOX content code which gives us access to a channel that simulates toys being dropped into a toybox.

After you login, you will see the content code workspace in your new organization.

Creating a new Workspace with a Content Code

The second way of using a content code is to create a new workspace with a content code. To do this, click on your organization and then click on the New Workspace button.

After clicking on the New Workspace button, you will be shown a new dialog where you can name the new workspace, select channels or enter a content code. For this example we will just name the new workspace and provide the same content code.

After clicking on the Create button, the new workspace will be created within the organization. It may take a bit for the new workspace to sync. Refresh the screen to see when it is done. The Workspaces table should now show the new content code workspace.

Now you can open up the workspace and start exploring!

Sign up for an account here: https://rendered.ai/free-trial/

After filling in the information, click the Sign up button. You will receive a verification email, after which you will be able to sign into the platform by navigating to https://deckard.rendered.ai/sign-in and filling out the same email and password.

Dataset Analytics

Rendered.ai provides a service for generating analytics so users can learn additional insights about their datasets. Today there are three types of dataset analytics supported: Mean Brightness, Object Metrics and Properties. In this tutorial we will describe how to generate these analytics and review the output from each type of analytics.

Generating Dataset Analytics

We start by navigating to the Dataset Library page in the workspace that contains the dataset. Select the dataset then click the + icon next to Analytics in that dataset.

Next, we just need to choose a type of analytics we’d like to run on the dataset. Click Create to create the new analytics job.

A new analytics job is started; it will show the hourglass symbol while the job is still running.

As a reminder, all dataset services share the same symbols for job status:

No symbol means that the service job is complete and ready to use.

The hourglass symbol means that the job is running. It will remain this way until the job has either completed or failed.

The error symbol means that the job has an issue. You can click on the symbol to fetch a log of the service to help determine what caused the issue.

Once complete, the symbol underneath Status will disappear and we will be able to download the analytics or go to them. Clicking on the go-to symbol navigates us to the Analyses library with that analytics job selected.

The same process can be done to generate the other types of analytics. Below we’ll dig into what each of these types of analytics provides us.

Mean Brightness

Mean Brightness generates a plot of the “brightness” density which can be helpful in comparing one or more datasets.

Object Metrics

Object Metrics generates some data on the types of objects in our imagery and two plots that indicate the size of bounding boxes and aspect ratio density of those bounding boxes.

Properties

The properties analytics type generates metrics on image counts, mean size and modes. It also provides helpful metrics on mean objects per image and annotation counts.
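The platform computes all of these for you, but as a rough illustration of the quantities the Mean Brightness and Object Metrics analytics summarize, the sketch below computes per-image mean brightness and bounding-box aspect ratios for a locally downloaded dataset. The folder layout and the COCO-style annotation file are assumptions about your local copy, not a platform requirement.

import glob, json
import numpy as np
from PIL import Image

# Per-image mean brightness over an assumed images/ folder.
mean_brightness = []
for path in glob.glob('images/*.png'):
    gray = np.asarray(Image.open(path).convert('L'), dtype=np.float32)
    mean_brightness.append(gray.mean())

# Bounding-box aspect ratios from an assumed COCO-style annotation file.
with open('annotations/coco.json') as f:
    coco = json.load(f)
aspect_ratios = []
for ann in coco['annotations']:
    x, y, w, h = ann['bbox']          # COCO boxes are [x, y, width, height]
    if h > 0:
        aspect_ratios.append(w / h)

print('dataset mean brightness:', np.mean(mean_brightness))
print('median bbox aspect ratio:', np.median(aspect_ratios))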

Rendered.ai Licensing and Offerings

Rendered.ai subscription licensing

New users who sign up at Rendered.ai are licensed a Developer Subscription for 30 days. Any data you create during that time is yours to keep according to our standard Terms of Service.

Rendered.ai has three subscription licensing tiers that provide customers with a range of cumulative entitlements and licensed access to different levels of compute capacity.

  • Developer: This subscription is intended for non-profit and academic research customers who want capacity and compute resources to build and test a complete synthetic data solution with Rendered.ai. The Developer Subscription is for non-production use. This subscription provides concurrent compute capacity at a level intended for experimentation and iterative development. This plan also enables research teams to collaborate and to explore integrating the Platform into AI pipelines through our APIs.

  • Professional: The Professional subscription is for customers who are generating synthetic data for training AI algorithms, typically on a focused project. The subscription supports collaborative access by a team, adequate storage for simulation content and created data, and compute resources sufficient for most individual projects. If you are working on CV models and can handle your own simulation design, this is the subscription for you.

  • Enterprise: The Enterprise Subscription includes Enhanced Support Credits that enable you to use Rendered.ai as your Synthetic Data Engineering team. The Enterprise Subscription entitles you to have access to a Technical Account Manager (TAM) who will conduct regular coordination meetings and manage an ongoing backlog. Enterprise Subscriptions include our max compute capacity, extra storage, and unlimited members within your organization.

For more information on specific entitlements in each subscription tier, please visit the Rendered.ai Terms of Service. Subscribing customers should always consult the appropriate membership agreement for their specific entitlements.

Upgrading a subscription

Customers who want to upgrade and are paying by credit card can simply follow the prompts in the billing pages in the Rendered.ai web interface to select the plan they want and enter their billing information.

Additional Rendered.ai licensing options and offerings

Some customers may not fit into our standardized subscription offerings. We can work with customers on the following types of relationships:

  • Custom Enterprise Agreements: Enterprise Agreements allow Rendered.ai to tailor an agreement with a customer to suit special needs such as unusual capacity, usage patterns, or even business models. Whenever possible, we will structure Custom Enterprise Agreements to either include or heavily rely on our subscription patterns.

  • Professional Services Agreements: Some of our customers request assistance for channel development, sensor model development, or integration into other enterprise systems. For cases that extend beyond basic product features or customer support, we offer expert professional services.

Customers who need special terms, a PO process, or to add subscriptions to an existing account should contact us.

Customers who need a purchase order or receive an invoice should contact us and we will help you upgrade to the appropriate plan.

In some cases a customer simply needs to add additional capacity, such as by purchasing additional plans. For the moment, we are requesting that you contact us to arrange purchasing additional subscriptions at your current plan.

OEM Agreements: In cases where a partner or customer wants to provide access to synthetic data generation capability as part of their business model, we request that you contact us to discuss an OEM or private label agreement.

For more information on one of these agreement types, please contact us.

The Rendered.ai Platform

Rendered.ai is a PaaS that enables your organization to add synthetic data as an enterprise capability. With Rendered.ai you can create and access data that is engineered for your specific problem sets, as often and as much as you need, to help with AI and ML workflows for training, tuning, bias detection, and innovation.

What is Rendered.ai?

Rendered.ai is a Platform as a Service (PaaS) that enables data scientists, data engineers, and developers to create custom synthetic data applications, called channels, and to run them in the cloud to generate as much data as they want. Most of the data that is generated using Rendered.ai is imagery data that can be used for Computer Vision (CV) AI-based workflows. Other types of synthetic data can also be generated by the Platform; the output is completely customizable by the channel developer.

Benefits of a PaaS

A PaaS removes the need for a user to maintain their own hardware, infrastructure, and application execution environment to run software systems and custom code. In the case of Rendered.ai, we provide access to identity, cloud compute, data storage, and an SDK to execute your custom synthetic data pipeline in a cloud-based, high performance compute environment without requiring you to manage and maintain all of the infrastructure and software.

Why do I need a PaaS for synthetic data?

Many organizations who try synthetic data initially believe that they can generate or purchase single datasets for their AI workflows. However, these users soon discover that when they need more data for additional bias detection, to train for unexpected rare entities or scenarios, or when they want to apply AI to a whole new product, they will need to acquire more one-off datasets and may have lost domain knowledge or artifacts from their original investigations.

The Rendered.ai PaaS enables users to access synthetic data as a service. Users can create, use, and store their domain knowledge and techniques as content and channels in Rendered.ai, then update or branch them as their needs change. If a user then needs to create a new dataset for an entirely different AI problem set, they can do so, even incorporating access to datasets into automated workflows and remote systems through the SDK, which allows access to the cloud-hosted PaaS.

Major components of the Rendered.ai PaaS

Rendered.ai has three main experiences for data scientists, data engineers, and developers:

  • Web-based experience for configuration, collaboration, job execution, and dataset management

  • A development environment and samples to create synthetic data pipelines

  • An SDK for remote or integrated job execution and automation

Web-based experience for configuration and job execution

The main interface for interacting with Rendered.ai to run and configure jobs is a web-based user interface that allows data scientists to run sample channels provided to emulate various sensors, configure scenes and other parameters through graphs in a no-code experience, and then run jobs for synthetic data generation and manage the output datasets.

Development environment and Example code

Rendered.ai provides a Docker-based development environment and example code to help data engineers and data scientists customize synthetic data channels of their own.

An SDK for job execution and automation

Rendered.ai provides an SDK for developers to remotely create, execute, and access the output of synthetic data channels to accomplish batch processing and automated data provisioning to AI workflows.

Collaboration, account information, and billing

The same Rendered.ai web interface that is used to configure graphs and run jobs also has settings and configuration for users to set and monitor billing information, upgrade, and manage and invite collaborators who may be members of the organization or guest users with more limited access.


Quick Start Guide

Follow these steps to start your journey by interfacing with the Rendered.ai web interface to create an account, configure a graph, run a job, and download the resulting dataset. After registering and signing in for the first time, users will be taken to the Landing page. The Landing page is where you can view recent work across organizations and drill down into the Organization-specific data quickly.

Clicking on the drop-down icon next to our organization will give us quick access to the Workspaces, Channels, Volumes, GAN Models and Annotation Maps that the organization has access to.

Setting up your Organization

New accounts with Rendered.ai will want to customize their Organization. The Organization is where all workspaces will be housed and where all members from your business will be able to share their work, contribute to channels, and generate new datasets. Initially the organization is named after the user it was created for; to edit this name we will go to the Organization Settings page. We can get there in one of two ways. The first is by clicking on the Organization name on the left hand side of the screen, then clicking on the Settings button.

The second method is to click on the User Icon in the top-right, then click on Organizations in the drop-down list.

The Organization Settings page has information about the organization: the name, organization ID and plan. It also contains information about limitations put on the organization and any workspace within the organization. We will edit our organization name by clicking on the pencil icon next to our organization name, entering the new name and clicking Save.

Now that we’ve customized our organization name, let’s take a look at what you can do within the organization.

Workspaces

Every new organization that is created within Rendered.ai is given a workspace. The workspace in your organization will depend on whether you entered a Content Code during the registration process. A workspace is a place to create graphs and generate datasets, which may be shared within or outside your organization. You can create a new workspace by selecting the organization and clicking the green New Workspace button.

After creating the workspace, we can go into the workspace to generate synthetic data. Initially the workspace will be empty and look like the screenshot below. We will need to create a graph before we can generate the synthetic data.

Graphs within a Workspace

Provide a graph name, channel and optionally a description for the graph then click Create. We can also optionally upload a JSON or YAML representation of a graph that will place nodes and links for us rather than a default graph.

Clicking the Preview button in the top right of the screen renders a sample image of the provided graph configuration, allowing you to ensure your requirements are met before staging the graph for production.

The below is an image generated during preview of the default graph on the example channel.

Staging Graphs within the Jobs Manager

Staging a graph adds an item to the Staged Graphs section of the job manager.

Once the dataset job has started, you can view more detailed status of the job by pressing the downward-facing arrow on the right-hand side of the job.

This shows average time per image, and gives an estimated end time for the run. It also provides information about compute instances being used to process the job.

Dataset Library

Once the job is complete, you can find the dataset in the Datasets Library section of the workspace.

Next Steps

For this new workspace, we are going to name it Example and add the example channel to it. You could optionally specify a Content Code which could include additional data. To learn more about Content Codes, see our application user guide on Content Codes.

For this workspace we have selected Rendered.ai’s Example channel. Channels define the simulation architecture (3D objects, backgrounds, sensors, sensor platforms, etc.) required to generate synthetic data. The Rendered.ai Example channel serves as a generic channel (toys generated in a box) that allows you to experiment and learn the platform. This channel corresponds to the public codebase on Github: Rendered-ai/example. New organizations are provided with the Example channel. Rendered.ai develops a number of other channels for various applications that are available on request. Channel customizations can be made directly to a cloned version of the Example codebase (see Ana Software Architecture), or provided as an Engineering service by Rendered.ai.

A graph is a visual diagram that is based on a channel’s codebase, allowing you to view the objects and connections that exist in that channel, modify them within the diagram view, and stage the modified channel to produce simulation sets. Within the workspace view, create a new graph by clicking the New Graph button on the left-hand side of the screen.

Once created, the graph can be viewed as a node-edge diagram in the graph viewer. You can learn more about Creating and Using Graphs and Graph Best Practices by following these links.

Once the graph is providing a satisfactory output, it can be staged by clicking the Stage button on the top right of the screen.

When configuring a dataset job, a name and description can be given to the output dataset, and you can specify the number of images to generate, as well as a specific seed to initialize the random number generators in order to match results across multiple runs if desired. You can also designate priority for the job if multiple jobs are being run at once. To access this information click the down arrow to the right of the staged graph name. Clicking the Run button will launch a new dataset job.

From here, you can view more information about the dataset. You can download a compressed version of the dataset and begin using the outputs to train your AI. To learn more about Datasets, reference our guide Creating and Using Datasets. You can also use services to learn more about the dataset, compare datasets, or generate new annotations.

Now that you have set up a new workspace, staged a graph, and created a dataset, we recommend you learn about Organization and Workspace Resources.


Terminology

Here are a few definitions to help you understand the components and patterns used in Rendered.ai.

Fundamental concepts

Platform as a Service (PaaS)

A Platform as a Service (PaaS) is a category of cloud computing services that allows users to provision, instantiate, run, and manage a modular bundle comprising a computing platform and one or more applications, without the complexity of building and maintaining the infrastructure typically associated with developing and launching the application(s).

Rendered.ai is a Platform as a Service for data scientists, data engineers, and developers who need to create and deploy unlimited, customized synthetic data pipelines for machine learning and artificial intelligence workflows. The benefits of the platform are: reducing expense, closing gaps, overcoming bias, and driving better labeling, security, and privacy outcomes when compared with the use of real-world data.

Organization

An organization is a billable entity and a way of segmenting work for collaboration, or away from collaboration for security purposes. The Organization is fundamentally a collaboration tool. A subscription to Rendered.ai typically grants the customer access to one Organization.

Workspace

A Workspace is a container for organizing work related to one set of projects or applications. Workspaces may be used as a collaboration device in that Guest users can be invited to a Workspace and will then not have access to any other part of your Organization. Your Workspace shows recent Graphs, recent Jobs, and recent Datasets you have worked on.

Channel

A Channel is a container for Graphs, Packages (sensors and application specific requirements) and code that is used to define the universe of possible synthetic output for a particular application. For example, Channels may represent synthetic data generation as diverse as video of microscopy or satellite based SAR data acquisition. All of the components of a Channel together define the set of capabilities that solve a specific synthetic generation use-case.

Graph

A Graph is a visual representation of the elements (capabilities, objects, modifiers) and their relationships that compose an application. Jobs are created from Staged Graphs to create synthetic data.

Through both coding and visualization in the Rendered.ai user interface, Graphs are designed as Node-Edge diagrams. These diagrams allow the user to engineer the linkages between objects, modifiers, and other Channel components to design synthetic datasets.

Nodes

Nodes are either objects, capabilities or modifiers and will appear in a Graph as boxes with a name and properties that can be set in the Graph interface.

Some Nodes may be described as ‘Objects,' indicating that they are simulated physical objects that will be used in synthetic images. Other nodes may be referred to as 'Modifiers,' indicating that they somehow change or impact image generation during processing of the Graph.

For example, Nodes may represent types of assets to be placed in a scene, a digital sensor model, a renderer, post-processing imagery filters, annotation generators, and much more. Node capabilities are channel dependent.

Edges

Edges are the term we use to describe the connectors between Nodes in a Graph. A connector is used in the visual interface to show that a particular parameter of one Node can be used to populate a parameter or for processing in another Node.

Staged Graph

A Staged Graph is a Graph that has been queued to enable Members of the Organization to run Jobs that generate synthetic data.

Job

A Job is a processing effort that generates a specific quantity of synthetic images or video that will be run on the Rendered.ai high performance compute environment.

Dataset

A Dataset is a variable collection of output images or video, masks, annotation, and other metadata that has been created by execution of a Job. Different Channels may contain different components depending on the specific application and problem domain of the Channel. Some sensor models, for example, may not lend themselves to easy creation of masks (images with pixel values capturing the location of scene assets).

Shared Data Libraries

Common functions useful across multiple channels.

Shared Data Volumes

Rendered.ai Volumes store static assets for use in synthetic data generation.

Package Volumes, a.k.a. channel volumes, are deployed with Rendered.ai channels within the platform. These volumes are maintained by channel developers, tested and provided as ideal use cases for the channel's intent.

Workspace volumes are associated with user-managed workspaces and are created and maintained by users of Rendered.ai. These volumes are dynamically updated for increased fidelity of generated datasets.

The Rendered.ai Engine

The Engine is the underlying set of capabilities shared by all Channels and is accessible either through the SDK or the Rendered.ai web interface. The Engine executes cloud compute management, configuration management, and various other functions.

Microservices

Loosely coupled services that are available to be executed as part of a Channel: Preview, Annotations, Analytics and GAN are examples of the Platform’s microservices.

CycleGAN Microservice

The service trains two neural networks: one translates synthetic data toward the real domain, and the other translates real data toward the synthetic domain. Training pushes the pair to be cycle-consistent, allowing the translation to converge to a matched style.

Developer concepts

Anatools SDK

Package

A Package provides the domain-specific sensor and other capabilities that a Channel uses to support the application.

Application/App Container

An application collects the Channel elements in executable code in a Container to produce synthetic data for a specific end-user use case.

Application Specific Libraries

Code that lives in a docker container. Defines nodes, node capabilities, and node procedures and how they interconnect. These libraries may include packages that describe sensors and other components of the channel.

Domain Adaptation

Domain Adaptation can make synthetic data more useful by bridging the domain gap between a digital twin camera and your sensor. Rendered.ai users train CycleGAN models for pixel-level domain adaptation from raw synthetic data to domain-adapted synthetic data. This tutorial will walk through the steps of uploading a GAN model to a workspace and generating domain-adapted datasets.

Read more about CycleGAN: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

Uploading a GAN Model

Rendered.ai supports PyTorch CycleGAN models for domain adaptation. Many of the models shown on the CycleGAN landing page linked above are available for download. For demonstration purposes, we will use the apple-to-orange pre-trained model available from the CycleGAN pretrained_models index.

To use the apple2orange.pth model, it must be added to your organization, and then associated with a workspace.

Add a Model to an Organization

Navigate to your organization’s GAN Models management space.

Under your organization (1), find the GAN Model section (2), and click the + New GAN Model button (3).

In the new GAN model upload dialog, give the model a name and set the inference flags. The pre-trained CycleGAN models were trained with the --no_dropout flag, so it is required for inference.

Add a Model to a Workspace

Each workspace has various resources, such as channels, annotation maps, and GAN models. To manage these resources, navigate to the organization’s workspaces.

From your workspaces manager, select list mode, select the three vertical dots next to the workspace you want to edit, and select the Resources option.

There are two models that come with the content workspace. Add the new model from the Excluded list to the Included list.

Once the new model is associated with the Example workspace, click Save.

Create a Domain Adapted Dataset

The Content workspace comes pre-loaded with some datasets, and the one named Custom Objects is made up of images containing apples. To use the Apple to Orange Transfiguration CycleGAN model, create a GAN dataset based on Custom Objects.

From the Datasets tab in the Content workspace, select the Custom Objects dataset, and open the wizard for creating a GAN dataset.

Give the new dataset a name, select the domain adaptation model, and, optionally, give it a description. When the job is complete, take a look at the results.

Congratulations! This model can be used in other workspaces or on other datasets. To generate GAN datasets in your ML pipeline, please take a look at our SDK.

Overview

The items in green represent parts of the platform that are managed using the Rendered.ai web application. The following guides will discuss what these elements are and how they can be used to build synthetic data that is tailored to your application.

Rendered.ai Platform

Get started with an overview of the Rendered.ai web interface to the platform and its components. Here you will learn about creating an organization, workspaces, graphs and datasets.

From there, dive into the details on graphs, datasets, and collaborating within the platform.

Dataset Comparison

Creating a UMAP Job

To start, we will need to navigate to the Dataset Library page of our workspace and select the datasets we would like to compare, then click the compare button.

In the next dialog we can name our UMAP comparison, determine which dataset will be used as the fit dataset and how many images to sample from each dataset for comparison.

Clicking on the Compare button will start a new UMAP job.

Once the UMAP page has loaded, it will show an interactive 3D plot. You can click on each data point to get information about the image.

Training and Inference

Computer vision models can be used to classify images, detect objects, and segment classes in imagery and videos. The Rendered.ai platform gives users the ability to train computer vision models using datasets from their Datasets Library, and to use these models to make predictions against other Datasets. Rendered.ai integrates models from NVIDIA TAO and Meta’s Facebook AI Research (FAIR). This tutorial will show you how to train and use FAIR's Detectron2 object detection model on the platform.

Train a Computer Vision Model

To train a computer vision model, first navigate to the Datasets Library and select the Dataset to be used for training the model. In this example we will be training the model using a synthetic dataset.

Click on the + icon next to Models in the right-side panel. Fill in the form then click the Create button to start training the model. Some parameters are specific to the model Architecture.

After the training parameters have been configured, click the Create button to create the training job. Training will begin once compute has been provisioned, and any preprocessing on the dataset is complete. Once the model training has begun, we can track the training status using the right-hand panel on the Models Library.

Once training is complete, the model is ready to be used for inference jobs, where we can make predictions against another dataset.

Computer Vision Inference on a Dataset

To run inference against the computer vision model, click the + icon next to Inferences in the right-hand panel on the Models Library.

Select the Dataset to run through the model and a class mapping if necessary. The class mapping will be used to map dataset classes to a new set of classes (ideally the same set of classes used by the model). Click the Create button to start the Inference job.
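As a purely illustrative sketch, a class mapping is conceptually just a lookup from dataset class names to the class names the model expects; the names below and the dictionary form are assumptions for illustration, not the platform’s file format (see the annotation map documentation for that).

# Hypothetical class names; the mapping collapses fine-grained dataset classes
# into the coarser classes the trained model predicts.
class_mapping = {
    'toy_car': 'vehicle',
    'toy_truck': 'vehicle',
    'rubber_duck': 'animal',
}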

Once complete, we can view the results of the inferences in the Inference Library. If ground truth data existed in the input dataset, additional metrics are generated that can be seen in the right-hand panel of the Inference job. These include:

  • Overall metrics and Class-specific metrics for Precision, Recall, F1 and mAP.

  • Plots for Confusion Matrix, PR Curve and ROC Curve.

Viewing Predictions

To view the predictions from Inference, click the Go-To icon at the top of the right-side panel; this will navigate you to the Dataset Image Viewer.

You can toggle the ground truth and inference layers on and off as desired, and turn labels, including prediction confidence, on or off.

Anatools is Rendered.ai’s SDK for connecting to the Platform API. It is the toolkit for third-party and Rendered.ai developers to use in producing applications on the Platform.

SDK Documentation: https://sdk.rendered.ai

SDK Examples: https://github.com/Rendered-ai/resources

The Rendered.ai platform provides an end-to-end solution for generating physics-based synthetic data for training AI models. The diagram below shows the architecture of the platform. The items in blue represent pieces that require configuration of the open-source Ana codebase upon which a custom channel is built. For more information on how this is done, see our Development Guides.

Get started with the Rendered.ai web interface: Quick Start Guide

Learn how to develop graphs within your workspace: Creating and Using Graphs

Understand the outputs of the process and how they are configured: Creating and Using Datasets

Work with others to share and improve your data: Collaboration

Rendered.ai offers a service for comparing the imagery between two datasets using a dimensionality reduction technique. Our service first gathers features using a Feature Pyramid Network to determine what Object Detection models are seeing in images at different levels of feature size. It then reduces those features using UMAP, a dimensionality reduction technique for visualizing datasets in a 2D or 3D space. We generate an interactive 3D plot where users can click on data points to view images. Using this, users can compare the imagery of two or more datasets and infer what is making them similar or different.
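The following is a minimal sketch of this idea, not the platform's implementation: it pools features from a torchvision Feature Pyramid Network backbone and reduces them with the umap-learn library. The load_images helper and dataset paths are hypothetical.

# Sketch: FPN features + UMAP for comparing two image sets (illustrative only).
import torch
import umap
from torchvision.models.detection import fasterrcnn_resnet50_fpn

backbone = fasterrcnn_resnet50_fpn(weights="DEFAULT").backbone.eval()

def fpn_features(images):
    # images: float tensor [N, 3, H, W] scaled to [0, 1]
    with torch.no_grad():
        feature_maps = backbone(images)                               # dict: pyramid level -> [N, C, h, w]
    pooled = [fm.mean(dim=(2, 3)) for fm in feature_maps.values()]    # global-average-pool each level
    return torch.cat(pooled, dim=1).numpy()                           # [N, C * num_levels]

features_fit = fpn_features(load_images("dataset_a"))     # load_images is a hypothetical loader
features_other = fpn_features(load_images("dataset_b"))

reducer = umap.UMAP(n_components=3).fit(features_fit)     # fit on the chosen "fit" dataset
embedding_fit = reducer.transform(features_fit)           # 3D points to plot and compare
embedding_other = reducer.transform(features_other)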


Model training parameters:

  • Name - The name to give the computer vision model.

  • Description - The description to give the computer vision model.

  • Dataset - The dataset, split into train and validation sets, used to train the computer vision model.

  • Class Mapping - Maps dataset classes to a new set of classes.

  • Architecture - The model architecture used for training the model.

  • Hyperparameters - Basic parameters that are specific to the model architecture. The list below is specific to Detectron2 - Object Detection:
    • Training Weights - The starting weights for the model; this can be set to random weights, ImageNet, or a model previously trained on the platform.
    • Train/Validation Split - Percentage of the dataset used for training and validation. Alternatively, you can specify a validation dataset.
    • Epochs - The number of training epochs to run.
    • Base Learning Rate - The base learning rate used by the learning rate scheduler.
    • Seed - The random seed setting for training.
    • Extra Args - Additional arguments passed to the Detectron2 configuration.

New GAN Model Button
New GAN Model Dialog
GAN Model Manager for Content Workspace
Apple to Orange Transfiguration Model Included
Content Workspace Dataset
Create GAN Dataset Configuration
Compare Datasets
UMAP Dialog
UMAP Job Running
View Comparison
Dataset Library
Model Training Parameters Menu
Training Loss Plot
Create Inference Menu
Inference Table with Metrics
Dataset Image Viewer

Tutorials

Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
Index of /cyclegan/pretrained_models

Raw Synthetic Data Image

Domain Adapted Image

Graph Best Practices

The Rendered.ai platform exposes a rich interface to manipulate graphs for generating a wide variety of datasets. While generated images and videos have virtually infinite randomness, the graph parameters provide for the control one needs when addressing specific computer vision training and validation issues. These guidelines for graph editing help users get the most from Rendered.ai channels.

Graph Basics

A basic graph performs three functions - object creation, scene composition, and scene rendering. Most channels are built with a set of objects of interest so the graph will have one or more nodes that create those objects. Objects of interest are important because they are used for generating object annotations. Another node creates the scene and places objects of interest in it. The scene may require other components such as a background image and a sensor. Finally, a simulator node renders the output of the scene into a synthetic image.

The following is a basic graph that performs these three operations.

In the above graph, the Yo-yo node creates a yo-yo generator that is passed to the RandomPlacement node. The RandomPlacement node creates a scene and makes a cloud of yo-yos because it calls the Yo-yo generator 25 times and places the objects randomly in the scene. The drop object node takes the yo-yos and drops them into a container that is on a floor. The RenderNode then renders the scene.

Generators: Objects and Modifiers

Though not required, channels developed by Rendered.ai use generators for objects. An object generator is a factory that provides new instances of the objects as many times as the code requests one. The Ana generator class has a weight that can be used in a placement node. By using object generators instead of manually adding objects, a scene can be built procedurally based on the structure of the graph.

Object modifiers are generators with children that make changes to the children generators. For example, the ColorVariation modifier adds a call to the color method to its children.

Dataset Annotations

Annotations are a description of what is in the imagery, usually in a format that a machine learning algorithm can ingest. The annotations generated by Rendered.ai channels are in a proprietary format that needs to be converted before being ingested by a machine learning algorithm. For this reason, Rendered.ai provides a service to convert our annotations to several common formats, including:

  • COCO - The Common Objects in Context (COCO) dataset was created for object detection, segmentation and captioning. Its data format is described in the COCO documentation; the Annotation service generates the Object Detection format.

  • GEOCOCO - GEO COCO is a data format designed to store annotation details for applications dealing with geospatial data. This combination aims to broaden the capabilities of COCO while ensuring compatibility with COCO tools.

  • PASCAL - The PASCAL VOC challenge provides standardized datasets for comparing model performance.

  • YOLO - The You Only Look Once (YOLO) object detection system.

  • KITTI - The KITTI format for image classification; the format of the input label file is described in the KITTI documentation.

  • SageMaker OD - SageMaker Object Detection using MXNet; the inputs are described in the section titled Train with the Image Format.

  • SageMaker SS - For use with the SageMaker Semantic Segmentation Algorithm.

Mapping Annotations

During the conversion of annotations, we offer a way to map objects to specific classes. This can be helpful when you have several objects of one type in imagery that you want to be classified into a single class. For example, maybe you have imagery with Ford Focus, Honda Accord and Toyota Camry objects that you want to be classified as Car.

Mappings can do this for you, below is an example Annotation Map file that classifies the objects in the example channel to Cubes or Toys.

classes:
    0: [none, Cubes]
    99: [none, Toys]
properties:
    obj['type'] == 'YoYo': 99
    obj['type'] == 'BubbleBottle': 99
    obj['type'] == 'Skateboard': 99
    obj['type'] == 'Cube': 0
    obj['type'] == 'Mix Cube': 0
    obj['type'] == 'PlayDough': 99

The first section, classes, is a set of key-value pairs where each key is an integer class number and the value is a list of class names. The value is a list because some annotation formats, such as COCO, can have an object hierarchy with super-classes.

The second section, properties, is also a set of key-value pairs. Each key is a Python eval string that evaluates to either True or False, and the value is a class number specified in classes. During conversion, the metadata file supplies the properties of each object; if the eval statement is true for that object, it is assigned to the corresponding class.

Let's say we have a Skateboard type object and are using the above Annotation Map to generate COCO annotations. The first two keys in the properties section will evaluate to False for the skateboard object, but the third key will evaluate to True – thus the skateboard object is assigned to class 99 (Toys).
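As a rough sketch of how such a map could be applied (this is illustrative, not the platform's converter; the file name and the classify helper are hypothetical), the properties keys can be evaluated against each object's metadata:

import yaml

# Load the Annotation Map shown above (hypothetical local file name).
with open("example_map.yml") as f:
    amap = yaml.safe_load(f)

def classify(obj):
    # obj is an object entry from the metadata file, e.g. {"id": 2, "type": "Skateboard"}.
    for expression, class_id in amap["properties"].items():
        if eval(expression, {}, {"obj": obj}):   # each key is a Python expression referencing obj
            return class_id, amap["classes"][class_id]
    return None, None

print(classify({"id": 2, "type": "Skateboard"}))   # -> (99, ['none', 'Toys'])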

Creating Annotation Maps

New annotation maps can be created on the platform by navigating to an organization’s Annotation Map tab and clicking the New Annotation Map button.

After clicking the button, you will be given a dialog where you need to enter the name of the Annotation Map and choose a file to upload. Remember your annotation map must be in YAML format with the classes and properties keys. You can optionally add a description.

After you are complete, click Create to create your new Annotation Map. The new Annotation Map will show up under your organization Annotation Maps table.

Creating Annotations

After creating the Annotation Map, we’ll need to add it to our workspace before using it when creating annotations for our dataset. To do this we’ll go back into our workspace, click the three-button icon and then Resources.

After this, we’ll add the Example Annotation Map to the Included column of our workspace resources and finally click Save.

Now we can go back to our dataset to generate the new annotations. In this example, we select our latest dataset and then Click the + icon next to Annotations in the dataset.

Next, we will select the annotation format and annotation map from a list. We start the annotation job by clicking the Create button.

After we click the Create button, a new entry will be shown in the dataset’s Annotations section. The new entry has a status symbol, COCO format and Example map.

When the status symbol goes away it means the job is done and we can download our COCO-formatted annotations. Click on the download icon next to the dataset’s Annotations entry to download the new annotations file(s).

Job Status

All dataset services share the same type of status symbols when a job is running, complete or failed.

No symbol means that the service job is complete and ready to use.

The hourglass symbol means that the job is running. It will remain this way until the job has either completed or failed.

The error symbol means that the job has an issue. You can click on the symbol to fetch a log of the service to help determine what caused the issue.

Graph Validation

Rendered.ai provides a way for channel developers to add validation rules to the inputs of their nodes. These rules are checked while Users are modifying the Graph and can alert users that an input is invalid in some way. It is important to note that Graph validation will not block a user from submitting a bad graph for preview, staging the graph to run a job or downloading the graph.

Below we’ll explore how these Graph validation errors are displayed to Users.

Bad Values

Values entered into an input can be checked for the correct type or a set of rules. If a value is entered that is not the correct type then the field is highlighted in red and an error icon is displayed to the right of the value. If you hover over this icon you will see an error message indicating what is wrong.

In the following example, the “Image Width (px)” input is defined to be an integer data type. Entering a text value results in an error. Hovering over the error icon displays the “Value must be type 'integer'” error message.

Bad values can also be values that fall outside of a valid range. In the example below, the value is the correct type (integer) but it falls outside of the possible bounds for the value and the error icon displays the “Value must be less than or equal to 1“ error message.

Bad Links

The number of links connected to an input can be checked. Inputs can be defined to have zero or more links, one link, one or more, etc. If a link validation error occurs then the input will be highlighted in red and an error icon will be displayed in the upper right of the field. Hovering over the icon will display the error message.

In the following example, the “Sensor” input requires exactly one link. Attaching a second link to the input generates an error. Hovering over the error icon displays the “Input requires exactly one link” error message.

Creating and Using Datasets

Datasets are the output from the Rendered.ai platform. Most of what you have learned so far has been focused on configuring or running simulations to produce datasets. Now let's learn about what datasets are and how they can be used on the platform.

Dataset Job

After a Dataset Job has completed, the dataset will be available to download via the Download button on the Jobs page.

Dataset Library

The Dataset will also appear in the Datasets Library page. By navigating to the Dataset tab and selecting the Dataset, you can learn more about it on the right-hand side. By clicking the checkbox next to the Dataset name, you can download or delete the dataset from the Dataset Library.

Information about each dataset, including its name and description, unique identifiers, and the parameters used to generate it, is shown on the right. From here, you can also edit the name and description of a dataset using the pencil icon.

Additional Dataset Services

The platform has a number of additional services to help you learn more about, compare or adapt the synthetic dataset. See the following tutorials to learn more and to get the most out of your datasets.

Overview

Rendered.ai channel development requires expertise in Python and Blender. The diagram below is a general schematic of a channel’s components.

Creating a Custom Channel

Get started learning about the platform, including the relationship between graphs, channels, packages and volumes.

Then find out more about channel development for specific aspects of the life-cycle with the following support documents.

Get Started

anatools

Simple Graph
Rendered Scene
Color Variation Modifier
Image With Color Variation


New Annotation Map Button
Create Annotation Map Dialog
New Annotation Map
Workspace Resources
Workspace Resources Dialog
Creating New Annotations
Create Annotation Dialog
Completed Annotation Job

Platform Overview: Ana Software Architecture

Start with a copy of our template code: An Example Channel - Toybox

Add custom objects or sensors and common modifiers: Add a Generator Node and Add a Modifier Node

Add a custom channel to Rendered.ai: Deploying a Channel

Get the code: Rendered-ai/toybox

Contact us:

Bad Type
Bad Value
Download Button
Dataset Library
Edit Dataset
Dataset Annotations
Dataset Analytics
Dataset Comparison
Dataset Best Practices
Understanding and Using Domain Adaptation
Ana Software Architecture
An Example Channel - Toybox
Add a Modifier Node
Add a Generator Node
Deploying a Channel
Rendered-ai/toybox

Creating and Using Volumes

Volumes contain data that is used by a channel such as 3D models and other content. Most channels include built-in volumes that provide basic content for graphs. Users can also create their own volumes so they can add custom content to their graphs.

Creating a New Volume

To create a new volume, navigate to your organization’s Volumes table then click on the New Volume button in the upper right corner of the page.

This will display the New Volume dialog where you can give your volume a name and description. The name is a required field and must be unique within your organization.

Click Create to create the new volume. It should now show up in the list of volumes for your organization.

Adding a Volume to a Workspace

To add the new volume to a workspace, navigate to the Workspace, click the three-dot icon and select Resources. This will open the Workspace Resources dialog.

Next, click on the Volumes tab of the dialog. Move the newly created volume from the Excluded column to the Included column. Click the Save button to add the volume to the workspace.

Adding Content to a Volume

To add content to the new volume, click the Assets tab at the top of the Workspace, then click on the name of the new Volume. For this example, it is called Demo.

This should drop you into the Demo volume. From here we can add new files and manage the current files. Right now the volume is empty because it is newly created and we haven’t added any files.

To add new files, click the Upload File button, select a file using the Click here to select file link, and then click the Upload button. The file will be uploaded to the volume.

The new file will show up as a card in the Volume window.

Next we can add this new file to one of our graphs to load the 3D model into a scene.

Adding Content From a Volume to a Graph

To add content from a Volume to a Graph, first create the graph and then click on the plus sign in the upper left corner of the page.

This will display a pop-up with two tabs - Channel and Volumes. Click on the Volumes tab and it will show a list of all volumes you have assigned to the workspace.

You can drill down into your volume and select a file such as a blender file to be included in the graph. The display will look something like this for the NewCube.blend file we added above.

Clicking on the file will add a node to the graph that represents that file. What you can do with the file node and what you can connect it to depends on the type of file and is also channel specific. The Example channel supports blender files that represent objects to be dropped into the basket. We can hook the file node to the Random Placement node as shown below.

Permissions

Initial volume creation will default to write permission. Once you have uploaded all items to your volume, you can set it to have read or view permissions through the Volumes Resources Table.

  • write - Organization users are allowed to read, download, add, remove, and edit objects in the volume.

  • read - Organization users are allowed to read and download objects in the volume.

  • view - Organization users are allowed to read volumes and only use them in the graph editor.

Mixing Datasets

Rendered.ai offers a Dataset Mixing service that allows users to combine images from one or more datasets by sampling a specified number of images from each dataset. This can be used to simply create larger datasets, combine synthetic and real datasets for experimentation, or filter classes out of datasets. In this tutorial, we'll show you how to mix two datasets together.

Dataset Selection

Navigate to the Datasets Library tab of a Workspace. Select the Datasets you want to mix together by clicking the checkbox on the Dataset's row.

Next, click the Mix icon in the Action Bar on the left.

Mix Dataset Settings

In the Mix Dataset modal, we have the ability to give our dataset a name and description, set tags for the dataset, and set a seed for the job. Each Dataset being mixed will have additional parameters that can be selected:

  • Maximum Sample Count - this sets the maximum number of samples that will be taken from the dataset based on the filters applied. By default it will sample from all images in the dataset.

  • Included Classes - this will only select images that have one or more of the classes selected when sampling from the dataset. By default this is left blank, meaning it will sample from all images in the dataset.

When the job parameters are configured, click the Mix button to start the mixing job.

Mix Results

Once the job is complete, we can see that the new dataset includes the submarine class and the sailboat classes we specified from the Sailboats HDR dataset.

Notice that the total image count for the new dataset is 393 images. This is because we set a Maximum Sample Count of 10 images for the Submarine HDR dataset without any filter applied, giving us 10 images, and a Maximum Sample Count of 1000 images for the Sailboat HDR dataset with an included class filter applied, yielding only 383 images.

Inpaint Service

The Inpaint service takes an annotated image, erases the annotated features from the image, and generates a new image or a Blender file with placement areas where those annotated features were originally located. The generated Blender file can be used as a background image in channels that support placement of objects on 2D backgrounds such as SATRGB.

The Inpaint service is accessed from the Assets tab of the GUI.

The source backgrounds need to be uploaded to a Workspace Volume. Click on the workspace volume from the volume list. A page showing the volume content will be displayed.

The Inpaint service processes all images in a directory so it is a good idea to create a new folder to hold the source backgrounds and their annotation files or image masks. Click on the New Folder button and give the folder an appropriate name.

Click on the Backgrounds working folder, then click the Upload File button. Add the background image along with the annotation files or mask images. The background image must be in TIFF, PNG, JPG, or JPEG format. Annotation files should be in one of the following formats: COCO, KITTI, Pascal, YOLO, or GeoJSON. Mask images must have a .mask.png, .mask.jpg, .mask.jpeg, or .mask.tiff extension.

Annotation files and mask images are not both required—only one is needed for all background images.

Click the Upload button to upload the files.

To run the Inpaint service click on the Create button. This brings up the Create dialog.

Click on the Inpaint button to bring up the Inpaint dialog

By default, the input location is set to the current directory, with the input type as MASK. The output will be stored in a subdirectory named "cleaned", followed by a timestamp. The output format is PNG, and the dilation value is set to 5. Dilation controls how much the mask area expands before inpainting the image.
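As an illustration of what the dilation setting does (this is not the Inpaint service's code; file names are hypothetical, and OpenCV's TELEA inpainting is used only as a stand-in), a mask can be expanded before inpainting like this:

import cv2
import numpy as np

mask = cv2.imread("0000000000-0-visible.mask.png", cv2.IMREAD_GRAYSCALE)   # hypothetical mask file
kernel = np.ones((5, 5), np.uint8)                        # roughly a dilation value of 5
dilated = cv2.dilate(mask, kernel, iterations=1)          # grow the masked area before inpainting

image = cv2.imread("0000000000-0-visible.png")            # hypothetical background image
cleaned = cv2.inpaint(image, (dilated > 0).astype(np.uint8), 3, cv2.INPAINT_TELEA)
cv2.imwrite("cleaned-0000000000-0-visible.png", cleaned)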

Other available input types include GEOJSON, COCO, KITTI, PASCAL, and YOLO—choose the option that best fits your needs.

Other available output types include PNG, JPG, and SATRGB Background—choose the option that best fits your needs.

Click the Create button to accept these and start the Inpaint service. A "Create Jobs" window will pop up in the lower left. Click on the down arrow to see progress of the job.

When the job is complete, the hourglass icon will switch to a go-to icon. Click on that icon and you will be switched to the output directory. There will be cleaned backgrounds for every background image that you processed.

Here are the original and cleaned background images.

Dataset Best Practices

A dataset is a collection of files produced by a dataset job. The structure and content of a dataset is channel specific, however Rendered.ai channels produce data in a common format which is described below. This structure is not required for channel developers to follow, but many of the services provided by the platform make assumptions about dataset structure or contents.

Common Directory Structure

Dataset files are stored in a directory structure, inside each dataset are the following subdirectories and files.

Note the annotation, metadata, and mask files associated with a given image file share the same filename properties and have similar names, e.g. 0000000000-0-visible.png, 0000000000-0-visible-ana.json, and 0000000000-0-visible-metadata.json. This is because each includes a {run:10}-{frame}-{sensor} prefix; a small parsing sketch follows the list below.

  • run - the run number of the simulation being executed, for example if the user selects to generate a dataset with 100 runs, the first part of the prefix will start at 0000000000 and end at 0000000099.

  • frame - the frame number the image was rendered on. Some channels use physics or animations when rendering a scene, this frame number can also be used to combine individual images to video.

  • sensor - the given name for the sensor. Again this is usually channel dependent, but can be helpful when rendering a scene with multiple sensors at once.
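A small sketch of parsing this prefix with Python (the group_files helper and directory names follow the common structure described on this page; nothing below is required by the platform):

import os
import re
from collections import defaultdict

PREFIX = re.compile(r"^(?P<run>\d{10})-(?P<frame>\d+)-(?P<sensor>[^-.]+)")

def group_files(dataset_dir):
    # Group image, annotation, metadata, and mask files that share a {run}-{frame}-{sensor} prefix.
    groups = defaultdict(list)
    for sub in ("images", "annotations", "metadata", "masks"):
        folder = os.path.join(dataset_dir, sub)
        if not os.path.isdir(folder):
            continue
        for name in os.listdir(folder):
            match = PREFIX.match(name)
            if match:
                groups[(match["run"], match["frame"], match["sensor"])].append(os.path.join(sub, name))
    return groups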

Image Files

The format and resolution of an image file is channel specific. Most features of the platform are tested with either PNG or JPEG images, although the platform can support several other image types such as TIFF.

Annotation Files

An annotation file contains label information for objects of interest in the corresponding image file. Label information is specified in JSON format as follows. Note “…” indicates a list of numeric values.

The following list describes the meaning behind each value:

  • filename - the name of the corresponding image file in the dataset.

  • annotations - a list of labels. There is one label per object of interest in the image file.

  • id - the numeric identifier for the object of interest where each object has a unique id.

  • bbox - is the rectangular bounding box for the object of interest, specified as a list of pixel coordinates with the origin (0,0) being the top-left of the image. The format of this list is as follows: [top-left X, top-left Y, width, height].

  • segmentation - a list of line segment pixel coordinates (x, y) that form a polygon that bounds the object. This forms a list of lists such as [[X0, Y0, X1, Y1, …], …].

  • bbox3d - a list of vertex coordinates for a 3D cube that bounds the object. This is also a list of [[X0,Y0,Z0], [X1,Y1,Z1],…[X7,Y7,Z7]] where Z is the distance of the vertex from the sensor.

  • centroid - the x, y coordinate of the middle of the object.

  • distance - distance between the centroid of the object and the sensor in meters.

  • truncated - is a Boolean indicating if the object is cut off at the image boundary.

  • size - a list of 3 numbers representing the dimensions of the object in meters.

  • rotation - a list of 3 numbers representing the roll, pitch, yaw of the object in radians.

  • obstruction - a number between 0 and 1 representing how much the object is in view of the sensor.

The annotation file name appends “-ana” to the base of the corresponding image file name. For example, if the image file name is “0000000000-1-RGB.png” then the annotation file name will be “0000000000-1-RGB-ana.json”.
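As a minimal sketch (file paths are illustrative), an annotation file can be read and each bounding box converted from [x, y, width, height] to corner coordinates as follows:

import json

with open("annotations/0000000000-1-RGB-ana.json") as f:
    annotation = json.load(f)

for label in annotation["annotations"]:
    x, y, w, h = label["bbox"]          # [top-left X, top-left Y, width, height]
    corners = (x, y, x + w, y + h)      # (left, top, right, bottom) in pixels
    print(label["id"], corners, label.get("distance"))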

Metadata Files

A metadata file contains detailed information about the scene, sensor and objects of interest in an image including how the objects were generated. Below is an example metadata file produced during a run of the satrgb channel.

Where

  • filename - the name of the corresponding image file.

  • channel - the name of the channel that produced the file.

  • version - the metadata format version number.

  • date - the date and time the image was generated.

  • objects - a list of object metadata, one item per object of interest.

    • id - the numeric identifier for the object of interest. Note for a given object, this id corresponds to the object id in the annotation file.

    • type - the object type. Object types are channel specific.

  • sensor - specifies any sensor attributes included in metadata.

  • Channel developers can add additional metadata that is channel specific. In the above example, the channel developer also included an environment type with information about how the scene was generated.

The metadata file name appends “-metadata” to the base of the corresponding image file name. For example, if the image file name is “0000000000-1-RGB.png” then the metadata file name will be “0000000000-1-RGB-metadata.json”.

Mask Files

A mask file is a bit-level mask that provides semantic segmentation information for the corresponding image file. The value of each pixel in the mask file is set to the id number of the object of interest represented by the corresponding pixel in the image file. The format of the mask file is 16-bit grayscale PNG. Note that pixels that don’t correspond to any object of interest are set to a value of zero. The mask file name is the same as the corresponding image file name.
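A short sketch of reading one of these masks with Pillow and numpy (the file path is illustrative; object id 2 matches the id used in the annotation and metadata examples below):

import numpy as np
from PIL import Image

mask = np.array(Image.open("masks/0000000000-1-RGB.png"))   # 16-bit grayscale, pixel values are object ids
object_ids = np.unique(mask[mask > 0])                       # pixels with value 0 belong to no object
binary_mask = (mask == 2)                                    # True wherever object id 2 is visible
print(object_ids, int(binary_mask.sum()), "pixels for object 2")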

The images below show an image and mask output from the example channel. Note that the mask image has been adjusted using GIMP, setting the Exposure level to 15 and saving the image.

New Volume Button
New Volume Dialog
Demo Volume in Organization
Workspace Resources
Include Volume Dialog
Volume Library
Upload Dialog
Uploaded File
Nodes Browser
Volumes Tab
Volume Drill Down
NewCube Added to Graph
Volume Permissions
Datasets Library
Mix Datset Settings
Mixed Dataset
Assets Tab
Empty Volume Page
New Folder Dialog
Backgrounds Folder
Upload File Dialog
Uploaded Files
Create Dialog
Inpaint dialog
Input types
Create Jobs Window
Output dir
Original background
Cleaned background

This annotation format can be converted to other common formats such as COCO or YOLO; see Dataset Annotations to learn more.

  • dataset.yaml - A file that specifies dataset attributes such as creation date, description, etc. The file is in YAML format.

  • graph.yaml - A file that contains the graph used to produce the dataset. The file is in YAML format.

  • images/ - A subdirectory containing the image files generated during rendering. If the channel is multi-sensor this can contain multiple images per run. Each file in images/ is prefixed with {run:10}-{frame}-{sensor}.{ext}.

  • annotations/ - A subdirectory containing JSON-formatted annotations generated for each image. For each image in images/ named {run:10}-{frame}-{sensor}.{ext}, there should be a matching file in annotations/ named {run:10}-{frame}-{sensor}-ana.json.

  • metadata/ - A subdirectory containing JSON-formatted metadata generated for each image. For each image in images/ named {run:10}-{frame}-{sensor}.{ext}, there should be a matching file in metadata/ named {run:10}-{frame}-{sensor}-metadata.json.

  • masks/ - A subdirectory containing 16-bit PNG mask images. This allows each pixel to be assigned to one of 2^16 (65,536) unique object instances. These masks are used to generate the annotations JSON file, but can also be helpful for segmentation.

{
    "filename": "0000000000-1-Image.png",
    "annotations": [
        {
            "id": 2,
            "bbox": [
                ...
            ],
            "segmentation": [
                [
                    ...
                ]
            ],
            "bbox3d": [
                ...
            ],
            "centroid": [
                ...
            ],
            "distance": 5096.42578125
        }
    ]
}
{
    "filename": "0000000000-1-Image.png",
    "channel": "satrgb",
    "version": "0.0.1",
    "date": "2021-05-03T00:50:08.244891",
    "objects": [
        {
            "id": 2,
            "type": "Crane_Truck_11_Yellow",
            "modifiers": [
                {
                    "warp_strength": 37.11278476273302
                }
            ]
        }
    ],
    "sensor": {
        "look_angle": 13.057741630937915,
        "azimuth": 131.06801207973155,
        "gsd_at_nadir": 0.33142178858573473,
        "resolution": [
            512,
            512
        ]
    },
    "environment": {
        "name": "mining_05",
        "lat": 0,
        "lon": 0,
        "datetime": "2020-03-17T10:00:00+00:00",
        "modifiers": [
            {
                "random_datetime": {
                    "datetime": "2020-03-17T10:00:00+00:00"
                }
            }
        ],
        "is_2d": false
    }
}

Ana Software Architecture

Release 0.3.0 - Dataset Annotations

Collaboration

Users of the Rendered.ai platform can share content with other users via the platform’s collaboration capabilities. Collaboration can happen at two levels: the organization and workspace.

Adding Users to Organization

Organizations are the primary mechanism for sharing access with other users. Users that are in the same organization have access to all workspaces and resources owned by that organization. Users can have one of two roles in an organization: Admin or Member. Admins have privileged permissions within the organization; they can access Organization Billing and modify the role of other users. Admins and Members of an organization can both fetch, create, edit, and delete workspaces and resources within the organization.

Adding users to your organization can be done in one of two ways: through the Landing page or in the Organization Settings page. From the Landing page, you can select the organization and click on the Invite button to the top right of the screen.

Clicking the Invite button will bring up a dialog where you can enter your colleague’s email address.

If the user already has a Rendered.ai account, they will be added as a member of the organization and can access all workspaces and resources owned by the organization. If they do not yet have a Rendered.ai account, they will receive an invitation email which will direct them to create an account. After signing up for their new account, the user will be added as a member of the organization.

The second way to add a user to your organization is through the Organization Members page. To get there, click on the organization’s Settings button from the Landing page or click on the profile icon then Organizations.

Once on the Organization Settings page, click the Members tab. From here you can view your Organization members and their roles within the organization. You can also click the Invite button to reach the same Invite dialog.

After the user has signed up, or if they are already a user on the platform, they will show up in the Members list. From here, you can modify the user’s role by clicking the pencil icon or remove the user from the organization using the Remove button.

Graphs

A graph is a flow-based program that describes how synthetic data output, including images and other artifacts, will be generated. The graph contains a set of functional capabilities represented as nodes. Nodes are connected by links to form the program.

Graphs are persisted as text files in YAML format. A graph file can be created by hand in a text editor or it can be created visually in the Rendered.ai web interface. Graph files can be uploaded to and downloaded from the web interface.

A graph is made up of nodes, values and links. Nodes are discrete functions that take input, process it, and produce output. Nodes have input ports and output ports. Input ports are either directly assigned a fixed data value or they can get their data value from other nodes. The flow of data between nodes is indicated by connecting an output port of a source node to an input port of a destination node.

The following figure shows an example of a graph in visual form:

Graph Files

Graph files can be built by hand in a text editor or they can be auto-generated from the Rendered.ai web interface.

Here is an example of a graph with a single node as shown in the graph editor:

The purpose of this node is to add an instance of a military tank to the synthetic data scene. The class of the node is “Tank” and it has an output port called “object_generator”. Note the node also has a name attribute but this is not displayed in the graphical user interface.

Here is the same graph persisted in YAML text file format.

version: 2
nodes:
  Tank_0:
    nodeClass: Tank
nodeLocations:
  Tank_0:
    x: -0.8408950169881209
    y: 28.892507553100586

The graph file contains three top level elements

  • “version” - This element defines the graph file language version number. In this example, the version is 2 which is the currently supported version.

  • “nodes” - This element is a dictionary that defines the nodes, links, and values that make up the graph. Each entry in the dictionary defines a node in the graph. The dictionary key is the node name. The node class, any input values, and any input links are attributes of the node. In this example, there is a single node with the name “Tank_0” and it has a node class of “Tank”. This node has no input links or input values.

  • “nodeLocations” - This element is a dictionary that defines the screen coordinates of nodes as displayed in the graph editor. The dictionary key is the node name. Each entry has two attributes - the x and y coordinates. This element is optional but if you download a graph from the GUI then it is automatically generated using the current screen coordinates. In this example, the node named “Tank_0” has an x coordinate of -0.8408950169881209 and a y coordinate of 28.892507553100586.

Most graphs contain multiple nodes connected by links. Here is an example of a graph with two connected nodes:

The purpose of this graph is to add a tank to the scene and to also add snow to that tank. The snow will cover 50% of the tank.

Nodes can have input ports and output ports. In the graph editor, input ports are shown on the left side of the node and output ports are shown on the right side of the node. Connections between ports are shown as a line from the source node output port to the destination node input port. The line includes a caret symbol indicating the direction that data flows across the link.

In the example above, the Tank node has an output port called “Generator” which is connected to an input port on the Snow node which is also called “Generator”. The Snow node has a fixed value of “50” assigned to the input port “Coverage”. The “Generator” output port on the Snow node is not connected to anything.

Here the same graph in YAML format:

version: 2
nodes:
  Tank_0:
    nodeClass: Tank
  Snow_1:
    nodeClass: Snow
    values:
      coverage: "50"
    links:
      Generator:
        - sourceNode: Tank_0
          outputPort: Generator

This is similar to the previous example but here the Snow node also includes elements for input values and input links. These are defined as follows (a short parsing sketch follows the list):

  • “values” - This element is a dictionary that specifies the fixed values that are assigned to input ports on the node, one entry per input port. The dictionary key is the input port name and the value is the value assigned to that port. Fixed values can be any standard JSON scalar (integer, float, string, etc.), a list, or a dictionary. In the above example, the “Coverage” input port has a fixed value of “50”.

  • “links” - This element is a dictionary that specifies the links coming into the node. Links are specified in the destination node definition with one entry per input port. The dictionary key is the input port name. Since an input port can have more than one incoming connection, link definitions are a list. The list entry for a link is a dictionary with two entries - the source node name and output port name that the link is originating from. In the above example, the Snow_1 node has one incoming link connecting to its “Generator” input port. The sourceNode for this link is node “Tank_0” and the outputPort on that node is “Generator”.
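As a small illustrative sketch (the file name is hypothetical), the graph above can be loaded with PyYAML and its nodes, values, and links walked like this:

import yaml

with open("snow_tank_graph.yml") as f:    # hypothetical file containing the YAML above
    graph = yaml.safe_load(f)

for name, node in graph["nodes"].items():
    print(name, "class:", node["nodeClass"])
    for port, value in node.get("values", {}).items():
        print("  value:", port, "=", value)
    for port, links in node.get("links", {}).items():
        for link in links:
            print("  link:", link["sourceNode"] + "." + link["outputPort"], "->", port)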

Packages

Packages organize and provide the nodes that can be included in a graph. All Ana packages are standard Python packages. They can be added to the channel from the Python Package Index (PyPI), included as a submodule from a git repository, or as source code stored directly in the channel directory.

Ana packages are added to the channel by including them in the “requirements.txt” file in the channel root directory. Here is an example of a requirements.txt file that adds two packages to the channel.

anatools
-e ./packages/testpkg1

The “anatools” package is required for all channels as it provides the base capabilities for Ana. In this example, the anatools package will be added from PyPI.

The second package in the example is “testpkg1”. The source code for this package is located in the “packages/testpkg1” subdirectory. This package can either be a git submodule or it can be source code. Note in this example we included the “-e” option which installs the package in editable mode (see PIP options for details). The -e option is useful during package development but is not necessary for a stable production channel where the package code does not change.

Package source code is stored in the “packages” subdirectory under the channel root. Each package is stored in a separate subdirectory that is named after the package. Each package includes a setup.py file so that it can be installed in the channel.

Here is an example of the directory structure for a package called “example”.

The setup.py file needs to include yml files in its package data. Here is the setup file for the example package:

import setuptools

setuptools.setup(
    name='example',
    author='',
    author_email='',
    packages=setuptools.find_packages(),
    package_data={"": ["*.yml"]}
    )

The example package directory itself contains two subdirectories - nodes and lib - and two files - __init__.py and package.yml.

The “nodes” directory contains the Python source code for all nodes in the package as well as schema files for each node. For every node module, there is a corresponding schema file. Schema files are in YAML format.

The “lib” directory contains Python modules that are used by nodes. This may include base classes, support functions, and other code that is called by a node.

The “__init__.py” is empty. It is required and indicates the directory contains a Python package.

The “package.yml” file is configuration information for the package. The content of this file is package specific, however there are two top level sections that most channels implement - “volumes” and “objects”. Here is a portion of the package.yml file for the example package:

volumes:
  example: 'df8ad806-223b-4d56-a932-838da835ec62'

objects:
  YoYo:
    filename: example:LowPoly.blend

The “objects” section is a dictionary that defines Blender objects used by the channel. The object type is the key and the value is package-specific configuration information for the object. Most channels implement the “filename” attribute which specifies the location of the blender file containing the object. This can be an absolute path, a relative path (relative to the --data directory) or it can be prepended by a volume name followed by a colon.

The “volumes” section is a dictionary that defines data volumes used by the package. The name of the volume is the key and the value is the volume ID.

Basic components

The steps necessary to generate synthetic data with Ana are described by a graph. A graph is a flow-based program that is executed by an application called a channel. The executable elements in the graph are called nodes.

The nodes in a channel are implemented as Python classes and are stored in Python packages. A package contains nodes and support libraries that have related functionality. A channel can use nodes from multiple packages.

The following diagram shows the relationship between channels, packages and volumes.

Example use case

A computer vision engineer (CVE) wants to train a machine learning algorithm that will process low Earth orbit satellite imagery and automatically count the number of cars in a parking lot. They will train the algorithm on synthetic data which consists of thousands of overhead RGB images of parking lots. These images will contain a variety of automobile types, parking lot configurations, and distractor objects such as trees and street lights, all viewed from an overhead angle.

To support this in Ana, a channel called ‘cars_in_parking_lots’ is created. This channel will allow the CVE to create graphs that generate the overhead images they need. The nodes for the channel are drawn from a Python package called ‘satrgb’ that provides basic capabilities such as placing the objects of interest in the scene, setting the sun angle, rendering the image from a simulated RGB camera, etc. Some of these nodes require static data such as 3D models of cars. This support data is provided as blender files stored on a volume called ‘satrgb_data’.

The following diagram shows how this channel is set up.

Channel Execution

Once the channel components are linked together, the user creates a graph that describes how images are to be generated. This graph file is then run through an interpreter script which executes the appropriate channel node code to generate the images and other output data.

Channels

The source code, configuration files, and support files for a channel are stored in a directory that has the following structure:

The top directory is the root directory for the channel. The root directory is typically named after the channel.

Under the root are the channel definition file and these sub directories:

  • /.devcontainer

  • /packages

  • /data

  • /docs

  • /mappings

  • /graphs

These are described in more detail in the following sections.

Channel Definition File

The channel definition file is a YAML text file that specifies how the channel is to be configured. The name of the file is the name of the channel.

The channel definition file has several sections. Here is an example of a basic channel definition file:

channel:
  type: blender

add_setup:
  - testpkg1.lib.channel_setup

add_packages:
  - testpkg1

The “channel” section specifies channel attributes. Two channel attributes are supported - “type” and “base”.

The “type” attribute specifies the execution environment for the channel. Currently, two types of environments are supported - “blender” and “python”. If the channel type is set to “blender”, then the interpreter is executed as a script inside of the Blender application and nodes can call the Blender api. If the channel type is set to “python” then the interpreter is executed as a normal Python script.

The “base” attribute is used with channel inheritance. The base attribute specifies the channel file that will function as the base for this channel. See Additional channel configuration below for details.

The “add_setup” section specifies the function(s) that will be executed before the graph is interpreted. Typically these functions perform setup operations such as deleting the default cube object that is created when Blender is first run, enabling the Cycles rendering engine, enabling GPU rendering, and any other setup tasks required.

The “add_setup” section is specified as a list of Python modules. Before the graph is interpreted, each module is searched for a function named “setup” which is executed. These functions are executed in the order that they are listed.

In this example, the channel type is set to “blender” and there is no inheritance. The “testpkg1.lib.channel_setup” module will be searched for a “setup” function which will be executed before graph interpretation.

The “add_packages” section specifies the list of Python packages that will be searched for nodes to add to the channel. By default when a package is added to the channel, all nodes in that package are added. In the above example, the nodes in Python package “testpkg1” will be added.

Additional sections can be added to a channel definition for more complex channels.

add_nodes

The “add_nodes” section allows the inclusion of specific nodes from a package without including the full package. Here is an example:

add_nodes:
  - package: anatools
    name: SweepArange
    alias: Xyzzy
    category: Tools
    subcategory: Parameter Sweeps
    color: "#0C667A"

The “add_nodes” section is a list. Each entry specifies a node to be added.

The “package” attribute is required. It specifies the package the node will be drawn from.

The “name” attribute is required. It is the class name of the node to be included.

The “alias” attribute is optional. It specifies a new class name for the node when it is used in the channel.

The “category” attribute is optional. It overrides the GUI menu category for the node that was specified in the node’s schema definition.

The “subcategory” attribute is optional. It overrides the GUI menu subcategory for the node that was specified in the node’s schema definition.

The “color” attribute is optional. It overrides the GUI menu color for the node that was specified in the node’s schema definition.

In the above example, the “SweepArange” node is selected from the “anatools” package and it is given an alias of “Xyzzy”. Graphs in this channel can reference a node of this class by specifying a nodeClass of “Xyzzy”. The GUI menu category for the node is set to “Tools”. The GUI menu subcategory is set to “Parameter Sweeps”. The color of the node will be the hex value "#0C667A".

rename_nodes

The “rename_nodes” section allows a node class to be given a new name. Here is an example:

rename_nodes:
  - old_name: Render
    new_name: Grue

The “rename_nodes” section is a list. Each entry specifies a node class to be renamed. The “old_name” attribute specifies the old class name and the “new_name” attribute specifies the new class name. In this example, the “Render” node class is renamed to “Grue”. Graphs in this channel can reference a node of this class by specifying a nodeClass of “Grue”.

default_execution_flags

The “default_execution_flags” section is only relevant when a channel is run locally from the command line. It does not affect operation in the cloud service. The purpose of this section is to specify default flag values that will be used when those flags are not included on the “ana” command line. Here is an example:

default_execution_flags:
  "--graph": graphs/my_test_graph.yml
  "--interp_num": 10

The “default_execution_flags” section is a dictionary. The key for each entry is an execution flag and the value is the default value for that flag. In the above example, the default for “--graph” is set to “graphs/my_test_graph.yml” and the default for “--interp_num” is set to 10.

Additional channel configuration

Sometimes channel configurations can get complex. If you have defined a complex channel and would like to create a new channel that is similar but with only a few minor changes then channel inheritance might be useful.

A channel can inherit attributes from a previously defined channel by specifying the previously defined channel as its base. When this is done, all of the attributes (nodes, default flags, etc) defined in the base channel are also defined in the new channel. The new channel can then add additional attributes or remove attributes that are not needed. Note that the channel definition file for the base channel must also be stored in the channel root directory.

Here is an example of channel inheritance.

channel:
  base: myoriginal.yml

add_packages:
  - mynewpkg

remove_packages:
  - testpkg2

remove_setup:
  - testpkg2.lib.channel_setup

remove_nodes:
  - Render3

In this example, the base channel is defined in the file “myoriginal.yml”. For purposes of this example, assume the base channel has three packages - “testpkg1”, “testpkg2”, and “testpkg3”.

The “add_packages” section adds a new package called “mynewpkg” to the channel.

The “remove_packages” section removes “testpkg2” from the channel. None of the nodes defined in that package will be in the channel.

The “remove_setup” section removes the “testpkg2.lib.channel_setup” module from the list of setup modules.

The “remove_nodes” section removes the “Render3” node from the channel.

Invite Button
Organizations Drop Down
Invite Button
Invite Dialog
Visual Representation of a Graph
Single Node Graph
Packages Directory With Example Package

The nodes in a package may require static support data. This data is stored as files in a virtual container called a volume. A package can make use of multiple volumes. For more, see Package Volumes.

Channels, Packages, and Volumes
Example Channel With One Package and One Volume
Channel Execution
Channel Directory Structure
Package Volumes

Package Volumes

Volumes are used to store large asset files to keep the Docker images for channels small, which translates to faster startup times for the channel. Volumes are specified in a Package using the package.yml file. The below package.yml file is from the example package in the example channel.

volumes:
  example: 'e66b164e-8796-48aa-8597-636d85bec240'

objects:
  YoYo:
    filename: example:YoYo.blend
  BubbleBottle:
    filename: example:BubbleBottle.blend
  Skateboard:
    filename: example:Skateboard.blend
  Cube:
    filename: example:Cube.blend
  Light Wooden Box:
    filename: example:Containers/LightWoodenBox.blend

Files are referenced from the volume using the get_volume_path(package, rel_path) function, where the package parameter is the name of the package, i.e. “example”, and rel_path is the path within the volume in the form volume:path. To get the file path for the blender file containing the YoYo object, this call would be get_volume_path('example', 'example:LowPoly.blend'). anatools also has a helper function for creating object generators from blender files; this is an example of loading the YoYo object from the blender file and wrapping it in a YoYoObject class: get_blendfile_generator("example", YoyoObject, "YoYo").
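Below is a sketch of how these helpers might be used inside package code. The import paths and the YoyoObject class are assumptions modeled on the example channel; only the get_volume_path and get_blendfile_generator calls themselves are taken from the description above.

# Sketch only; import paths are assumed and may differ in your anatools version.
from anatools.lib.file_handlers import get_volume_path            # assumed module path
from anatools.lib.generator import get_blendfile_generator        # assumed module path
from example.lib.objects import YoyoObject                        # hypothetical object class in the package

# Resolve the path of the blend file holding the YoYo object on the "example" volume.
blendfile_path = get_volume_path("example", "example:YoYo.blend")

# Or build an object generator directly from the blend file.
yoyo_generator = get_blendfile_generator("example", YoyoObject, "YoYo")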

When specifying a volume, you must use a key-value pair where the key is a unique name used locally for the volume and the value is the Volume ID on the platform. Volume IDs are generated when the volume is created via anatools' create_managed_volume(name, organizationId) SDK call or through the web interface. See the anatools SDK documentation for more details on creating and managing volumes.

Volumes can be mounted during local development using the anamount command. Calling anamount from the workspace directory of the development container will mount the package volumes in the workspace directory at “data/volumes/<volume-id>”, but only if the user has read or write permissions to the volume. For an example of how to mount and develop with a volume locally, review the Add a Generator Node tutorial where we create a new volume and then add a Blender file to it.

anatools SDK documentation
Add a Generator Node

Schema

For every node in a channel there is an associated schema that defines what inputs, outputs, and other attributes are implemented by the node. Schema are stored in schema files in the same directory as the node files. For every node module in the package, there is an associated schema file. Schema files are written in YAML and use the same base name as the corresponding node module, e.g. the schema file for “my_node.py” is “my_node.yml”.

Here is an example schema for the “Add” node defined in the previous section:

schemas:
  Add:
    inputs:
    - name: Value1
      description: The first value to be added
    - name: Value2
      description: The second value to be added
      default: 1
    outputs:
    - name: Sum
      description: The sum of Value1 and Value2 added together
    tooltip: Add two values and return the sum
    category: Functions
    subcategory: Math
    color: "#0C667A"

The schema file has a single top level element called “schemas” which is a dictionary containing one item for every node defined in the corresponding node module. In this example, the schema file defines a single node called “Add”.

Node input ports are specified as a list of dictionaries, with one list entry per input port. Each input port must specify a name and description. Optionally, a default value for the port can be specified. This value will be used if no value or link is assigned to that port in the graph.

In this example, there are two input ports - “Value1” and “Value2”. The Value2 input port is assigned a default value of 1.

Output ports are specified as a list of dictionaries, with one list entry per output port. Each output port must specify a name and description.

In this example there is one output port - “Sum”.

The “tooltip”, “category”, “subcategory”, and “color” attributes specify information used in the GUI based graph editor.

The “category” and “subcategory” specify where the node will be located in the add-node menu on the left side of the graph editor. In this example, the node will be located under “Functions” → “Math”.

The “color” attribute specifies the color to be used when the node is displayed in the GUI.

Additional attributes can be assigned to input ports to help guide users when they are entering inputs in the GUI. Here is an example:

schemas:
  Location3d:
    inputs:
    - name: Terrain Type 
      description: The type of terrain to generate for this location
      select:
      - desert
      - forest
      - urban
      default: urban
    outputs:
    - name: Terrain
      description: The terrain for this location
    tooltip: Generate a 3d background to be used in the scene
    category: Locations
    subcategory: Procedural
    color: "#0C667A"

In this example, we define a node called “Location3d” that will procedurally generate 3d terrain. The type of terrain to be generated is specified via the “Terrain Type” input port. The “select” attribute provides a list of values for this port that will be displayed in the GUI as a pull-down menu. The user can scroll through this menu to pick a value. The default value displayed in the pull-down is “urban”.

The “tooltip” attribute specifies a string to be displayed when the user hovers over the symbol on the node.
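To tie a schema back to its node module, here is a hypothetical sketch of the class the “Add” schema above could describe. The Node base class, its import path, and the convention that self.inputs[...] holds a list of incoming values are assumptions modeled on Rendered.ai example channel code, not a definitive implementation.

from anatools.lib.node import Node     # assumed import path

class Add(Node):
    def exec(self):
        value1 = float(self.inputs["Value1"][0])   # assumed: port values arrive as a list
        value2 = float(self.inputs["Value2"][0])   # second operand; the schema default is 1
        return {"Sum": value1 + value2}            # keys are assumed to match the schema's output ports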

The anatools Package

The anatools package provides the base set of capabilities to build a channel. It includes the software development kit (SDK), the ana interpreter, development utilities, and a set of libraries and nodes that can be included in a channel. It is required for all channels and is installed by including it in the requirements.txt file in the main channel directory.

Nodes in anatools

The nodes in the anatools package are general purpose capabilities that can be included in any channel. To include an anatools node, you add it to the “add_nodes” section of the channel file as follows:

channel:
  type: blender

add_setup:
  - testpkg.lib.setup

add_packages:
  - testpkg

add_nodes:
  - package: anatools
    name: RandomUniform
    category: Values
    subcategory: Random
    color: "#FF9933"

In the above example, the “RandomUniform” node is included in the channel. Note the category, subcategory, and color for the node are also specified. These override the default values.

Following are descriptions of each node in the anatools package.

ConditionalSelector

The ConditionalSelector node combines two inputs with an operator and evaluates it as a logical expression. If the result is true then it outputs the value of its ‘True’ input. If the result is false then it outputs the value of its ‘False’ input.
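
For example, if ConditionA is 5, Operator is ‘Less Than’, ConditionB is 10, True is 1, and False is 0, then the expression 5 < 10 evaluates to true and the node outputs 1.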

Inputs

  • ConditionA - A numeric value that is the first operand of the expression.

  • Operator - A string representing the comparison operator. This can be one of three values - ‘Less Than’, ‘Equal To’, or ‘Greater Than’.

  • ConditionB - A numeric value that is the second operand of the expression

  • True - The value to be output if the expression evaluates as true.

  • False - The value to be output if the expression evaluates as false.

Outputs

  • Value - The output value

RandomChoice

Selects the specified number of choices from a list. The “List_of_Choices” is a list where each element is one of the choices. Note that if elements are strings then they must each be enclosed in double-quotes, e.g. [“choice1”, “choice2”, “choice3”].

Inputs

  • List_of_Choices - The list of choices to choose from

  • Number_of_Choices - The number of choices to make

  • Unique_Choices - Determines if every choice needs to be unique. This value is a string and can be either ‘True’ or ‘False’.

Outputs

  • Choices - The list of choices made

RandomNormal

Draw random samples from a normal (Gaussian) distribution with mean 'loc' and standard deviation 'scale'. This node calls the numpy function numpy.random.normal.

Inputs

  • loc - The mean of the distribution

  • scale - The standard deviation of the distribution

  • size - The output shape. If you are drawing a single sample then leave this empty. If you are drawing multiple samples then see the numpy documentation for details.

Outputs

  • out - The sample(s)

RandomRandint

Generate random integers from low (inclusive) to high (exclusive). This node calls the numpy function numpy.random.randint.

Note the upper bound is exclusive so if you want, for example, to generate random integers between 0 and 9 the low should be set to 0 and the high should be set to 10.

Inputs

  • low - Lower boundary of the interval (inclusive)

  • high - Upper boundary of the interval (exclusive)

  • size - The output shape. If you are drawing a single sample then leave this empty. If you are drawing multiple samples then see the numpy documentation for details.

Outputs

  • out - The sample(s)

RandomTriangular

Draw random samples from a triangular distribution over the closed interval [left, right]. This node calls the numpy function numpy.random.triangular.

Inputs

  • left - The lower limit

  • mode - The value where the peak of the distribution occurs

  • right - The upper limit

  • size - The output shape. If you are drawing a single sample then leave this empty. If you are drawing multiple samples then see the numpy documentation for details.

Outputs

  • out - The sample(s)

RandomUniform

Draw random samples from a uniform distribution over the half-open interval [low, high). This node calls the numpy function numpy.random.uniform.

Inputs

  • low - Lower boundary of the interval (inclusive)

  • high - Upper boundary of the interval (exclusive)

  • size - The output shape. If you are drawing a single sample then leave this empty. If you are drawing multiple samples then see the numpy documentation for details.

Outputs

  • out - The sample(s)

SelectGenerator

The SelectGenerator node allows the user to create a multi-branch junction in a generator tree. When evaluating the generator tree, one of the weighted input branches will be randomly selected.

Inputs

  • Generators - Links from one or more generator or modifier nodes. These are ‘downstream’ links in the generator tree.

Outputs

  • Generator - The ‘upstream’ link in the generator tree.

SetInstanceCount

Set the count property of an input generator to produce multiple instances. Note generator instance count must be implemented in the channel for this to work. Many common channels, such as the example channel, do not support generator instance count.

Inputs

  • Generator - Link from a generator node.

  • Count - The number of times to instance the input generator

Outputs

  • Generator - The generator with the instance count set

String

The String node allows the user to enter a single string and use it as input to multiple nodes

Inputs

  • String - The string to be passed to other nodes

Outputs

  • String - The string

SweepArange

Generates a value from a parameter sweep across evenly spaced values within the half-open interval [start, stop). The range of values in the interval is determined by the numpy function numpy.arange. The value drawn from this sequence is determined by taking the current run number modulo the number of values in the sequence and using that as an index into the sequence, e.g. seq[run_num % len(seq)].

For example if start is 0, stop is 10, and step is 1 then the sequence will be [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]. If the run number is 3 then the node will output a value of 3. If the run number is 11 then the output will be 1.

Inputs

  • start - Start of the interval (inclusive)

  • stop - End of the interval (exclusive)

  • step - Spacing between values

Outputs

  • value - Value drawn from the sequence.
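
To make the indexing rule concrete, here is a minimal sketch using numpy directly (an illustration of the selection logic only, not the node implementation itself):

import numpy as np

seq = np.arange(0, 10, 1)          # [0, 1, 2, ..., 9]
run_num = 11
value = seq[run_num % len(seq)]    # 11 % 10 = 1, so value is 1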

SweepLinspace

Generates a value from a parameter sweep across evenly spaced values within the interval [start, stop]. The range of values in the interval is determined by the numpy function numpy.linspace. The value drawn from this sequence is determined by taking the current run number modulo the number of values in the sequence (num) and using that as an index into the sequence, e.g. seq[run_num % num].

For example if start is 0, stop is 9, and num is 10 then the sequence will be [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]. If the run number is 3 then the node will output a value of 3. If the run number is 11 then the output will be 1.

Inputs

  • start - Start of the interval (inclusive)

  • stop - End of the interval (inclusive)

  • num - Number of values in the sequence

Outputs

  • value - Value drawn from the sequence.

Value

The Value node allows the user to enter a single numeric value and use it as input to multiple nodes

Inputs

  • Value - The value to be passed to other nodes

Outputs

  • Value - The value

Vector2D

Creates a 2D vector with the magnitudes specified.

Inputs

  • x - The magnitude in the x direction

  • y - The magnitude in the y direction

Outputs

  • Vector - The vector represented as a 2 element list

Vector3D

Creates a 3D vector with the magnitudes specified.

Inputs

  • x - The magnitude in the x direction

  • y - The magnitude in the y direction

  • z - The magnitude in the z direction

Outputs

  • Vector - The vector represented as a 3 element list

VolumeFile

A node that represents a file on a workspace volume that was selected by the user.

Outputs

  • File - A pointer to the file on the workspace volume. This is an instance of the FileObject class.

VolumeDirectory

A node that represents a directory on a workspace volume that was selected by the user.

Outputs

  • Directory - A pointer to the directory on the workspace volume. This is an instance of the DirectoryObject class.

Weight

The Weight node modifies the weight of a branch in the generator tree.

Inputs

  • Generator - Link from a generator node. This is a ‘downstream’ link in the generator tree.

  • Weight - A numeric weight to be applied to the input branch


Ana Modules, Classes, and Functions

Ana provides a set of shared modules, base classes, and helper functions to support channel development.

The Context Module

Nodes often need information about the current execution context. This includes package configuration information, channel configuration information, runtime parameters, etc. This information is stored in a module called “anatools.lib.context”.

Here is a node that uses context to print the current run number:

import anatools.lib.context as ctx
from anatools.lib.node import Node

class PrintRunNumber(Node):
    def exec(self):
        print(f'Current run number {ctx.interp_num}')
        return {}

The following attributes can be retrieved from the context module:

  • ctx.channel - a pointer to the Channel class instance for this channel

  • ctx.seed - the initial seed for random numbers generated by “ctx.random” functions

  • ctx.interp_num - the current run number

  • ctx.preview - a boolean that specifies whether or not the current execution is a preview

  • ctx.output - the value passed in from the “--output” command line switch

  • ctx.data - the value passed in from the “--data” command line switch

  • ctx.random - an instance of the numpy “random” function seeded by ctx.seed

  • ctx.packages - a dictionary of package configurations used by the channel, one entry per package
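
For example, a node can combine these attributes to make its behavior reproducible across runs. The following is a minimal sketch (the RandomOffset node and its Offset port are hypothetical, and it assumes ctx.random exposes the usual numpy uniform sampler):

import anatools.lib.context as ctx
from anatools.lib.node import Node

class RandomOffset(Node):
    def exec(self):
        # ctx.random is seeded from ctx.seed, so this value is reproducible
        offset = ctx.random.uniform(-1.0, 1.0)
        return {"Offset": offset}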

Base classes and Helper Functions

Ana provides a number of base classes and helper functions to simplify node construction.

Base Class: Node

This is the abstract base class for all nodes. It implements input and output port processing and stores information used in node execution. Here’s a simple Node example:

from anatools.lib.node import Node

class MyNode(Node):
    def exec(self):
        return {}

For details on how to use this class, see the Node section above.

Base Class: AnaScene

The AnaScene base class simplifies the management of a scene in Blender. The AnaScene class encapsulates the Blender scene data block, allows the user to add objects to the scene, sets up a basic compositor for rendering, and provides methods for generating annotation and metadata files for objects in the scene. Here’s an example of a node that creates an AnaScene.

import bpy
from anatools.lib.node import Node
from anatools.lib.scene import AnaScene

class CreateEmptyScene(Node):
    def exec(self):
      ana_scene = AnaScene(blender_scene=bpy.data.scenes["Scene"])
      return {"Scene": ana_scene}

Base Class: AnaObject

The AnaObject base class simplifies the management of 3D objects in Blender. AnaObject encapsulates the Blender object data block, provides a common mechanism for creating objects, and stores annotation and metadata information for the object.

The following example node creates an AnaObject, loads it from a Blender file, and adds it directly to an AnaScene:

import bpy
from anatools.lib.node import Node
from anatools.lib.ana_object import AnaObject

class AddTruck(Node):
    def exec(self):
        ana_scene = self.inputs["Scene"][0] # get the AnaScene as input
        truck = AnaObject(object_type="truck")
        truck.load(blender_file="path-to-file/truck.blend")
        ana_scene.add_object(truck)
        return {"Scene": ana_scene}

The load method requires the Blender file be configured as follows:

  • The object must be in a collection that has the same name as the object_type, e.g. “truck”

  • The object must have a single root object that has the same name as the object_type, e.g. “truck”. If your blender object is made up of separate components then you can create an “empty” object to be the root and make the separate components children of that empty.

To manipulate the Blender object encapsulated by AnaObject, you access the “root” attribute which is a pointer to the blender data block. For example, by default new AnaObjects are placed at location [0,0,0] in the Blender scene. To move the object to a new location, you modify the location attribute of the root object. Here’s an example of a node that moves an AnaObject to a new location.

from anatools.lib.node import Node

class MoveObject(Node):
    def exec(self):
        # Inputs are an AnaObject and the x,y,z coordinates where it will be moved
        obj = self.inputs["Object"][0]
        x = float(self.inputs["X"][0])
        y = float(self.inputs["Y"][0])
        z = float(self.inputs["Z"][0])
        obj.root.location = [x, y, z]
        return {"Object": obj}

Any blender object data block attribute can be modified in this way.

By default, the AnaObject load method loads the object from a Blender file. To change this behavior, you subclass AnaObject and override the load method. If you override the load method then your new method must do all of the following:

  • Create the Blender object. The object must have a single root object. Set “self.root” to equal the root object's Blender data block.

  • Create a collection to hold the object. Link the root object and all its children to that collection. Set “self.collection” to equal the collection’s data block.

  • Set “self.loaded = True”

Here is an example that creates an AnaObject from a Blender primitive.

import bpy
from anatools.lib.ana_object import AnaObject
from anatools.lib.node import Node

class SuzanneObject(AnaObject):
    def load(self, **kwargs):
        # create the Blender object
        bpy.ops.mesh.primitive_monkey_add(
            size=2, enter_editmode=False,
            align='WORLD', location=(0, 0, 0), scale=(1, 1, 1)
        )
        # set the root pointer
        self.root = bpy.context.object
        # create the collection and set its data block pointer
        self.collection = bpy.data.collections.new(self.object_type)
        # link the object to the collection
        self.collection.objects.link(self.root)
        # set the loaded flag
        self.loaded = True
        
class AddSuzanne(Node):
    def exec(self):
        ana_scene = self.inputs["Scene"][0] # get the AnaScene as input
        suz = SuzanneObject(object_type="Suzanne")
        suz.load()
        ana_scene.add_object(suz)
        return {"Scene": ana_scene}


Base Classes: ObjectGenerator and ObjectModifier

The ObjectGenerator and ObjectModifier classes provide a scalable, probability-based mechanism for creating and modifying objects in a scene.

Typical use case: A user wants to create images that contain objects randomly selected from a pool of different object types. The user wants the probability of selecting any given object type to be exposed in the graph. The user also has a set of modifications they would like to apply to those objects. Which modifications can be applied to which objects and the probability of a given modification being applied to a given object type must also be exposed in the graph. This can be challenging to represent in a graph if the combination of object types and object modifications is large.

One solution is to build a sample space of object generator and object modifier code fragments. Each entry in the sample space is one of the allowed generator / object modifier combinations along with the probability that it will occur. At run time, one of these generator/modifier combinations is drawn from the sample space, the object is generated, and the modifiers are applied. This process is repeated until the desired number of objects have been added to the scene.

In Ana, the object generator / object modifier sample space is implemented as a tree structure. The tree has a root, intermediate nodes are modifiers and end nodes are generators. Each branch of the tree has a weight assigned to it that determines the probability that branch of the tree will be taken when a root to leaf path is constructed. To select a sample, a path is generated through the tree and the generator and modifier nodes along the selected path are executed in reverse order to create and then modify an object. This process is repeated until the desired number of objects have been created.

Here is an example of a simple generator/modifier tree:

The values on the branches are the relative weights. The bold lines indicate the path constructed for one sample. To create this path we start at the root and select one of the child branches. The branch on the left has a normalized weight of 1/4 (25% probability of being selected) and the branch on the right has a normalized weight of 3/4 (75% probability of being selected). We generate a random number, select the right branch and move to the dirt modifier. From there the left and right child branches each have a weight of 1/2 (50% probability of being selected). We generate a random number, select the left branch, and move to the truck generator. This is an end node so we have a full path which is “dirt modifier → truck generator”. We then execute these code units in reverse order, first generating the truck and then applying the dirt.

This tree can be constructed by executing a graph with Ana nodes that create and link ObjectGenerators and ObjectModifiers into the desired tree structure. Here is an example graph that does this:

This graph is executed from left to right.

  1. The Tank node creates an ObjectGenerator of type “Tank” and passes it to the RustModifier and the DentModifier.

  2. The Bradley node creates an ObjectGenerator of type “Bradley” and passes it to the RustModifier and DentModifier.

  3. The RustModifier node creates an ObjectModifier of type “Rust” and sets the Tank and Bradley generators as children. This subtree is passed to the Weight node.

  4. The DentModifier node creates an ObjectModifier of type “Dent” and sets the Tank and Bradley generators as children. This subtree is passed to the PlaceObjectsRandom node.

  5. The Weight node changes the weight of the branch leading to the subtree that was passed in, from the default of 1 to a value of 3. The Weight node then passes that subtree on to the PlaceObjectsRandom node.

  6. The PlaceObjectsRandom node creates a “Branch” ObjectModifier and sets the two subtrees passed to it as children. This completes the generator/modifier tree.

  7. The PlaceObjectsRandom loops 10 times, each time generating a path through the tree and then executing it in reverse order to create and modify an object.

Here is the code for a Node that creates an ObjectGenerator:

from anatools.lib.node import Node
from anatools.lib.generator import get_blendfile_generator
from anatools.lib.ana_object import AnaObject

class TankGenerator(Node):
    def exec(self):
        generator = get_blendfile_generator("satrgb", AnaObject, "Tank")
        return {"object_generator": generator}

This node uses a helper function called “get_blendfile_generator” that creates an ObjectGenerator from the object definition specified in the “package.yml” file. The helper function takes three parameters

  • package - name of the package that defines the object

  • object_class - the Python class that will be used to instantiate the object

  • object_type - the object type as specified in the package.yml file

Object modifiers come in two parts - the node that creates the ObjectModifier and the object modifier method that does the actual modification.

Here is the code for a Node that creates an ObjectModifier:

from anatools.lib.node import Node
from anatools.lib.generators import ObjectModifier

class ScaleModifier(Node):
    def exec(self):
        # takes one or more object generators as input
        children = self.inputs["object_generator"]
        scale = float(self.inputs["scale"][0])

        # add modifier to the generator tree
        generator = ObjectModifier(
            method="scale",
            children=children,
            scale=scale)
        return {"object_generator": generator}

In this example, the node takes one or more object generators as inputs as well as the scale factor to apply. It then creates an ObjectModifier and makes the incoming object generators its children. It then passes the new generator tree on to the next node.

The ObjectModifier class has two required parameters plus optional keyword parameters.

  • “method” - this is the name of the modifier method, specified as a text string

  • “children” - this is a list of all children of the ObjectModifier

  • keyword arguments - these arguments will be passed as keyword argument parameters to the object modifier method

The object modifier method is a member of the object class that is to be modified. The simplest way to implement this is to include the method in the object class definition. Here is an example of a Jeep object that implements the “scale” modifier method.

from anatools.lib.ana_object import AnaObject

class Jeep(AnaObject):
    def scale(self, scale):
        self.root.scale[0] = scale
        self.root.scale[1] = scale
        self.root.scale[2] = scale

The problem with this approach is the modifier can only be applied to that specific object class. In most cases we want modifiers to apply to more than one class. The easiest way to do this is to use the mixin design pattern.

A mixin is an abstract base class that defines a method that will be used by other classes. Any class that wants to implement this method specifies the mixin class as one of its parents.

This example shows how to implement the mixin. Note in this example, the mixin class is implemented in a module called mypack.lib.mixins and the classes that inherit from it are in a separate module.

from abc import ABC

class ScaleMixin(ABC):
    def scale(self, scale):
        self.root.scale[0] = scale
        self.root.scale[1] = scale
        self.root.scale[2] = scale

Classes that use the mixin simply list it as one of their parents.

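For example (a minimal sketch; the Jeep and Truck class names are illustrative, and the import path assumes the mixin lives in mypack.lib.mixins as described above):

from anatools.lib.ana_object import AnaObject
from mypack.lib.mixins import ScaleMixin

# Any AnaObject subclass that lists ScaleMixin as a parent picks up
# the scale modifier method without re-implementing it.
class Jeep(AnaObject, ScaleMixin):
    pass

class Truck(AnaObject, ScaleMixin):
    pass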

‘file_metadata’ helper function

The ‘file_metadata’ function can be imported from the anatools.lib.file_metadata module. It takes a filename as input and returns a dictionary that contains the metadata for the file. Note this is intended for files contained in package and workspace volumes.

To store metadata for a file ‘myfile.blend’, you create the file ‘myfile.blend.anameta’ in the same directory. The metadata in the anameta file is in YAML format.
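
Here is a usage sketch (the file path is hypothetical, and the contents of the returned dictionary depend entirely on what is stored in the .anameta file):

from anatools.lib.file_metadata import file_metadata

# Reads and parses 'myfile.blend.anameta' stored next to the file
metadata = file_metadata("path-to-volume/myfile.blend")
print(metadata)  # a dictionary parsed from the YAML .anameta file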

Graph Validation

When using nodes in the graph editor, the user needs quick validation that input fields have correctly set values or have the correct number of incoming links. Graph validation occurs whenever a user previews or stages a graph. Error notifications for fields that fail validation are presented in the UI so the user can make appropriate changes.

Graphs are validated against a set of validation rules that are specified in the node schema for the channel. Validation is performed by a function that takes a graph as input, compares it against the validation rules in the schema and reports any validation errors.

The rules for validation are derived from a subset of the 2020-12 draft of the JSON Schema format plus additional capabilities that are Rendered.ai specific. The rules validate node inputs (values and links). Rules are specified via the “validation” keyword in the schema.

Here is an example of how input validation rules are implemented in the schema:

schemas:
  PlaceCars:
    inputs:
    - name: Number
      description: Number of cars to place
      default: 5
      validation:
        type: integer

The above schema defines a node called “PlaceCars” which has one input field called “Number”. The validation rule for the “Number” field is specified under the “validation” keyword.

The validation rule is a “type” rule as indicated by the “type” keyword. This rule specifies that the field requires an input value of data type “integer”.

If the user enters anything other than an integer in the “Number” field and then attempts to stage or preview the graph, then the GUI will highlight the field and indicate that it contains an invalid value.

Rule Categories

All validation rules have a category. The category is indicated by use of a specific keyword in the rule. Currently there are three rule categories.

  • type - A type rule specifies that the field must be of a certain data type. A type rule is indicated by use of the “type” keyword.

  • number-of-links - A number-of-links rule specifies that the field must have a certain number of input links attached to it. A number-of-links rule is indicated by the “numLinks” keyword.

  • compound - A compound rule specifies that multiple rules must be evaluated to determine field validity. Compound rules apply a logic operation to a list of rules. A compound rule is indicated by use of one of the following keywords: “oneOf” (exclusive or), “anyOf” (or), “allOf” (and), and “noneOf” (not).

Note that since fields will often allow either a value to be entered or an input link to be attached, many rules will be compound using the “oneOf” keyword, e.g. a field can have either an input value of a certain type or it can have an input link attached to it. See the Typical Validation Use Cases section for more details.

Type Rules

A type rule specifies that the field must contain a value of a certain data type. A type rule is indicated by use of the “type” keyword. A type rule can optionally include type-specific qualifiers. Here’s the format of a type rule:

type: <data-type>
<type-specific-qualifier-1>
<type-specific-qualifier-2>
...

The <data-type> is a string that specifies the name of the data type to be matched by the rule. Currently the following data types are supported:

  • string - string of text

  • number - a numeric value that can be either an integer or a float

  • integer - a numeric value that must be an integer

  • array - zero, one, or more ordered elements, separated by a comma and surrounded by square brackets [ ]

  • boolean - true or false

  • null - null value

String Type

The string type is used for strings of text. Note the string type will match any string of text including numeric strings so be careful when using the string type in a compound rule.

Length

The length of a string can be constrained using the “minLength” and “maxLength” type-specific qualifiers.

Here is an example of a string validation rule using length qualifiers:

validation:
  type: string
  minLength: 1
  maxLength: 20

The above example specifies that the field must be of data type “string” and that it has a minimum length of 1 and a maximum length of 20. If the user enters an empty string or a string that is longer than 20 characters then the field will be invalid.

Format

The “format” keyword allows for basic semantic identification of certain kinds of string values that are commonly used.

Currently only one built-in format from the JSON Schema spec is supported:

  • "uri": A universal resource identifier (URI), according to RFC3986.

Here is an example of a string rule with the format qualifier

validation:
  type: string
  format: uri

In this example, the field must contain a string that is in URI format.

Numeric Types

Two numeric types are supported - “number” and “integer”.

The “number” type is used for any numeric type, either integers or floating point numbers.

The “integer” type is used for integral numbers that do not have a fractional portion.

Range

Ranges of numbers are specified using a combination of the “minimum” and “maximum” keywords, (or “exclusiveMinimum” and “exclusiveMaximum” for expressing exclusive range).

If x is the value being validated, the following must hold true:

  • x ≥ minimum

  • x > exclusiveMinimum

  • x ≤ maximum

  • x < exclusiveMaximum

Here is an example of a numeric rule:

validation:
  type: integer
  minimum: 0
  maximum: 255

In this example, the field must contain an integer value greater than or equal to 0 and less than or equal to 255.

Array Type

The “array” type is for an ordered sequence of elements. An array contains zero, one, or more ordered elements, separated by a comma and surrounded by square brackets [ ].

Length

The length of the array can be specified using the “minItems” and “maxItems” qualifiers.

Here is example of an array rule:

validation:
  type: array
  minItems: 3
  maxItems: 3

In this example, the field must contain an array of length 3. Note item validation is not specified so items may be of any data type.

Item validation

There are two mechanisms for validating the individual items within an array - list validation and tuple validation. These are described below.

List validation

The list validation option is useful for arrays of arbitrary length where each item matches the same rule. For this kind of array, set the “items” keyword to a single rule that will be used to validate all of the items in the array.

Here is the schema for a field that is an array of red, green, and blue color values, each ranging from 0-255:

schemas:
  ChangeColor:
    inputs:
    - name: rgb
      description: The red, green, and blue values specified as an array
      validation:
        type: array
        items:
          type: integer
          minimum: 0
          maximum: 255
        minItems: 3
        maxItems: 3

Tuple validation

Tuple validation is useful when the array is a collection of items where each has a different rule and the ordinal index of each item is meaningful.

The “prefixItems” keyword indicates tuple validation is to be used. The “prefixItems” is an array, where each item is a rule that corresponds to each index of the document’s array. That is, an array where the first element validates the first element of the input array, the second element validates the second element of the input array, etc.

Here is an example of tuple validation for an array that contains red, green, and blue which are integers ranging 0-255, and an alpha value which is a number ranging from 0-1.0.

schemas:
  ChangeColor:
    inputs:
    - name: rgba
      description: The red, green, blue, and alpha values specified as an array
      validation:
        type: array
        prefixItems:
        - type: integer
          minimum: 0
          maximum: 255
        - type: integer
          minimum: 0
          maximum: 255
        - type: integer
          minimum: 0
          maximum: 255
        - type: number
          minimum: 0
          maximum: 1.0

Note the “minItems” and “maxItems” keywords do not apply to tuple validation as it must contain the exact number of elements as specified in the “prefixItems”.

Boolean Type

The “boolean” type matches only two special values: true and false. Note that values that evaluate to true or false, such as 1 and 0, are not accepted by the rule.

Null Type

The “null” type matches the JSON null special value. Note that although graphs support null values, there is currently no mechanism for specifying a null in the GUI. However, null can be specified as a default value for a field.

Number-of-links rules

A number-of-links rule specifies that the field must have a certain number of input links attached to it. A number-of-links rule is indicated by the “numLinks” keyword.

The number of links allowed is indicated by the following:

  • zero - no links

  • zeroOrOne - zero or one link

  • zeroOrMany - zero or many links

  • one - exactly one link

  • oneOrMany - one or many links

Note that although all of these are supported for completeness, a typical number-of-links rule will use either “one” or “oneOrMany”.

Here is an example of a number-of-links rule:

validation:
  numLinks: one

The above rule specifies that the field must be connected to exactly one link.

Composite rules

The validation spec includes a few keywords for combining rules together. These keywords correspond to the boolean algebra concepts AND, OR, XOR, and NOT. You can use these keywords to express complex constraints that can’t otherwise be expressed with standard validation keywords.

The keywords used to combine rules are:

  • allOf: (AND) Must be valid against all of the rules

  • anyOf: (OR) Must be valid against any of the rules

  • oneOf: (XOR) Must be valid against exactly one of the rules

All of these keywords must be set to an array, where each item is a rule.

In addition, there is:

  • not: (NOT) Must not be valid against the given rule

Note that although all of these rules are supported for completeness, a typical compound rule will use the “oneOf” keyword.

Here is an example of a composite rule:

validation:
  oneOf:
    - type: integer
    - numLinks: one

In this example, the field must either contain an integer value or it must be connected to exactly one link.

Preview

This section describes how to prepare a channel to support previews. In the GUI a preview is generated when you click the preview button in the graph canvas. You can also test preview locally by including the "--preview" flag on the ana command that runs the channel.

To determine if a given run of a channel is in preview mode, you should check the context variable 'preview'. This will be set to True if the channel should generate a preview image. Typically this is done by taking the rendered image and generating a copy called "preview.png". The preview image should be stored in the root of the output directory.

Channels should be optimized so that preview times are minimal. This includes such things as reducing the number of render samples and decreasing the resolution of the image.

Here is an example of a blender channel that generates a preview. This code is in the render node that generates the image. It lowers the sample count to 10, cuts the resolution of the preview image in half, and also doesn't write annotation or metadata if preview is set. Note the node has one input which is a pointer to the AnaScene object.

import os
import bpy
from anatools.lib.node import Node
import anatools.lib.context as ctx
from PIL import Image, ImageFile

class Render(Node):
    def exec(self):
        ana_scene = self.inputs["Scene"][0]
        # lower the sample count and cut the resolution in half for previews
        if ctx.preview:
            bpy.context.scene.cycles.samples = 10
            bpy.context.scene.render.resolution_x = int(bpy.context.scene.render.resolution_x / 2)
            bpy.context.scene.render.resolution_y = int(bpy.context.scene.render.resolution_y / 2)
        # render the image
        bpy.ops.render.render(write_still=True, scene=ana_scene.blender_scene.name)
        # generate preview image
        if ctx.preview:
            filename = f'{ctx.interp_num:010}-{ana_scene.blender_scene.frame_current}-{ana_scene.sensor_name}.png'
            image_file = os.path.join(ctx.output, "images", filename)
            image = Image.open(image_file)
            image.save( os.path.join(ctx.output,"preview.png") )
        else:
            ana_scene.write_ana_annotations()
            ana_scene.write_ana_metadata()
        return {}

In-tool Help

Developers can provide in-tool help for users that will be displayed in the GUI. There are several changes that need to be made to a channel in order to provide in-tool help: node schemas need two additional attributes, the channel documentation directory needs to be updated, and markdown and image files need to be added for nodes.

Schema Attributes

There are two schema attributes for in-tool help. Note that both of these attributes are optional. If they are not present in the schema then no help will be provided.

The 'help' attribute is a reference to a markdown file that provides help when the user clicks on the "i" symbol in the upper right corner of the node icon. This attribute is structured as follows: <package>/<markdown.md> where the 'package' is the name of the package containing the help file, and 'markdown.md' is the name of the markdown file.

The 'thumbnail' attribute is a reference to a PNG file that is an image to be displayed when the user hovers over the node name in the node tray. This attribute is structured as follows: <package>/<thumbnail.png> where 'package' is the name of the package containing the thumbnail, and 'thumbnail.png' is the name of the PNG file.

Here is an example of a schema that specifies both a help file and a thumbnail attribute.

schemas:
  Tank:
    inputs: []
    outputs:
    - name: Generator
      description: Object generator
    tooltip: Abrams Tank AFV
    help: satrgb/Tank.md
    thumbnail: satrgb/Tank.png
    category: Generators
    color: "#cc1414"
    subcategory: AFV

The help file is called 'Tank.md' and it is stored in the 'satrgb' package. The thumbnail is 'Tank.png' and it is stored in the 'satrgb' package.

Channel Documentation Directory Changes

Channel developers must name their channel documentation file “channel.md“ and include it in the docs/ directory at the top level of the channel. Other assets can be included in that docs/ directory and will be referenced from the channel.md file. File references are relative to the docs/ directory so subdirectories can be included if desired. Here is a snippet showing an example output image reference in the channel.md file:

![Example Output](output/image.png)

The image will be located in docs/output/image.png of the channel repository.

Channel Structure

The channel directory structure supports the inclusion of documentation assets. Here is an example of the structure:

docs/
  |- channel.md
  |- image1.png
packages/
  |- satrgb/
    |- satrgb/
      |- docs/
        |- __init__.py
        |- node1.md
        |- node1.png
        |- node2.md
        |- node2.png

The top-level docs directory contains the channel’s documentation file, channel.md, along with any supporting assets required by the markdown file. The package documentation should be self-contained within the package, included in a packages/<package>/<package>/docs directory as shown above.

Note that the package docs directory must include an empty "__init__.py" file so that the Python package manager can find it.

Channel Deployment

The anadeploy process uses the anautils command to compile the documentation assets into a directory structure that can be used by the channel deployment service. The compiled assets directory looks like this:

docs/
  channel/
    |- channel.md
    |- image1.png
    |- video1.gif
  packages/
    |- anatools/
      |- docs/
    |- satrgb/
      |- docs/

Note the anautils command is automatically run by the anadeploy process so this information is for reference purposes only.

If there is something wrong with the documentation, such as a misspelled md or png file reference, then the channel deployment process will fail. It is a good idea to check the validity of your documentation before running anadeploy. To do this, you run the anautils command by hand as follows:

anautils --mode=help --output=/tmp/

This will check the validity of your links and compile the documentation into the temp directory. If there is something wrong with your documentation, you will receive an error message.


Nodes

A node is a discrete functional unit that is called by the interpreter at run time when the corresponding node in the graph file is processed. Nodes are written in Python and stored in Python modules. Node modules are collected together into Ana packages and stored in the appropriate Ana package directory, e.g. <channel-root>/packages/<package-name>/<package-name>/nodes/<node-module>.py

Nodes are Python classes that are derived from an abstract base class called “Node”. The Node base class is implemented in the anatools.lib.node module.

Here is a simple node definition:

from anatools.lib.node import Node

class MyNode(Node):
    def exec(self):
        return {}

In this example, the derived node class is called “MyNode”. Note that the Node base class implements an abstract public member function called “exec” that is called by the interpreter when the node is executed. The derived class overrides this function in order to implement its execution logic. The exec function always returns a dictionary which is the output of the node. In this case the exec function performs no work and returns an empty dictionary.

A node in a graph can receive input and produce output. This is done via input and output ports. Nodes implement ports as dictionaries.

Here is an example of a node that inputs two values, adds them together, and outputs the sum:

from anatools.lib.node import Node

class Add(Node):
    def exec(self):
        value1 = float(self.inputs["Value1"][0])
        value2 = float(self.inputs["Value2"][0])
        sum = value1 + value2
        return {"Sum": sum}

Input ports are implemented via the “inputs” dictionary which is accessible as an instance variable of the node. The dictionary key is the name of the input port. In this example, we have two input ports named “Value1” and “Value2”.

Since input ports can have multiple incoming links, the incoming values are stored in a list. In this example, we assume both input ports have a single incoming value so we use the zero index for each input port.
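
For instance, a node can operate on every incoming link by iterating over the whole list. The following is a minimal sketch (the SumAll node and its port names are hypothetical):

from anatools.lib.node import Node

class SumAll(Node):
    def exec(self):
        # each incoming link contributes one element to the inputs list
        values = [float(v) for v in self.inputs["Values"]]
        return {"Sum": sum(values)}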

Note that input variables should be explicitly converted to the expected data type (in this case float). This is because fixed input values entered into the GUI are stored and returned as text strings. Note that data type conversion errors may need to be accounted for.

Output port data is returned as a dictionary. Each entry in the dictionary corresponds to one of the node output ports. In the above example, the sum of the two input values is returned as output port “Sum”.

Nodes should perform error checking. If there is an unrecoverable error then a message should be generated and execution terminated. Here is an example of error checking in a node:

import sys
import logging

from anatools.lib.node import Node

logger = logging.getLogger(__name__)

class OnlyInteger(Node):
    def exec(self):
        try:
            an_int = int(self.inputs["an_int"][0])
        except ValueError as e:
            logger.error("Error converting port 'an_int' to type int", exc_info=e)
            sys.exit(1)
        return {}

Ana uses the standard Python logging facility for error messages. In this example, the input data type conversion is surrounded by a try/except. If a ValueError occurs then a message is logged and the application exits.

The default logging level is set to “ERROR”. When running Ana from the command line, this can be changed at runtime via the “--loglevel” switch.

When the interpreter is run interactively, error messages are printed to the console. Messages can also be optionally written to a log file via the “--logfile” command line switch. When the application is run in the cloud, ERROR and higher level messages are displayed in the web interface.

If an error occurs in a node and it is not caught then it will be caught by a last chance handler in the interpreter. In that case, an ERROR level message will be printed and execution will terminate.

Typical Validation Use Cases

This document describes best practices and typical use cases for writing graph validation rules. These examples specify validation rules that can be copy/pasted into a schema.

Since most graph fields allow either a value to be entered or a link to be connected, the most common rule is a compound “oneOf” rule that includes both a “type” rule and a “numLinks” rule. The following examples mostly follow this pattern.

String field with minimum and maximum lengths

The following validation rule specifies that a field must be a string with a minimum length of 1 character and a maximum length of 20 characters or it must be connected to exactly one link that will provide the value.

validation:
  oneOf:
    - type: string
      minLength: 1
      maxLength: 20
    - numLinks: one

Number field with minimum and maximum values

The following validation rule specifies that a field must be a number with a value between 0-1 or it must be connected to exactly one link that will provide the value.

validation:
  oneOf:
    - type: number
      minimum: 0
      maximum: 1.0
    - numLinks: one

Integer field with minimum and maximum values

The following validation rule specifies that a field must be an integer with a value between 0-255 or it must be connected to exactly one link that will provide the value.

validation:
  oneOf:
    - type: integer
      minimum: 0
      maximum: 255
    - numLinks: one

Field that takes exactly one link

The following validation rule specifies that the field must be connected to exactly one link. A value cannot be entered for the field.

validation:
  numLinks: one

Field that takes one or more links

The following validation rule specifies that the field must be connected to one or more links. A value cannot be entered for the field.

validation:
  numLinks: oneOrMany

Fixed length array of integer values

The following validation rule specifies that the field must contain an array of exactly three elements, each of which must be an integer in the range 0-255. Alternatively, the field may be connected to a link that will provide the array.

validation:
  oneOf:
    - type: array
      items:
        type: integer
        minimum: 0
        maximum: 255
      minItems: 3
      maxItems: 3
    - numLinks: one

Fixed length array with different element types

The following validation rule is for an array that contains three integers ranging 0-255, and a fourth value which is a number ranging from 0-1.0. Alternatively, the field may be connected to a link that will provide the array.

validation:
  oneOf:
    - type: array
      prefixItems:
      - type: integer
        minimum: 0
        maximum: 255
      - type: integer
        minimum: 0
        maximum: 255
      - type: integer
        minimum: 0
        maximum: 255
      - type: number
        minimum: 0
        maximum: 1.0
    - numLinks: one

Setting Up the Development Environment

Creating a custom channel for Rendered.ai is a Python-based development task. The channels are Docker containers and the development process is simplified by Rendered.ai-maintained Development Containers that can be used with VSCode's Remote-Containers extension. This document will take you through installing Docker and the VSCode Remote-Containers extension so you can begin channel development on your local machine.

This process has been tested on several Linux distributions and Windows 11 with WSL2; there have been reported issues with M1 Macbooks with Docker – if you do have issues we recommend using Remote Development with AWS EC2 for a better experience.

If you run into any issues or need additional help, email the Support team at support@rendered.ai.

Installing Docker

To install Docker for your local machine, follow Docker's official Install Docker Engine instructions.

After installing, we recommend running the hello-world test command to ensure Docker is running as expected. The following command will pull a test Docker image and print out a hello world message.

docker run hello-world

Installing VSCode and Extensions

Visual Studio Code is a free code editor that allows users to install extensions from an extension marketplace. The Rendered.ai Development Environment uses several extensions from the marketplace to simplify the developer experience. VSCode can be installed by following the official installation guide at https://code.visualstudio.com/Download.

Extension Marketplace

To reach the Extension Marketplace, click the Extensions symbol on the left-hand side of the window.

Next we’ll install two extensions: Docker and Dev Containers. The Docker extension can be helpful in seeing the Docker images and Docker containers that are currently running. The Dev Containers extension allows us to use the Docker container as a full-featured development environment from within VSCode – this helps ensure that the environment we are developing in is as close as possible to what gets deployed to the platform.

After we have the extensions installed, we are ready to pull the code and start our development.

Testing Dev Containers

We recommend following the Run and Deploy the Toybox Channel tutorial to ensure your development environment is configured correctly. This will take you through cloning Rendered.ai’s Example channel, starting the Development Container, then testing the execution of the channel.

Troubleshooting

Here we have compiled a list of common issues users may run into while configuring their development environment.

Docker not found

When starting the Development container in VSCode the following message may appear.

Start the Docker Desktop application.


Allow your non-root user to execute docker

When starting the Development Container in VSCode the following message may appear.

This is due to VSCode having insufficient permissions to access the docker daemon on your behalf. You will need to ensure that the user has access to the docker daemon; on Linux operating systems you can add the current user to the docker group:

sudo groupadd docker
sudo usermod -aG docker $USER

Important: Log out and log back in again to refresh group membership.

Failure to pull Public Image

When starting the Development Container in VSCode the following message may appear.


Local Development With NVIDIA GPUs

Rendering images can be accelerated with GPU compute. In the Rendered.ai cloud platform, we use GPU compute on deployed channels to improve channel runtimes. Linux developers who have a NVIDIA GPU on their local machine can use the GPU when developing their channels by installing the NVIDIA Container toolkit and configuring a devcontainer.json file in the codebase. Windows users will only need to configure the devcontainer.json file in the codebase.

NVIDIA Container Toolkit

The NVIDIA Container Toolkit allows users to build and run GPU accelerated Docker containers. More information about the library can be found at NVIDIA/nvidia-docker.

To install the toolkit you’ll need to first install the NVIDIA driver and Docker engine for your Linux distribution. To install the NVIDIA Container Toolkit, follow the official NVIDIA Container Toolkit installation instructions.

After installing the NVIDIA Container Toolkit we should be able to test whether Docker has access to your GPU with the following command. This will download an officially supported NVIDIA Docker image and run the 'nvidia-smi' command to query the device.

Command

docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi

Example Output

test@test ~ % docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi
Unable to find image 'nvidia/cuda:11.0.3-base-ubuntu20.04' locally
11.0.3-base-ubuntu20.04: Pulling from nvidia/cuda
d5fd17ec1767: Already exists 
ea7643e57386: Pull complete 
622a04926279: Pull complete 
18fcb7509e42: Pull complete 
21e5db7c1fa2: Pull complete 
Digest: sha256:1db9418b1c9070cdcbd2d0d9980b52bd5cd20216265405fdb7e089c7ff96a494
Status: Downloaded newer image for nvidia/cuda:11.0.3-base-ubuntu20.04
Wed Jun  8 16:26:02 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.129.06   Driver Version: 470.129.06   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:09:00.0  On |                  N/A |
|  0%   41C    P8    12W / 180W |    514MiB /  8116MiB |     10%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

Next we will configure our VSCode Dev Container startup instructions to use the GPU.

VSCode Development Container

VSCode Development Containers use a devcontainer.json configuration file to instruct VSCode how to start up and run the Docker container. In the devcontainer.json file, we need to modify the runArgs list to include the “--gpus all“ part of our command above. Take a look at the example below:

"runArgs": ["--network","host","-v","/var/run/docker.sock:/var/run/docker.sock", "--privileged=true","--gpus","all"],

After we have the change, we can start or restart the Developer Container. In VSCode, press F1 to bring up the command palette then select Remote-Containers: Rebuild and Reopen in Container. We can then test that our Developer Container has access to the GPU by running the nvidia-smi command from a terminal within the container.

(anatools) anadev@test:/workspaces/example$ nvidia-smi
Wed Jun  8 18:06:36 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.129.06   Driver Version: 470.129.06   CUDA Version: 11.4     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:09:00.0  On |                  N/A |
|  0%   45C    P8    12W / 180W |    575MiB /  8116MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
(anatools) anadev@test:/workspaces/example$ 


Remote Development With AWS EC2

This guide will walk the reader through setting up a remote instance in AWS EC2 and connecting with that instance with the VSCode Remote - SSH extension. This allows users who do not have a compatible local environment to do channel development and can be a good way to ensure your channel will run correctly after being deployed to the platform.

Configuring an EC2 Instance

Selecting the right EC2 image, instance, disk space, etc can be a point of confusion. To start configuring a new instance for channel development, sign in to your AWS account, set the region you want to use for the compute, and go to the EC2 dashboard.

Select an AMI

The AMI Catalog has several pre-configured images that will work for channel development. The instance operating system needs to be Linux-based with NVIDIA drivers and Docker available. In this example, we will use the Ubuntu Deep Learning Base AMI. To find this AMI in the EC2 dashboard, look under Images, navigate to the AMI Catalog, then search for “nvidia”. After selecting the AMI, click the Launch Instance with AMI button at the top to move on in the setup wizard.

In the “Launch an instance” dialogue, give the instance a name, such as “Channel Development”, and select an instance type. We recommend selecting an instance that has at least one GPU, for example, an instance from the p or g family.

The majority of channels launched on the Rendered.ai platform use the AWS g4dn.2xlarge instance type.

Next, add a Key Pair so you can connect to the instance from your local machine. If you have not set up a Key Pair in AWS, follow the dialog using all the default options. Download the .pem file and do not lose it! This is your key to accessing the instance once it's running.

Finally, you might want to add extra disk space for storing generated images and videos. We recommend adding 100 GB of general purpose storage. AWS has a procedure to move to an instance with more storage later, but it's easier to add it here.

Now you can launch the instance. The default values for the remaining parameters are appropriate for channel development (generic security group, allow SSH traffic).

Launch the Instance

After completing the wizard, the instance will begin the launch process. This can take some time depending on the instance type and availability. You will need to wait until your instance is in the Running state with “2/2 checks passed” for the status check.

Billing is calculated from when the instance starts running. Make sure to stop the instance when you are done!

Once the instance is in the “Running” state you can ssh to it using the Public IPv4 address and DNS.

In the above screenshot, the public DNS is circled. Copy this for the configuration of VSCode below. For information about more ways to connect to the instance, including from the command line, follow the Connect button.

Restrict the Permissions of your PEM file

The local identity file must not be readable by others. On Mac and Linux, the permissions can be set in a shell as follows: chmod 400 <key name>.pem

For Windows developers, we recommend installing Ubuntu in WSL, the Windows Subsystem for Linux.

  • From PowerShell, run "wsl --install -d ubuntu"; you’ll have to create a username and password for the new OS.

  • Find and save the *.pem file from earlier into a directory that can be accessed by WSL Ubuntu.

  • Update the permissions of the .pem file by right-clicking on it, selecting "Properties", and checking "Read-only". Next, under "Security" > "Advanced", select "Disable Inheritance" and, for the users, "Remove All". Click "Apply"; the group or user names list should now be empty. Go back to "Advanced", add your Windows user, and apply. Finally, in the permissions, give all permissions to your user.

Connect to the Instance in VSCode

Open VSCode and install the “Remote - SSH” and “Remote - Containers” extensions. In the following screenshot, the extensions menu is active and the extensions are found by searching for the keyword “remote”.

To connect to the AMI, update your ssh config file. This can be done right in VS Code by clicking the green box on the bottom left and then “Open SSH Configuration File…”.

This file is in your home directory, ~/.ssh/config. On Windows, you need to restrict the permissions on the ssh config file as described above for the .pem file.

Add an entry to the ssh configuration file as follows:

Host <ami name>
    HostName <public DNS>
    IdentityFile <key location>/<key name>.pem
    User ec2-user

All the items that are surrounded by angle brackets need to be replaced; a filled-in example follows the list below.

  • <ami name> is the name of the host, e.g. ‘anadevami’. This can be whatever reminds you of the entry when connecting

  • <public DNS> is the URL from the AWS dashboard above, it takes the form 'ec2-xx-xx-xx-xx.us-xxx-x.compute.amazonaws.com'

  • <key name> is the name you gave the key pair and <key location> is where the .pem file is stored. This might be your Downloads folder; a safer place is the .aws directory in your home directory.
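For example, a filled-in entry might look like the following; the host alias, public DNS, and key file name here are placeholders rather than values from a real account:

Host anadevami
    HostName ec2-12-34-56-78.us-west-2.compute.amazonaws.com
    IdentityFile ~/.aws/channel-dev.pem
    User ec2-user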

Now you can connect to the instance by using the “Connect to Host…” command in VSCode.

Clicking “Connect to Host…” will then list all the entries in your ssh config file. Select the one you just created.

Watch the green icon in the lower left to verify VSCode is connected to the host. If you have trouble connecting, the logs can be found in VSCode to track down any issues.

Now you can clone a channel repository and begin development as described in the next section, An Example Channel - Toybox.

An Example Channel - Toybox

The toybox channel is an open-source channel maintained by Rendered.ai that is used for educating new platform users on how they can develop and deploy their own channels to generate synthetic data. The channel creates a 3D scene with toys dropped into different containers on a range of floor backgrounds. The imaging sensor is moved at random angles and lighting is slightly adjusted to demonstrate a variety of images that can be generated with a simple channel.

The toybox channel also introduces users to concepts such as 3D modelling, scene composition, physics, and rendering, all of which are important for dataset generation using Rendered.ai.

The following tutorials may be useful to new users who are looking to expand their knowledge on Channel Development on the Rendered.ai platform using the example channel.

Deploying a Channel

Channels can be deployed to the Rendered.ai platform using the anadeploy command. Once deployed, the channel is accessible through the Rendered.ai platform website to create graphs and datasets.

The anadeploy command must be called from within the VSCode Development Container; it requires a valid channel file, access to Docker, and Rendered.ai credentials.

This command simplifies the process of building and deploying a channel by:

  1. Specifying which remote channel to deploy to, or creating a new remote channel

  2. Creating the docker context

  3. Building the docker image

  4. Uploading the docker image to the platform

  5. Displaying the status of the channel deployment

Once this process is complete, the channel will be available on the platform to add to workspaces, create graphs and run jobs that create datasets. To run the command, type:
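anadeploy --channel example.yml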

The anadeploy command has the following parameters:

--channel (optional) If not provided, will attempt to find a valid channel file in the current directory or /workspaces directory.

--channelId (optional) If specified, will deploy the local channel to the remote channel with the given channelId.

--environment (optional) Specifies the environment to deploy the channel to. Defaults to prod.

--email (optional) If not provided, will prompt user for Rendered.ai account email.

--password (optional) If not provided, will prompt user for Rendered.ai account password.

--verbose (optional) If set, will provide more detailed messages during the deployment process.

The following sections will take you through deploying a new channel, deploying over an existing channel and other helpful utilities to have during channel deployment.

Deploying to a New Channel

Deploying a new channel will create the backend infrastructure necessary to create new graphs for the channel and to run jobs using the web interface.

Step 1: Enter Rendered.ai Credentials The first thing you’ll need to enter when running anadeploy is your Rendered.ai credentials. If you do not have Rendered.ai credentials, you can get access to an account by filling out the getting started form at https://rendered.ai/getstarted.html.

Step 2: Select Create new managed channel To deploy the local channel to a new remote Channel, you need to select the “Create new managed channel“ option when prompted. Your organization may not own any managed channels, so your only choice may be to create a new managed channel, as in the case below.

Step 3: Select the organization to create the managed channel in The next step is to select the organization that will own the managed channel. Remember that anybody who is a member of that organization has access to the channel and can redeploy over it.

Step 4: Confirm your selection The next step is to confirm your selection. This is a chance to back out of channel deployment if a mistake was made; it gives the user another chance to review that the correct channel file and organization have been selected.

Step 5: Set the channel name Set the name of the remote channel. If the local channel name isn’t already taken by a channel in the organization, the prompt can be left blank to use the local channel name.

Step 6: Build and upload Docker image After confirming your selection, the Docker image will be built and uploaded to the platform automatically. This can take some time if this is the first time you have deployed a channel.

Step 7: Channel Deployment Once the Docker image has been uploaded, the platform will run a process known as Channel Deployment which will do a series of checks of the Docker image to ensure it can be used on the platform. This includes transferring volume data, running interface checks, registering node and channel data, uploading channel documentation and annotation mapping files.

Deploying to an Existing Channel

Sometimes it’s important to push changes to a channel that has already been deployed to add new features or fix bugs. You can deploy over an existing channel using anadeploy by selecting which remote channel the local image is deployed to.

Step 1: Enter Rendered.ai Credentials The first thing you’ll need to enter when running anadeploy is your Rendered.ai credentials. If you do not have Rendered.ai credentials, you can get access to an account by filling out the waitlist form at https://rendered.ai/waitlist.html.

Step 2: Choose which Channel to deploy to To deploy the local channel to an existing remote Channel, make a selection when prompted.

It’s important to note which organization owns the channel you are deploying to; ensure you are not deploying over another channel unintentionally. Step 3: Confirm your selection The next step is to confirm your selection. This is a chance to back out of channel deployment if a mistake was made; it gives the user another chance to review that the correct channel file and remote channel have been selected.

Step 4: Build and upload Docker image After confirming your selection, the Docker image will be built and uploaded to the platform automatically. This can take some time if this is the first time you have deployed a channel.

Step 5: Platform channel deployment checks Once the Docker image has been uploaded, the Platform will run a process known as Channel Deployment which will do a series of checks of the Docker image to ensure it can be used on the Platform. This includes transferring volume data, running interface checks, registering nodes and channel data, uploading channel documentation and annotation mapping files.

Deployment Utilities

The anautils command is available in the dev container to provide the user with additional information that can be useful when deploying a channel. An example call for this command is shown below:
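anautils --channel example.yml --mode schema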

The anautils command has the following parameters:

--channel (optional) If not provided, will attempt to find a valid channel file in the current directory or /workspaces directory.

--mode (optional) If not provided, will run in the “schema“ mode. The modes are:

  • schema: generates a schema file and displays node categories, subcategories and colors.

  • docs: generates starter documentation for the channel which includes a markdown formatted table of nodes in the channel.

--output (optional) Determines where output files are generated.

Generating the Channel Schema

The anautils schema mode will create two outputs: one visible in the terminal that shows the category, subcategory and color of the nodes, and a JSON file that defines the schema for each of the nodes for the frontend. The terminal output for the example channel is shown below:

This command is run during Channel Deployment to generate node schemas for the backend databases. It can also be helpful to the user to ensure their nodes are structured as they expect before deployment.

Generating Starter Documentation

The Rendered.ai platform supports a single Markdown file for channel documentation. This is an opportunity to give channel users an explanation of what use cases the channel is solving, available nodes in the channel and other information such as graph requirements or execution runtimes.

The channel documentation can be included in the Docker image during channel deployment. To include it in the Docker image, the .md file must be located in a docs/ directory in the channel workspace. In the example channel this would be located at /workspaces/example/docs/.

To generate the channel documentation, run the following anautils command:
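anautils --channel example.yml --mode docs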





anadeploy --channel example.yml
(anatools) anadev@test:/workspaces/example$ anadeploy
Using channelfile found at ./example.yml.
If this is the wrong channel, specify a channelfile using the --channel argument.
Enter your credentials for the Rendered.ai Platform.
Email: test@rendered.ai
Password: ********
(anatools) anadev@test:/workspaces/example$ anadeploy
Using channelfile found at ./example.yml.
If this is the wrong channel, specify a channelfile using the --channel argument.
Enter your credentials for the Rendered.ai Platform.
Email: test@rendered.ai
Password: ********
Please select one of the following options:
        [0]   Create a new managed channel.
Enter your choice: 0
(anatools) anadev@test:/workspaces/example$ anadeploy
Using channelfile found at ./example.yml.
If this is the wrong channel, specify a channelfile using the --channel argument.
Enter your credentials for the Rendered.ai Platform.
Email: test@rendered.ai
Password: ********
Please select one of the following options:
        [0]   Create a new managed channel.
Enter your choice: 0
Select an organization to create a new managed channel in:
        [0]   default
Enter your choice: 0
(anatools) anadev@test:/workspaces/example$ anadeploy
Using channelfile found at ./example.yml.
If this is the wrong channel, specify a channelfile using the --channel argument.
Enter your credentials for the Rendered.ai Platform.
Email: test@rendered.ai
Password: ********
Please select one of the following options:
        [0]   Create a new managed channel.
Enter your choice: 0
Select an organization to create a new managed channel in:
        [0]   default
Enter your choice: 0
Creating a new channel using the ./example.yml channel file in default organization. Continue (y/n)? y
(anatools) anadev@test:/workspaces/example$ anadeploy
Using channelfile found at ./example.yml.
If this is the wrong channel, specify a channelfile using the --channel argument.
Enter your credentials for the Rendered.ai Platform.
Email: test@rendered.ai
Password: ********
Please select one of the following options:
        [0]   Create a new managed channel.
Enter your choice: 0
Select an organization to create a new managed channel in:
        [0]   default
Enter your choice: 0
Creating a new channel using the ./example.yml channel file in default organization. Continue (y/n)? y
Please provide a name for the channel or leave blank for 'example': 
(anatools) anadev@test:/workspaces/example$ anadeploy
Using channelfile found at ./example.yml.
If this is the wrong channel, specify a channelfile using the --channel argument.
Enter your credentials for the Rendered.ai Platform.
Email: test@rendered.ai
Password: ********
Please select one of the following options:
        [0]   Create a new managed channel.
Enter your choice: 0
Select an organization to create a new managed channel in:
        [0]   default
Enter your choice: 0
Creating a new channel using the ./example.yml channel file in default organization. Continue (y/n)? y
Please provide a name for the channel or leave blank for 'example': 
Docker image "example" will be deployed to the "example" channel.
[66%] Pushing "example" to "example" 
(anatools) anadev@test:/workspaces/example$ anadeploy
Using channelfile found at ./example.yml.
If this is the wrong channel, specify a channelfile using the --channel argument.
Enter your credentials for the Rendered.ai Platform.
Email: test@rendered.ai
Password: ********
Please select one of the following options:
        [0]   Create a new managed channel.
Enter your choice: 0
Select an organization to create a new managed channel in:
        [0]   default
Enter your choice: 0
Creating a new channel using the ./example.yml channel file in default organization. Continue (y/n)? y
Please provide a name for the channel or leave blank for 'example': 
Step: 6/6
State: Channel Deployment Complete
Message: The channel has been fully deployed and is ready for use.
(anatools) anadev@test:/workspaces/example$ anadeploy
Using channelfile found at ./example.yml.
If this is the wrong channel, specify a channelfile using the --channel argument.
Enter your credentials for the Rendered.ai Platform.
Email: test@rendered.ai
Password: ********
(anatools) anadev@test:/workspaces/example$ anadeploy
Using channelfile found at ./example.yml.
If this is the wrong channel, specify a channelfile using the --channel argument.
Enter your credentials for the Rendered.ai Platform.
Email: test@rendered.ai
Password: ********
Please select one of the following options:
        [0]   Create a new managed channel.
        [1]   Deploy to example channel in the default organization.
Enter your choice: 
(anatools) anadev@test:/workspaces/example$ anadeploy
Using channelfile found at ./example.yml.
If this is the wrong channel, specify a channelfile using the --channel argument.
Enter your credentials for the Rendered.ai Platform.
Email: test@rendered.ai
Password: ********
Please select one of the following options:
        [0]   Create a new managed channel.
        [1]   Deploy to example channel in the default organization.
Enter your choice: 1
Deploying ./example.yml channel to example channel in default organization. Continue (y/n)? y
(anatools) anadev@test:/workspaces/example$ anadeploy
Using channelfile found at ./example.yml.
If this is the wrong channel, specify a channelfile using the --channel argument.
Enter your credentials for the Rendered.ai Platform.
Email: test@rendered.ai
Password: ********
Please select one of the following options:
        [0]   Create a new managed channel.
        [1]   Deploy to example channel in the default organization.
Enter your choice: 1
Deploying ./example.yml channel to example channel in default organization. Continue (y/n)? 
Docker image "example" will be deployed to the "example" channel.
[66%] Pushing "example" to "example" 
(anatools) anadev@test:/workspaces/example$ anadeploy
Using channelfile found at ./example.yml.
If this is the wrong channel, specify a channelfile using the --channel argument.
Enter your credentials for the Rendered.ai Platform.
Email: test@rendered.ai
Password: ********
Please select one of the following options:
        [0]   Create a new managed channel.
        [1]   Deploy to example channel in the default organization.
Enter your choice: 1
Deploying ./example.yml channel to example channel in default organization. Continue (y/n)? 
Step: 6/6
State: Channel Deployment Complete
Message: The channel has been fully deployed and is ready for use.
anautils --channel example.yml --mode schema
(anatools) anadev@test:/workspaces/example$ anautils --mode schema
Using channelfile found at ./example.yml.
If this is the wrong channel, specify a channelfile using the --channel argument.
The Node Menu for this Channel will be as follows on the Rendered.ai Platform:

Objects
  Generators
    - Bubbles
    - Yo-yo
    - Skateboard
    - Playdough
    - Rubik's Cube
    - Mix Cube
    - Container
    - Floor
Modifiers
  Color
    - Color Variation
  Physics
    - Drop Objects
  Placement
    - Random Placement
  Branch
    - Weight
Render
  Image
    - Render
Values
  Generators
    - Random Integer
anautils --channel example.yml --mode docs
Run and Deploy the Toybox Channel
Add a Modifier Node
Add a Generator Node
https://rendered.ai/getstarted.html
https://rendered.ai/waitlist.html
Deploying to a New Channel
Deploying to an Existing Channel
Deployment Utilities
Generating the Channel Schema
Generating Starter Documentation

Run and Deploy the Toybox Channel

The following sections will take you through cloning the channel, running the channel locally and finally deploying the channel to the Rendered.ai platform.

Running the Toybox Channel

To run the toybox channel in a local or cloud development environment, we need to first fetch the toybox channel codebase from Github.

Cloning the Toybox Channel

The toybox channel code is shared as open source on Github.

To clone the channel repository, navigate to a directory of your choosing and call the following command in the terminal:
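git clone https://github.com/Rendered-ai/toybox.git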

Opening the Development Container in VSCode

Open a new Window in VSCode and select Open Folder, navigate to the local directory that you had cloned the repository to and select Open. You should be greeted with a similar screen to the one shown below:

Once in VSCode, we can start our Development Container by pressing F1 (also CTRL+SHIFT+P or navigate to View > Command Palette…) then selecting Remote-Containers: Rebuild and Reopen in Container.

Note: the first time this is called it can take a while due to the size of the base Docker image.

Once the remote container is built, the VSCode window should look similar to below. Notice the green Dev Container indicator in the bottom-left-hand corner of the window.

Congratulations, you are now in the development container.

Running the Toybox Channel

To run the toybox channel we first need to mount the necessary volumes, cloud-hosted data repositories for content that will be used when running the channel. Rendered.ai provides the anamount utility to make this easy.

The code block below shows how to mount the toybox volume using the terminal:
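(anatools) anadev@test:/workspaces/toybox$ anamount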

You may notice that a new directory called data has been added to your channel directory. In this directory we find a volumes/e66b164e-8796-48aa-8597-636d85bec240 directory that has a few files in it.

These files are static assets that the toybox package uses when generating the scenes and are necessary to run the toybox channel successfully. Once we have successfully mounted the necessary volumes, we can run the channel.

Note that the anamount process is required to stay open to keep the volume mounted. You’ll need to start a new terminal for executing the ana command.

To run the channel we use the ana command.
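ana --channel toybox --graph graphs/default.yml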

This command will generate a new run of the default graph using the toybox channel. The output of the new run can be found in the /output directory.

The toybox channel will produce a new file in the annotations, images, masks and metadata directories on each run. Running a channel multiple times will produce a dataset.

Now that we have successfully run the toybox channel, let’s deploy it to the platform.

Deploying the Toybox Channel

To deploy the toybox channel to the platform we need to run the anadeploy command. This will create a new remote channel entry on the platform and build and deploy a Docker image of the channel. After this process, we will be able to use our new channel in the Rendered.ai platform web interface to create graphs and datasets.
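From the channel directory inside the Dev Container, the command is simply:

anadeploy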

Follow the prompts in the code block below to create a new channel in your Organization named custom_channel.

After the deployment is complete, we can navigate to the platform website to add our channel to a Workspace.

Using the new Toybox Channel

We can see that we have a new channel deployed called custom_channel that is owned by our Organization. Now to use the channel we need to add it to a Workspace. Navigate to the Workspaces tab and select a Workspace.

Adding Channel to Workspace

Using the Workspace's three-button ellipsis menu, we can add channels to our Workspace by first clicking on the Resources tab.

Then in the Resources pop-up, click on the Channels tab, select the channel you want to add, click Add, and then click Save.

Creating a new Graph

To use this channel we need to create a new graph using the nodes from the channel. Select New Graph on the Workspace page and fill in the Name and channel fields. In this case we are creating a new graph called test using the custom_channel channel.

Select Create; this will create the new graph and navigate you to it. Once inside you’ll notice that there are no nodes in the graph yet – this is because all new channels need to have their default graphs set. So first we’ll build the graph we want using the nodes from the expanded panel on the left, then use Preview to make sure it works.

Set the Channel’s Default Graph

We can set this graph as the default graph by using the anatools SDK. If we go back into our Dev Container in VSCode, we can create a new terminal and use the interactive python to set the default graph for the channel.

We need three parameters for setting the default graph for a channel; a short sketch of the call follows the list below.

  • channelId - this can be retrieved from get_managed_channels or the GUI.

  • workspaceId - the workspace where the graph exists, can be retrieved from GUI url.

  • graphId - the graph that you’d like to set as the default graph, can be retrieved from GUI url.
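A minimal sketch of the call, mirroring the interactive session shown later on this page; the IDs below are placeholders that you would replace with your own values:

import anatools

# Prompts for your Rendered.ai credentials and lists your organizations/workspaces
client = anatools.client()

# Set the default graph for the channel; replace the placeholder IDs
client.set_default_graph(
    channelId='<channelId>',        # from client.get_managed_channels() or the GUI
    workspaceId='<workspaceId>',    # from the Workspace page URL
    graphId='<graphId>')            # from the Graph page URL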

By creating a new graph from the Workspace page in the GUI, we’ll see that the nodes are now in place.

Summary

At this point, you have successfully cloned the toybox channel, deployed it to your organization as a new channel, and created a graph that can be used to run jobs and as a starting point for more graphs.

Synthetic data engineering on the Rendered.ai platform begins with creating a channel. Getting started can be done by cloning Rendered.ai’s toybox channel and deploying it to the platform. At the end of this tutorial, you will have a channel that is owned by your Organization and available to your Workspaces at https://deckard.rendered.ai. You can use this channel to create new datasets, but more importantly you’ll be able to make changes to the channel to customize it to your liking so that you can generate the synthetic datasets you need.

Github Repository: https://github.com/Rendered-ai/toybox

The repository includes a channel file that describes the channel, the toybox package that defines generator and modifier nodes used by the channel, and a .devcontainer folder that is used for the VSCode Remote Containers development environment. To learn more about the structure and architecture of this codebase, visit the Ana Software Architecture section of the Development Guides.

After cloning the codebase, we’ll use VSCode’s Remote Containers extension to build and run an interactive Docker container that has all the system libraries required to run the channel. If you have not set up VSCode Remote Containers before, refer to the VSCode Development Container guide.

The anamount command requires Rendered.ai platform credentials. If you do not have an account, register for an account at rendered.ai. If you are still having issues accessing the volume, contact admin@rendered.ai for help.

Navigate to https://deckard.rendered.ai and log in. Once signed in, we can view the channels available to our Organization by navigating to the Organizations page in the top-right under the user icon. On the Organizations page, navigate to the Channels tab.

git clone https://github.com/Rendered-ai/toybox.git
(anatools) anadev@test:/workspaces/toybox$ anamount
Using channelfile found at ./toybox.yml.
If this is the wrong channel, specify a channelfile using the --channel argument.
Enter your credentials for the Rendered.ai Platform.
Email: test@rendered.ai
Password: ************
Mounting volume e66b164e-8796-48aa-8597-636d85bec240...complete!
Remounting volumes in 3420s...
(anatools) anadev@test:/workspaces/toybox$ ana --channel toybox --graph graphs/default.yml
(anatools) anadev@test:/workspaces/toybox$ anadeploy                    
Using channelfile found at ./toybox.yml.
If this is the wrong channel, specify a channelfile using the --channel argument.
Enter your credentials for the Rendered.ai Development Platform.
Email: test@rendered.ai
Password: ********
Please select one of the following options:
        [0]   Create a new managed channel.
Enter your choice: 0
Select an organization to create a new managed channel in:
        [0]   default
Enter your choice: 0
Creating a new channel using the ./toybox.yml channel file in default organization. Continue (y/n)? y
Please provide a name for the channel or leave blank for 'toybox':  custom_channel
Step: 6/6
State: Channel Deployment Complete
Message: The channel has been fully deployed and is ready for use.
(anatools) anadev@test:/workspaces/toybox$ python
Python 3.7.7 (default, May  7 2020, 21:25:33) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import anatools
>>> client = anatools.client()
Enter your credentials for the Rendered.ai  Platform.
Email: test@rendered.ai
Password: ********
These are your organizations and workspaces:          
    default Organization                          fca0ef6b-1863-42f4-a2dc-ddd6c248dc95              
        Toybox                                    6cc85813-844c-4a98-9977-c74c047db3b4              
Signed into Rendered.ai Development Platform with test@rendered.ai
The current workspace is: 6cc85813-844c-4a98-9977-c74c047db3b4
>>> client.get_managed_channels()
[{'channelId': '517c1c59-ee31-4831-8fd0-d9a02f6baf80', 'organizationId': 'fca0ef6b-1863-42f4-a2dc-ddd6c248dc95', 'name': 'custom_channel', 'instanceType': 'p2.xlarge', 'volumes': ['3cabcc67-a398-4bee-aa1d-ecf67a72760f'], 'timeout': 120, 'interfaceVersion': 1, 'createdAt': '2021-12-16T00:13:17.027Z', 'updatedAt': '2022-05-19T22:57:40.279Z', 'organizations': [{'organizationId': 'fca0ef6b-1863-42f4-a2dc-ddd6c248dc95', 'name': 'default'}]}]
>>> client.set_default_graph(channelId='517c1c59-ee31-4831-8fd0-d9a02f6baf80',workspaceId='6cc85813-844c-4a98-9977-c74c047db3b4', graphId='8Hj5xIfo56FgZ6JVmuif')
True
>>> 

Add a Generator Node

Preparing the Blender File

Collection Structure

In Blender, we need to create a Collection (white box) and Parent Object (orange triangle) that share the same name. In this case, we want to call our object Spaceship, so we name both the Collection under Scene Collection and Object Spaceship. Note that we can still have other objects in the Blender file (Camera, Plane, Plane.001) that are not part of our Collection and thus will not be loaded into the scene when the Generator node is called.

Object Properties

Next we’ll ensure that some of the Blender object properties are configured so that the object is compatible with the channel. Things to consider in this step are object placement, size, physics, and materials.

Placement - We want to ensure the object’s center of mass is centered at 0,0,0 in the X, Y and Z coordinates. For the toybox channel this doesn’t need to be perfect but if it is too far off, the object can fall outside of the container. If your object location is not at [0,0,0] when centered, select the object and apply the Location translation in your 3D Viewport by selecting Object > Apply > Location.

Rotation - Sometimes we care about object rotation so we know the object’s orientation when its loaded into the scene. In the toybox channel we randomize this parameter so it doesn’t matter so much here.

Scale - Because the toybox channel objects are toys, we want to scale new objects added to the channel to be toy-sized. Ensure you double-check your units (in this example we are using Metric > Centimeters), and make sure the scale of your object is configured correctly. The measure tool can be helpful. After scaling, if your object isn’t set to [1,1,1] for the scale, select the object and apply the Scale transformation in your 3D Viewport by selecting Object > Apply > Scale.

Object Physics - For this channel we use gravity to drop the objects. We’ll want to enable the Rigid Body Physics under the Physics Properties tab.

Materials - Sometimes we care about the materials for an object, for example the Color Variation node that sets the color of a material is looking for specific property names. For this example we will ignore any material modifications.

Now that we have the object file configured, we can go ahead and save it. For this tutorial, we are saving our file as Spaceship.blend to keep consistent with the object name.

Adding the Object to a Volume

Creating the Volume

To create a volume on the platform we will use the create_managed_volume() call in the anatools SDK.

(anatools) anadev@test:/workspaces/toybox$ python
Python 3.7.7 (default, May  7 2020, 21:25:33) 
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import anatools
>>> client = anatools.client()
Enter your credentials for the Rendered.ai Platform.
Email: test@rendered.ai
Password: ********
These are your organizations and workspaces:            
    default Organization                          fca0ef6b-1863-42f4-a2dc-ddd6c248dc95              
        Toybox                                    6cc85813-844c-4a98-9977-c74c047db3b4                          
Signed into Rendered.ai Platform with test@rendered.ai
The current workspace is: 6cc85813-844c-4a98-9977-c74c047db3b4
>>> client.create_managed_volume(name='custom', organizationId='fca0ef6b-1863-42f4-a2dc-ddd6c248dc95')
'3cabcc67-a398-4bee-aa1d-ecf67a72760f'

The volumeId is returned from the create_managed_volume() call; we will use this value in the next step.

Mounting the Volume

To mount the volume, we’ll first want to add the volume to the toybox package’s packages.yml file at toybox/packages/toybox/toybox/packages.yml. Under the volumes section we will add our new custom volume and under the objects section we will add our Spaceship Blender file.

packages.yml

volumes:
  toybox: 'e66b164e-8796-48aa-8597-636d85bec240'
  custom: '3cabcc67-a398-4bee-aa1d-ecf67a72760f'

objects:
  Spaceship:
    filename: custom:Spaceship.blend

Uploading the Object

Once we have updated the packages.yml file we can re-mount our channel using anamount.

(anatools) anadev@test:/workspaces/toybox$ anamount
Using channelfile found at ./toybox.yml.
If this is the wrong channel, specify a channelfile using the --channel argument.
Enter your credentials for the Rendered.ai Platform.
Email: test@rendered.ai
Password: ********
Mounting volume 3cabcc67-a398-4bee-aa1d-ecf67a72760f...complete!
Mounting volume e66b164e-8796-48aa-8597-636d85bec240...complete!

Notice that after mounting we see two directories under the toybox/data/volumes/ directory.

To add a file to this volume we can just drag and drop the file into the volume directory in the VSCode Explorer.

Adding the Generator Code

Now that we have our data in the volume, we can add the code to load and use the new object in the channel. To do this we need to add a new Generator node for the object, defining the loading code in the object_generators.py and the node schema in object_generators.yml.

object_generators.py

class Spaceship(Node):
    """
    A class to represent the Spaceship node, a node that instantiates a generator for the Spaceship object.
    """

    def exec(self):
        logger.info("Executing {}".format(self.name))
        return {"Spaceship Generator": get_blendfile_generator("toybox", ToyboxChannelObject, "Spaceship")}

object_generators.yml

schemas:
  Spaceship:
    alias: Spaceship
    inputs: []
    outputs:
    - name: Spaceship Generator
      description: Spaceship Object
    tooltip: Generator for the Spaceship Object
    category: Objects
    subcategory: Generators
    color: "#246BB3"

Once we have updated these nodes, we can go ahead and test the implementation to ensure the changes worked as expected.

Testing Locally

To test these code changes, we’ll add the Spaceship object to our default graph and run the ana command. The changes to the toybox/graphs/default.yml file are shown below:

default.yml

nodes:
  Spaceship:
    nodeClass: Spaceship
    
  ObjectPlacement:
    nodeClass: Random Placement
    values: {Number of Objects: 20}
    links:
      Object Generators:
      - {sourceNode: ColorToys, outputPort: Generator}
      - {sourceNode: "Rubik's Cube", outputPort: "Rubik's Cube Generator"}
      - {sourceNode: Mix Cube, outputPort: Mixed Cube Generator}
      - {sourceNode: Spaceship, outputPort: Spaceship Generator}

Running the ana command will now produce the following output:

(anatools) anadev@test:/workspaces/toybox$ ana --channel toybox --graph graphs/default.yml

Using the new Object in a Graph

After the channel has successfully deployed, we can use our Generator in custom_channel graphs. In the Nodes section on the left, we should be able to find our Spaceship Generator under the Objects > Generators category. We will add this node to our graph.

Next we will configure the inputs and outputs of this node, in this case we will connect the Spaceship node’s Spaceship Generator output to the Object Generators input of the Random Placement node.

After we are happy with the graph we can create our new dataset with the Spaceship objects. Below are some examples of the images from a dataset created with this graph.

Congratulations, you have created your first generator node for the toybox channel!

Add a Modifier Node

This tutorial will take you through adding your first modifier node to the toybox channel. Modifier nodes can manipulate 3D objects or other scene components and parameters to introduce variation. Variation is essential for creating large datasets with diverse imagery. Specifically, we’ll be adding a modifier node that will adjust the scale of the objects in the scene.

Scaling Objects in Blender

The toybox channel uses Blender to build scenes. All objects are loaded into Blender and can be manipulated using Blender python calls. First we’ll look at Blender to learn what we need to implement to adjust the scale of an object.

Opening Blender

In this starting scene we have a camera, a light, and a Cube object. The next thing we’ll want to do is learn how to manipulate the scale of the Cube object.

Manipulating the Scale

We can set the scale of the Cube object by first selecting the object in the 3D Viewport, then setting the Scale properties for the X, Y, Z dimensions under the Object Properties tab. In the example below we set the scale to 5 for each of the dimensions, [5,5,5].

Finding the Python SDK call

In this example, we want to be able to allow a channel user to configure object scale without having to use Blender. The way to do this is to create a node that will include Blender python calls that modify an object. To figure out the python call to make that will scale an object, we use the Scripting view at the top of the 3D Viewport in Blender. This will provide us with a list of python calls made in the Blender session. The three most recent calls should be the calls we made to scale the Cube object.
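For example, after scaling the Cube through the UI, the Info log in the Scripting view will typically show property assignments similar to the following (a sketch; the exact lines depend on how the scale was edited):

bpy.context.object.scale[0] = 5
bpy.context.object.scale[1] = 5
bpy.context.object.scale[2] = 5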

Blender will use the current context to determine which object to scale, indicated with the bpy.context.object prefix. In the toybox channel, we use the AnaObject class to reference the Blender object. An example of how we would set the scale for an AnaObject is shown below:

obj.root.scale = [5,5,5]

Adding the Modifier Code

To add the Scale Modifier to the toybox channel we will need to add a few files to the toybox package. The first file we will add is the Scale Modifier Node definition and the second is the Scale Modifier Node schema. These define the inputs and outputs of the node and what command will be called when the modifier is used. We create scale_modifier.py and scale_modifier.yml under the toybox/packages/toybox/toybox/nodes directory.

scale_modifier.py

from anatools.lib.node import Node
from anatools.lib.generator import ObjectModifier

class ScaleModifier(Node):
    """
    A class to represent the ScaleModifier node, a node that can modify the scale of an object.
    """
    def exec(self):
        generator = ObjectModifier(
            method="scale",
            children=self.inputs["Generators"],
            scale=self.inputs["Scale"][0])
        return {"Generator": generator}

scale_modifier.yml

schemas:
  ScaleModifier:
    alias: Scale
    inputs:
    - name: Scale
      description: The scale value to set the objects to.
      default: 1
    - name: Generators
      description: Object Generators to set the scale for.
    outputs:
    - name: Generator
      description: The modified Object Generators
    tooltip: Changes the scale of objects.
    category: Modifiers
    subcategory: Scale
    color: "#B32424"

The next file we’ll update is the existing object_generators.py, where we will update the Toybox Channel Object class to add a scale method. This method is where we set the Blender object’s scale property; we will be scaling the objects to the same value in all three dimensions.

object_generators.py

class ToyboxChannelObject(AnaObject):
    """
    A class to represent the Toybox Channel AnaObjects.
    Add a 'color' method for the objects of interest.
    """

    def color(self, color_type=None):
        pass

    def scale(self, scale):
        self.root.scale = [scale,scale,scale]

Now that our Scale Modifier node has been implemented, we’ll want to test out our code to ensure it runs as planned and we are getting the results we want.

Testing Locally

The first thing we’ll do is modify the default graph for the channel to include the scale modifier. In toybox/graphs/default.yml, we will add Scale Modifier to modify the scale of the Rubik’s Cube. We will also replace the Object Placement node’s input for the Rubik’s Cube output with the output from the Scale Modifier.

default.yml

  ScaleModifier:
    nodeClass: Scale
    values: {Scale: 2}
    links:
      Generators:
        - {sourceNode: "Rubik's Cube", outputPort: "Rubik's Cube Generator"}
        
  ObjectPlacement:
    nodeClass: Random Placement
    values: {Number of Objects: 20}
    links:
      Object Generators:
      - {sourceNode: ColorToys, outputPort: Generator}
      - {sourceNode: ScaleModifier, outputPort: Generator}
      - {sourceNode: Mix Cube, outputPort: Mixed Cube Generator}

After making these changes, it's time to run Ana again to see what has changed. Note that the example is set to create Rubik’s cubes that are twice as big as the original; all other objects will remain the same size.

Let's test again, but this time setting the Scale parameter to 0.5.
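A minimal sketch of that change in default.yml, assuming the same node layout as the snippet above:

  ScaleModifier:
    nodeClass: Scale
    values: {Scale: 0.5}
    links:
      Generators:
        - {sourceNode: "Rubik's Cube", outputPort: "Rubik's Cube Generator"}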

If we are happy with the results, it’s ready to deploy to the platform so we can use the Scale Modifier when creating new datasets.

Using our new Node in a Graph

After the channel has successfully deployed, we can use our Modifier in our custom_channel graphs. In the Nodes section on the left, we should be able to find our Scale Modifier under the Modifiers > Scale category. We will add this node to our graph.

Next we will configure the inputs and outputs of this node. In this case we will connect the Rubik’s Cube Generator output to the Scale node’s Generators input, and connect the Scale node’s Generator output to the Drop Objects node.

After we are happy with the graph we can create our new dataset with scaled objects. Below are some examples of the images from a dataset created with a graph that scaled Skateboards randomly from 1-3.

Congratulations, you have created your first modifier node for the toybox channel!

Rendered.ai Platform

Platform Version 1.4.1

28/Feb/25

Features

Bug Fixes

Platform Version 1.6.0

17/Apr/25

Features

Platform Version 1.5.0

25/Mar/25

Features

Toybox

Starting With the Toybox Source Code to Create Custom Channels

The Toybox channel uses Blender for physics-based image simulation and 3D modeling. The Toybox source code can serve as a starting point to create custom synthetic data applications utilizing key Rendered.ai features.

Synthetic data engineers can use the Toybox channel source code to learn how Rendered.ai channels are used to control scene objects, lights, and cameras, as well as custom metadata collection. The Toybox channel has nodes that randomize the 3D scene for any run on the Rendered.ai platform, including changing the color of objects and performing a gravity simulation. Additionally, the source code contains examples of node tooltips and graph validation rules that help guide the user in creating graphs.

Rendered.ai Channel Architecture

Terms used when discussing channel development are detailed in the Ana Software Architecture section of the Development Guides.

In particular, it is useful to be familiar with packages, package volumes, nodes, schema, and base classes.

The Toybox channel uses several common nodes for random numbers and data volume access. The full list of common nodes from the anatools package is documented in the Ana Software Architecture guide.

Toybox Modules

Channels can implement randomization for dataset generation with Blender scene composition, rigid body simulation, and rendering. Channel developers design which randomization controls should be exposed to the user versus hard-coded in the node logic. Users control randomization by choosing nodes and setting node inputs. For example, the Toybox channel allows users to control the height and roll of the camera, but not the X, Y offset. The offsets are also randomized, but their limits are estimated in the channel code to maximize their range for an expected result.

The “object_generators” module makes use of Rendered.ai object generators, factories that instantiate objects as needed for a given run. Similarly, the “color_variation_modifier” module makes use of Rendered.ai modifier generators.

The “random_placement” module contains two different placement nodes that use Blender’s rigid body world settings to achieve natural placement of the toys as if dropped under the influence of gravity. One node scatters the toys on the floor; the other drops them into a container.

The “simulate” module contains nodes for Blender lights, cameras, and the Render node, which is required for all graphs. The Render node inputs the object of interest to be annotated, sets up the Blender compositor, and renders images and truth masks.

Variation by User Provided Data

Rendered.ai graphs can run on user uploaded data, including 3D models. To accommodate this, the Toybox channel uses two nodes from the common package: “VolumeFile” and “VolumeDirectory”. These access workspace volumes and allow users to add their Blender objects to a graph for synthetic data generation.

User Experience

Channel developers configure the channel UX in the node schema. The node schema stores node tooltips, field descriptions, and graph validation.

The following screenshot of a graph in the Toybox channel shows the Color Variation node without a required link to the “Generators” field. This shows the user the graph will not run and why.

Using the Source Code

The Developer Guide has tutorials on adding modifiers and object generators to the Toybox channel.

To navigate the code, start by setting up a development environment as described in Setting Up the Development Environment.

In the development environment, channels can be edited and deployed to a Rendered.ai organization.

DIRSIG Channel

The DIRSIG Rendered.ai Channel is a public source example of a Rendered.ai synthetic data application based on the DIRSIG simulation engine. This channel provides an easy to use UX for controlling scenario variations, including sensor setup and motion. Running DIRSIG on Rendered.ai maximizes user control of context and object variation when generating synthetic data for computer vision model training and evaluation.

By accessing the source code, synthetic data engineers can learn how to add custom randomization and capture associated metadata. After reviewing the source, technicians should be able to estimate the level of effort required to create custom Rendered.ai channels.

Configuring DIRSIG Simulations

As described in the “Creating and Using Graphs” page, Rendered.ai graphs are specific structures of nodes that determine how the scenario is assembled and the image is rendered.

For DIRSIG channels, these graphs stochastically create the DIRSIG configuration files based on the DIRSIG File Maker library.

The workspace provided by the content code has several graphs ready to use. Several of these, like “Worldview 3” and “SkySat”, are starting points to generate specific sensor data. These place a resolution target in the scene and point the Earth observation sensor at that location. The “Drone” and “Custom Camera” graphs are starting points to generate data for placing cameras in close proximity to objects. The rest of the graphs demonstrate how to place objects in specific locations, generate clusters of objects, or add motion to objects. The details are in the description of each graph.

Additionally, Rendered.ai graphs can run on custom user data, including 3D models. Using “File Nodes”, workspace volumes allow users to add DIRSIG Bundles as objects to a graph for generating synthetic data. For example, users could add a different aircraft and use it in the Dynamic Object graph to see it flying in an urban scenario.

Annotations and Metadata

Metadata captures channel randomization per run. Parameters are provided by the user or selected from one of the random values nodes; these can be stored in the dataset as metadata for each run. For example, in the DIRSIG channel, the location and rotation of the objects or the location of the sun and moon could be stored in the metadata.


This tutorial will take you through adding a new object to the toybox channel. We will use a Volume to store a Blender object and use an Object Generator node to define how the 3D object can be loaded from the Volume. For this tutorial we will use a Spaceship Blender file that we found on Turbosquid, but you are free to use any Blender file you’d like.

To ensure the Blender object is properly loaded into the scene, we need to make sure the Spaceship Blender file is structured in a way that can be interpreted by the loader. We will also want to verify properties of the object to ensure its position, scale, and physics properties are appropriate. To do this we will start by opening the file in Blender. Blender can be downloaded from blender.org. In this tutorial we used Blender 2.90 to match the toybox channel’s Blender version.


Since your user has Read-Only access to the toybox channel’s volume, we will need to upload the Blender file to another volume that is owned by your Organization. First we need to create a volume on the Rendered.ai Platform; to learn more about Volumes, see Ana Software Architecture | Volumes.


After we have tested locally, we can now deploy the changes to the channel to the platform. To deploy a channel, reference the Deploying a Channel documentation. We will be deploying over the custom_channel in our default Organization in this tutorial.


For this tutorial, we’ll download Blender from blender.org. We use Blender 2.90, the same version of Blender that is in the toybox channel environment. It is important to note that Blender python calls can change between versions of Blender. When we first open Blender, we are greeted with a screen similar to the one below:


To deploy a channel, reference the Deploying a Channel documentation. We will be deploying over the custom_channel in our default Organization in this tutorial.


Source Code: https://github.com/Rendered-ai/toybox

Source Code: Rendered-ai/dirsig-channel

DIRSIG File Maker: dirsig_public / dirsig-file-maker

On Rendered.ai, several annotations are collected for objects of interest. In the GUI you can view some of the basic ones, like the 2D bounding box. The annotations can be converted to common formats in the GUI or with the anatools SDK (https://sdk.rendered.ai).


Improved Workspace clone times when using a content code.

Fixed height issue with plots in Inferences table.

Dataset Mixing service to combine datasets on the platform. See Mixing Datasets for more information.

ML Training Checkpoints gives users the ability to save the model at specific checkpoints during training.

Authentication Service migration.


Platform Version 1.3.2

14/Nov/24

Bug Fixes

Summary

Fixed issue with some tables loading.

Fixed issue with previewing datasets on Workspace page.

Platform Version 1.2.6

9/Oct/24

Bug Fixes

Summary

Fixed 3DBBox generation for Blender objects with nested children.

Fixed issue with dataset post-process failing due to interruption.

Platform Version 1.3.0

31/Oct/24

Features

Summary

Graph Canvas information drawer with Channel documentation and additional node information.

Help icons to give more context on what Create actions do.

3D Viewer to explore models in Workspace Assets and Graph Canvas.

Ability to set custom thumbnails for data in Workspace Assets.

Bug Fixes

Summary

Fixed issue with downloading a volume's data recursively using download_volume_data() in anatools.

Fixed issue with some annotations not appearing in the Dataset Image Viewer.

Platform Version 1.2.5

26/Sept/24

Bug Fixes

Summary

Fixed issue where memory was calculated inappropriately for some instance types.

Fixed issue where uploaded datasets without an images/ directory would not show up in datasets list.

Fixed issue where some logs were not available for job failures.

Fixed issue where a graph was not updated after a new link was created.

Fixed issue where a node showed duplicate inputs for inputs with "-" in the name.

Fixed issue with graph validation where an error wasn't being displayed for an invalid link.

Fixed an issue where an individual file thumbnail couldn't be pulled for a volume file.

Platform Version 1.4.0

18/Feb/25

Features

Bug Fixes

Summary

Fixed issue with renaming directories or files within a Volume.

Summary

Initial release of the Machine Learning service. This service gives users the ability to run Training and Inference jobs for Object Detection and Classification models. Visit Training and Inference to learn more.

Initial release of the Inpaint service. This service gives users the ability to remove unwanted content from an image by uploading an image with a mask or annotation file then running an Inpaint job. Visit Inpaint Service to learn more.

Training and Inference
Inpaint Service

Platform Version 1.2.4

19/Sept/24

Bug Fixes

Summary

Fixed issue where sometimes multiple links would be deleted unintentionally.

Fixed issue where a duplicated or edited snapshot would not appear right away in Workspace Graphs.

Fixed issue where the Dataset Viewer would crash when given incomplete data.

Fixed issue with the GEOCOCO annotation format for incomplete data.

Added the Graph name to the Staged Graphs table.

Platform Version 1.3.1

7/Nov/24

Features

Summary

Updated Support documentation text and screenshots.

Bug Fixes

Summary

Fixed issue with name overflow on tables.

Updates to data fetching after creating workspace.

Platform Version 1.2.3

05/Sept/24

Bug Fixes

Summary

Fixed issue with displaying Dataset sizes over 2.2GB.

Fixed issue with GAN Dataset thumbnail sorting.

Fixed issue where a Volume file thumbnail wasn't generated for Blender files with spaces in the name.

Platform Version 1.2.2

29/Aug/24

New Features and Improvements

Summary

Support for multi-file and multi-part uploads.

Added the GeoCOCO annotation format.

Additional user event tracking.

Bug Fixes

Summary

Fixed issue where staged graph doesn't show up on creation until refresh.

Fixed issue where you cannot navigate back to a dataset after navigating to UMAP.

Fixed issue where clicking on a thumbnail showed the wrong image.

Platform Version 1.2.0

15/Aug/24

New Features and Improvements

Summary

GUI theme updates with dark mode, set in User profile.

A "Getting Started" splash screen for new users.

A Job Queue on the Organization landing page that shows all jobs across Organization workspaces.

Improved preview times for some channels.

Reorganized workspace pages with Volumes moved into Assets tab, UMAP and Analysis moved into Experiments tab.

Bug Fixes

Summary

Uploaded graph doesn't load.

Volume file in a subdirectory doesn't load properly.

GUI crashes when viewing a dataset.

Cannot run jobs when name is too long.

Nodes moving in group on a graph when not intended.

Staged graphs don't disappear when deleted.

Searching workspaces leads to unexpected results.

Platform Version 1.1.4

2/Jul/24

New Features and Improvements

Summary

Improved SATRGB preview speeds.

Send Contact Us to Linear.

Platform Version 1.2.1

22/Aug/24

Bug Fixes

Summary

Fixed issue where Sign-In required a User to Sign-In twice.

Fixed issue where jobs didn't show a status on Jobs Manager page.

Fixed issue where User landing page gets cached across Sign-out/Sign-in.

Fixed issue with setting Workspace thumbnail.

Fixed issue with job failing due to XML file containing "PNG".

Fixed issue with Uploaded dataset showing a seed value.

Fixed issue with Runs not completing due to timeout.

Fixed issue with Dataset Library thumbnails being out of order.

Fixed issue with Job average time for single-execution runs.

Fixed issue with Preview not timing out.

Fixed issue with logs not showing for Running jobs.

Fixed issue with modal staying open while navigating pages.

Platform Version 1.1.2

8/Apr/24

Bug Fixes

Summary

Thumbnails not displaying correctly for volume data.

Platform Version 1.1.5

12/Jul/24

Bug Fixes

Summary

Channel Deployment gets stuck at step 2/3.

Channel Deployment retagging image when version is larger than 100.

COCO annotations fail when sensor resolution is not given in [w,h].

YOLO classes.txt file not being generated.

Platform Version 1.1.3

23/Apr/24

New Features and Improvements

Summary

High Vulnerabilities

LLM Service Migration

Bug Fixes

Summary

Analytics Properties job failing.

Seed stuck at one value.

GAN out of memory error with large dataset.

COCO Annotations generation getting OOM error.

Issue with Job More Info not getting updated when polling.

Platform Version 1.1.0

15/Mar/24

New Features and Improvements

Summary

Admin SDK - Reset my credentials from rai_admin

Implement email styling

Sample Gallery

Platform Version 1.0.2

16/Feb/24

New Features and Improvements

Summary

Enable Logging on router to support MAU - DAU

[Vanta] Remediate "Serverless function error rate monitored (AWS)"

[Vanta] Remediate "VPC Flow Logs enabled"

[Vanta] Remediate "Messaging queue message age monitored"

[Vanta] Remediate "IMDSv1 is disabled on EC2 Instances"

Docker image updates

Configure ingress for Admin API

Admin API shield update for Admin Cognito pool.

Production k-platform updates.

Add Deckard smoke tests

Convert daily-expiration-checks to Argo

Bug Fixes

Summary

Convert load balancers from instance to ip

Channel deployment instance type p2.xlarge is defaulted in the SDK

DatasetId not passed to createGANDataset API call

CPU nodes are using the NVIDIA device plugin

ANA Job successRun and failedRun failed due to network issue

Jobs Page: Pressing “Clear finished jobs” button doesn't change UI state

Cannot multi-select delete nodes.

SetChannelGraph call copies the graph location value to graphLocation; when the graph is updated, the channel's graph is updated too

Object Metrics and Properties jobs fail

Platform Version 1.0.0

30/Jan/24

New Features and Improvements

Summary

Kubernetes

Platform Version 1.0.1

2/Feb/24

New Features and Improvements

Summary

Redesign Volume Sync

Deckard - Jobs Updates

Bug Fixes

Summary

Cannot use "Set as WS thumbnail"

Post release: New GAN dataset doesn't show up in list after the gan job completes

Post release: Creating a UMAP doesn't nav to the comparison page

Post release: edit gan model doesn't show current flags, passes name even if not changed

Uploads: Fix bugs

UI on volume table pagination issue

Post release: Don't see single analytics item in Analyses Library

Post release: Can't download staged graph as yaml or json on jobs page

Platform Version 1.1.1

26/Mar/24

New Features and Improvements

Summary

Vulnerabilities for Sprint 107

Bug Fixes

Summary

getVolumeData limit field not respected

Graph Canvas Volumes pagination

Fix COCO categories list order to match mapping file IDs

Fix the workspace "updated at" field so the recent workspaces show accurately on Landing Page

Platform Version 0.3.4.4

21/Jan/24

Bug Fixes

Summary

[Vanta] Remediate "High vulnerabilities identified in container packages are addressed (AWS)" for openssl:1.1.1w

Platform Version 0.3.4.3

17/Nov/23

Bug Fixes

Summary

Channel nodes not loading in Graph Canvas.

Platform Version 0.3.4.2

16/Nov/23

New Features and Improvements

Summary

Restrict sign ups using specific TLDs

Bug Fixes

Summary

External links are broken in Deckard

Image vulnerabilities

Update link to getting started page

Resolve vulnerabilities in LLM image

Increase timeout for Core tests

Platform Version 0.3.3.1

1/Sep/23

Bug Fixes

Summary

Anatools: Update url for system notifications

Job Priority Implementation

Add validations so that Data Explorer will fail gracefully when passed bad data.

Issue with Zoom on Graph Canvas

Fix duplicate graphql call from contact us modal

GraphQL Error - getVolumes

Failed to create alarm for ec2 i-0c4f15797c6cb3de0

Platform Error - EFS Mounting

Attachments aren't working in Deckard > Contact Us modal.

Slow behavior on multi-select on Graphs

Platform Version 0.3.4

14/Sep/23

New Features and Improvements

Summary

Performance issues on landing page.

Node Input/Output Tooltips

LLM Microservice

Bug Fixes

Summary

Client returned even if authorization fails

No Logs for Jobs

Don't send out expiration notices if an org is Expired

Platform Version 0.3.3

7/Aug/23

New Features and Improvements

Summary

Update jQuery version on RAI site

Review Content Security Policy

Turn off autocomplete for password field

Disable GraphQL Introspection for Staging Environment

Update to TLS >= 1.2

Cleanup Status API mutations/queries from Gateway

Move docker-lambdas docker into python lambda

Dataset Library Masks and Bounding Boxes

Deckard Implementation for displaying Microservice ID's in UI

User API Keys

Bug Fixes

Summary

Invite member button inside the organization settings page is not working

Reset password failure not clear

Unable to renew subscription

Issue with Invite button on Landing Page > Organization screen.

Can't rename Volume file if it's uploaded into a folder

Platform Version 0.3.2.2

5/Jul/23

Bug Fixes

Summary

Can't upload graph file

Platform Version 0.3.4.1

19/Oct/23

New Features and Improvements

Bug Fixes

Summary

Remove/disable search from within a Workspace

Update Anatools SDK login when API endpoint is down

Ensure all docker images are removed after channel deployment.

Match front-end for storing VolumeDirectory Name

Updates to registration page

Download graph is generated VolumeDirectory node as Add Entire Directory name

NVIDIA-SMI has failed due to AMI

Inexplicable Failure Shows as Success

Issue with Graph Snapshot not displaying correctly.

Renaming staged graph causes site crash.

Issue with uploading graphs with volume file node

Cannot navigate to graph editor from graphs list view.

Staged Graph Last Run field

Dropdowns sizing in Deckard.

Platform Version 1.0.3

7/Mar/24

New Features and Improvements

Summary

Improve job end time calculation

Spot Instance failover to On-Demand

Upgrade ELBs with TLS 1.3 support

Update graph preview with new designs

Uploads Service - Malicious file notifications

Bug Fixes

Summary

Workspace home page has blank status bar for running job.

Postprocess status incorrect.

Website Crashes and Graph Bugs

UMAP analyses panel has incorrect datasets listed

Job count not changing when kicking off new job.

Platform Version 0.3.2

16/Jun/23

New Features and Improvements

Summary

Add download button to left side of Analyses Library page.

Updates in Log Manager page

Lock all anatools dependencies

Volume File Metadata

Docs update for Volume Permission info

Channel Timeouts

Bug Fixes

Summary

Issue with Dataset Library page going blank when creating GAN Dataset.

GAN Models not working (size fluctuation on upload)

Logging bug in anatools interp()

GAN Model flag null handler

Home page not loading: Not Authorized

Remove cancelled jobs option from logs dropdown

Platform Version 0.3.2.1

30/Jun/23

New Features and Improvements

Summary

Update SDK to use status API instead of Gateway

Remove System Status page's dependence on Gateway

Platform Version 0.3.1.5

23/May/23

New Features and Improvements

Bug Fixes

Summary

Add additional logging for autoscaling

Channel docs not being released during channel deployment

Implement API Tests for permission parameter

Implement Volumes Organization Resources Test Plans

Implement Channels Organization Resources Test Plans

Update Annotations step function to save loose versions of annotation output.

Make position of nodes and categories predictable

Upload Graph button is disabled, can't upload graph.

Channels related integration tests are failing

Platform Error - mounting volume failed: Error retrieving region

Incorrect auto-navigation to subscription expiration message

Updated node has incorrect hash in default graph

Platform Version 0.3.1.3

20/Apr/23

New Features and Improvements

Summary

Updates to Subscription Expiration email

Bug Fixes

Summary

Attachments to contact us form

Organization member update error

Volume upload button is blocked

Fix for Channel Deployment - Log Not Found Error

Platform Version 0.3.1.6

31/May/23

New Features and Improvements

Summary

CVPR updates to Website

Bug Fixes

Summary

Platform Error: ThrottlingException

TaskArn is not passed when an ana run times out

Improve navigation UX during authorization

Issue with invitations vs registration mismatches on email case.

Fix page height when status notification banner is on

Platform Version 0.3.1.2

12/Apr/23

Bug Fixes

Summary

Issue with get_umap() SDK call (see the usage sketch after this list)

UI: UMAP validation blocks "Compare" button until a slider is moved

removeVolumeOrganization API call leaves the volume attached to a workspace even after it's removed from the Org
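
Below is a hedged sketch of calling get_umap() as referenced in this list. The client setup follows the usual anatools pattern; the UMAP ID placeholder and the umapId keyword name are assumptions for illustration, not the confirmed signature.

import anatools

# Authenticated Rendered.ai SDK client.
ana = anatools.client()

# Hypothetical UMAP analysis ID taken from the workspace Experiments view.
umap_id = "00000000-0000-0000-0000-000000000000"

# Fetch the UMAP comparison results; the keyword name is an assumption.
umap_results = ana.get_umap(umapId=umap_id)
print(umap_results)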

Platform Version 0.3.1

7/Apr/23

New Features and Improvements

Summary

Updates to pricing page on Deckard

Minor change to TOS

Download Annotation Maps and GAN Models through UI and SDK

Set Volume Permission through UI and SDK

Status Page & Notifications

Bug Fixes

Summary

Anatools RandomChoice node doesn't work for string inputs

Update get_analytics on SDK

Analytics Microservice JSON files downloaded through Deckard are broken

Channel Documentation Not Displayed Well

New Lines in Tooltips

Platform Version 0.3.1.4

5/May/23

New Features and Improvements

Summary

Anatools: download_dataset should download file as {dataset name}.zip (see the usage sketch after this list)

Anatools: Move FileObject class into non-Blender module

Website landing page for CVPR2023
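
Below is a minimal sketch of the download_dataset behavior described in this list, where the archive is expected to be saved as {dataset name}.zip. The client setup follows the usual anatools pattern; the dataset ID placeholder, the datasetId keyword name, and the returned path are assumptions for illustration.

import anatools

# Authenticated Rendered.ai SDK client.
ana = anatools.client()

# Hypothetical dataset ID from the Dataset Library.
dataset_id = "00000000-0000-0000-0000-000000000000"

# Download the dataset archive; after this change the file is expected to be
# written as "<dataset name>.zip". The return value (local path) is assumed.
archive_path = ana.download_dataset(datasetId=dataset_id)
print(archive_path)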

Bug Fixes

Summary

Anatools: Lock requests==2.28.1 to stop anadeploy failure

Algolia Index limit downgraded to 50.

Anatools: Logging error causes error

Only users who created a workspace can access it (from 2023-04-21 onwards, intermittently)

Gatsby router flakiness on "View Logs"

GraphQL Error - getMembers

Platform Version 0.3.0.9

24/Mar/23

Bug Fixes

Summary

CORS error when setting image as thumbnail through preview.

Platform Error - Error Log Manager not executing cleanly.

Platform Version 0.3.0.8

15/Mar/23

New Features and Improvements

Summary

Update RAI website to dynamically load firebase config

Add custom pipeline for daily test environment run to deckard

Bug Fixes

Summary

VolumeSync throttling when too many files are pushed

Update getVolumeSize api to return size for organization that has access

Fix build error in pipeline

SDK Upload Channel Documentation Does Not Overwrite

Blank Organizations page after leaving an organization

Search Filter in Workspace Analytics is buggy