Once you have your channel deployed, you can customize it by adding components in the form of nodes within a package. Examples of these components are: 3D objects/backgrounds and their modifiers, sensor simulators, and scene placement configurations. This document describes the steps required to build these assets.
For a comprehensive list of package and channel components, see Ana Software Architecture.
A Rendered.ai channel is configured using graph nodes. Nodes are Python objects stored in packages. The ana code comes with a “common” package that contains many utility nodes like the Random Integer node.
Some Typical Nodes
Packages have the following directory structure.
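As an illustrative sketch assembled from the paths mentioned elsewhere in this document (nodes/render.py, config/package.yml, data/volumes); actual packages may contain additional files:

```
packages/example/
├── config/
│   └── package.yml     # package configuration (collection names, etc.)
├── nodes/
│   └── render.py       # node class definitions
└── data/
    └── volumes/        # blend files and other static assets
```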
To create a new node within a package, add a class to the corresponding Python file in the nodes folder, and add a reference to the new class in the associated .yml configuration file. Each reference configuration has three parameters: two defining the node's interface and one providing a tooltip to guide the user.
inputs - a list of dictionaries, containing an entry for each input link or user parameter
outputs - a list of dictionaries, containing one entry for each linkable output
tooltip - a string describing the node’s functionality
In the Example channel, for instance, a new render node for black and white images could be coded in a new Python class ‘RenderBWNode’ in the packages/example/nodes/render.py module, and a corresponding entry would be added to the associated .yml configuration file.
Example Node Configurations
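For illustration only (the node name and the input and output names below are hypothetical; match the schema of the existing entries in your package's .yml files), such an entry might look like:

```yaml
RenderBW:
  tooltip: Renders a black and white image of the scene
  inputs:
  - name: Sensor        # input link from a sensor node
  - name: Image Size    # user parameter
  outputs:
  - name: Image         # linkable output
```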
Adding an Object to a Package
For a conceptual description, see Generators: Objects and Modifiers.
A new 3D object can be added to a Rendered.ai channel by configuring the model in Blender, as described below, and saving it in .blend format. Supported file types vary across Blender versions; Blender v2.90 is the version shipped in the latest build of the Rendered.ai dev container, so models created in this version are optimal.
In Blender, each object must have its own collection; the collection name is used in the package.yml config file. The node names in the package's config/package.yml file refer to the collection names in the blend file. Collection names must be unique, and collections should not include cameras.
Collection Names Are Used in package.yml
Add Object to the Code
Create a node class definition for the object in the ObjectGenerators Python module. If the object is a static data asset, add the blend file to the data directory, and add the name of the collection to the package.yml file; this tells the object generator which Blender file contains the model.
All data assets are stored in data/volumes, which is either built into the channel container or pushed to your organization’s volume storage. As the data folder grows, the Docker registration process lengthens, so at some point you will want to move the data folder to storage. For more details, see Volumes and Package Data.
Lists node classes to be displayed in the Graph Editor in the Rendered.ai web interface
Each node class contains an exec method used to retrieve the blend file; object generators can be based on blend files, native Blender objects (e.g. geometric primitives), or even coded via mathematical equations
Indicates node attributes to be displayed and used in the graph editor: outputs, input links and dropdowns, and a tooltip
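Purely as a shape illustration (the real ana base class, imports, and blend-file loading helpers are not shown, and every name below is hypothetical), an object generator node follows this pattern:

```python
# Hypothetical object generator node. A real node subclasses the ana
# object-generator base class and uses its helpers to load the collection
# from the blend file; here that is reduced to returning a description.

class ExampleObjectNode:
    object_type = "example_object"      # name used in annotation mappings
    collection = "ExampleCollection"    # collection name listed in package.yml

    def exec(self):
        # Retrieve the blend file and collection containing this model.
        return {
            "file": "data/volumes/example.blend",  # hypothetical path
            "collection": self.collection,
        }
```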
Troubleshooting tip: Blender throws a CRITICAL “Multiple roots found in collection” error when a collection contains multiple root-level objects (orange triangle icon).
Steps to fix:
Make an empty: in the default view, bottom left, Add -> Empty -> Axis
In the outliner (on the right where all the objects are listed) - select all by typing ‘a’
Parent the selected objects to the empty by typing Ctrl-P while hovering in the 3D view
Adding a Background to a Package
Backgrounds can be 2D images or 3D scenes. Nodes for individual objects, like a floor or a box, can be made as generators as described above.
Adding Scene Placement to a Package
Placement nodes place the objects in the scene. For some use cases, a placement node will organize the objects based on a background. In other use cases, like the Rendered.ai Example channel, they prepare the objects for a simulation (like dropping them in a gravitational field).
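As an illustrative sketch only (the function name, signature, and use of plain Python data are assumptions; a real placement node would manipulate Blender objects through the ana APIs), a drop-style placement might compute starting positions like this:

```python
import random

def place_objects(names, bounds=((-1.0, 1.0), (-1.0, 1.0)),
                  drop_height=2.0, seed=0):
    """Hypothetical drop-style placement: scatter objects above the floor so
    a physics simulation can drop them, as in the Example channel."""
    rng = random.Random(seed)  # seeded for reproducible placements
    placements = []
    for name in names:
        x = rng.uniform(*bounds[0])
        y = rng.uniform(*bounds[1])
        placements.append({"object": name, "location": (x, y, drop_height)})
    return placements
```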
Adding a Sensor to a Package
Sensors can take various forms. They can be passive sensors (like cameras for visible light, infrared, multispectral, etc.) or active sensors (such as lidar scanners, radar, X-ray, etc.). There are two driving decisions that need to be made when designing a channel that supports multiple sensors: the number of sensors per run and, for cameras, the number of compositor pipelines.
For cameras, images are rendered and the compositor performs post processing and writes an image file. Generating multiple images sequentially means looping through the following steps: camera positioning, configuring the compositor (with a unique output filename), applying the appropriate materials, and rendering the image. This can be done for multiple cameras in different positions or different compositor pipelines of sensors at the same location (e.g. black and white, different image sizes, different focus properties).
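The per-image loop can be sketched as follows; the four inner helpers are hypothetical stand-ins for channel-specific Blender code:

```python
def render_sequence(cameras, frame):
    """Hypothetical sketch of the sequential multi-image loop described
    above; the inner helpers stand in for channel-specific Blender code."""
    def position_camera(cam):              # move the camera to its pose
        pass
    def configure_compositor(cam, name):   # set a unique output filename
        pass
    def apply_materials(cam):              # e.g. black-and-white vs. color
        pass
    def render(name):                      # invoke the renderer
        pass

    outputs = []
    for cam in cameras:
        out_name = f"{cam}-{frame}.png"    # unique name per camera and frame
        position_camera(cam)
        configure_compositor(cam, out_name)
        apply_materials(cam)
        render(out_name)
        outputs.append(out_name)
    return outputs
```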
Dataset Output Filename Format
When a graph is run, the output file names contain a 10-digit interpretation number, the frame number of the simulation, and the sensor name. For example, the sensor name ‘RGBCamera’ appears in the image file name `0000000000-250-RGBCamera.png`.
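A minimal sketch of this naming scheme (the helper name is invented, not part of ana):

```python
def output_filename(interpretation, frame, sensor):
    """Compose the dataset image name: a 10-digit zero-padded interpretation
    number, the frame number, and the sensor name."""
    return f"{interpretation:010d}-{frame}-{sensor}.png"
```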
A single camera can be used to generate multiple outputs. For example, a color and a black and white image can be generated by making two render calls in the Render Node.
A Second Output From the Rendered.ai Example Channel
Requirements for adding a camera to a channel
Providing a name for the labeling of resulting images, metadata, and annotation files
Either replacing the code in packages/example/nodes/render.py, or creating a new sensor node for the camera and referencing it in a sensor port in the render node.
Lens Type: Perspective, Orthographic, Panoramic, Shift
“The camera lens options control the way 3D objects are represented in a 2D image.” - Blender Manual
Lens Depth of Field / Aperture: Focal Distance/Object, Blurring/Distortion parameters
“Real-world cameras transmit light through a lens that bends and focuses it onto the sensor. Because of this, objects that are a certain distance away are in focus, but objects in front and behind that are blurred.” - Blender Manual
Mode: RGB, Black and White
For many Rendered.ai channels, image sensors are implemented with the Blender camera. For others, e.g. radar or acoustics, a Blender Sensor (e.g. a Ray Sensor) can be used and the resulting signal can be processed into an image.
Ray: triggers when a pulse is reflected off a surface
Radar: triggers when objects are in a canonical field of view (sees through soft bodies)
Armature: triggers on inverse kinematic parameter updates (robotics)
Adding Nodes to a Channel
Any package node, whether custom or from the “common” package, must be added to the channel before it can be used in a graph. This requires adding the node to the channel configuration file.
When getting started with channel development, users will make a custom channel by going through the tutorial Creating a Copy of the Example Channel. In the tutorial, the duplicatechannel.sh script is run, which creates the following configuration files.
Channel Configuration File
channel.yml - schema relating the ana context namespace to nodes. This is a registry of the nodes available to the interpreter; the node names are used in graphs and shown in the web interface graph editor.
deckard.yml - node hierarchy used in the library panel in the graph editor.
An initialization file for the ana interpreter and for Blender configuration (GPU support).
A graph for nodes in the example package.
An annotations mapping configuration. Mappings are used for annotation conversion: they map object names from Ana metadata (each object is named based on its ObjectGenerator.object_type) to an appropriate class name.
Coverage unit tests.
To add a node to a channel the following files should be updated.
Add the name of the node to channel.yml - the name given to the object nodes in channel.yml is used by graph files and by deckard.yml.
Add the name of the node to deckard.yml
Add objects to mappings files.
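The schemas of these files vary by channel, so the fragment below is only a hypothetical illustration: the node name, module path, and object names are all invented, and real entries should mirror the structure already present in your generated files.

```yaml
# Hypothetical fragments - mirror the entries that already exist in your files.

# channel.yml: register the node with the interpreter, e.g.
RenderBW: example.nodes.render.RenderBWNode

# deckard.yml: add the same node name to the library panel hierarchy, e.g.
Render:
- RenderBW

# mappings file: map Ana metadata object names (each ObjectGenerator's
# object_type) to annotation class names, e.g.
bubble_toy: toy
```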
Once a node is added to the channel, it can be tested in Ana by adding it to a graph. Run the channel on the graph locally, as described in the README of the Ana code, with a breakpoint that dumps a blend file at some point after the node is executed. For example, dump a blend file in the render node code to make sure the objects are in the scene. The output location for this file needs to be a directory that is volume mounted to the dev container, so that it is accessible from the host OS. A good candidate is the data directory that contains the package data.
Testing a Node by Dumping the Scene to a Blender File
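A minimal helper for this, assuming bpy is available in the channel runtime; bpy.ops.wm.save_as_mainfile is Blender's save operator, and the default directory is only a hypothetical example of a volume-mounted path:

```python
import os

def scene_dump_path(out_dir):
    """Build the path of the debug dump file."""
    return os.path.join(out_dir, "baked_scene.blend")

def dump_scene(out_dir="packages/example/data"):  # hypothetical mounted dir
    """Save the current Blender scene for inspection. Works only inside
    Blender/ana, where the bpy module is available."""
    import bpy  # deferred import: bpy exists only in Blender's Python runtime
    filepath = scene_dump_path(out_dir)
    bpy.ops.wm.save_as_mainfile(filepath=filepath)
    return filepath
```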
Now you can open the file baked_scene.blend in Blender and confirm the objects are loaded.