Graph Best Practices
Last updated
The Rendered.ai platform exposes a rich interface for manipulating graphs to generate a wide variety of datasets. While generated images and videos have virtually infinite randomness, the graph parameters provide the control needed to address specific computer vision training and validation issues. These guidelines for graph editing help users get the most from Rendered.ai channels.
A basic graph performs three functions: object creation, scene composition, and scene rendering. Most channels are built around a set of objects of interest, so the graph will have one or more nodes that create those objects. Objects of interest are important because they are used to generate object annotations. Another node creates the scene and places the objects of interest in it. The scene may require other components, such as a background image and a sensor. Finally, a simulator node renders the output of the scene into a synthetic image.
The following is a basic graph that performs these three operations.
In the above graph, the Yo-yo node creates a yo-yo generator that is passed to the RandomPlacement node. The RandomPlacement node creates a scene and makes a cloud of yo-yos by calling the Yo-yo generator 25 times and placing the objects randomly in the scene. The drop object node takes the yo-yos and drops them into a container that sits on a floor. The RenderNode then renders the scene.
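The node chain above can be sketched as plain Python functions to show how data flows through it. This is an illustrative sketch only: the real Rendered.ai graph wires nodes together through the platform's graph editor, and the function names and object representation here are assumptions, not the actual channel code.

```python
import random

rng = random.Random(0)

def yoyo_generator():
    # Object-of-interest node: a factory that returns a new yo-yo each call.
    return {"type": "yo-yo"}

def random_placement(generator, count=25):
    # Scene-composition node: call the generator `count` times and place
    # each object at a random position, forming a cloud of yo-yos.
    return [dict(generator(), position=(rng.random(), rng.random(), rng.random()))
            for _ in range(count)]

def drop_objects(objects):
    # Drop node: drop the objects into a container on the floor, here
    # approximated by zeroing each object's height (a stand-in for physics).
    return [dict(o, position=(o["position"][0], o["position"][1], 0.0))
            for o in objects]

def render(scene):
    # Simulator node: render the composed scene into a synthetic image
    # (here, just report what would be rendered).
    return f"rendered {len(scene)} objects"

image = render(drop_objects(random_placement(yoyo_generator)))
```

Chaining the functions mirrors the left-to-right wiring of the graph: each node consumes the output of the previous one.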
Though not required, channels developed by Rendered.ai use generators for objects. An object generator is a factory that provides a new instance of an object each time the code requests one. The Ana generator class has a weight that can be used by a placement node. By using object generators instead of manually adding objects, a scene can be built procedurally based on the structure of the graph.
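The factory-with-weight idea can be shown in a minimal sketch. The class and function names below are hypothetical illustrations of the pattern, not the actual Ana generator API; the real class would instantiate 3D assets rather than dicts.

```python
import random

class ObjectGenerator:
    """Hypothetical object generator: a factory with a placement weight."""

    def __init__(self, name, weight=1.0):
        self.name = name
        self.weight = weight
        self.count = 0

    def exec(self):
        # Each call produces a fresh object instance.
        self.count += 1
        return {"type": self.name, "instance": self.count}

def place_randomly(generators, n, seed=0):
    # Weighted choice among generators, invoked once per object placed,
    # so a scene is built procedurally from whatever generators are wired in.
    rng = random.Random(seed)
    weights = [g.weight for g in generators]
    scene = []
    for _ in range(n):
        gen = rng.choices(generators, weights=weights)[0]
        obj = gen.exec()
        obj["position"] = (rng.uniform(-1, 1), rng.uniform(-1, 1))
        scene.append(obj)
    return scene

yoyos = ObjectGenerator("yo-yo", weight=1.0)
scene = place_randomly([yoyos], n=25)
```

With several generators of different weights, the placement node would draw objects of each type in proportion to their weights, which is why a weight lives on the generator rather than on individual objects.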
Object modifiers are generators that have children and make changes to the objects those child generators produce. For example, the ColorVariation modifier adds a call to the color method to its children.
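The modifier pattern can be sketched as a generator that wraps child generators and post-processes whatever they produce. This is an assumed illustration of the concept; the class names and the `exec`/`color` details are not the actual Ana ColorVariation implementation.

```python
import random

class YoyoGenerator:
    """Hypothetical child generator that produces plain yo-yo objects."""

    def exec(self):
        return {"type": "yo-yo"}

class ColorVariation:
    """Hypothetical modifier: a generator whose children do the creating,
    while the modifier applies a color change to each produced object."""

    def __init__(self, children, colors, seed=0):
        self.children = children
        self.colors = colors
        self._rng = random.Random(seed)

    def exec(self):
        # Delegate creation to a child generator, then modify the result.
        child = self._rng.choice(self.children)
        obj = child.exec()
        obj["color"] = self._rng.choice(self.colors)
        return obj

varied = ColorVariation([YoyoGenerator()], colors=["red", "green", "blue"])
obj = varied.exec()
```

Because a modifier is itself a generator, it can be wired into a placement node anywhere a plain generator could be, and modifiers can nest to stack variations.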