DIRSIG Channel
Last updated
The DIRSIG Rendered.ai Channel is a public-source example of a Rendered.ai synthetic data application built on the DIRSIG simulation engine. The channel provides an easy-to-use UX for controlling scenario variations, including sensor setup and motion. Running DIRSIG on Rendered.ai maximizes user control over context and object variation when generating synthetic data for computer vision model training and evaluation.
By accessing the source code, synthetic data engineers can learn how to add custom randomization and capture associated metadata. After reviewing the source, technicians should be able to estimate the level of effort required to create custom Rendered.ai channels.
Source Code: Rendered-ai/dirsig-channel
As described in the “Creating and Using Graphs” page, Rendered.ai graphs are specific structures of nodes that determine how the scenario is assembled and the image is rendered.
For DIRSIG channels, these graphs stochastically generate the DIRSIG configuration files using the DIRSIG File Maker library.
DIRSIG File Maker: dirsig_public / dirsig-file-maker
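To make the idea of stochastic configuration generation concrete, here is a minimal sketch of how a graph node might sample scenario parameters and assemble a config dictionary for a file-maker step to serialize. The function names, key names, and parameter ranges are illustrative assumptions, not the real DIRSIG File Maker API or the DIRSIG input schema.

```python
import random

def sample_sun_angles(rng, zenith_range=(0.0, 60.0), azimuth_range=(0.0, 360.0)):
    """Sample solar geometry for one run; the ranges are illustrative defaults."""
    return {
        "zenith_deg": rng.uniform(*zenith_range),
        "azimuth_deg": rng.uniform(*azimuth_range),
    }

def build_atmosphere_config(rng):
    """Assemble a dict that a file-maker step could serialize to a DIRSIG
    input file. The keys here are placeholders, not the real DIRSIG schema."""
    return {"atmosphere": {"sun": sample_sun_angles(rng)}}

# Seeding the generator per run keeps each dataset reproducible.
rng = random.Random(42)
config = build_atmosphere_config(rng)
```

Seeding per run, as above, is what lets the same graph regenerate an identical dataset when needed.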
The workspace provided by the content code has several graphs ready to use. Several of these, like “Worldview 3” and “SkySat”, are starting points for generating data from specific sensors. These place a resolution target in the scene and point the Earth observation sensor at that location. The “Drone” and “Custom Camera” graphs are starting points for generating data with cameras placed in close proximity to objects. The remaining graphs demonstrate how to place objects in specific locations, generate clusters of objects, or add motion to objects. The details are in the description of each graph.
Additionally, Rendered.ai graphs can run on custom user data, including 3D models. Using “File Nodes”, workspace volumes allow users to add DIRSIG Bundles as objects in a graph for generating synthetic data. For example, users could add a different aircraft and use it in the Dynamic Object graph to see it flying in an urban scenario.
On Rendered.ai, several annotations are collected for objects of interest. In the GUI you can view some of the basic ones, like the 2D bounding box. Annotations can be converted to common formats in the GUI or with the anatools SDK, https://sdk.rendered.ai.
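As a rough illustration of what such a format conversion involves, the sketch below turns simple per-image bounding-box records into a COCO-style dictionary. The input record layout and field names are hypothetical, not the actual Rendered.ai annotation schema; in practice the anatools SDK handles this conversion for you.

```python
def to_coco(image_names, per_image_annotations):
    """Convert per-image bbox records into a COCO-style dict.

    Each record is assumed to look like:
        {"category": "aircraft", "bbox": [x0, y0, x1, y1]}  # corner format
    This input layout is hypothetical, not the Rendered.ai schema.
    """
    coco = {"images": [], "annotations": [], "categories": []}
    category_ids = {}
    ann_id = 1
    for img_id, (fname, anns) in enumerate(
        zip(image_names, per_image_annotations), start=1
    ):
        coco["images"].append({"id": img_id, "file_name": fname})
        for ann in anns:
            name = ann["category"]
            if name not in category_ids:
                category_ids[name] = len(category_ids) + 1
                coco["categories"].append({"id": category_ids[name], "name": name})
            x0, y0, x1, y1 = ann["bbox"]  # corners -> COCO [x, y, w, h]
            coco["annotations"].append({
                "id": ann_id,
                "image_id": img_id,
                "category_id": category_ids[name],
                "bbox": [x0, y0, x1 - x0, y1 - y0],
            })
            ann_id += 1
    return coco

coco = to_coco(
    ["run-0000.png"],
    [[{"category": "aircraft", "bbox": [10, 20, 110, 80]}]],
)
```

The main detail worth noting is the box convention: COCO stores `[x, y, width, height]`, so corner-format boxes must be converted rather than copied through.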
Metadata captures the channel's randomization for each run. Parameters provided by the user, or selected by one of the random-value nodes, can be stored in the dataset as metadata for each run. For example, in the DIRSIG channel, the location and rotation of the objects, or the positions of the sun and moon, could be stored in the metadata.
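A per-run metadata record of this kind can be sketched as a small JSON-serializable structure. The field names below (`seed`, `objects`, `sun`, and so on) are illustrative assumptions, not the DIRSIG channel's actual metadata schema.

```python
import json

def run_metadata(seed, objects, sun):
    """Bundle the randomized parameters for one run into a metadata record.
    Field names are illustrative, not the channel's actual schema."""
    return {
        "seed": seed,
        "objects": [
            {
                "name": obj["name"],
                "location": obj["location"],        # scene coordinates
                "rotation_deg": obj["rotation_deg"],
            }
            for obj in objects
        ],
        "sun": sun,  # e.g. azimuth/zenith sampled for this run
    }

meta = run_metadata(
    seed=7,
    objects=[{"name": "aircraft", "location": [12.5, -3.0, 0.0], "rotation_deg": 45.0}],
    sun={"azimuth_deg": 135.0, "zenith_deg": 30.0},
)
serialized = json.dumps(meta)  # stored alongside the rendered image
```

Keeping the record JSON-serializable means it can be written next to each rendered image and later joined with annotations when analyzing model performance against scenario parameters.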