Follow these steps to start your journey with the Rendered.ai web interface: create an account, configure a graph, run a job, and download the resulting dataset. After registering and signing in for the first time, you will be taken to the Landing page. The Landing page is where you can view recent work across organizations and quickly drill down into Organization-specific data.
Clicking the drop-down icon next to your organization gives quick access to the Workspaces, Channels, Volumes, GAN Models, and Annotation Maps that the organization has access to.
New accounts with Rendered.ai will want to customize their Organization. The Organization is where all workspaces are housed and where all members of your business can share their work, contribute to channels, and generate new datasets. Initially the organization is named after the user it was created for. To edit this name, go to the Organization Settings page. You can get there in one of two ways: the first is by clicking the Organization name on the left-hand side of the screen, then clicking the Settings button.
The second method is to click the User Icon in the top-right and select Organizations from the drop-down list.
The Organization Settings page has information about the organization: the name, organization ID and plan. It also contains information about limitations put on the organization and any workspace within the organization. We will edit our organization name by clicking on the pencil icon next to our organization name, entering the new name and clicking Save.
Now that we’ve customized our organization name, let’s take a look at what you can do within the organization.
Every new organization that is created within Rendered.ai is given a workspace. The workspace in your organization will depend on whether you entered a Content Code during the registration process. A workspace is a place to create graphs and generate datasets which may be shared within or outside your organization. You can create a new workspace by selecting the organization and clicking the green New Workspace button.
For this new workspace, we are going to name it Example and add the example channel to it. You could optionally specify a Content Code which could include additional data. To learn more about Content Codes, see our application user guide on Content Codes.
For this workspace we have selected Rendered.ai’s Example channel. Channels define the simulation architecture (3D objects, backgrounds, sensors, sensor platforms, etc.) required to generate synthetic data. The Rendered.ai Example channel serves as a generic channel (toys generated in a box) that allows you to experiment and learn the platform. This channel corresponds to the public codebase on GitHub: Rendered-ai/example. New organizations are provided with the Example channel. Rendered.ai develops a number of other channels for various applications that are available on request. Channel customizations can be made directly to a cloned version of the Example codebase (see Ana Software Architecture), or provided as an Engineering service by Rendered.ai.
After creating the workspace, we can go into the workspace to generate synthetic data. Initially the workspace will be empty and look like the screenshot below. We will need to create a graph before we can generate the synthetic data.
A graph is a visual diagram that is based on a channel’s codebase, allowing you to view the objects and connections that exist in that channel, modify them within the diagram view, and stage the modified graph to produce datasets. Within the workspace view, create a new graph by clicking the New Graph button on the left-hand side of the screen.
Provide a graph name, a channel, and optionally a description for the graph, then click Create. You can also optionally upload a JSON or YAML representation of a graph, which places the nodes and links for you instead of starting from the default graph.
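An uploaded graph file is just a structured description of nodes and links. The sketch below, parsed with Python's json module, shows the general shape only; the real schema and node names are channel-specific, so treat every name here as a placeholder:

```python
import json

# Hypothetical skeleton of an uploaded graph description. The actual schema
# is defined by the channel; "Toy", "Render", and the field names here are
# illustrative placeholders, not the Example channel's real node names.
graph_text = """
{
  "nodes": [
    {"name": "Toy", "nodeClass": "Object", "values": {}},
    {"name": "Render", "nodeClass": "Renderer", "values": {"size": 512}}
  ],
  "links": [
    {"from": "Toy", "to": "Render"}
  ]
}
"""

graph = json.loads(graph_text)
print([n["name"] for n in graph["nodes"]])
```

The same structure could equally be expressed in YAML; either way, the upload pre-populates the node-edge diagram instead of starting from the channel's default graph.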
Once created, the graph can be viewed as a node-edge diagram in the graph viewer. You can learn more about Creating and Using Graphs and Graph Best Practices by following these links.
Clicking the Preview button in the top right of the screen renders a sample image of the provided graph configuration, allowing you to ensure your requirements are met before staging the graph for production.
Below is an image generated by previewing the default graph on the Example channel.
Staging Graphs within the Jobs Manager
Once the graph produces satisfactory output, it can be staged by clicking the Stage button at the top right of the screen.
Staging a graph adds an item to the Staged Graphs section of the Jobs Manager.
When configuring a dataset job, you can give the output dataset a name and description, specify the number of images to generate, and set a specific seed to initialize the random number generators so that results can be matched across multiple runs if desired. You can also assign the job a priority if multiple jobs are being run at once. To access these settings, click the down arrow to the right of the staged graph name. Clicking the Run button launches a new dataset job.
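The seed option behaves like any pseudo-random seed: initializing a generator with the same seed reproduces the same draws, which is what makes two runs comparable. A generic Python illustration (not the platform's actual generator):

```python
import random

def draw_samples(seed, n=5):
    """Draw n pseudo-random integers from a generator initialized with seed."""
    rng = random.Random(seed)
    return [rng.randint(0, 100) for _ in range(n)]

# The same seed reproduces the same sequence, so two job runs can be matched.
assert draw_samples(42) == draw_samples(42)
# A different seed generally yields a different sequence of scene variations.
print(draw_samples(42))
```

In a dataset job the draws control scene variation (object placement, counts, and so on), so re-running a staged graph with the same seed and image count reproduces the same dataset.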
Once the dataset job has started, you can view more detailed status of the job by pressing the downward-facing arrow on the right-hand side of the job.
This shows average time per image, and gives an estimated end time for the run. It also provides information about compute instances being used to process the job.
Once the job is complete, you can find the dataset in the Datasets Library section of the workspace.
From here, you can view more information about the dataset. You can download a compressed version of the dataset and begin using the outputs to train your AI. To learn more about Datasets, reference our guide Creating and Using Datasets. You can also use services to learn more about the dataset, compare datasets or generate new annotations.
Now that you have set up a new workspace, staged a graph, and created a dataset, we recommend you learn about Organization and Workspace resources.
Here are a few definitions to help you understand the components and patterns used in Rendered.ai.
A Platform as a Service (PaaS) is a category of cloud computing services that allows users to provision, instantiate, run, and manage a modular bundle comprising a computing platform and one or more applications, without the complexity of building and maintaining the infrastructure typically associated with developing and launching the application(s).
Rendered.ai is a Platform as a Service for data scientists, data engineers, and developers who need to create and deploy unlimited, customized synthetic data pipelines for machine learning and artificial intelligence workflows. The benefits of the platform are reducing expense, closing data gaps, overcoming bias, and driving better labeling, security, and privacy outcomes compared with the use of real-world data.
An Organization is a billable entity and a way of segmenting work, either to enable collaboration or to isolate it for security purposes. The Organization is fundamentally a collaboration tool. A subscription to Rendered.ai typically grants the customer access to one Organization.
A Workspace is a container for organizing work related to one set of projects or applications. Workspaces may be used as a collaboration device in that Guest users can be invited to a Workspace and will then not have access to any other part of your Organization. Your Workspace shows recent Graphs, recent Jobs, and recent Datasets you have worked on.
A Channel is a container for Graphs, Packages (sensors and application-specific requirements), and code that is used to define the universe of possible synthetic output for a particular application. For example, Channels may represent synthetic data generation as diverse as microscopy video or satellite-based SAR data acquisition. All of the components of a Channel together define the set of capabilities that solve a specific synthetic generation use case.
A Graph is a visual representation of the elements (capabilities, objects, modifiers) and their relationships that compose an application. Jobs are created from Staged Graphs to create synthetic data.
Through both coding and visualization in the Rendered.ai user interface, Graphs are designed as Node-Edge diagrams. These diagrams allow the user to engineer the linkages between objects, modifiers, and other Channel components to design synthetic datasets.
Nodes are either objects, capabilities or modifiers and will appear in a Graph as boxes with a name and properties that can be set in the Graph interface.
Some Nodes may be described as 'Objects,' indicating that they are simulated physical objects that will be used in synthetic images. Other Nodes may be referred to as 'Modifiers,' indicating that they change or impact image generation during processing of the Graph.
For example, Nodes may represent types of assets to be placed in a scene, a digital sensor model, a renderer, post-processing imagery filters, annotation generators, and much more. Node capabilities are channel dependent.
Edges are the connectors between Nodes in a Graph. In the visual interface, an edge shows that a particular output of one Node can be used to populate a parameter of, or be processed by, another Node.
A Staged Graph is a Graph that has been queued to enable Members of the Organization to run Jobs that generate synthetic data.
A Job is a processing effort, run on the Rendered.ai high-performance compute environment, that generates a specific quantity of synthetic images or video.
A Dataset is a collection of output images or video, masks, annotations, and other metadata created by the execution of a Job. Different Channels may produce different components depending on the specific application and problem domain of the Channel. Some sensor models, for example, may not lend themselves to easy creation of masks (images with pixel values capturing the location of scene assets).
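To make the mask idea concrete, here is a toy sketch: a mask is an image whose pixel values identify which scene asset occupies each pixel. The asset IDs below are hypothetical, not the platform's actual encoding:

```python
# A toy mask: a 2-D grid where each value identifies the asset occupying that
# pixel (0 = background; 1 and 2 = two hypothetical scene assets).
mask = [
    [0, 0, 1, 1],
    [0, 2, 1, 1],
    [0, 2, 2, 0],
]

def pixel_count(mask, asset_id):
    """Count how many pixels belong to a given asset."""
    return sum(row.count(asset_id) for row in mask)

print(pixel_count(mask, 1))  # pixels covered by asset 1
print(pixel_count(mask, 0))  # background pixels
```

Downstream tools use this per-pixel labeling to derive bounding boxes, segmentation annotations, and per-class statistics for training.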
Common functions useful across multiple channels.
Rendered.ai Volumes store static assets for use in synthetic data generation.
Package Volumes, a.k.a. channel volumes, are deployed with Rendered.ai channels within the platform. These volumes are maintained by channel developers and are tested and provided as exemplars of the channel's intended use.
Workspace volumes are associated with user-managed workspaces and are created and maintained by users of Rendered.ai. These volumes are dynamically updated for increased fidelity of generated datasets.
The Engine is the underlying set of capabilities shared by all Channels and is accessible through either the SDK or the Rendered.ai web interface. The Engine handles cloud compute management, configuration management, and various other functions.
Loosely coupled services that are available to be executed as part of a Channel: Preview, Annotations, Analytics and GAN are examples of the Platform’s microservices.
The service trains two neural networks: one translates synthetic data into the style of real data, and the other translates real data into the style of synthetic data. By enforcing self-consistency between the two translations, the service allows them to converge on a matched style.
anatools is Rendered.ai’s SDK for connecting to the Platform API. It is the toolkit for third-party and Rendered.ai developers to use in producing applications on the Platform.
A Channel uses a domain-specific sensor and other capabilities to support the application.
An application collects the Channel elements in executable code in a Container to produce synthetic data for a specific end-user use case.
Code that lives in a Docker container. It defines nodes, node capabilities, and node procedures, and how they interconnect. These libraries may include packages that describe sensors and other components of the channel.
Content codes are the fastest way to get started on the Rendered.ai platform. By using a content code you will be given a new workspace that has predefined content that can include example graphs, datasets, and analysis. Using a content code is easy, and there are two ways to do it depending on whether or not you already have a Rendered.ai account.
Sign up for an account here: https://rendered.ai/free-trial/
After a brief demographics survey, you will receive an invitation email to join the platform. That email will redirect you to the Platform’s registration page that looks similar to the image below.
New users can specify a content code when filling in the registration information. In the example below we are using the TOYBOX content code which gives us access to a channel that simulates toys being dropped into a toybox.
After filling in the information, click the Sign up button. You will receive an email for verification, after which you will be able to sign into the platform by navigating to https://deckard.rendered.ai/sign-in and entering the same email and password.
After you log in, you will see the content code workspace in your new organization.
The second way of using a content code is to create a new workspace with a content code. To do this, click on your organization and then click on the New Workspace button.
After clicking on the New Workspace button, you will be shown a new dialog where you can name the new workspace, select channels or enter a content code. For this example we will just name the new workspace and provide the same content code.
After clicking on the Create button, the new workspace will be created within the organization. It may take a bit for the new workspace to sync. Refresh the screen to see when it is done. The Workspaces table should now show the new content code workspace.
Now you can open up the workspace and start exploring!
anatools is Rendered.ai’s SDK for connecting to the Rendered.ai Platform.
Follow the steps on PyPI to get the SDK installed on your machine using pip. You must have an active Rendered.ai account to use the SDK. Your Rendered.ai account credentials can be used both for the SDK and the Rendered.ai web interface.
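Assuming a working Python environment, installation is a single pip command (the package is published on PyPI as anatools):

```shell
pip install anatools
```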
Start the Python interpreter, create a client, and log in to Rendered.ai. In this example we instantiate a client with no workspace or environment variables set, so it uses our default workspace. To access the tool, you will need your email and password for https://deckard.rendered.ai.
You must first generate an API key in order to log in with one. You can generate as many API keys as you like, with custom expiration dates, in order to bypass the email login. The context for a login session via an API key is pre-set to the Organization the key was created for.
Run create_api_key with a name, an optional expiration date, and the Organization ID for the context. Make sure to save the resulting output, which is your new API key; it will only be shown once.
Now you can log in with the API key in one of two ways:
1. API Key Param
Set APIKey to your API Key on client instantiation:
2. Environment Variable: RENDEREDAI_API_KEY
Export the key string to the RENDEREDAI_API_KEY environment variable in your active command line, or save it in your OS environment variables. If no key is detected, you will be prompted to enter your email/password.
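The lookup order can be sketched generically: an explicit APIKey argument takes precedence, then the environment variable, and finally the email/password prompt. The helper below is a stand-in illustrating that fallback logic, not the SDK's actual code:

```python
import os

# Stand-in for illustration only: mirrors the assumed lookup order
# (explicit APIKey argument first, then the RENDEREDAI_API_KEY env var).
os.environ["RENDEREDAI_API_KEY"] = "example-key-not-real"

def resolve_api_key(explicit_key=None):
    """Return the explicit key if given, else fall back to the environment."""
    return explicit_key or os.environ.get("RENDEREDAI_API_KEY")

print(resolve_api_key())             # env var is used when no argument is passed
print(resolve_api_key("param-key"))  # an explicit key takes precedence
```

If neither source yields a key, the SDK falls back to prompting for your email and password, as described above.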
anatools SDK
After logging into Rendered.ai with the SDK, you can:
Browse and Manage your Organizations and Workspaces
Deploy and Manage your Organization Channel(s)
Upload and Manage Volume Data required for your Channels
Set Default Graph for a Managed Channel
Get information about the Channels that are available to you
Create, Manage, and Download Staged Graphs
Generate, Manage, and Download Synthetic Datasets
Generate, Manage, and Download Annotations locally or in the Platform Cloud
Generate and View Analytics
Generate GAN-based datasets
Generate UMAP comparison between datasets
Detailed documentation can be found in the SDK Developer Guide.