Annotations describe what is in the imagery, usually in a format that a machine learning algorithm can ingest. The annotations generated by Rendered.ai channels use a proprietary format that must be converted before a machine learning algorithm can ingest them. For this reason, Rendered.ai provides a service that converts our annotations to several common formats, including:
The Common Objects in Context (COCO) dataset was created for object detection, segmentation, and captioning. Its data format is described on this page; the Annotation service generates the Object Detection format.
The PASCAL VOC challenge provides standardized datasets for comparing model performance.
The You Only Look Once (YOLO) object detection system.
The KITTI format for image classification; the input label file is described here.
Object detection using MXNet; the inputs are described in Train with the Image Format.
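To give a sense of what the converted output looks like, here is a minimal sketch of a COCO Object Detection file. The file names, IDs, and bounding-box values are invented for illustration and do not come from a real Rendered.ai export; the top-level structure (images, categories, annotations) follows the public COCO specification.

```python
import json

# Illustrative COCO Object Detection skeleton (values are made up).
coco = {
    "images": [
        {"id": 1, "file_name": "000000.png", "width": 512, "height": 512}
    ],
    "categories": [
        {"id": 0, "name": "Cubes", "supercategory": "none"},
        {"id": 99, "name": "Toys", "supercategory": "none"},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 99,          # this object was mapped to Toys
            "bbox": [100.0, 150.0, 64.0, 32.0],  # [x, y, width, height]
            "area": 2048.0,
            "iscrowd": 0,
        }
    ],
}

print(json.dumps(coco, indent=2))
```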
During the conversion of annotations, we offer a way to map objects to specific classes. This is helpful when your imagery contains several objects of one type that you want grouped into a single class. For example, you may have imagery with Ford Focus, Honda Accord, and Toyota Camry objects that you want classified as Car.
Annotation Maps do this for you. Below is an example Annotation Map file that classifies the objects in the example channel as Cubes or Toys.
classes:
  0: [none, Cubes]
  99: [none, Toys]
properties:
  obj['type'] == 'YoYo': 99
  obj['type'] == 'BubbleBottle': 99
  obj['type'] == 'Skateboard': 99
  obj['type'] == 'Cube': 0
  obj['type'] == 'Mix Cube': 0
  obj['type'] == 'PlayDough': 99
The first section, classes, is a set of key, value pairs where each key is an integer and each value is a list of classes. The value is a list because some annotation formats, such as COCO, support an object hierarchy with super-classes.
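As a sketch of how such a hierarchy could carry over into COCO output, the snippet below translates a classes entry into a COCO category. This is an illustrative assumption about the mapping (last list element as the class name, the preceding element as the supercategory), not the documented conversion logic:

```python
def to_coco_category(class_id, hierarchy):
    # Assumption: the last element is the class name; any earlier
    # elements are super-classes, e.g. 0: [none, Cubes].
    *supers, name = hierarchy
    return {
        "id": class_id,
        "name": name,
        "supercategory": supers[-1] if supers else "none",
    }

print(to_coco_category(0, ["none", "Cubes"]))
# → {'id': 0, 'name': 'Cubes', 'supercategory': 'none'}
```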
The second section, properties, is also a set of key, value pairs. Here each key is a Python eval string that evaluates to either True or False, and each value is a class number defined in classes. The conversion uses the metadata file to determine the properties of each object; if an eval statement is True for an object, that object is classified with the assigned class.
Let's say we have a Skateboard type object and are using the above Annotation Map to generate COCO annotations. The first two keys in the properties section evaluate to False for the skateboard object, but the third evaluates to True, so the skateboard object is assigned to class 99 (Toys).
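That matching process can be sketched in a few lines of Python. This is an illustrative re-implementation, not the actual Rendered.ai conversion code: it assumes each object's metadata is available as a dict bound to the name obj, and that the first rule to evaluate True wins.

```python
# Annotation Map contents from the example above.
classes = {0: ["none", "Cubes"], 99: ["none", "Toys"]}
properties = {
    "obj['type'] == 'YoYo'": 99,
    "obj['type'] == 'BubbleBottle'": 99,
    "obj['type'] == 'Skateboard'": 99,
    "obj['type'] == 'Cube'": 0,
    "obj['type'] == 'Mix Cube'": 0,
    "obj['type'] == 'PlayDough'": 99,
}

def classify(obj):
    """Return the class number of the first property rule that matches."""
    for expr, class_id in properties.items():
        if eval(expr, {"obj": obj}):  # each key is a Python eval string
            return class_id
    return None  # no rule matched this object

skateboard = {"type": "Skateboard"}
print(classify(skateboard))                  # → 99
print(classes[classify(skateboard)][-1])     # → Toys
```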
Creating Annotation Maps
New annotation maps can be created on the platform by navigating to an organization’s Annotation Map tab and clicking the New Annotation Map button.
After clicking the button, a dialog appears where you enter the name of the Annotation Map and choose a file to upload. Remember, your annotation map must be in YAML format with the classes and properties keys. You can optionally add a description.
When you are done, click Create to create your new Annotation Map. The new Annotation Map will show up in your organization's Annotation Maps table.
After creating the Annotation Map, we need to add it to our workspace before we can use it to create annotations for our dataset. To do this, we'll go back into our workspace, click the three-button icon, and then select Resources.
After this, we’ll add the Cubes Annotation Map to the Included column of our workspace resources and finally click Save.
Now we can go back to our dataset to generate the new annotations. In this example, we select our latest dataset and then click the + icon next to Annotations in the dataset.
Next, we will select the annotation format and annotation map from a list. We start the annotation job by clicking the Create button.
After we click the Create button, a new entry appears in the dataset's Annotations section, showing a status symbol, the COCO format, and the Cubes map.
When the status symbol goes away, the job is done and we can download our COCO-formatted annotations. Click the download icon next to the dataset's Annotations entry to download the new annotation file(s).
All dataset services share the same status symbols to indicate whether a job is running, complete, or failed.
No symbol means that the service job is complete and ready to use.
The hourglass symbol means that the job is running. It will remain this way until the job has either completed or failed.
The error symbol means that the job encountered an issue. You can click the symbol to fetch a log of the service to help determine what caused the issue.