Using the Simulator Template
The simulator template controls the simulation parameters and specifies what data is produced when you process your scenario. It is worth understanding these settings and the outputs they generate. Simulator templates are created and edited from the chameleon menu.
The simulation settings allow you to set the time, date, and location where the scenario takes place. These settings change the natural lighting of the scene by altering the height and position of the sun.
The location is set using latitude and longitude. You can use https://www.latlong.net to convert your desired location into this format.
The visible image is output in uncompressed PNG format. Each image has a timestamp encoded into the file name.
The segment image is output in uncompressed PNG format. The colors used for segmentation are automatically set.
This is a raw byte array with twelve bytes per pixel, representing three single-precision floating-point values: x, y, and z. Together these form the surface normal vector of the object visible at that pixel location.
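A buffer in this layout can be decoded with NumPy. This is a minimal sketch, not part of the chameleon tooling: the resolution, byte order (native assumed), and the synthetic buffer are all assumptions for illustration.

```python
import numpy as np

# Assumed output resolution -- substitute your scenario's camera settings.
WIDTH, HEIGHT = 4, 2

def load_normals(raw: bytes, width: int, height: int) -> np.ndarray:
    """Interpret 12 bytes per pixel as three float32 values (x, y, z)."""
    return np.frombuffer(raw, dtype=np.float32).reshape(height, width, 3)

# Synthetic buffer standing in for a real output file: every pixel's
# normal points along +z.
buf = np.tile(np.array([0.0, 0.0, 1.0], dtype=np.float32),
              WIDTH * HEIGHT).tobytes()
normals = load_normals(buf, WIDTH, HEIGHT)
```

`normals[y, x]` then gives the unit normal vector at each pixel.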
This is a raw byte array with four bytes per pixel in single-precision floating-point format, representing the world-space distance, in meters, from the camera to the object visible at that pixel position.
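Decoding follows the same pattern as the normals buffer. Again a hedged sketch: the resolution and the synthetic buffer are assumptions, and native byte order is assumed.

```python
import numpy as np

WIDTH, HEIGHT = 4, 2  # assumed resolution

def load_depth(raw: bytes, width: int, height: int) -> np.ndarray:
    """Interpret 4 bytes per pixel as a float32 distance in meters."""
    return np.frombuffer(raw, dtype=np.float32).reshape(height, width)

# Synthetic buffer: every pixel 2.5 m from the camera.
buf = np.full(WIDTH * HEIGHT, 2.5, dtype=np.float32).tobytes()
depth = load_depth(buf, WIDTH, HEIGHT)
```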
This is a raw byte array with four bytes per pixel in UINT format. Each object in the scene is given a unique ID number which persists for the duration of the simulation. The ID number is written into every pixel where the object is visible, forming an instance segmentation mask. The details of an object can be retrieved by looking up its ID number in the annotations file for that simulation run.
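The mask can be decoded the same way, reading each pixel as an unsigned 32-bit integer. This sketch assumes the resolution, a 32-bit UINT width, native byte order, and a synthetic buffer; the IDs recovered would then be looked up in the annotations file.

```python
import numpy as np

WIDTH, HEIGHT = 4, 2  # assumed resolution

def load_instance_ids(raw: bytes, width: int, height: int) -> np.ndarray:
    """Interpret 4 bytes per pixel as a uint32 object ID."""
    return np.frombuffer(raw, dtype=np.uint32).reshape(height, width)

# Synthetic mask: three objects (IDs 0, 3, and 7) visible in the frame.
ids = np.array([[0, 0, 7, 7],
                [0, 3, 3, 7]], dtype=np.uint32)
mask = load_instance_ids(ids.tobytes(), WIDTH, HEIGHT)

# The set of IDs to look up in the annotations file:
visible_ids = set(np.unique(mask).tolist())
```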
Keypoints are provided for human pose estimation and tracking. There are three skeletons and a set of Dlib face points provided as standard.
In addition to the skeleton points, chameleon also tracks the head and eyeball poses, placing these outputs in the annotations file.
The minimal skeleton shows the keypoints of the top of the head and the bottom of the feet.
You can learn more about what data is produced here.
The basic skeleton gives more joint keypoints and basic facial keypoints (eyes and nose).
You can learn more about what data is produced here.
The hand skeleton creates keypoints for every joint in the hand.
You can learn more about what data is produced here.
The face skeleton uses the Dlib 68 standard, which maps the jaw line, mouth, eyes, and eyebrows with 68 points.
You can learn more about what data is produced here.