Using the Simulator Template

(Fig.7) Simulator Template

simulator-sim-template.png

General
Template Name
Simulator timeout - the maximum time the simulation should run for
Output Settings
User data path - path relative to the Dataset location path set in zookeeper. Can be used to separate data into projects
Descriptor path - path that the output data will be saved to, relative to the User Data Path. Usually used to identify specific scenarios. Successive runs of a scenario will create incrementing versions of this path.
Pass Duration - how long each pass will take in simulation time.
Capture Settings
Capture screen size - all cameras in the simulator will output at this size
Frame rate - the in-simulation frame rate of the placed cameras
Delay capture by (seconds) - a delay that lets the simulation run before capture begins, so that action is already underway when recording starts
Simulation Settings
Simulation date - set this to run the simulation at a specified day of the year
Location - longitude and latitude. Combines with the Simulation date to set the length of the day and the angle of the sun.
Output Data Settings
Segment - check to output segmentation images
Normal - check to output normal images
Distance - check to output distance images
Masks - check to output mask images
Min Mask Size - size of the smallest mask to be captured
Keypoint Settings
Minimal Skeleton - shows top and bottom keypoints on humans
Basic Skeleton - tracks the body and basic facial features
Hand Skeleton - captures the position of each joint in a human's hands
Face Skeleton - uses the Dlib 68 standard

Simulator template

The simulator template controls the simulation parameters and specifies what data is produced when you process your scenario. We advise you to familiarise yourself with the settings here and the data they produce. Simulator templates are created and edited from the chameleon menu.

Simulation Settings

The simulation settings allow you to set the time, date, and location at which the scenario takes place. These settings change the natural lighting of the scene by altering the height and position of the sun.

Location

The location is set using longitude and latitude. You can use https://www.latlong.net to convert your desired location into this format.

Output Data Settings

Image

The visible image is output in uncompressed PNG format. Each image has a timestamp encoded into the file name.

Segment

The segment image is output in uncompressed PNG format. The colors used for segmentation are automatically set.

Image image_1920x1080_2024-04-19_11-14-55-86_0000.png
Segment segment_1920x1080_2024-04-19_11-14-55-86_0000.png
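The capture resolution, timestamp, and frame index can be recovered from a file name like those above. A minimal Python sketch, assuming the name layout shown in the examples (the field after the seconds, e.g. `-86`, is not documented here, so it is treated as an opaque sub-second counter; the function name is illustrative, not part of the simulator):

```python
import re
from datetime import datetime

# Pattern inferred from the sample names above, e.g.
#   image_1920x1080_2024-04-19_11-14-55-86_0000.png
# The field after the seconds ("-86") is assumed to be a sub-second
# counter; its exact unit is undocumented, so it is kept as an integer.
CAPTURE_NAME = re.compile(
    r"(?P<kind>[a-z]+)_(?P<w>\d+)x(?P<h>\d+)_"
    r"(?P<date>\d{4}-\d{2}-\d{2})_(?P<time>\d{2}-\d{2}-\d{2})-(?P<sub>\d+)_"
    r"(?P<frame>\d+)\.png$"
)

def parse_capture_name(name):
    """Split a capture file name into kind, size, timestamp and frame index."""
    m = CAPTURE_NAME.match(name)
    if m is None:
        raise ValueError(f"unrecognised capture name: {name!r}")
    return {
        "kind": m["kind"],
        "size": (int(m["w"]), int(m["h"])),
        "timestamp": datetime.strptime(
            f"{m['date']} {m['time']}", "%Y-%m-%d %H-%M-%S"
        ),
        "subsecond": int(m["sub"]),
        "frame": int(m["frame"]),
    }
```

This makes it easy to pair an image with its matching segment file, since both share the same timestamp and frame index.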

Normal

This is a raw byte array, twelve bytes per pixel, representing three single-precision floating-point values: x, y, z. These form the surface normal vector of the object visible at that pixel location.

Image image_1920x1080_2024-04-19_11-14-55-86_0000.png
Normal Normal.png
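A raw buffer like this can be loaded with NumPy. A sketch assuming the pixels are stored row-major and the floats are little-endian (the function name and arguments are illustrative, not part of the simulator):

```python
import numpy as np

def load_normals(path, width, height):
    """Load a raw normal buffer: three little-endian float32 values
    (x, y, z) per pixel, assumed to be stored row-major."""
    data = np.fromfile(path, dtype="<f4")
    if data.size != width * height * 3:
        raise ValueError("buffer size does not match the capture resolution")
    return data.reshape(height, width, 3)
```

Each `normals[row, col]` entry is then the surface normal vector at that pixel.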

Distance

This is a raw byte array, four bytes per pixel in single-precision floating-point format, representing the world-space distance in meters from the camera to the object visible at that pixel position.
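Reading the distance buffer follows the same pattern; a sketch assuming little-endian float32 values stored row-major (the function name is illustrative):

```python
import numpy as np

def load_distance(path, width, height):
    """Load a raw distance buffer: one little-endian float32 value
    per pixel, giving the camera-to-object distance in meters."""
    data = np.fromfile(path, dtype="<f4")
    if data.size != width * height:
        raise ValueError("buffer size does not match the capture resolution")
    return data.reshape(height, width)
```

For example, `load_distance(path, 1920, 1080).min()` would give the distance to the closest visible surface.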

Masks

This is a raw byte array, four bytes per pixel in unsigned integer (UINT) format. Each object in the scene is given a unique ID number which persists for the duration of the simulation. The ID number is written into every pixel where the object is visible, forming an instance segmentation mask. The details of the object can be retrieved by looking up its ID number in the annotations file for that simulation run.
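The mask buffer can be read the same way; a sketch assuming little-endian 32-bit unsigned integers stored row-major (the annotations lookup itself depends on the annotations file format and is not shown, and the function names are illustrative):

```python
import numpy as np

def load_masks(path, width, height):
    """Load a raw mask buffer: one little-endian uint32 object ID per pixel."""
    data = np.fromfile(path, dtype="<u4")
    if data.size != width * height:
        raise ValueError("buffer size does not match the capture resolution")
    return data.reshape(height, width)

def visible_object_ids(mask):
    """The ID of every object visible in this frame."""
    return np.unique(mask)

def instance_mask(mask, object_id):
    """Boolean mask selecting the pixels covered by one object."""
    return mask == object_id
```

The IDs returned by `visible_object_ids` are the keys to look up in the annotations file for that simulation run.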

Keypoint Settings

Keypoints are provided for human pose estimation and tracking. There are three skeletons and a set of Dlib face points provided as standard.
In addition to the skeleton points, chameleon also tracks the head and eyeball poses, placing these outputs in the annotations file.

Minimal skeleton

The minimal skeleton shows the keypoints of the top of the head and the bottom of the feet.
You can learn more about what data is produced here.
Minimal Skeleton.png

Basic skeleton

The basic skeleton gives more joint keypoints and basic facial keypoints (eyes and nose).
You can learn more about what data is produced here.
Basic Skeleton.png

Hand skeleton

The hand skeleton creates keypoints for every joint in the hand.
You can learn more about what data is produced here.
Hands-keypoints.png

Face skeleton

The face skeleton uses the Dlib 68 standard, which uses 68 points to map the jaw line, mouth, eyes and eyebrows.
You can learn more about what data is produced here.
Face-keypoints.png