Using the Simulator Template

(Fig.7) Simulator Template

Simulator - Sim Template.png

General
Template Name
Simulator timeout - the maximum time the simulation should run for
Output Settings
User data path - path relative to the Output path set in the config file.
Descriptor path - path that the output data will be saved to, relative to the User Data Path
Pass Duration - how long each pass will take in simulation time.
Capture Settings
Capture screen size - all cameras in the simulator will output at this size
Frame rate - the in-simulation frame rate of the placed cameras
Interval - the number of frames between each capture
Delay capture by (seconds) - a delay before capturing begins, allowing the simulation to run so that action is underway when capture starts.
Simulation Settings
Simulation date
Location - longitude and latitude
Output Data Settings
Segment - check to output segmentation images
Normal - check to output normal images
Distance - check to output distance images
LiDAR - check to output LiDAR data
Masks - check to output mask images
Min Mask Size - size of the smallest mask to be captured
Keypoint Settings
Minimal Skeleton - shows top and bottom keypoints on humans
Basic Skeleton - tracks the body and basic facial features
Hand Skeleton - captures the position of each joint in a human's hands
Face Skeleton - uses the Dlib 68 standard

Simulator template

The simulator template controls what data is produced when you process your scenario. We advise you to familiarise yourself with the settings here and what they produce. You can create and edit simulator templates in both the Editor and the Simulator.

Simulation Settings

The simulation settings allow you to set the time, date, and location the scenario takes place. These settings change the natural lighting of the scene by altering the height and position of the sun.

Location

The location is set using longitude and latitude. You can use https://www.latlong.net to convert your desired location into this format.
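If your source location is in degrees/minutes/seconds rather than the decimal form the Location field expects, the conversion is straightforward. A minimal sketch (the function name and example coordinates are illustrative, not part of the simulator):

```python
def dms_to_decimal(degrees, minutes, seconds, direction):
    """Convert degrees/minutes/seconds to decimal longitude/latitude.
    `direction` is one of N, S, E, W; S and W give negative values."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if direction in ("S", "W") else value

# Example: 51° 30' 26" N, 0° 7' 39" W (central London)
lat = dms_to_decimal(51, 30, 26, "N")
lon = dms_to_decimal(0, 7, 39, "W")
print(round(lat, 4), round(lon, 4))
```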

Output Data Settings

Segment

The segment image is created from the captured masks. The colours used for segmentation are assigned automatically but can be changed in the Editor scene view (Fig.3 - 15).

Image image_1920x1080_2024-04-19_11-14-55-86_0000.png
Segment segment_1920x1080_2024-04-19_11-14-55-86_0000.png

Normal

The world normal RGB is a 32-bit output with 8 bits per channel. The normal of each pixel's surface is encoded by mapping x to R, y to G, and z to B, so the direction a pixel is facing is represented by the colour shown.

Image image_1920x1080_2024-04-19_11-14-55-86_0000.png
Normal Normal.png
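The channel mapping above can be inverted to recover an approximate surface direction from a pixel's colour. This sketch assumes the common normal-map convention that each 8-bit channel spans [-1, 1]; the exact encoding used by the simulator is not stated here, so treat this as illustrative:

```python
def decode_normal(r, g, b):
    """Recover an (x, y, z) surface normal from an 8-bit RGB pixel,
    assuming each channel maps [0, 255] -> [-1.0, 1.0] with
    R = x, G = y, B = z as described above."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))

# A near-blue pixel (mid R/G, full B) faces roughly along +z:
print(decode_normal(127, 127, 255))
```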

Distance

This output is in Hierarchical Data Format (.hdf): a byte array of 32-bit floating-point values, one per pixel, giving the distance between that pixel's surface and the camera. Visualised in greyscale, regions become darker the further away they are.
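A buffer of 32-bit floats like this can be unpacked with the standard library. The little-endian byte order and row-major pixel layout below are assumptions for illustration, not documented behaviour of the simulator:

```python
import struct

def read_distance_map(raw_bytes, width, height):
    """Unpack a byte array of 32-bit floats into a row-major grid of
    per-pixel camera distances (hypothetical layout and byte order)."""
    count = width * height
    values = struct.unpack("<%df" % count, raw_bytes[:count * 4])
    return [values[y * width:(y + 1) * width] for y in range(height)]

# A 2x2 example buffer: distances 1.0, 2.0, 3.0, 4.0 metres
buf = struct.pack("<4f", 1.0, 2.0, 3.0, 4.0)
print(read_distance_map(buf, 2, 2))
```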

LiDAR

If your scenario contains a LiDAR camera, this setting must be turned on for the LiDAR data to be captured. You have a choice of how the data is output, which has to be set in the Editor: a JSON format (MLAS) or XYZ, which can be converted into images (below). You can learn more about using a LiDAR camera in a scenario on the page Editor - LiDAR.

Lidar-Supermarket-Preview.png
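If you choose the XYZ output, the file can be read with a few lines of code. This sketch assumes the common plain-text XYZ layout of one whitespace-separated "x y z" triple per line; the simulator's exact column layout may differ:

```python
def parse_xyz(text):
    """Parse a plain XYZ point cloud: one 'x y z' triple per line.
    Extra columns (e.g. intensity), if present, are ignored."""
    points = []
    for line in text.strip().splitlines():
        parts = line.split()
        if len(parts) >= 3:
            points.append(tuple(float(p) for p in parts[:3]))
    return points

sample = "0.0 0.0 0.0\n1.5 2.0 -0.3\n"
print(parse_xyz(sample))
```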

Masks

Masks are used to create the segmentation images, but masks are split into layers, so each image contains just one element.
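The relationship between the two outputs can be sketched as follows: each mask layer covers one element, and flattening the layers with their assigned colours yields the segmentation image. This is an illustrative reconstruction, not the simulator's actual implementation:

```python
def compose_segmentation(masks, colours, width, height):
    """Flatten per-element binary mask layers into one segmentation
    image, with later layers drawn over earlier ones. `masks` is a
    list of row-major 0/1 lists; `colours` gives each element's RGB."""
    image = [(0, 0, 0)] * (width * height)  # background starts black
    for mask, colour in zip(masks, colours):
        for i, covered in enumerate(mask):
            if covered:
                image[i] = colour
    return image

# Two 2x2 layers: element A covers the left column, B the bottom-right pixel
a = [1, 0, 1, 0]
b = [0, 0, 0, 1]
seg = compose_segmentation([a, b], [(255, 0, 0), (0, 255, 0)], 2, 2)
print(seg)
```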

Keypoint Settings

Keypoints are mainly used for tracking human positions.

Minimal skeleton

The minimal skeleton shows the keypoints of the top of the head and the bottom of the feet.
You can learn more about what data is produced here.
Minimal Skeleton.png

Basic skeleton

The basic skeleton gives more joint keypoints plus basic facial keypoints (eyes and nose).
You can learn more about what data is produced here.
Basic Skeleton.png

Hand skeleton

The hand skeleton creates keypoints for every joint in the hand.
You can learn more about what data is produced here.
Hands-keypoints.png

Face skeleton

The face skeleton uses the Dlib 68 standard, which uses 68 points to map the jaw line, mouth, eyes and eyebrows.
You can learn more about what data is produced here.
Face-keypoints.png
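The 68-point standard groups its landmark indices by facial region, which is useful when consuming the keypoint data. The ranges below follow the conventional Dlib numbering (jaw 0-16, eyebrows 17-26, nose 27-35, eyes 36-47, mouth 48-67); the helper function is illustrative:

```python
# Conventional index ranges of the Dlib 68-point face landmark standard
FACE_68_REGIONS = {
    "jaw": range(0, 17),
    "right_eyebrow": range(17, 22),
    "left_eyebrow": range(22, 27),
    "nose": range(27, 36),
    "right_eye": range(36, 42),
    "left_eye": range(42, 48),
    "mouth": range(48, 68),
}

def region_of(index):
    """Return which facial region a landmark index belongs to."""
    for name, idxs in FACE_68_REGIONS.items():
        if index in idxs:
            return name
    raise ValueError("landmark index must be 0-67")

print(region_of(30))  # a nose point
```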