Using the Simulator Template
The simulator template controls what data is produced when you process your scenario. We advise you to familiarise yourself with these settings and the outputs they produce. You can create and edit simulator templates in both the Editor and the Simulator.
The simulation settings allow you to set the time, date, and location at which the scenario takes place. These settings change the natural lighting of the scene by altering the height and position of the sun.
The location is set using latitude and longitude. You can use https://www.latlong.net to convert your desired location into this format; for example, London is approximately 51.5072, -0.1276.
The segmentation image is created from the captured masks. The colours used for segmentation are assigned automatically but can be changed in the Editor scene view (Fig.3 - 15).
The world normal RGB output is a 32-bit image with 8 bits per channel. The surface normal at each pixel is mapped to colour: x to R, y to G, and z to B. This means the direction a pixel's surface is facing is represented by the colour shown.
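As an illustration, here is a minimal sketch of this mapping in Python, assuming the common convention that each normal component in [-1, 1] is remapped linearly to [0, 255] (the simulator's exact encoding may differ):

```python
import numpy as np

def normal_to_rgb(normals: np.ndarray) -> np.ndarray:
    """Map an (H, W, 3) array of unit surface normals to 8-bit RGB."""
    # Remap each component from [-1, 1] to [0, 255]: x -> R, y -> G, z -> B.
    return ((normals * 0.5 + 0.5) * 255).astype(np.uint8)

# A surface facing straight along +z encodes as a blue-dominant pixel.
up = np.array([[[0.0, 0.0, 1.0]]])
print(normal_to_rgb(up))  # [[[127 127 255]]]
```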
The depth output is in Hierarchical Data Format (.hdf). It is an array of 32-bit floating-point values, each giving the distance between the pixel's position and the camera. This is represented in greyscale: regions get darker the further away they are.
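A file like this can be inspected with h5py. The sketch below is only an example: the file name and the dataset key ("depth") are assumptions, so check your own .hdf output for the actual key:

```python
import h5py
import numpy as np
from PIL import Image

# File name and dataset key ("depth") are hypothetical; inspect your
# own .hdf output with h5py to find the actual key.
with h5py.File("frame_0001.hdf", "r") as f:
    depth = f["depth"][:]  # 32-bit float distance per pixel

# Normalise to [0, 1], then invert so distant regions appear darker,
# matching the description above.
normalised = (depth - depth.min()) / (depth.max() - depth.min())
grey = ((1.0 - normalised) * 255).astype(np.uint8)

Image.fromarray(grey, mode="L").save("depth_preview.png")
```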
If your scenario contains a LiDAR camera, this setting will need to be turned on so that the LiDAR data is captured. You can choose how the data is output; this has to be set in the Editor. The options are a JSON format (MLAS) or XYZ, which can be converted into images (below). You can learn more about using a LiDAR camera in a scenario on the Editor - LiDAR page.
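As a rough sketch, an XYZ export can often be loaded with NumPy, assuming the common plain-text layout of one x y z triple per line (your export may include extra columns such as intensity):

```python
import numpy as np

# File name is hypothetical; XYZ files are whitespace-separated text.
points = np.loadtxt("lidar_frame.xyz")  # shape: (num_points, 3)
x, y, z = points[:, 0], points[:, 1], points[:, 2]
print(f"{len(points)} points, z range {z.min():.2f} to {z.max():.2f}")
```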
Masks are used to create the segmentation images, but masks are split into layers, so each image contains only one element.
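To illustrate how per-layer masks relate to the final segmentation image, here is a minimal sketch that composites two hypothetical mask layers into one colour image; the file names and colour table are invented for the example (the simulator assigns colours automatically, as noted above):

```python
import numpy as np
from PIL import Image

# Hypothetical layer files and colours, one element per mask layer.
layers = {
    "person_mask.png": (255, 0, 0),
    "vehicle_mask.png": (0, 0, 255),
}

segmentation = None
for path, colour in layers.items():
    mask = np.array(Image.open(path).convert("L")) > 0
    if segmentation is None:
        segmentation = np.zeros((*mask.shape, 3), dtype=np.uint8)
    segmentation[mask] = colour  # paint this element's pixels

Image.fromarray(segmentation).save("segmentation.png")
```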
Keypoints are mainly used for tracking human positions.
The minimal skeleton provides keypoints for the top of the head and the bottom of the feet.
You can learn more about what data is produced here.
The basic skeleton gives more joint keypoints and basic facial keypoints (eyes and nose).
You can learn more about what data is produced here.
The hand skeleton creates keypoints for every joint in the hand.
You can learn more about what data is produced here.
The face skeleton uses the Dlib 68-point standard, which uses 68 points to map the jawline, mouth, eyes, and eyebrows.
You can learn more about what data is produced here.
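Whichever skeleton you choose, the exported keypoints can be visualised by drawing them over the corresponding frame. The sketch below assumes the keypoints arrive as (x, y) pixel pairs; the coordinates and file names are hypothetical, so consult the data pages linked above for the actual layout:

```python
from PIL import Image, ImageDraw

keypoints = [(412, 120), (405, 160), (398, 305)]  # hypothetical values
image = Image.open("frame_0001.png")
draw = ImageDraw.Draw(image)
for x, y in keypoints:
    # Draw a small filled circle centred on each keypoint.
    draw.ellipse((x - 3, y - 3, x + 3, y + 3), fill="red")
image.save("frame_0001_keypoints.png")
```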