Using cameras
Cameras capture your data. A scenario must contain at least one camera to produce any output, but any number of cameras can be placed. Each camera produces one complete set of images, metadata and annotations, written to a folder in the output hierarchy. This folder is named after the database path name you give to the camera instance when it is placed in the scenario. The data in the output folder is rendered from the perspective of its camera, and the world-space location of the camera is stored in the annotations file, so cross-referenced multi-camera setups such as stereo vision can be simulated.
Cameras are assets that are imported into Chameleon in .mtb files, just like standard assets, and appear in the cameras list in the editor. Camera assets can be set up to simulate real-life camera properties such as color and noise profiles. (Most camera properties, such as field of view and capture format, are set up in the editor.)
Cameras can be placed directly into the scenario by selecting a camera asset from the menu and placing it, or they can be attached to other elements in the scenario by selecting the camera via that element. A camera attached to a moving object will move along with it, keeping the fixed relationship established when it was attached.
Cameras are always loaded as the standard type and can be given particular behaviors and attributes by changing the selection in the camera type dropdown. Each of Mindtech's camera types and its uses are described below.
The standard camera produces a rectilinear image using a simple pinhole model. Most of the functions of this camera are shared with the rest of the camera types.
A dropdown that allows you to change the camera type.
Field of view is the maximum angle the camera can view. If you are simulating a real camera, you will find its field of view in its technical specifications. To replicate human sight, set it to 60 degrees for central vision, or between 130 and 135 degrees to include peripheral vision. The downside of a large FOV is the distortion of the images it produces.
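In the pinhole model used by the standard camera, field of view and focal length are directly related. The sketch below shows that relationship; the function name and the example resolution are illustrative and not part of Chameleon.

```python
import math

def focal_length_px(fov_deg: float, image_width_px: int) -> float:
    """Focal length (in pixels) of a pinhole camera, from its horizontal FOV."""
    # Pinhole model: tan(fov / 2) = (width / 2) / focal_length
    return (image_width_px / 2) / math.tan(math.radians(fov_deg) / 2)

# A 60-degree FOV at 1920 px wide corresponds to roughly a 1663 px focal length.
print(round(focal_length_px(60, 1920)))
```

Widening the FOV shortens the effective focal length, which is where the distortion at large angles comes from.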
Setting the camera type to drone camera causes the camera to behave as though it is attached to a drone. To use a drone camera you first need to create a circuit (see Using Circuits), then attach the drone to it by passing the name of the circuit in the 'Circuit' field (Fig.2 - 1).
When the simulation runs, the camera will fly along a straight line between the waypoints of the circuit, at a fixed height above the waypoints according to the Height setting (Fig.2 - 2), and at a constant speed according to the Speed setting. Note that the height of each waypoint can be set individually when creating the circuit, allowing for variations in drone height above the ground.
The camera position may be set up to lie on and align with the circuit at the start of the simulation, but it does not need to be. On startup, it will automatically find the nearest waypoint, set its height to the waypoint height plus the value of the Height parameter, and fly towards it. The facing direction is automatically set to point to the waypoint unless this behavior has been disabled, as described later.
When the camera arrives within a certain radius of the waypoint, it will select the next point in the circuit and fly towards it. Note that if the circuit is composed of waypoints arranged as a network rather than as a single loop, there may be waypoints with multiple possible next connections, in which case the next waypoint is chosen randomly from the list of candidates. The choice is controlled by the Seed value.
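The waypoint-following behavior described above can be sketched as follows. This is an illustration only: the class, field names, and arrival radius are assumptions, not part of the Chameleon API.

```python
import math
import random

class DroneFollower:
    """Illustrative sketch of the drone camera's waypoint following."""

    def __init__(self, waypoints, connections, height, speed, seed, arrive_radius=1.0):
        self.waypoints = waypoints      # {name: (x, y, z)} circuit waypoints
        self.connections = connections  # {name: [possible next waypoint names]}
        self.height = height            # Height setting: offset above each waypoint
        self.speed = speed              # Speed setting: constant flight speed
        self.rng = random.Random(seed)  # Seed makes branch choices repeatable
        self.arrive_radius = arrive_radius
        self.target = None

    def _goal(self, name):
        # The drone flies Height above the waypoint's own height.
        x, y, z = self.waypoints[name]
        return (x, y + self.height, z)

    def start(self, pos):
        # On startup, the nearest waypoint becomes the first goal.
        self.target = min(self.waypoints, key=lambda n: math.dist(pos, self._goal(n)))

    def step(self, pos, dt):
        gx, gy, gz = self._goal(self.target)
        d = math.dist(pos, (gx, gy, gz))
        if d < self.arrive_radius:
            # Within the arrival radius: pick the next waypoint,
            # randomly when the circuit branches like a network.
            self.target = self.rng.choice(self.connections[self.target])
            gx, gy, gz = self._goal(self.target)
            d = math.dist(pos, (gx, gy, gz))
        if d == 0:
            return pos
        # Fly in a straight line towards the goal at constant speed.
        step = min(self.speed * dt, d)
        return tuple(p + step * (g - p) / d for p, g in zip(pos, (gx, gy, gz)))
```

A two-waypoint circuit with `height=5` and `speed=2`, started at the origin, would first climb towards a point 5 m above the nearest waypoint, then shuttle between the two goals.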
The drone camera has three look-at modes. When Fix Direction (Fig.2 - 4) is set, the camera always points in the direction set in the scenario editor. Otherwise it points towards the next waypoint, with a smooth transition when the direction of flight changes. If the drone camera is given a target object, it always points towards the center of that object.
Setting the camera type to cloud camera causes the camera to relocate after a set number of images have been captured, then continue capturing from the new location.
Unlike a standard camera, the cloud camera is not located at the placement point in the scenario (the location defined by the camera position and rotation fields). In cloud camera mode this point defines the center of a hollow sphere, or cloud, within which the camera will appear. You can set the inner and outer radius of the hollow sphere, and the camera can be restricted to a smaller volume by setting the various parameters.
It is easiest to think of camera placement in terms of longitude and latitude. The spread restricts the camera to a certain range of longitude, while the direction defines the center of that range. The max and min elevations set the limits of the latitude where the camera may be placed. All locations are relative to the center point. Note that the camera is aware of other objects in the scene and will not place itself inside any object, but it may place itself where the target is occluded, including below ground, so caution is needed when setting possible placements.
The camera will be aimed at a random point within a target volume, which is by default 1 meter on a side and located at the center of the sphere. The location and size of the target can be changed without affecting the location of the sphere. To ensure the camera always points at the same spot, set the dimensions of the target to zero.
During simulation runs the annotation file for a cloud camera records the current location and orientation of the camera, not the location and orientation of the cloud sphere.
Cloud cameras are placed by the user in the scenario editor. Any camera can be chosen from the available cameras via the camera select icon; from the camera setup dialog, an extra set of properties must be specified by the user:
Radius min, max
The inner and outer radii of a hollow bounding sphere. Specified as floating point numbers representing meters in real-world units, centered on the camera unless the camera is attached to an element in which case it is centered on the element.
Elevation min, max
Lower and upper bounding angles, centered on the bounding sphere and specified in degrees relative to the x-z plane. Restricts the bounding sphere to an annulus bounded by the min/max radii and the min/max elevations.
Spread
The angular spread of the bounded annulus, centered on the Direction. Specified in degrees from 0 to 359, it restricts the annulus to the wedge-shaped region covered by the spread.
Direction x, y, z
The center axis of the spread of the generation volume, specified as an angle relative to the forward-facing direction of the camera, unless the camera is attached to an element, in which case it is centered on the facing direction of the element.
Target position x,y,z
The center point of a cuboid volume used as a target for the look-at direction of the camera. Specified as floating point numbers representing meters in real-world units, the target position is relative to the camera unless the camera is attached to an element in which case it is relative to the element position.
Target size x, y, z
The dimensions of the target in meters. Each regeneration of the camera will choose a look-at direction that randomly intersects the target.
Seed
An integer seed used to randomise the positions at which the camera is generated. If this number is unchanged from one pass to another, the positions will also be unchanged. The seed is exposed in the pass system so that each pass can generate new positions.
Framestep
An integer that specifies the number of frames to capture at each generated position. When this number is exceeded, the camera moves to its next position.
The radius and elevation properties specify an annular placement volume within which the camera will be placed; the spread and direction properties restrict that volume to a directed wedge. The target position and size specify the position and size of a target volume at which the camera will be aimed. The camera is randomly placed within the placement volume and aimed so as to intersect a random part of the target volume.
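One way to picture the sampling described above is the sketch below. The parameter names mirror the properties listed earlier, but the function itself is an assumption: it simplifies Direction to a single longitudinal angle and omits the collision check that keeps the real camera out of scene objects.

```python
import math
import random

def sample_cloud_camera(radius_min, radius_max, elev_min, elev_max,
                        spread, direction, target_pos, target_size, seed):
    """Illustrative cloud-camera placement: one random position and look-at point."""
    rng = random.Random(seed)  # same Seed => same positions from pass to pass
    # Radius within the hollow sphere, longitude within the spread about
    # the direction, latitude between the elevation limits.
    r = rng.uniform(radius_min, radius_max)
    lon = math.radians(direction + rng.uniform(-spread / 2, spread / 2))
    lat = math.radians(rng.uniform(elev_min, elev_max))
    pos = (r * math.cos(lat) * math.cos(lon),
           r * math.sin(lat),
           r * math.cos(lat) * math.sin(lon))
    # Aim at a random point inside the target cuboid; a zero-size target
    # means the camera always looks at the same spot.
    look_at = tuple(c + rng.uniform(-s / 2, s / 2)
                    for c, s in zip(target_pos, target_size))
    return pos, look_at
```

Both the position and the look-at point are relative to the placement center (or to the attached element, as described above), so moving that center moves the whole arrangement with it.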
If the camera is attached to an element and the element moves, the target and placement volumes move along with it.