In the Processing panel (View menu > Processing), you can access the following system-wide settings. These settings affect all the connected cameras and devices.
Some settings are only available when advanced parameters are displayed (at the top right of the panel, click Advanced Parameters).
Object Tracking section
|Compute Object Quality
|When selected, Object Quality scores are computed for each object.
|Low Jitter mode
|When selected, sets the Grayscale Mode for all cameras to Only, which applies advanced centroid fitting and jitter reduction algorithms to reduce data noise. Note that running in this mode increases sensitivity to bandwidth limitations and its effectiveness is related to system size. For more information, contact Vicon Support.
|Specify the number of threads to use for object tracking. If zero (the default), the thread count is automatically calculated.
|Enable Constant Velocity Tracker
|If selected, improves tracking for fast-moving objects. This requires more computation, so it increases latency and decreases throughput.
|The minimum proportion of an object's markers that must be visible to the cameras before the object is booted. If the proportion of visible markers is less than this value, the object is not booted. You can override this value for selected objects by using Object Presets (see Resolve issues with object similarity in the Vicon Evoke User Guide). At the default value, 100% of the markers are needed to boot the object, which may take longer than with lower values.
|Maximum Boot Iteration Count
|The maximum number of iterations allowed for the booting algorithm. Increasing this parameter improves booting quality, but has a small performance cost.
|Minimum Object Marker Separation
|The minimum distance allowed between marker positions on each object in order for objects to be tracked separately.
|Boot Recon. Min. Cams
|The minimum number of camera rays required to generate the reconstructions that are used for object booting.
|Halve Processing Rate
|Processes every second frame that comes from the centroid system. You can select this option when the host machine is not powerful enough to process every frame at the current rate.
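The minimum marker-proportion rule described above can be illustrated with a minimal sketch. This is not Vicon's implementation, only an assumed reading of how the proportion gates booting; the function name and signature are hypothetical.

```python
# Illustrative sketch (not Vicon's implementation): how the minimum
# marker-proportion rule for object booting might be evaluated.

def can_boot(visible_markers: int, total_markers: int,
             min_proportion: float = 1.0) -> bool:
    """Return True if enough of the object's markers are visible to boot.

    min_proportion defaults to 1.0 (100%), matching the setting's default:
    every marker must be seen before the object is booted.
    """
    if total_markers == 0:
        return False
    return visible_markers / total_markers >= min_proportion

# With the default (100%), a 5-marker object boots only when all 5 are seen:
print(can_boot(4, 5))        # False
print(can_boot(5, 5))        # True
# Lowering the proportion to 60% lets it boot sooner, with 3 of 5 visible:
print(can_boot(3, 5, 0.6))   # True
```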
Camera Healing section
|Enable Bias Handling
|Computes and corrects for camera calibration bias. See Change Auto Bias Handling in the Vicon Evoke User Guide.
|Enable Auto Recover Camera Pose
|Enables automatic recovery of bumped cameras. See Understand Auto Recover Camera Pose in the Vicon Evoke User Guide.
|Generate Online Masks
|For use in systems that include cameras with built-in strobes (such as Vero, Vantage, or Valkyrie cameras). When selected, enables the generation of online masks based on live data and uses these to improve the system health score. These masks remove static, unlabeled centroids from consideration when generating the System Health Centroid Connectivity metric, which is used to determine when a camera requires automatic bump-healing. This option is particularly useful when a camera has been bumped and previously masked grayscale data becomes visible; it prevents a camera from being repeatedly and unnecessarily bump-healed.
Performance Tuning section
|Bias Handling State Throttle
|Process every nth frame when computing the camera calibration bias.
|Live System Health State Throttle
|Control how much processing is required for system health by considering only every nth frame, eg, for a system frame rate of 100 Hz, set to 100 to process one frame every second.
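The effect of the throttle settings above can be sketched in a few lines. This is an illustration of the stated "every nth frame" behavior, not Vicon's code; the function names are hypothetical.

```python
# Illustrative sketch (not Vicon's implementation): what a throttle of
# "every nth frame" means for the effective processing rate.

def processed_rate_hz(system_rate_hz: float, throttle_n: int) -> float:
    """Effective processing rate when only every nth frame is considered."""
    return system_rate_hz / throttle_n

def is_processed(frame_number: int, throttle_n: int) -> bool:
    """True for the frames a throttle of n would actually process."""
    return frame_number % throttle_n == 0

# The example from the description: 100 Hz system, throttle of 100
# -> one frame processed per second.
print(processed_rate_hz(100, 100))                   # 1.0
print([f for f in range(10) if is_processed(f, 3)])  # [0, 3, 6, 9]
```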
Proximity Grouping section
|Enable Proximity Grouping
|When selected, enables objects in the same template group to be tracked and then distinguished using their distance relationship to unique objects in the same proximity group. For details, see Use proximity-based tracking.
|Enable Object Instance Tracking
|When selected, assigns free template objects to object template instances. For details, see Track multiple identical objects.
|Proximity Labeler Threshold
|The 3D distance threshold, in mm. For details, see Use proximity-based tracking.
|Label Unambiguous Instances
|Label any template instances that appear to be unambiguous. This doesn't require proximity to a unique object. For details, see Use proximity-based tracking.
|Unlabeled Recon. Min. Cams
|Controls how many cameras (rays) must see the same marker (centroid) to create a new, unlabeled reconstruction. The minimum value that can create a reconstruction is two cameras; the maximum value is 50 camera rays. If many unlikely reconstructions are being created, increase this value.
The default value for this property is 3 (ie, three cameras), so if you are using a two-camera system, ensure you change the value to two before starting to work with Evoke.
|Environmental Drift Tolerance
|An uncertainty (in mm) applied to the camera calibration to take into account environmental factors, such as temperature changes, that may cause drift in the calibration. For larger volumes, increase this value; for smaller volumes, decrease it.
|Reconstruction Minimum Separation
|The minimum distance, specified as a value in the range 0–100 mm, allowed between 3D marker positions in order for them to be considered for reconstruction. If two candidate reconstructions are closer than this minimum separation, only the most likely reconstruction (in terms of the number of cameras contributing) will be reported. The other will be discarded. A higher value decreases the likelihood of creating spurious reconstructions, but increases the possibility that some genuine markers will not be reconstructed.
To turn off this feature, set the value to 0.0.
|Enable Unlabeled Reconstructions
|Enables generation of reconstructions using centroids that are not labeled as object markers.
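The two reconstruction filters described above (a minimum camera-ray count per reconstruction, and a minimum separation that keeps only the candidate with more contributing cameras) can be combined in a minimal sketch. This is not Vicon's algorithm; the candidate representation and function name are assumptions for illustration.

```python
# Illustrative sketch (not Vicon's algorithm): how the Unlabeled Recon.
# Min. Cams and Reconstruction Minimum Separation settings could jointly
# filter candidate 3D reconstructions. Each candidate is (position, n_cams).
import math

def filter_reconstructions(candidates, min_cams=3, min_separation_mm=0.0):
    # Drop candidates seen by too few camera rays.
    kept = [c for c in candidates if c[1] >= min_cams]
    # Prefer candidates supported by more cameras when resolving conflicts.
    kept.sort(key=lambda c: c[1], reverse=True)
    result = []
    for pos, n_cams in kept:
        too_close = any(
            math.dist(pos, other_pos) < min_separation_mm
            for other_pos, _ in result
        )
        if not too_close:  # discard the less likely of a too-close pair
            result.append((pos, n_cams))
    return result

candidates = [
    ((0.0, 0.0, 0.0), 5),   # strong reconstruction: kept
    ((2.0, 0.0, 0.0), 2),   # only 2 rays: dropped when min_cams=3
    ((1.0, 0.0, 0.0), 3),   # 1 mm from the first: dropped when min sep = 5 mm
]
print(filter_reconstructions(candidates, min_cams=3, min_separation_mm=5.0))
# [((0.0, 0.0, 0.0), 5)]
```

Setting `min_separation_mm=0.0` disables the separation check, matching the "set the value to 0.0 to turn off this feature" behavior described above.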
Characters from Clusters section
|Maximum Number of Characters
|The maximum number of characters from clusters that are assumed to be present.
|When selected, character solving and retargeting are deactivated. This enables characters to be used for auto-assignment and grouping, in conjunction with object tracking.
System Utils section
|Enable Live System Health
|When selected, enables live System Health metrics. If Auto Recover Camera Pose is selected, live System Health metrics are always generated, so this setting is used only when Auto Recover Camera Pose is cleared. Disabling this option also disables Auto Recover Camera Pose.
|Enable System Health Report
|When selected, enables the System Health Report.
|Pause Buffer Size
|The size (in seconds) of the output cache. See also Live Review.
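A pause buffer sized in seconds naturally maps to a ring buffer holding rate × seconds frames, with the oldest frames falling out as new ones arrive. The following sketch illustrates that relationship only; it is not Vicon's implementation, and the function name is hypothetical.

```python
# Illustrative sketch (not Vicon's implementation): a pause buffer sized
# in seconds as a ring buffer of frame_rate * seconds frames.
from collections import deque

def make_pause_buffer(buffer_seconds: float, frame_rate_hz: float) -> deque:
    """Ring buffer that retains the most recent buffer_seconds of frames."""
    return deque(maxlen=int(buffer_seconds * frame_rate_hz))

buf = make_pause_buffer(buffer_seconds=2.0, frame_rate_hz=5.0)  # holds 10 frames
for frame_number in range(25):
    buf.append(frame_number)

print(len(buf))   # 10
print(buf[0])     # 15  (oldest retained frame)
```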
DataStream Output section
|Include Unlabeled Reconstructions
|Add unlabeled reconstructions to the DataStream output.
|Include Object Quality
|Add object quality to the DataStream output.