Best Practices
Traces FAF API accepts both videos and frames as input. Below you may find general recommendations for each data type.
Correct frame selection is an important factor for getting the most accurate results.
The advice may vary for each use case, so it is best to talk to a Traces Account Manager and schedule a demo.
Below you may find a general recommendation for frame sampling.
Traces AI FAF accepts any number of frames. Even as few as 5 frames are usually enough to receive accurate results.
For motion events that are 10-15 seconds long, it is recommended to split the video equally into 4 chunks and take the first frame of each chunk, plus the last frame of the last chunk, as shown in the diagram.
For example, given a 10-second motion event recorded at 12 fps, it is best to sample
frame #0,
frame #30,
frame #60,
frame #90, and
frame #120
and send them to Traces FAF as fields "image_0", "image_1", "image_2", "image_3", and "image_4" respectively.
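The chunking rule above can be sketched in Python. This is only an illustration: the helper name and the frame filenames are ours, and the field-building step simply mirrors the "image_N" naming described above.

```python
def sample_frame_indices(total_frames: int, chunks: int = 4) -> list:
    """First frame of each equal chunk, plus the last frame of the last chunk."""
    step = total_frames // chunks
    indices = [i * step for i in range(chunks)]
    indices.append(total_frames)  # last frame, following the numbering in the example above
    return indices

# A 10-second motion event at 12 fps, as in the example above
indices = sample_frame_indices(120)  # [0, 30, 60, 90, 120]

# Map the sampled frames to the "image_N" request fields
fields = {f"image_{i}": f"frame_{idx}.jpg" for i, idx in enumerate(indices)}
```

The same helper works for any event length: only `total_frames` changes, and the number of request fields stays at five.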
Traces AI accepts the .mp4, .m4v, and .avi video formats.
A general recommendation is to send a video 10-20 seconds long. It is advisable to avoid sending videos over 30 seconds long, as they may cause long response times.
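As a minimal sketch, a client could check a clip's length before uploading it. The thresholds come from the recommendation above; the function name and return strings are our own, not part of the Traces API.

```python
def check_video_duration(duration_s: float) -> str:
    """Classify a clip's length against the 10-20 s recommendation."""
    if duration_s > 30:
        return "too long: may cause a long response time"
    if 10 <= duration_s <= 20:
        return "recommended length"
    return "acceptable"

print(check_video_duration(15))  # recommended length
print(check_video_duration(45))  # too long: may cause a long response time
```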
A motion mask is an area on a video/frame where motion triggers will be ignored. The mask marks an insignificant area of the camera view. It is defined by the customer and is unique to every camera. Each motion mask is represented by a set of coordinates that form a polygon. All coordinates should be within the image's dimensions to guarantee correct results.
Traces AI supports an unlimited number of motion masks per event of any shape and any size.
Example of a motion mask (a single triangular polygon on a 1920 x 1080 frame): [((1920, 500), (1920, 1080), (1000, 1080))]
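Since every mask coordinate must stay within the image dimensions, a client-side sanity check can catch out-of-bounds polygons before a request is sent. This is a sketch; the helper is ours, not part of the API.

```python
def mask_within_image(mask, width: int, height: int) -> bool:
    """True if every vertex of every polygon lies inside the image dimensions."""
    return all(
        0 <= x <= width and 0 <= y <= height
        for polygon in mask
        for (x, y) in polygon
    )

# The example mask from the text, checked against a 1920 x 1080 frame
mask = [((1920, 500), (1920, 1080), (1000, 1080))]
print(mask_within_image(mask, 1920, 1080))  # True
```

A mask is modeled here as a list of polygons, so the same check works when several masks are attached to one event.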
All motion events coming from the same camera should show the same scenery. Changing the camera's location while keeping the same camera ID will reset the AI auto-training progress.