# Best Practices

## How to sample frames for a motion event?

Correct frame selection is an important factor in getting the most accurate results.

The advice may vary for each use case, so it is best to talk to your Traces Account Manager to schedule a demo.

Below is a general recommendation for frame sampling.

Traces AI FAF accepts any number of frames; even as few as 5 frames are usually enough to receive accurate results.

For motion events that are 10–15 seconds long, it is recommended to split the video equally into 4 chunks and take the first frame of each chunk plus the last frame of the last chunk, as shown in the diagram below.

For example, given a 10-second motion event recorded at 12 fps, it is best to sample

* **frame #0,**
* **frame #30,**
* **frame #60,**
* **frame #90, and**
* **frame #120**

and send them to Traces FAF as the fields "image\_0", "image\_1", "image\_2", "image\_3", and "image\_4", respectively.

![frame sampling](https://archbee.imgix.net/bLFGjdFdIY6MzXs3Kd3G8/IhEbt9dMEVWym95LiozhD_group-369.png?auto=format\&ixlib=react-9.1.1\&h=244\&w=2293)
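The sampling rule above can be sketched in Python. The chunk count, frame numbers, and field names (`image_0` … `image_4`) follow the example in the text; the helper function itself is a hypothetical illustration, not part of the Traces API:

```python
def sample_frame_indices(duration_s, fps, chunks=4):
    """Split the event into equal chunks; take the first frame of
    each chunk plus the last frame of the last chunk."""
    total = int(duration_s * fps)  # e.g. 10 s * 12 fps = 120 frames
    step = total // chunks         # frames per chunk
    return [i * step for i in range(chunks)] + [total]

# The 10-second, 12 fps example from the text:
indices = sample_frame_indices(10, 12)
print(indices)  # [0, 30, 60, 90, 120]

# Field names expected by Traces FAF, per the example above:
fields = {f"image_{i}": idx for i, idx in enumerate(indices)}
```

Each value in `fields` would be replaced with the actual frame data when building the request.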

## Recommendations for videos <a href="#cn-recommendations-for-videos" id="cn-recommendations-for-videos"></a>

Traces AI accepts the .mp4, .m4v, and .avi video formats.

A general recommendation is to send a video 10–20 seconds long. Avoid sending videos longer than 30 seconds, as they may cause a long response time.
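As a sketch, these format and length guidelines can be checked client-side before uploading. The extensions and the 30-second threshold come from the text above; the function name and the idea of probing the duration yourself (e.g. with ffprobe) are assumptions:

```python
import os

ACCEPTED_EXTENSIONS = {".mp4", ".m4v", ".avi"}  # formats listed above
RECOMMENDED_MAX_SECONDS = 30                    # longer clips may respond slowly

def check_video(path, duration_s):
    """Return a list of warnings; an empty list means the clip looks fine to send."""
    warnings = []
    ext = os.path.splitext(path)[1].lower()
    if ext not in ACCEPTED_EXTENSIONS:
        warnings.append(f"unsupported format: {ext}")
    if duration_s > RECOMMENDED_MAX_SECONDS:
        warnings.append(f"clip is {duration_s}s; responses may be slow over 30s")
    return warnings
```
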

## Motion Masks

A motion mask is an area of a video/frame where motion triggers will be ignored. The mask marks an insignificant area of the camera view. It is defined by the customer and is unique to every camera. Each motion mask is represented by a set of coordinates that form a polygon. All coordinates should be within the image's dimensions to guarantee correct results.

Traces AI supports an unlimited number of motion masks per event, of any shape and size.

#### Below are some examples of various motion masks (blue), given an image (gray) of 1920\*1080 resolution.

{% tabs %}
{% tab title="Example 1" %}
Motion mask - `[((1920, 500), (1920, 1080), (1000, 1080))]`

![](https://2748832723-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-McYmA310oN--gaDwXjq%2Fuploads%2F0GDIXZYvhA7Z5IFTgXLY%2Fmotion_mask1.jpg?alt=media\&token=c3e0ebb9-7e7b-46f7-8548-4ff79c58d269)
{% endtab %}

{% tab title="Example 2" %}
Motion mask - `[((0, 500), (150, 800), (200, 600), (700, 400), (400, 400), (100,200))]`

![](https://2748832723-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-McYmA310oN--gaDwXjq%2Fuploads%2FW949ZrYhnbdVMgvIKYFj%2Fmotion_mask2.jpg?alt=media\&token=faf5852e-0670-4e2a-bf74-d98819e446bb)
{% endtab %}

{% tab title="Example 3" %}
Motion mask - `[((0, 500), (150, 800), (200, 600), (700, 400), (400, 400), (100,200)), ((1920, 500), (1920, 1080), (1000, 1080))]`

![](https://2748832723-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F-McYmA310oN--gaDwXjq%2Fuploads%2FxIrr4rNHWFFYkzluPbVc%2Fmotion_mask3.jpg?alt=media\&token=dacd5e09-6f4b-494e-82ec-4a40aa70a8d4)
{% endtab %}
{% endtabs %}
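Since all coordinates must fall within the image's dimensions, a quick client-side check can catch out-of-bounds vertices before sending. The 1920×1080 dimensions and the tuple-of-tuples mask layout follow the examples above; the function itself is a hypothetical sketch, not a Traces API call:

```python
def masks_in_bounds(masks, width=1920, height=1080):
    """True if every vertex of every mask polygon lies within the image."""
    return all(
        0 <= x <= width and 0 <= y <= height
        for polygon in masks
        for (x, y) in polygon
    )

# The two masks from Example 3 above:
masks = [
    ((0, 500), (150, 800), (200, 600), (700, 400), (400, 400), (100, 200)),
    ((1920, 500), (1920, 1080), (1000, 1080)),
]
print(masks_in_bounds(masks))  # True
```
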

## Cameras ID & Cameras Location <a href="#at-cameras-id-and-cameras-location" id="at-cameras-id-and-cameras-location"></a>

All motion events coming from the same camera should have the same scenery. Changing the camera's location while keeping the same Cameras ID will reset the AI auto-training progress.
