Commit a2659f2

Merge branch 'blakeblackshear:dev' into dev
hawkeye217 authored Nov 8, 2024
2 parents 30bac7c + 3249ffb commit a2659f2
Showing 17 changed files with 220 additions and 140 deletions.
2 changes: 1 addition & 1 deletion docker-compose.yml
@@ -23,7 +23,7 @@ services:
# count: 1
# capabilities: [gpu]
environment:
YOLO_MODELS: yolov7-320
YOLO_MODELS: ""
devices:
- /dev/bus/usb:/dev/bus/usb
# - /dev/dri:/dev/dri # for intel hwaccel, needs to be updated for your hardware
2 changes: 1 addition & 1 deletion docker/tensorrt/Dockerfile.base
@@ -25,7 +25,7 @@ ENV S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0
COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
COPY --from=trt-deps /usr/local/src/tensorrt_demos /usr/local/src/tensorrt_demos
COPY docker/tensorrt/detector/rootfs/ /
ENV YOLO_MODELS="yolov7-320"
ENV YOLO_MODELS=""

HEALTHCHECK --start-period=600s --start-interval=5s --interval=15s --timeout=5s --retries=3 \
CMD curl --fail --silent --show-error http://127.0.0.1:5000/api/version || exit 1
(TensorRT model preparation script; file name not shown in this view)
@@ -19,6 +19,11 @@ FIRST_MODEL=true
MODEL_DOWNLOAD=""
MODEL_CONVERT=""

if [ -z "$YOLO_MODELS" ]; then
echo "tensorrt model preparation disabled"
exit 0
fi

for model in ${YOLO_MODELS//,/ }
do
# Remove old link in case path/version changed
8 changes: 6 additions & 2 deletions docs/docs/configuration/genai.md
@@ -3,9 +3,13 @@ id: genai
title: Generative AI
---

Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects.
Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.

Semantic Search must be enabled to use Generative AI. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
:::info

Semantic Search must be enabled to use Generative AI.

:::

## Configuration

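For orientation only (this block is not part of the diff): a minimal sketch of what the relevant config might look like, assuming the Ollama provider; the URL and model name are placeholder assumptions, with Semantic Search enabled as required above.

```yml
semantic_search:
  enabled: True

genai:
  enabled: True
  provider: ollama
  base_url: http://localhost:11434 # assumed local Ollama endpoint
  model: llava # assumed vision-capable model
```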
4 changes: 2 additions & 2 deletions docs/docs/configuration/object_detectors.md
@@ -223,7 +223,7 @@ The model used for TensorRT must be preprocessed on the same hardware platform t

The Frigate image will generate model files during startup if the specified model is not found. Processed models are stored in the `/config/model_cache` folder. Typically the `/config` path is mapped to a directory on the host already and the `model_cache` does not need to be mapped separately unless the user wants to store it in a different location on the host.

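For illustration (not part of this diff; the host path is an assumption), the usual `/config` bind mount already covers `model_cache`:

```yml
frigate:
  volumes:
    - /path/to/your/config:/config # generated .trt models land in /config/model_cache
```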
By default, the `yolov7-320` model will be generated, but this can be overridden by specifying the `YOLO_MODELS` environment variable in Docker. One or more models may be listed in a comma-separated format, and each one will be generated. To select no model generation, set the variable to an empty string, `YOLO_MODELS=""`. Models will only be generated if the corresponding `{model}.trt` file is not present in the `model_cache` folder, so you can force a model to be regenerated by deleting it from your Frigate data folder.
By default, no models will be generated, but this can be overridden by specifying the `YOLO_MODELS` environment variable in Docker. One or more models may be listed in a comma-separated format, and each one will be generated. Models will only be generated if the corresponding `{model}.trt` file is not present in the `model_cache` folder, so you can force a model to be regenerated by deleting it from your Frigate data folder.

If you have a Jetson device with DLAs (Xavier or Orin), you can generate a model that will run on the DLA by appending `-dla` to your model name, e.g. specify `YOLO_MODELS=yolov7-320-dla`. The model will run on DLA0 (Frigate does not currently support DLA1). DLA-incompatible layers will fall back to running on the GPU.

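As a sketch of that DLA option (illustrative only; not part of this diff):

```yml
frigate:
  environment:
    - YOLO_MODELS=yolov7-320-dla # built for DLA0; unsupported layers fall back to the GPU
```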
@@ -264,7 +264,7 @@ An example `docker-compose.yml` fragment that converts the `yolov4-608` and `yol
```yml
frigate:
environment:
- YOLO_MODELS=yolov4-608,yolov7x-640
- YOLO_MODELS=yolov7-320,yolov7x-640
- USE_FP16=false
```

4 changes: 3 additions & 1 deletion frigate/api/event.py
@@ -1017,9 +1017,11 @@ def regenerate_description(
status_code=404,
)

camera_config = request.app.frigate_config.cameras[event.camera]

if (
request.app.frigate_config.semantic_search.enabled
and request.app.frigate_config.genai.enabled
and camera_config.genai.enabled
):
request.app.event_metadata_updater.publish((event.id, params.source))

42 changes: 36 additions & 6 deletions frigate/events/cleanup.py
@@ -21,6 +21,9 @@ class EventCleanupType(str, Enum):
snapshots = "snapshots"


CHUNK_SIZE = 50


class EventCleanup(threading.Thread):
def __init__(
self, config: FrigateConfig, stop_event: MpEvent, db: SqliteVecQueueDatabase
@@ -107,6 +110,7 @@ def expire(self, media_type: EventCleanupType) -> list[str]:
.namedtuples()
.iterator()
)
logger.debug(f"{len(expired_events)} events can be expired")
# delete the media from disk
for expired in expired_events:
media_name = f"{expired.camera}-{expired.id}"
@@ -125,13 +129,34 @@ def expire(self, media_type: EventCleanupType) -> list[str]:
logger.warning(f"Unable to delete event images: {e}")

# update the clips attribute for the db entry
update_query = Event.update(update_params).where(
query = Event.select(Event.id).where(
Event.camera.not_in(self.camera_keys),
Event.start_time < expire_after,
Event.label == event.label,
Event.retain_indefinitely == False,
)
update_query.execute()

events_to_update = []

for event in query.iterator():
events_to_update.append(event.id)
if len(events_to_update) >= CHUNK_SIZE:
logger.debug(
f"Updating {update_params} for {len(events_to_update)} events"
)
Event.update(update_params).where(
Event.id << events_to_update
).execute()
events_to_update = []

# Update any remaining events
if events_to_update:
logger.debug(
f"Updating clips/snapshots attribute for {len(events_to_update)} events"
)
Event.update(update_params).where(
Event.id << events_to_update
).execute()

events_to_update = []

@@ -196,7 +221,11 @@ def expire(self, media_type: EventCleanupType) -> list[str]:
logger.warning(f"Unable to delete event images: {e}")

# update the clips attribute for the db entry
Event.update(update_params).where(Event.id << events_to_update).execute()
for i in range(0, len(events_to_update), CHUNK_SIZE):
batch = events_to_update[i : i + CHUNK_SIZE]
logger.debug(f"Updating {update_params} for {len(batch)} events")
Event.update(update_params).where(Event.id << batch).execute()

return events_to_update

def run(self) -> None:
Expand All @@ -222,10 +251,11 @@ def run(self) -> None:
.iterator()
)
events_to_delete = [e.id for e in events]
logger.debug(f"Found {len(events_to_delete)} events that can be expired")
if len(events_to_delete) > 0:
chunk_size = 50
for i in range(0, len(events_to_delete), chunk_size):
chunk = events_to_delete[i : i + chunk_size]
for i in range(0, len(events_to_delete), CHUNK_SIZE):
chunk = events_to_delete[i : i + CHUNK_SIZE]
logger.debug(f"Deleting {len(chunk)} events from the database")
Event.delete().where(Event.id << chunk).execute()

if self.config.semantic_search.enabled:
9 changes: 4 additions & 5 deletions frigate/genai/__init__.py
@@ -54,11 +54,10 @@ def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:

def get_genai_client(genai_config: GenAIConfig) -> Optional[GenAIClient]:
"""Get the GenAI client."""
if genai_config.enabled:
load_providers()
provider = PROVIDERS.get(genai_config.provider)
if provider:
return provider(genai_config)
load_providers()
provider = PROVIDERS.get(genai_config.provider)
if provider:
return provider(genai_config)
return None


10 changes: 10 additions & 0 deletions frigate/output/output.py
@@ -63,6 +63,7 @@ def receiveSignal(signalNumber, frame):
birdseye: Optional[Birdseye] = None
preview_recorders: dict[str, PreviewRecorder] = {}
preview_write_times: dict[str, float] = {}
failed_frame_requests: dict[str, int] = {}

move_preview_frames("cache")

@@ -99,7 +100,16 @@ def receiveSignal(signalNumber, frame):

if frame is None:
logger.debug(f"Failed to get frame {frame_id} from SHM")
failed_frame_requests[camera] = failed_frame_requests.get(camera, 0) + 1

if failed_frame_requests[camera] > config.cameras[camera].detect.fps:
logger.warning(
f"Failed to retrieve many frames for {camera} from SHM, consider increasing SHM size if this continues."
)

continue
else:
failed_frame_requests[camera] = 0

# send camera frame to ffmpeg process if websockets are connected
if any(
11 changes: 8 additions & 3 deletions frigate/output/preview.py
@@ -78,7 +78,7 @@ def __init__(
# write a PREVIEW at fps and 1 key frame per clip
self.ffmpeg_cmd = parse_preset_hardware_acceleration_encode(
config.ffmpeg.ffmpeg_path,
config.ffmpeg.hwaccel_args,
"default",
input="-f concat -y -protocol_whitelist pipe,file -safe 0 -threads 1 -i /dev/stdin",
output=f"-threads 1 -g {PREVIEW_KEYFRAME_INTERVAL} -bf 0 -b:v {PREVIEW_QUALITY_BIT_RATES[self.config.record.preview.quality]} {FPS_VFR_PARAM} -movflags +faststart -pix_fmt yuv420p {self.path}",
type=EncodeTypeEnum.preview,
@@ -154,6 +154,7 @@ def __init__(self, config: CameraConfig) -> None:
self.start_time = 0
self.last_output_time = 0
self.output_frames = []

if config.detect.width > config.detect.height:
self.out_height = PREVIEW_HEIGHT
self.out_width = (
@@ -274,7 +275,7 @@ def should_write_frame(

return False

def write_frame_to_cache(self, frame_time: float, frame) -> None:
def write_frame_to_cache(self, frame_time: float, frame: np.ndarray) -> None:
# resize yuv frame
small_frame = np.zeros((self.out_height * 3 // 2, self.out_width), np.uint8)
copy_yuv_to_position(
@@ -303,7 +304,7 @@ def write_data(
current_tracked_objects: list[dict[str, any]],
motion_boxes: list[list[int]],
frame_time: float,
frame,
frame: np.ndarray,
) -> bool:
# check for updated record config
_, updated_record_config = self.config_subscriber.check_for_update()
@@ -332,6 +333,10 @@ def write_data(
self.output_frames,
self.requestor,
).start()
else:
logger.debug(
f"Not saving preview for {self.config.name} because there are no saved frames."
)

# reset frame cache
self.segment_end = (
8 changes: 7 additions & 1 deletion web/src/api/ws.tsx
@@ -69,7 +69,10 @@ function useValue(): useValueReturn {
...prevState,
...cameraStates,
}));
setHasCameraState(true);

if (Object.keys(cameraStates).length > 0) {
setHasCameraState(true);
}
// we only want this to run initially when the config is loaded
// eslint-disable-next-line react-hooks/exhaustive-deps
}, [wsState]);
@@ -93,6 +96,9 @@ function useValue(): useValueReturn {
retain: false,
});
},
onClose: () => {
setHasCameraState(false);
},
shouldReconnect: () => true,
retryOnError: true,
});