DepthAI - Release version 1.0.0
Released by themarpe on 19 May 22:29
First DepthAI Release
Depthai documentation: https://docs.luxonis.com/
Release Notes:
- Calibration improvements, including a synchronization check and a stacked UI display.
- Cleanup of example scripts to allow running w/ the `-cnn` option (see here for example usage).
- Fix example scripts to use the latest technique for running w/ and w/out depth (see examples, here).
- Enable a configurable max distance threshold (configurable here).
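As an illustration of the max-distance idea (a host-side sketch, not the DepthAI API; the array layout and millimeter units are assumptions):

```python
import numpy as np

def apply_max_distance(depth_frame: np.ndarray, max_mm: int) -> np.ndarray:
    """Zero out depth values beyond max_mm (0 is treated as invalid/ignored)."""
    clipped = depth_frame.copy()
    clipped[clipped > max_mm] = 0
    return clipped

# Toy 2x3 depth frame in millimeters.
frame = np.array([[500, 1500, 2500],
                  [800, 3000, 1200]], dtype=np.uint16)
print(apply_max_distance(frame, 2000))
```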
- Option to enable a bounding box on the depth stream (use the `-bb` option).
- Timeout for sending neural results if grayscale cameras are not detected (e.g. if they are disconnected intentionally or unintentionally).
- Ability to run H.264 and/or H.265 encoding on the color sensor alone. (See how to use here.)
- Reduce the resolution of the IMX378 to save resources (from 4K to 1920x1080). We can go back up if of interest to anyone; we've since done further optimizations which make this unnecessary.
- macOS Support (see here)
- Ability to configure what portion of the bounding box is used for averaging depth (Z), and thereby the X and Y locations. Configurable via `padding_factor`, here.
- Add capability to get the die temperatures off of the DepthAI VPU. Selectable with the `meta_d2h` stream. See here.
- Fix a segfault on closing streams caused by the upgrade to OpenVINO 2020.1 XLink.
- Fix color aliasing effect with some neural models (see here)
- Store calibration, left-to-right, left-to-RGB, etc. board parameters to EEPROM. See PR here and example usage here.
- More rigorous mapping of depth and object detector bounding box (see here). Includes fix w/ balanced center-cropping on grayscale cameras.
- Facial landmark, age, and facial expression networks run standalone.
- Decode CNN structure based on JSON definition.
- Upgrade OpenVINO to 2020.1.
- Capability to pull JPEG from DepthAI API on-demand (see here)
- Capability to pull H.264/H.265-encoded 1080p stream from DepthAI (see here)
- Multi-stage vehicle detection application (available upon request, see here for example of it running)
- Custom training support (see here and here)
- Multi-output-tensor support. Allows running networks which don't have a single output tensor (unlike MobileNetSSD, which is single-tensor).
- Add mirror capability to Calibration flow for more intuitive use.
- Implement timestamp onto all streams to allow host syncing as desired.
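The per-stream timestamps make host-side syncing straightforward; a minimal sketch (generic Python, not the DepthAI API) that matches each frame from one stream to the nearest-in-time frame from another:

```python
def pair_by_timestamp(left, right, max_diff=0.005):
    """Pair (timestamp, frame) tuples from two streams by nearest timestamp.

    Keeps only pairs whose timestamps differ by at most max_diff seconds.
    """
    pairs = []
    for ts_l, frame_l in left:
        ts_r, frame_r = min(right, key=lambda r: abs(r[0] - ts_l))
        if abs(ts_r - ts_l) <= max_diff:
            pairs.append((frame_l, frame_r))
    return pairs

left = [(0.000, "L0"), (0.033, "L1"), (0.066, "L2")]
right = [(0.001, "R0"), (0.034, "R1"), (0.100, "R2")]
print(pair_by_timestamp(left, right))  # → [('L0', 'R0'), ('L1', 'R1')]
```

L2 is dropped here because no right-stream frame lands within the 5 ms window.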
- Store calibration to EEPROM with versioning.
- Enable tighter stereo synchronization (now on the order of micro-second-level sync).
- Tie exposure of left and right together for better stereo matching.
- Fix for depth directionality here and here
- Open-sourcing of DepthAI (depthai-api, here).
- Sanitize floating point returns in the C++ depthai-api so Python doesn't have to deal with them. (see )
- Implement USB2 mode (see here)
- Implement basic integration test system (here), which acts as a decent integration example.
- Host watchdog solution, which catches host-side errors (for example, coding errors, uncaught exceptions, etc.) and automatically restarts the script if an exception occurs. See here.
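The host watchdog pattern amounts to a supervising loop that restarts the pipeline when an uncaught exception escapes; a minimal illustration (names and retry policy are assumptions, not the actual implementation):

```python
import time

def run_with_watchdog(main, max_restarts=5, delay=1.0):
    """Call main(); on an uncaught exception, wait and restart it."""
    restarts = 0
    while True:
        try:
            return main()
        except Exception as exc:
            restarts += 1
            if restarts > max_restarts:
                raise
            print(f"watchdog: restarting after error: {exc!r}")
            time.sleep(delay)

# Demo: a flaky task that fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = run_with_watchdog(flaky, delay=0.01)
print(result)  # → ok
```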
- Fix USB issue w/ Pi 4. See here.
- Allow selectable frame-rate per stream. See PR here and how to use here
- Platform-agnostic DepthAI reset. A watchdog runs in the DepthAI firmware. If the host does not respond within a certain period of time, DepthAI resets itself to wait for the host-side watchdog to trip and get DepthAI running again.
- megaAI (BW1093)
- 3D Object Localization MVP
- Allow user to select individual streams
- Colorized Depth capability
- Reduce frame latency when running on a Raspberry Pi to match that of fast hosts.
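The colorized-depth capability above can be sketched host-side (generic NumPy with a simple near-red/far-blue ramp; this is an illustrative assumption, not the DepthAI implementation):

```python
import numpy as np

def colorize_depth(depth: np.ndarray) -> np.ndarray:
    """Map a 16-bit depth frame to RGB: near = red, far = blue, 0 = invalid (black)."""
    valid = depth > 0
    norm = np.zeros_like(depth, dtype=np.float32)
    if valid.any():
        d = depth[valid].astype(np.float32)
        norm[valid] = (d - d.min()) / max(d.max() - d.min(), 1.0)
    rgb = np.zeros(depth.shape + (3,), dtype=np.uint8)
    rgb[..., 0] = ((1.0 - norm) * 255 * valid).astype(np.uint8)  # red channel: near
    rgb[..., 2] = (norm * 255 * valid).astype(np.uint8)          # blue channel: far
    return rgb

depth = np.array([[0, 500], [1000, 1500]], dtype=np.uint16)
print(colorize_depth(depth)[..., 0])  # red channel only
```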