🚀 Savant Computer Vision Framework v0.2.4 is out: More Rust-y than before, packed with New Samples and Features

We’re thrilled to announce the release of Savant version 0.2.4, a cutting-edge computer vision and video analytics framework optimized for Nvidia hardware! Here’s what’s new:

After a period of intense work, version 0.2.4 arrives with a wealth of new features and examples. This update significantly broadens the toolkit for building reliable computer vision pipelines quickly and efficiently.

We believe practical examples speak louder than pages of text, so our focus isn’t only on extensive documentation but also on presenting features in a form that’s easy to understand and reuse.

This release adds three new examples:

  • Age/Gender Prediction Example demonstrates how to use YoloV5-Face, work with a custom model that predicts age and gender, and perform sophisticated in-GPU affine transformations based on facial landmarks with OpenCV-CUDA and Python (a minimal alignment sketch follows this list);
  • Conditional Video Encoding Example illustrates a pipeline that draws on frames and encodes the video stream only when needed (here, when a model detects objects); it shows how to avoid spending compute on footage that is only required under certain conditions;
  • Multiple RTSP Streams Example showcases a simple pipeline that processes two RTSP streams and re-broadcasts them over RTSP; Savant’s approach to dynamic stream processing differs considerably from what people often expect (and often over-complicate), so this example demonstrates how little is needed to handle multiple streams at once.
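To give a feel for the in-GPU alignment step used in the age/gender example, here is a minimal sketch of landmark-based affine alignment with OpenCV-CUDA in Python. It is not Savant’s actual code: the landmark coordinates and crop size are made up for illustration, and a real pipeline would keep the crop on the GPU instead of downloading it.

```python
import cv2
import numpy as np

# Hypothetical landmarks (left eye, right eye, nose tip) reported by a face detector,
# in source-frame pixel coordinates.
src_pts = np.float32([[101.0, 120.0], [160.0, 118.0], [130.0, 161.0]])

# Canonical positions of the same landmarks inside a 112x112 aligned face crop.
dst_pts = np.float32([[38.0, 51.0], [74.0, 51.0], [56.0, 92.0]])

# 2x3 affine matrix that maps the detected landmarks onto the canonical layout.
matrix = cv2.getAffineTransform(src_pts, dst_pts)

# Stand-in for a decoded video frame; upload it once and warp it entirely on the GPU,
# so the pixels never make a round trip through host memory.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
gpu_frame = cv2.cuda_GpuMat()
gpu_frame.upload(frame)
gpu_aligned = cv2.cuda.warpAffine(gpu_frame, matrix, (112, 112))

# Download only for inspection; in a real pipeline the aligned crop would be fed to the model on the GPU.
aligned = gpu_aligned.download()
```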

New Features

  • Conditional Drawing and Encoding reduces network traffic and saves CPU/GPU resources (see the sketch after this list);
  • A new FFmpeg-based RTSP source adapter performs considerably better than its GStreamer-based counterparts, especially when streams include B-frames;
  • A new general-purpose FFmpeg-based source adapter handles any input that FFmpeg supports.
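Conditional encoding boils down to a simple idea: pay for drawing and encoding only when a frame is actually wanted. The sketch below is purely conceptual and is not Savant’s configuration or API; the `Frame` class and `draw_and_encode()` stub are hypothetical placeholders for upstream detector output and the expensive encode path.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Frame:
    index: int
    detections: List[str]  # labels produced upstream by a detection model


def draw_and_encode(frame: Frame) -> None:
    # Placeholder for the expensive part: drawing overlays and pushing the frame to an encoder.
    print(f"frame {frame.index}: drawing {frame.detections} and encoding")


def process(frames: List[Frame]) -> None:
    for frame in frames:
        if frame.detections:
            # Only spend CPU/GPU time on frames somebody actually wants to see.
            draw_and_encode(frame)
        # Frames without detections pass through without any drawing or encoding cost.


process([Frame(0, []), Frame(1, ["person"]), Frame(2, [])])
```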

Quality Assurance

  • We now check for performance regressions on every merged ticket: our goal is to make Savant faster, not slower, so we keep a close watch on how every change affects performance;
  • We’re moving Savant’s internals from Python to Rust: core functionality is being extracted into a thoroughly tested library, Savant-rs, and Python-based components are gradually being replaced with Rust-based ones so that Savant runs GIL-free where feasible and the code quality improves. This is an ongoing process; the upcoming 0.2.5 release will bring more GIL-free integrations.

Documentation

  • We have meticulously documented source and sink adapters;
  • We’ve demonstrated how to use image preprocessing in both our standard documentation and in a fully-featured example (age/gender prediction);
  • We’ve written a new section on setting up the development environment in VS Code.

DeepStream 6.2 Bug Workaround

We’ve reported a bug in NVENC on Jetson devices that affects DeepStream 6.2: NVENC outputs encoded frames out of order when the actual framerate doesn’t match the configured one, which can happen with RTSP sources or when frames are bypassed under certain conditions.

In Savant, we’ve implemented a workaround: frames are reordered when needed (a conceptual sketch of such reordering follows). We hope Nvidia will address the issue in an upcoming DeepStream release.
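To illustrate the general idea behind such a workaround (this is a conceptual sketch, not Savant’s implementation), the snippet below buffers a handful of frames and releases them in presentation-timestamp order; the buffer depth and the sample timestamps are arbitrary.

```python
import heapq


def reorder_by_pts(frames, max_buffer=4):
    """Yield (pts, data) pairs in presentation order, tolerating mildly out-of-order input."""
    heap = []
    for pts, data in frames:
        heapq.heappush(heap, (pts, data))
        if len(heap) > max_buffer:
            # The oldest buffered frame is now safe to emit.
            yield heapq.heappop(heap)
    while heap:
        yield heapq.heappop(heap)


# Out-of-order PTS values similar to what the encoder may emit when the bug triggers.
encoded = [(0, b"f0"), (2, b"f2"), (1, b"f1"), (3, b"f3"), (5, b"f5"), (4, b"f4")]
print([pts for pts, _ in reorder_by_pts(encoded)])  # [0, 1, 2, 3, 4, 5]
```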

Upcoming in 0.2.5

The upcoming release will move more code to Rust to further reduce GIL dependency in pipelines. It will also add functionality for dynamic pipeline configuration and edge-oriented development. Expect three to four new examples covering both basic and advanced features.

🔗 Join our Discord and visit GitHub to learn more about Savant’s cutting-edge features and to stay updated with our latest advancements. Let’s shape the future of computer vision together!