I’m working on a project that uses GStreamer inside QGroundControl (QGC) to stream an RTSP feed from a camera. The application runs on Android, and I’m having problems with my GStreamer pipeline when the camera is in motion or when the distance between the controller and the receiver increases.
- When the camera moves or the receiver gets further from the controller, the video feed pixelates heavily and eventually freezes.
- The pixelation worsens with rapid movement or unstable network conditions.
```cpp
std::string pipeline_str =
    "rtspsrc location=" + camera + " latency=50 buffer-mode=auto drop-on-latency=true ! "
    "rtph264depay ! "
    "queue leaky=no ! "
    "h264parse ! "
    "decodebin ! "
    "videoconvert ! "
    "video/x-raw,format=RGB ! "
    "queue leaky=downstream ! "
    "appsink name=app_sink sync=false drop=true enable-last-sample=false";
```
With this setup, the delay grows steadily whenever the camera moves or the stream contains high-motion content: playback starts in real time but gradually lags by several seconds. I then tried bounding the queue and switching to TCP:
```cpp
std::string pipeline_str =
    "rtspsrc location=" + camera + " latency=100 protocols=tcp buffer-mode=auto drop-on-latency=true ! "
    "rtph264depay ! "
    "queue max-size-buffers=10 max-size-time=500000000 max-size-bytes=0 leaky=downstream ! "
    "h264parse ! "
    "decodebin ! "
    "videoconvert ! "
    "video/x-raw,format=RGB ! "
    "appsink name=app_sink sync=false drop=true enable-last-sample=false";
```
How can I minimize this delay, and how can I explicitly select a software or hardware-accelerated decoder?
QGC exposes forcesoftwaredecode and forcehardwaredecoder in its gst structure, but when I force either decoder with this pipeline I get no image; the pipeline does not run at all. How can I get this working? Any help would be much appreciated.