Even though it might theoretically be feasible to run computer vision tasks inside PX4 on the Snapdragon platform, I would choose a different, more decoupled approach.
You have the PX4 application running (mostly on the DSP of the Snapdragon, with another process running on the Linux side). Then you have your computer vision application, which also runs on the Linux side. The output of this application (e.g. pose estimates, if you are developing a visual positioning system) is then communicated to PX4 via some interprocess communication mechanism such as MAVLink or RTPS. On the PX4 side, you write a module that receives your data and takes the appropriate actions (in the case of vision position, this is already taken care of by the state estimators, LPE or EKF2).
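To make the pattern concrete, here is a minimal stdlib-only Python sketch of the Linux-side handoff: the vision process packs a timestamped pose and sends it over a local UDP socket to a receiver standing in for the PX4-facing bridge. The wire format (`POSE_FMT`) and the function names are hypothetical, chosen for illustration only; in a real setup you would instead send a proper MAVLink `VISION_POSITION_ESTIMATE` message (e.g. via pymavlink) or use the RTPS bridge.

```python
import socket
import struct
import time

# Hypothetical wire format for illustration: timestamp in microseconds
# (unsigned 64-bit) followed by x, y, z, roll, pitch, yaw as doubles.
# A real system would send a MAVLink VISION_POSITION_ESTIMATE instead.
POSE_FMT = "<Q6d"

def pack_pose(t_us, x, y, z, roll, pitch, yaw):
    """Serialize one pose estimate into bytes."""
    return struct.pack(POSE_FMT, t_us, x, y, z, roll, pitch, yaw)

def unpack_pose(data):
    """Deserialize bytes back into (t_us, x, y, z, roll, pitch, yaw)."""
    return struct.unpack(POSE_FMT, data)

if __name__ == "__main__":
    # Receiver stands in for the PX4-side bridge process.
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))
    addr = rx.getsockname()

    # Sender stands in for the computer vision process.
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    msg = pack_pose(int(time.time() * 1e6), 1.0, 2.0, -0.5, 0.0, 0.0, 1.57)
    tx.sendto(msg, addr)

    # The pose arrives intact on the receiving side.
    data, _ = rx.recvfrom(1024)
    t_us, x, y, z, roll, pitch, yaw = unpack_pose(data)
    print(x, y, z)

    tx.close()
    rx.close()
```

The point of the sketch is the separation itself: the vision process and the flight controller share nothing but a small, well-defined message, so either side can crash, restart, or be swapped out without taking the other down.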
With this approach, the flight control application stays lean and is not weighed down by large library dependencies or heavy computation.