Follow me framing target


#1

Product: Bebop 2
Product version: 4.7.1
SDK version: 3.13.1
Use of libARController: Yes (ARSDK)
SDK platform: Android
I'm trying to implement Follow Me, and I found that several things should be set before activating follow me mode:

    featureControllerInfo.setGps()
    featureControllerInfo.setBarometer()
    featureFollowMe.sendConfigureRelative()
    featureFollowMe.sendTargetFramingPosition()
    featureFollowMe.sendStart()
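For reference, here is roughly how I am calling them at the moment (the parameter values, and the mode enum name in sendStart, are my own guesses from the generated bindings, so please correct me if they are wrong):

    // Feed the phone's GPS fix to the drone; lat/lng/alt, the accuracies and
    // the speeds come from my LocationManager listener.
    featureControllerInfo.setGps(lat, lng, alt, hAccuracy, vAccuracy,
            northSpeed, eastSpeed, downSpeed, System.currentTimeMillis());
    // Feed the phone's barometer reading (pressure, then a timestamp).
    featureControllerInfo.setBarometer(pressure, System.currentTimeMillis());
    // use_default = 1 should keep the default distance/elevation/azimuth.
    featureFollowMe.sendConfigureRelative((byte) 1, 0f, 0f, 0f);
    // Horizontal and vertical framing as percentages (0-100).
    featureFollowMe.sendTargetFramingPosition((byte) 50, (byte) 50);
    // Start in relative mode; the enum name is my guess from the generated code.
    featureFollowMe.sendStart(ARCOMMANDS_FOLLOW_ME_MODE_ENUM.ARCOMMANDS_FOLLOW_ME_MODE_RELATIVE);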

Is there anything else that should be set? In particular, should I also use sendTargetImageDetection?

My other question is about sendTargetFramingPosition(). Can I use it to select the target object? It only accepts horizontal and vertical values as percentages, and I don't know how to use those parameters to frame my object like an ROI.


#2

Framing position establishes the offset within the viewable frame that target image detection should sync to.
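For example, if you want the target held at the horizontal center but a third of the way down the frame, something like this (both arguments are percentages of the frame, 0-100):

    // 50% across, 33% vertically. Worth verifying against the reference docs
    // which end of the frame the vertical percentage is measured from.
    featureFollowMe.sendTargetFramingPosition((byte) 50, (byte) 33);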


#3

Thanks, I have seen your post.

In your post you didn't call sendTargetImageDetection, so should I call it myself? If yes:

    sendTargetImageDetection(deviceController->follow_me, (float)target_azimuth, (float)target_elevation, (float)change_of_scale, (uint8_t)confidence_index, (uint8_t)is_new_selection, (uint64_t)timestamp);

Could you give an example of what values I should pass?

And I still can't understand the "offset". If I send this:

    featureFollowMe.sendTargetFramingPosition((byte) 50, (byte) 50)

will I get the range of the red frame?


#4

Passing the parameters 50, 50 to sendTargetFramingPosition will cause the drone's camera to center (50% horizontal / 50% vertical) the computed visual location of the target image passed in via sendTargetImageDetection.

https://developer.parrot.com/docs/reference/all/index.html#set-the-target-framing

I have these functions working in a rudimentary manner, using BoofCV's TLD tracker to convert target coordinates into the format sendTargetImageDetection requires, but my own implementation using direct camera control has proven more reliable – mainly because the (presumably) proprietary build of OpenCV that Parrot is using is not available to the public, so we have to guess how best to implement this method.

I can further speculate that they are using some form of the OpenCV tracker classes (one example: https://docs.opencv.org/3.4/dc/d1c/classcv_1_1TrackerTLD.html) for their own follow me routines, but again, this is mere speculation on my part.
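For what it's worth, the BoofCV side of my setup looks roughly like this (a minimal sketch: the config is left at defaults, and downscaledFrame plus the x0..y1 selection box are placeholders for your own frame handling):

    import boofcv.abst.tracker.TrackerObjectQuad;
    import boofcv.factory.tracker.FactoryTrackerObjectQuad;
    import boofcv.struct.image.GrayU8;
    import georegression.struct.shapes.Quadrilateral_F64;

    // TLD tracker over 8-bit gray frames, default configuration.
    TrackerObjectQuad<GrayU8> tracker = FactoryTrackerObjectQuad.tld(null, GrayU8.class);

    // Initialize once with the user-selected bounding box...
    Quadrilateral_F64 location = new Quadrilateral_F64(x0, y0, x1, y0, x1, y1, x0, y1);
    tracker.initialize(downscaledFrame, location);

    // ...then per video frame: 'location' is updated in place, and it is what
    // the getPan()/getTilt() methods below consume.
    boolean visible = tracker.process(downscaledFrame, location);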

Here’s something rough:

    // Absolute target angles: drone yaw plus camera pan for azimuth, and
    // camera tilt for elevation, both converted to radians.
    final float horizAngle = (float) Math.toRadians(droneData.getDroneAttitude().getYaw() + status.getPan());
    final float vertAngle = (float) Math.toRadians(status.getTilt());

    // change_of_scale = 1, confidence_index = 255 (max), is_new_selection = 1
    // only when the target has just (re)appeared, timestamp in milliseconds.
    featureFollowMe.sendTargetImageDetection(horizAngle, vertAngle, 1, (byte) 255, (byte) (lastStatus == null || !lastStatus.isVisible() ? 1 : 0), System.currentTimeMillis());

And here are snippets from my TrackerStatus class (you can see it declared above as status and lastStatus):

    public float getPan() {
        final float zero = compressedWidth / 2;
        final float minMultiplier = droneData.getCameraOrientation().getPanMin() / zero;
        final float maxMultiplier = droneData.getCameraOrientation().getPanMax() / zero;
        final float x = (float) (boxWidth / 2 + location.a.x / SIZE_MULTIPLIER);

        if (x > zero) {
            return droneData.getCameraOrientation().getPan() + (x - zero) * maxMultiplier;
        } else if (x < zero) {
            return droneData.getCameraOrientation().getPan() + (zero - x) * minMultiplier;
        }

        return droneData.getCameraOrientation().getPan();
    }

    public float getTilt() {
        final float zero = compressedHeight / 2;
        final float minMultiplier = droneData.getCameraOrientation().getTiltMin() / zero;  // downward as negative
        final float maxMultiplier = droneData.getCameraOrientation().getTiltMax() / zero;  // upward as positive
        final float y = (float) (boxHeight / 2 + location.a.y / SIZE_MULTIPLIER);

        if (y > zero) {
            return droneData.getCameraOrientation().getTilt() + (y - zero) * minMultiplier;
        } else if (y < zero) {
            return droneData.getCameraOrientation().getTilt() + (zero - y) * maxMultiplier;
        }

        return droneData.getCameraOrientation().getTilt();
    }

Above, "location" represents my actual tracker Quadrilateral_F64 results. This is all very raw (and arguably sloppy), but it is as far as my implementation got trying to use the Parrot-provided sendTargetImageDetection and sendTargetFramingPosition methods.

IMHO, without more details and/or access to their tracker, it is probably easier to implement your own using the CameraOrientation methods, as I eventually did.
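To illustrate what I mean by direct camera control, the core of it is just aiming the camera yourself each frame – something like this sketch (I'm using the ardrone3 CameraOrientationV2 command here; double-check the exact method name in your generated feature class):

    // Aim the camera at the tracked target ourselves, bypassing the
    // drone's built-in image detection. Tilt and pan are in degrees.
    featureARDrone3.sendCameraOrientationV2(status.getTilt(), status.getPan());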


#5

Sorry for replying so late.

I have tried my best to use your code, but there are some variables I can't understand:

    compressedWidth
    boxWidth
    location.a.x
    SIZE_MULTIPLIER

Also, how can I implement this?

    droneData.getCameraOrientation().getPan()

Another question: I am trying look_at mode instead. When I use sendTargetFramingPosition((byte) 50, (byte) 50), the framing is set to the center of the screen, but how can I set the object to look at? I tried standing at the center of the screen, but the camera doesn't look at me.

The follow me state I receive shows idle behavior, and the follow me info I receive is:

    mode: look_at
    missing_requirement: 8 8 63 (the command while-loop gives me three numbers)
    improvement: 127 127 127

Sorry, I still have so many questions I can't figure out.


#6

compressedWidth, boxWidth, and SIZE_MULTIPLIER are helpers that I use for rescaling between the tracker-processed image and the actual screen dimensions. You don't want your tracker trying to classify a target at native resolution, as it will dramatically slow down the computation.

I address this by downscaling the video frame before processing and then recomputing the coordinates for placement back onto the screen – or, in this case, for feeding targetImageDetection.

As stated, location holds my actual tracker results. Within this quadrilateral are the individual coordinates (bounding box) returned by my tracker.
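Concretely, those helpers amount to something like this (the values are illustrative, and nativeFrameWidth/nativeFrameHeight are placeholders for whatever size your decoder outputs):

    // The tracker runs on a copy of the video frame downscaled by
    // SIZE_MULTIPLIER; these tie the two coordinate spaces together.
    static final double SIZE_MULTIPLIER = 4.0;  // native px per tracker px (illustrative)
    final float compressedWidth  = nativeFrameWidth  / (float) SIZE_MULTIPLIER;
    final float compressedHeight = nativeFrameHeight / (float) SIZE_MULTIPLIER;
    // boxWidth/boxHeight are the tracked bounding box dimensions, and
    // location.a is the first corner (a Point2D_F64) of the Quadrilateral_F64.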

droneData.getCameraOrientation().getPan() in this context is just a wrapper for the actual SDK camera orientation properties.

I have not messed with look at.


#7

Oh, I think I finally understand what you said. So I need to implement a tracker myself first, right?

Actually, I had already tried the OpenCV trackers before, but they don't seem to work on Android. And the CamShift approach isn't reliable.

Maybe I will try BoofCV.

Thanks for your answer. It helps me a lot.


#8

I have the OpenCV trackers working too, but I found BoofCV easier to package and deploy.


#9

I managed to make the drone's camera focus on me successfully, but when I walk away from the drone it doesn't follow me. It just hovers in the same place.

    final double lat = location.getLatitude();
    final double lng = location.getLongitude();
    final float alt = (float) location.getAltitude();
    final float accuracy = location.getAccuracy();

    if (lastLoc != null) {
        // calculate north, east, and down speeds from the previous fix
        final LatLng latLng = new LatLng(location.getLatitude(), location.getLongitude());
        final LatLng lastNsLatLng = new LatLng(lastLoc.getLatitude(), location.getLongitude());
        final LatLng lastEsLatLng = new LatLng(location.getLatitude(), lastLoc.getLongitude());

        // getTime() is in milliseconds, so divide (not multiply) by 1000 to get seconds
        final float timeOffset = (location.getTime() - lastLoc.getTime()) / 1000f;

        ns = ((float) SphericalUtil.computeDistanceBetween(lastNsLatLng, latLng) / timeOffset) * (location.getLatitude() > lastLoc.getLatitude() ? 1f : -1f);
        es = ((float) SphericalUtil.computeDistanceBetween(lastEsLatLng, latLng) / timeOffset) * (location.getLongitude() > lastLoc.getLongitude() ? 1f : -1f);
        ds = (float) (lastLoc.getAltitude() - alt) / timeOffset;  // down speed: positive when descending
        mBebopDrone.setGps(lat, lng, alt, accuracy, accuracy, ns, es, ds, System.currentTimeMillis());
    }
    lastLoc = location;

I don't know whether my setGps call is wrong.

    sendConfigureGeographic((byte) 0, (float) 0, (float) 0, (float) 0);

Should I pass any real values here, or just keep them as 0? Also, I only call this once before starting follow me. Should I keep updating it?