ARFrame returns null array on getByteData() call


I’m trying to convert an ARFrame to a Bitmap by creating it from the Frame’s ByteData. Now I have a few questions/issues.

  1. Is it even possible to do so? Is there an easy way to perform such a conversion?
  2. Why does the function return null? If the frame itself were null I’d simply get a NullPointerException, but the function actually returns a null array.

I’d be glad if someone was able to help me out here!



You can’t. ARFrame content for the Bebop 2 is not Bitmap data but H.264-encoded video data.
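If you want to see that for yourself, a quick sketch (assuming you already receive ARFrame callbacks) is to log the first few bytes of a frame; with an H.264 stream you typically get a NAL unit start code (e.g. 00 00 00 01) rather than anything resembling pixels:

    // Rough check: the ARFrame payload is an H.264 NAL unit, not pixel data.
    byte[] data = frame.getByteData();
    if (data != null && data.length >= 5) {
        Log.d("ARFrame", String.format("size=%d, first bytes: %02x %02x %02x %02x %02x",
                frame.getDataSize(), data[0] & 0xFF, data[1] & 0xFF, data[2] & 0xFF, data[3] & 0xFF, data[4] & 0xFF));
    }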


Ah alright thx!

I’ve never worked with H.264-encoded data. I found some information online, but it all describes H.264 as a VIDEO encoding standard.

So when receiving an ARFrame - which obviously should represent a single image, not a video - how can I work with it? Is it somehow possible to convert such a single frame into either an NV21-encoded image or a Bitmap?

To provide a little more insight, here’s the situation:
I’m trying to implement object tracking. To do so, I use BoofCV’s object-tracking framework. The problem is that it can only work with the so-called ImageUInt8 data type. It provides two conversion functions (rough sketch of both below):
NV21 -> ImageUInt8
Bitmap -> ImageUInt8
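Roughly, those two conversions look like this with BoofCV’s Android helpers (a sketch only; newer BoofCV releases name the type GrayU8 instead of ImageUInt8, and previewWidth, previewHeight, nv21Bytes and bitmap are placeholders):

    // NV21 camera preview bytes -> gray image
    GrayU8 gray = new GrayU8(previewWidth, previewHeight);
    ConvertNV21.nv21ToGray(nv21Bytes, previewWidth, previewHeight, gray);

    // Bitmap -> gray image
    byte[] storage = ConvertBitmap.declareStorage(bitmap, null);
    GrayU8 grayFromBitmap = ConvertBitmap.bitmapToGray(bitmap, null, GrayU8.class, storage);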


You mean like this? :slight_smile:


Yes, pretty much. Everything’s working fine when using the tablet’s camera, but I can’t seem to get any compatible image format from the drone.


It’s the same process I documented here:

And instead of converting to a Mat, you want to convert to a GrayU8 (assuming TLD), like so:

    if (SIZE_MULTIPLIER == 0) {
        SIZE_MULTIPLIER = bitmap.getWidth() / 320;
    }

    width = (b.x - a.x) / SIZE_MULTIPLIER;
    height = (c.y - a.y) / SIZE_MULTIPLIER;

    final Bitmap compressed = Bitmap.createScaledBitmap(bitmap, Math.round(bitmap.getWidth() / SIZE_MULTIPLIER), Math.round(bitmap.getHeight() / SIZE_MULTIPLIER), true);
    storage = ConvertBitmap.declareStorage(compressed, storage);

    frame = ConvertBitmap.bitmapToGray(compressed, null, GrayU8.class, storage);
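From there, feeding the converted GrayU8 into the tracker would look roughly like this; a sketch only, since the factory and config details vary between BoofCV versions, and the initial quadrilateral here is just a placeholder region:

    // Sketch: create a TLD tracker for GrayU8 images (null = default config),
    // initialize it once on a region, then call process() for each new frame.
    TrackerObjectQuad<GrayU8> tracker = FactoryTrackerObjectQuad.tld(null, GrayU8.class);

    Quadrilateral_F64 location = new Quadrilateral_F64(10, 10, 110, 10, 110, 110, 10, 110);
    tracker.initialize(frame, location);

    // later, for every converted frame:
    boolean visible = tracker.process(frame, location);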


Thank you, I will look into that! :+1:


Unfortunately I ran into another problem. Though the conversion now apparently works, I can’t display the frame anymore. The app only displays a black image. I don’t get any errors, just the black image.

Below is the BebopVideoView, implemented as a TextureView. I hope someone finds a solution!

import android.content.Context;
import android.graphics.SurfaceTexture;
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.util.AttributeSet;
import android.util.Log;
import android.view.Surface;
import android.view.TextureView;

import com.parrot.arsdk.arcontroller.ARControllerCodec;
import com.parrot.arsdk.arcontroller.ARFrame;

import java.nio.ByteBuffer;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BebopVideoView extends TextureView implements TextureView.SurfaceTextureListener {

    private static final String CLASS_NAME = BebopVideoView.class.getSimpleName();

    private static final String TAG = "BebopVideoView";
    private static final String VIDEO_MIME_TYPE = "video/avc";
    private static final int VIDEO_DEQUEUE_TIMEOUT = 33000;
    private static final int VIDEO_WIDTH = 640;
    private static final int VIDEO_HEIGHT = 368;

    private Surface surface;
    private MediaCodec mediaCodec;
    private boolean surfaceCreated = false;
    private boolean codecConfigured = false;
    private ByteBuffer[] buffers;

    private Lock readyLock;

    private ByteBuffer spsBuffer;
    private ByteBuffer ppsBuffer;

    public BebopVideoView(Context context) {
        this(context, null);
    }

    public BebopVideoView(Context context, AttributeSet attrs) {
        this(context, attrs, 0);
    }

    public BebopVideoView(Context context, AttributeSet attrs, int defStyleAttr) {
        super(context, attrs, defStyleAttr);
        readyLock = new ReentrantLock();
    }

    // Feeds one H.264 frame from the SDK into the decoder and releases any
    // decoded output onto the TextureView's surface.
    public void displayFrame(ARFrame frame) {
        if (mediaCodec != null) {
            if (codecConfigured) {
                // Here we have either a good PFrame, or an IFrame
                int index = -1;

                try {
                    index = mediaCodec.dequeueInputBuffer(VIDEO_DEQUEUE_TIMEOUT);
                } catch (IllegalStateException e) {
                    Log.e(TAG, "Error while dequeue input buffer");
                }

                if (index >= 0) {
                    ByteBuffer b;
                    if (android.os.Build.VERSION.SDK_INT >= android.os.Build.VERSION_CODES.LOLLIPOP) {
                        b = mediaCodec.getInputBuffer(index);
                    } else {
                        b = buffers[index];
                        b.clear();
                    }

                    if (b != null) {
                        b.put(frame.getByteData(), 0, frame.getDataSize());
                    }

                    try {
                        mediaCodec.queueInputBuffer(index, 0, frame.getDataSize(), 0, 0);
                    } catch (IllegalStateException e) {
                        Log.e(TAG, "Error while queue input buffer");
                    }
                }
            }

            // Try to display previous frame
            MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
            int outIndex;
            try {
                outIndex = mediaCodec.dequeueOutputBuffer(info, 0);

                while (outIndex >= 0) {
                    mediaCodec.releaseOutputBuffer(outIndex, true);
                    outIndex = mediaCodec.dequeueOutputBuffer(info, 0);
                }
            } catch (IllegalStateException e) {
                Log.e(TAG, "Error while dequeue input buffer (outIndex)");
            }
        }
    }

    // Called when the SDK reports the stream codec: keep the SPS/PPS and
    // configure the decoder if possible.
    public void configureDecoder(ARControllerCodec codec) {
        ARControllerCodec.H264 codecH264 = codec.getAsH264();

        spsBuffer = ByteBuffer.wrap(codecH264.getSps().getByteData());
        ppsBuffer = ByteBuffer.wrap(codecH264.getPps().getByteData());

        if ((mediaCodec != null) && (spsBuffer != null)) {
            configureMediaCodec();
        }
    }

    private void configureMediaCodec() {
        try {
            final MediaFormat format = MediaFormat.createVideoFormat(VIDEO_MIME_TYPE, VIDEO_WIDTH, VIDEO_HEIGHT);
            // SPS and PPS are handed to the decoder as codec-specific data.
            format.setByteBuffer("csd-0", spsBuffer);
            format.setByteBuffer("csd-1", ppsBuffer);

            mediaCodec = MediaCodec.createDecoderByType(VIDEO_MIME_TYPE);
            mediaCodec.configure(format, surface, null, 0);
            mediaCodec.start();

            if (android.os.Build.VERSION.SDK_INT < android.os.Build.VERSION_CODES.LOLLIPOP) {
                buffers = mediaCodec.getInputBuffers();
            }

            codecConfigured = true;
        } catch (Exception e) {
            Log.e(CLASS_NAME, "configureMediaCodec", e);
        }
    }

    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        this.surface = new Surface(surface);
        surfaceCreated = true;
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) {
    }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        if (mediaCodec != null) {
            if (codecConfigured) {
                mediaCodec.stop();
                mediaCodec.release();
            }
            codecConfigured = false;
            mediaCodec = null;
        }

        if (surface != null) surface.release();
        if (this.surface != null) this.surface.release();

        return true;
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) {
    }
}
Thanks in advance!


EDIT: The Bitmap I get from the TextureView actually appears to be an empty image (or transparent, if you will).


You see a valid video stream being rendered in the view? Where are you calling getBitmap() from?


No, the video stream isn’t rendered.
I call getBitmap() in the onFrameReceived function in the Activity.


Let me take a closer look at your view when I have more time. There’s likely something subtle missing.

It is completely appropriate for the bitmap to be black if there is no video stream being rendered.


That is a very expensive call to make at 24/26/30 fps. I would seriously consider a separate thread for it with an interval closer to 5 or so FPS.

You’ll need to do some tweaking regardless. You want the video pipeline to run as fast as possible and sift off the bitmap as appropriate.
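A rough sketch of what I mean, to drop into the Activity (names like videoView are placeholders, adapt the onFrameReceived signature to whatever you already have, and note that getBitmap() may need to run on the UI thread depending on your setup):

    // Sketch only: throttle getBitmap() to roughly 5 fps and run the BoofCV
    // conversion on a worker thread, while the decoder keeps running per frame.
    private final ExecutorService trackingExecutor = Executors.newSingleThreadExecutor();
    private static final long GRAB_INTERVAL_MS = 200; // ~5 fps
    private volatile long lastGrabMs = 0;

    public void onFrameReceived(ARFrame frame) {
        videoView.displayFrame(frame); // keep the video pipeline at full speed

        long now = SystemClock.elapsedRealtime();
        if (now - lastGrabMs < GRAB_INTERVAL_MS) {
            return; // skip this frame for tracking purposes
        }
        lastGrabMs = now;

        final Bitmap bitmap = videoView.getBitmap(); // snapshot of the rendered view
        if (bitmap == null) {
            return;
        }

        trackingExecutor.execute(new Runnable() {
            @Override
            public void run() {
                byte[] storage = ConvertBitmap.declareStorage(bitmap, null);
                GrayU8 gray = ConvertBitmap.bitmapToGray(bitmap, null, GrayU8.class, storage);
                // hand 'gray' to the tracker here
            }
        });
    }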


Oh right, looking at the code again, I actually call the method in a separate Thread. But thanks nonetheless!


Still couldn’t find a solution :confused: