Guidance on Implementing SLAM for Autonomous QR Code Scanning with ANAFI Drone

Hi,

I’m currently working on a project where I use the ANAFI drone to autonomously scan a room for QR codes and collect their information. The goal is to enable the drone to navigate an indoor space without manual control, detect and localize QR codes, and systematically scan the entire area to retrieve data from multiple codes.

So far, I have:
• Implemented a basic OpenCV script for QR code detection.
• Tested a simple takeoff/landing trigger based on QR detection (a stripped-down sketch of this combination follows below).
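
Roughly, the combination looks like the sketch below (using OpenCV's QRCodeDetector and Olympe's basic TakeOff/Landing commands; the IP address is the ANAFI's default over Wi-Fi):

```python
import cv2
import olympe
from olympe.messages.ardrone3.Piloting import TakeOff, Landing

DRONE_IP = "192.168.42.1"  # default ANAFI Wi-Fi address

drone = olympe.Drone(DRONE_IP)
drone.connect()

detector = cv2.QRCodeDetector()
cap = cv2.VideoCapture(f"rtsp://{DRONE_IP}/live")  # ANAFI front-camera stream

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        data, points, _ = detector.detectAndDecode(frame)
        if data:  # non-empty string means a QR code was decoded
            print("QR payload:", data)
            drone(TakeOff()).wait()
            break
finally:
    cap.release()
    drone(Landing()).wait()
    drone.disconnect()
```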

Next steps:
I’m considering using SLAM (Simultaneous Localization and Mapping) for autonomous navigation—allowing the drone to create a map of the room and then use it for systematic exploration (e.g., grid-based or wall-following path) to detect all QR codes.

Before I proceed, I’d like to ask:

  1. Is SLAM the right approach for this type of project with the ANAFI platform?
  2. Does the ANAFI SDK (Olympe) provide access to drone pose data (position, orientation) that would allow integration with an external SLAM algorithm (e.g., ORB-SLAM, RTAB-Map)?
  3. Are there recommended tools, libraries, or best practices for integrating SLAM with ANAFI drones in indoor environments?
  4. What are the limitations I should be aware of when using ANAFI indoors for mapping (e.g., GPS availability, IMU drift, visual features)?
  5. Is there any support for visual odometry or pre-existing mapping features in the ANAFI?
  6. Would you recommend an alternative approach (e.g., visual marker tracking, custom waypoint navigation) for indoor QR code scanning instead of SLAM?

I’d appreciate any insights, resources, or pointers you could provide!

Thank you so much for your time.

Hello @ladadulina,

Here’s a point-by-point response based on your questions:


Is SLAM the right approach for this type of project with the ANAFI platform?

SLAM is not a platform-dependent algorithm, so yes, it can be implemented with the ANAFI.


Does the ANAFI SDK (Olympe) provide access to drone pose data (position, orientation) that would allow integration with an external SLAM algorithm (e.g., ORB-SLAM, RTAB-Map)?

Yes, pose data (position and orientation) is available through the drone's telemetry, and it is a necessary input for integrating an external SLAM algorithm.
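
For example, a minimal sketch of polling that telemetry with Olympe's get_state, assuming the standard ardrone3 PilotingState messages:

```python
import olympe
from olympe.messages.ardrone3.PilotingState import (
    AttitudeChanged,
    PositionChanged,
    SpeedChanged,
)

drone = olympe.Drone("192.168.42.1")
drone.connect()

attitude = drone.get_state(AttitudeChanged)  # roll, pitch, yaw (radians)
position = drone.get_state(PositionChanged)  # GPS lat/lon/alt; placeholder values indoors
speed = drone.get_state(SpeedChanged)        # speedX/Y/Z in the NED frame

print("yaw (rad):", attitude["yaw"])
print("speed NED:", speed["speedX"], speed["speedY"], speed["speedZ"])

drone.disconnect()
```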


Are there recommended tools, libraries, or best practices for integrating SLAM with ANAFI drones in indoor environments?

SLAM is not implemented on the ANAFI drone itself, so you’ll need to use external libraries. There are many resources, libraries, and research papers available online depending on the type of SLAM you’re interested in. It’s up to you to choose the approach and tools that best fit your project constraints and environment.
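
As a concrete starting point, the data plumbing usually looks something like the sketch below: pull frames from the ANAFI's RTSP stream and run an OpenCV ORB feature front end, which is the kind of input a system such as ORB-SLAM consumes (the SLAM back end itself would come from an external library):

```python
import cv2

cap = cv2.VideoCapture("rtsp://192.168.42.1/live")  # ANAFI video stream
orb = cv2.ORB_create(nfeatures=1000)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    # Hand (gray, keypoints, descriptors) to your SLAM pipeline here.
    vis = cv2.drawKeypoints(frame, keypoints, None, color=(0, 255, 0))
    cv2.imshow("ORB features", vis)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```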


What are the limitations I should be aware of when using ANAFI indoors for mapping (e.g., GPS availability, IMU drift, visual features)?

As detailed in the white paper, GPS is not available indoors, and the IMU drifts over time. These are precisely the kinds of limitations that SLAM is designed to address: it compensates for them by relying on camera-based localization rather than on those sensors alone.
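
If it helps, you can confirm the GPS limitation at runtime; here is a small sketch assuming the ardrone3 GPSFixStateChanged message:

```python
import olympe
from olympe.messages.ardrone3.GPSSettingsState import GPSFixStateChanged

drone = olympe.Drone("192.168.42.1")
drone.connect()

fix = drone.get_state(GPSFixStateChanged)
if not fix["fixed"]:
    # Expected indoors: no GPS fix, so rely on vision rather than GPS waypoints.
    print("No GPS fix; GPS-based navigation is unavailable")

drone.disconnect()
```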


Is there any support for visual odometry or pre-existing mapping features in the ANAFI?

The drone includes an occupancy grid used for obstacle avoidance; however, this grid is not designed for SLAM or mapping purposes.

To implement visual odometry or mapping, users must develop their own processing pipelines using the available sensor data and video feed.
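
For illustration, here is a minimal two-frame visual-odometry step with OpenCV (ORB matching plus essential-matrix decomposition). The intrinsics matrix K is a hypothetical placeholder, so calibrate the ANAFI camera for real use, and note that a monocular setup recovers translation only up to scale:

```python
import cv2
import numpy as np

# Hypothetical intrinsics: replace with your own camera calibration.
K = np.array([[900.0,   0.0, 640.0],
              [  0.0, 900.0, 360.0],
              [  0.0,   0.0,   1.0]])

orb = cv2.ORB_create(nfeatures=2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def relative_pose(frame_a, frame_b):
    """Estimate the relative camera motion between two grayscale frames."""
    kp_a, des_a = orb.detectAndCompute(frame_a, None)
    kp_b, des_b = orb.detectAndCompute(frame_b, None)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts_a, pts_b, K, cv2.RANSAC, 0.999, 1.0)
    _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=mask)
    return R, t  # rotation matrix and unit-norm translation direction
```

Fusing these frame-to-frame estimates with the drone's telemetry (and adding loop closure) is essentially what turns visual odometry into full SLAM.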


Would you recommend an alternative approach (e.g., visual marker tracking, custom waypoint navigation) for indoor QR code scanning instead of SLAM?

Choosing the right strategy really depends on your project’s goals and constraints. There are easier methods than SLAM, such as combining pattern detection with exploration algorithms. These approaches enable the drone to systematically scan the environment and detect QR codes without implementing a full SLAM system.
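
For instance, here is a sketch of a relative-motion "lawnmower" sweep built on Olympe's moveBy, with hypothetical room dimensions; your QR detection would run during each leg:

```python
import olympe
from olympe.messages.ardrone3.Piloting import TakeOff, moveBy, Landing
from olympe.messages.ardrone3.PilotingState import FlyingStateChanged

DRONE_IP = "192.168.42.1"  # default ANAFI Wi-Fi address
LEG_LENGTH = 3.0           # metres per forward leg (placeholder)
LANE_STEP = 1.0            # metres between lanes (placeholder)
NUM_LANES = 4              # placeholder

drone = olympe.Drone(DRONE_IP)
drone.connect()
drone(TakeOff() >> FlyingStateChanged(state="hovering", _timeout=10)).wait()

for lane in range(NUM_LANES):
    # Alternate forward and backward legs so no yaw turns are needed.
    direction = 1.0 if lane % 2 == 0 else -1.0
    drone(
        moveBy(direction * LEG_LENGTH, 0, 0, 0)
        >> FlyingStateChanged(state="hovering", _timeout=20)
    ).wait()
    # Run QR detection here (e.g. the OpenCV loop from your script).
    if lane < NUM_LANES - 1:
        # Sidestep right into the next lane.
        drone(
            moveBy(0, LANE_STEP, 0, 0)
            >> FlyingStateChanged(state="hovering", _timeout=10)
        ).wait()

drone(Landing()).wait()
drone.disconnect()
```

Because moveBy is relative and dead-reckoned, position error accumulates over the sweep; in a small room this is usually acceptable, which is what makes this approach simpler than a full SLAM system.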

Best regards,
Hugo
