Hi,
I’m currently working on a project that uses the ANAFI drone to autonomously scan a room for QR codes and read their contents. The goal is for the drone to navigate an indoor space without manual control, detect and localize QR codes, and cover the whole room systematically so that it retrieves data from every code.
So far, I have:
• Implemented a basic OpenCV script for QR code detection.
• Tested a simple takeoff/landing trigger based on QR detection (a simplified sketch combining these two pieces follows below).
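For context, here’s roughly what that looks like today, stripped down. It assumes the default ANAFI Wi-Fi address and its RTSP live stream, and uses plain cv2.QRCodeDetector for decoding; treat it as a sketch of my setup, not polished code:

```python
# Simplified sketch of my current setup: read the ANAFI's RTSP stream,
# decode QR codes with OpenCV, and take off on the first detection.
# The IP and RTSP URL assume the default ANAFI Wi-Fi connection.
import cv2
import olympe
from olympe.messages.ardrone3.Piloting import TakeOff, Landing

DRONE_IP = "192.168.42.1"
RTSP_URL = f"rtsp://{DRONE_IP}/live"

drone = olympe.Drone(DRONE_IP)
drone.connect()

detector = cv2.QRCodeDetector()
cap = cv2.VideoCapture(RTSP_URL)

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        data, points, _ = detector.detectAndDecode(frame)
        if data:  # non-empty string means a code was decoded
            print("QR payload:", data)
            drone(TakeOff()).wait().success()
            break
finally:
    cap.release()
    drone(Landing()).wait()
    drone.disconnect()
```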
Next steps:
I’m considering SLAM (Simultaneous Localization and Mapping) for autonomous navigation: the drone would build a map of the room and then use it for systematic exploration (e.g., a grid-based or wall-following path) to detect all QR codes.
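As a simpler baseline to compare SLAM against, I’ve also sketched a dead-reckoned lawnmower sweep built on relative moveBy commands. The leg lengths and row count below are placeholder values for my room, and I’m aware this relies entirely on the drone’s visual-inertial odometry, so drift will accumulate without an external position reference:

```python
# Hedged sketch of a lawnmower (boustrophedon) sweep using relative
# moveBy commands. LEG/SPACING/ROWS are placeholder values; indoors,
# this depends on the ANAFI's visual-inertial odometry and will drift
# over long runs without correction.
import math
import olympe
from olympe.messages.ardrone3.Piloting import TakeOff, moveBy, Landing
from olympe.messages.ardrone3.PilotingState import FlyingStateChanged

drone = olympe.Drone("192.168.42.1")
drone.connect()
drone(TakeOff() >> FlyingStateChanged(state="hovering", _timeout=10)).wait()

LEG = 4.0      # forward distance per sweep leg (m)
SPACING = 1.0  # distance between adjacent legs (m)
ROWS = 4

for i in range(ROWS):
    drone(moveBy(LEG, 0, 0, 0)).wait()        # fly one leg
    if i == ROWS - 1:
        break
    turn = math.pi / 2 if i % 2 == 0 else -math.pi / 2
    drone(moveBy(0, 0, 0, turn)).wait()       # quarter turn toward next row
    drone(moveBy(SPACING, 0, 0, 0)).wait()    # cross over to the next row
    drone(moveBy(0, 0, 0, turn)).wait()       # face back down the room

drone(Landing()).wait()
drone.disconnect()
```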
Before I proceed, I’d like to ask:
- Is SLAM the right approach for this type of project with the ANAFI platform?
- Does the ANAFI SDK (Olympe) provide access to drone pose data (position, orientation) that could be fed to an external SLAM algorithm (e.g., ORB-SLAM, RTAB-Map)? (I’ve sketched the kind of telemetry access I have in mind after this list.)
- Are there recommended tools, libraries, or best practices for integrating SLAM with ANAFI drones in indoor environments?
- What are the limitations I should be aware of when using ANAFI indoors for mapping (e.g., GPS availability, IMU drift, visual features)?
- Does the ANAFI have any built-in support for visual odometry or existing mapping features?
- Would you recommend an alternative approach (e.g., visual marker tracking, custom waypoint navigation) for indoor QR code scanning instead of SLAM?
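To make the pose question concrete, here’s the kind of polling loop I had in mind, assuming drone.get_state() with the PilotingState events is the intended way to read the latest telemetry (and that these events are emitted once connected; please correct me if not):

```python
# What I had in mind for feeding an external SLAM/odometry pipeline:
# poll the latest attitude/speed/altitude events from Olympe and
# timestamp them. GPS PositionChanged is unusable indoors, so it is
# skipped here.
import time
import olympe
from olympe.messages.ardrone3.PilotingState import (
    AttitudeChanged,
    SpeedChanged,
    AltitudeChanged,
)

drone = olympe.Drone("192.168.42.1")
drone.connect()

for _ in range(50):
    att = drone.get_state(AttitudeChanged)  # roll/pitch/yaw in radians
    spd = drone.get_state(SpeedChanged)     # speed in the NED frame, m/s
    alt = drone.get_state(AltitudeChanged)  # altitude above takeoff, m
    print(
        f"{time.time():.3f} "
        f"yaw={att['yaw']:.3f} "
        f"vx={spd['speedX']:.2f} "
        f"alt={alt['altitude']:.2f}"
    )
    time.sleep(0.1)

drone.disconnect()
```

If there’s a callback or subscription mechanism better suited to high-rate pose streaming than polling, I’d be glad for a pointer to it.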
I’d appreciate any insights, resources, or pointers you could provide!
Thank you so much for your time.