Product: AR Drone 2.0
Product version: [2.0.X]
SDK version: [2.0.1]
Use of libARController: [YES/NO] (Only for ARSDK) Yes
SDK platform: [iOS/Android/Unix/Python…] Ubuntu 12 (SDK 2) and Ubuntu 14 (ROS, OpenCV)
Reproducible with the official app: [YES/NO/Not tried] Yes
I have installed the ARDrone SDK 2.0 on an Ubuntu 12 32-bit OS running in VMware on a Mac, and I have installed the Robot Operating System (ROS) and OpenCV facial recognition software on Ubuntu 14. I would like the drone's video feed to be processed for image detection (e.g., faces), but I do not know the best way to connect these pieces into one application.
So far I have the following pieces working individually:

- ROS + OpenCV: a catkin C++ program that views the laptop's default USB camera.
- OpenCV + facial detection: the OpenCV facedetect application draws a circle around the face in the USB camera feed.
- SDK2 + ardrone_navigation: the ARDrone SDK 2 application runs with a Sony PS2 game controller. The ARDrone flies much better with the game controller than with the iPhone, and I boosted the laptop's WiFi signal with a WiFi booster.
- ARDrone + ROS: the ardrone_autonomy drivers are installed in ROS and communicating with the ARDrone. I can view the navdata flight information from the ARDrone and view the camera feed (ARDrone + Camera + ROS).

The only remaining issue is linking all these pieces together (ARDrone + Camera + ROS + OpenCV = facial recognition).
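For the ARDrone + ROS piece, a minimal roslaunch fragment like the one below is one way to bring up the ardrone_autonomy driver so that `/ardrone/image_raw` and the navdata topics are published (the node and package names follow ardrone_autonomy's conventions; verify them against your installed version):

```xml
<launch>
  <!-- Starts the ardrone_autonomy driver; the drone's video is then
       published on /ardrone/image_raw and navdata on /ardrone/navdata -->
  <node name="ardrone_driver" pkg="ardrone_autonomy" type="ardrone_driver" output="screen" />
</launch>
```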
Any helpful information would be appreciated, especially a C++ application (.cpp) written for ROS (Hydro) catkin. I'm having problems coding the ARDrone camera portion (subscribing to "/ardrone/image_raw") and overlaying the circle on the face, as in OpenCV's "facedetect" sample.