This page has been deprecated and will be archived. Please go to https://www.bitcraze.io/.
This is documentation for the proof-of-concept where the Kinect is used to automatically pilot the Crazyflie. The code is available in the Crazyflie Python client repository on the dev-kinect branch.
The application is intended as a proof-of-concept to show that this is possible and also to act as a starting point for anyone that wants to do some development using the Kinect and the Crazyflie. The application will probably not work out of the box for everyone since the image processing is still pretty basic and some tuning might be needed to detect the Crazyflie.
The application uses libfreenect and OpenCV.
A Crazyflie, an Xbox 360 Kinect, libfreenect (code) and the libfreenect Python wrappers.
These instructions are valid for Ubuntu 13.04 but similar steps should work with other OSes.
git clone https://github.com/OpenKinect/libfreenect.git
cd libfreenect
mkdir build
cd build
cmake ..
make
cd libfreenect/wrappers/python
python setup.py install
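To check that the wrapper was installed correctly, a quick sanity test is to grab one frame from each stream. This is only a minimal sketch; it assumes the wrapper's freenect.sync_get_video() and freenect.sync_get_depth() calls, each of which returns a NumPy array and a timestamp.

# Sanity check: grab one RGB frame and one depth frame from the Kinect
import freenect

rgb, _ = freenect.sync_get_video()    # 640x480x3 uint8 RGB image
depth, _ = freenect.sync_get_depth()  # 640x480 11-bit depth map
print("RGB frame:", rgb.shape, "Depth frame:", depth.shape)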
Clone the Crazyflie Python client and check out the dev-kinect branch:
git clone https://github.com/bitcraze/crazyflie-clients-python.git
cd crazyflie-clients-python
git checkout dev-kinect
Then start the application:
cd bin
./cfkinect
The set-point can be moved when holding down the mouse button in the RGB window.
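Under the hood this boils down to an OpenCV mouse callback that updates the set-point while the button is held down. The sketch below is illustrative only; the window name and variable names are assumptions, not necessarily what cfkinect.py uses.

# Illustrative only: update an X/Y set-point while the left mouse button is held
import cv2

setpoint = [320, 240]  # hypothetical X/Y set-point in image coordinates
dragging = False

def on_mouse(event, x, y, flags, param):
    global dragging
    if event == cv2.EVENT_LBUTTONDOWN:
        dragging = True
    elif event == cv2.EVENT_LBUTTONUP:
        dragging = False
    if dragging and event in (cv2.EVENT_LBUTTONDOWN, cv2.EVENT_MOUSEMOVE):
        setpoint[0], setpoint[1] = x, y

cv2.namedWindow("RGB")
cv2.setMouseCallback("RGB", on_mouse)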
To tweak and develop have a look at the following files:
crazyflie-clients-python/lib/kinect/cfkinect.py
crazyflie-clients-python/lib/kinect/kinect.py
crazyflie-clients-python/lib/kinect/pid.py
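For reference, the regulator side can be sketched as a basic PID loop. The class and gain names below are hypothetical and only illustrate the structure; they are not necessarily what pid.py contains.

# Minimal PID sketch (hypothetical names, for illustration only)
class PID:
    def __init__(self, kp, ki, kd, set_point=0.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.set_point = set_point
        self.integral = 0.0
        self.last_error = 0.0

    def update(self, measurement, dt):
        error = self.set_point - measurement
        self.integral += error * dt
        derivative = (error - self.last_error) / dt
        self.last_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative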
The tracking is done using the depth image in combination with the normal RGB image. The depth image is used to get the distance from the Crazyflie to the Kinect and the RGB image is used to get the X/Y position (the yaw is not tracked at all). In order to make the detection in the RGB image easier a red ball should be attached to the Crazyflie. We used a styrofoam ball and painted it red with a felt-tip pen.
In order for the tracking to work, no other objects that would be picked up in the depth or RGB image should be in view. The best option is to use a white wall as a background.
Since the Kinect's RGB camera and the camera used to read the IR laser are not aligned, it is hard to tell which part of the depth image is the Crazyflie, so no such matching is done at all. Instead the image processing takes a range in the middle (i.e. not the farthest away and not the closest) and calculates the mean of the depth values in that range. If other objects are picked up, this calculation will not work as well.
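In code this amounts to masking out depth readings outside a fixed band and averaging what remains. The sketch below assumes the raw 11-bit depth map from the freenect wrapper; the band limits are made-up values and would need tuning.

# Sketch: estimate distance by averaging depth values inside a fixed band
import numpy as np

def estimate_depth(depth, near=600, far=900):
    # near/far are raw 11-bit Kinect depth units, chosen here for illustration
    mask = (depth > near) & (depth < far)
    if not mask.any():
        return None  # nothing inside the band
    return float(depth[mask].mean())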
The RGB image is used to pick out the red ball by thresholding on a red tint in the HSV color space. This can be tuned to detect other colors instead. Obviously it is not a good idea to have other objects of a similar tint in view.
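With OpenCV this is a matter of converting to HSV, thresholding around the red hue and taking the centroid of the resulting mask. The threshold values below are made up for illustration; note that red wraps around hue 0, so two ranges are combined.

# Sketch: find the red ball in an RGB frame via HSV thresholding
import cv2
import numpy as np

def find_ball(rgb):
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)
    # Red sits at both ends of the hue range, so threshold twice and combine
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None  # ball not found
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # X/Y centroid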
The trickiest part to measure is the depth, which is why we used the Kinect. The problem is that, with the current implementation, the effective range of the IR laser detection is about 0.8-2 meters, which is a bit too short. At two meters the RGB image covers about 1.5×1.5 meters, which is a bit too small an area for flight maneuvers. To do more advanced maneuvers a larger area would be required, which means the Kinect's depth measurement would be lost.
Using only one visual marker makes it impossible to detect the yaw, so more visual markers would have to be added and tracked to fix this. One idea investigated was to add a black dot in the center of the red ball and try to keep this dot in the middle as seen from the camera.
Currently the application runs at ~30 FPS, which is too low when the Crazyflie moves fast. The image becomes smeared, and with a small operating area of 1.5×1.5 meters the Crazyflie quickly goes out of frame.
The model for the regulation has to be improved a lot to get the Crazyflie more stable.
The image processing should be improved to be more robust and generic.
The regulation has to be improved.
Better results could possibly be achieved by calibrating the Kinect.