Autonomous Landing Site Selection (Dissertation)
My third-year project at the University of Southampton was on autonomous landing site selection for micro aerial vehicles (such as small quadcopters). As well as experimenting with ultrasound terrain scanning, I created a real-world partial implementation of the algorithm from Park and Kim’s paper, “Landing site searching algorithm of a quadrotor using depth map of stereo vision on unknown terrain,” which uses data from a simulated 3D scanner to find suitable landing sites.
My implementation used an ASUS Xtion PRO Live 3D scanner (similar to Microsoft's Kinect), which was originally going to be mounted on a Parrot AR.Drone. Unfortunately, because of the scanner's weight and the difficulty of controlling the AR.Drone in suitable indoor spaces, I had to resort to fixing the 3D scanner in place and moving various items around to create a 'landscape' below it. Within that landscape, my algorithm (an improved version of Park and Kim's) identified potential landing sites and highlighted them on a computer screen.
The system was written in Python, and used the Robot Operating System (ROS) to connect to the 3D scanner (and originally to the AR.Drone's controller and sensors).
Park and Kim's proposed algorithm is fairly simple: essentially, it finds the largest circle of smooth (though not necessarily horizontal) terrain in a depth map. After implementing as much of it as I could with a fixed 3D scanner, I made a couple of improvements: adding a method for assessing the slope of a potential site, and using the aircraft's actual dimensions to determine whether it could fit into sites much smaller than its bounding circle would suggest.
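The core idea, finding the largest circle of smooth terrain in a depth map, can be sketched in a few lines of NumPy. This is a minimal illustration rather than my actual implementation: the function name, the brute-force search over centres and radii, and the particular thresholds (`smooth_thresh`, `slope_limit`) are all assumptions for the sake of the example. It fits a plane to each circular patch so that sloped but smooth terrain still qualifies, which also gives a simple slope measure like the one my improvement added; the aircraft-footprint refinement is not shown.

```python
import numpy as np

def find_landing_site(depth, max_radius=10, smooth_thresh=0.05, slope_limit=0.5):
    """Search a depth map for the largest circular patch of smooth terrain.

    Returns (radius, (row, col)) for the best site found, or (0, None)
    if no circle of radius >= 1 passes the smoothness and slope tests.
    Thresholds are illustrative, in depth units (per pixel, for the slope).
    """
    h, w = depth.shape
    best_radius, best_centre = 0, None
    for r in range(h):
        for c in range(w):
            # Only try radii that would beat the current best site.
            for radius in range(best_radius + 1, max_radius + 1):
                if r - radius < 0 or c - radius < 0 or r + radius >= h or c + radius >= w:
                    break  # circle would fall off the edge of the map
                window = depth[r - radius:r + radius + 1, c - radius:c + radius + 1]
                yy, xx = np.ogrid[-radius:radius + 1, -radius:radius + 1]
                mask = (yy ** 2 + xx ** 2) <= radius ** 2
                ys, xs = np.nonzero(mask)
                patch = window[mask]
                # Fit a plane to the circular patch; the residual measures
                # roughness, and the plane coefficients give the slope.
                A = np.column_stack([ys, xs, np.ones(len(ys))])
                coef, *_ = np.linalg.lstsq(A, patch, rcond=None)
                roughness = (patch - A @ coef).std()
                slope = np.hypot(coef[0], coef[1])  # depth change per pixel
                if roughness > smooth_thresh or slope > slope_limit:
                    break  # a larger circle here cannot pass either
                best_radius, best_centre = radius, (r, c)
    return best_radius, best_centre
```

On a synthetic depth map with a rough half and a flat half, the search settles on the largest circle that fits entirely inside the flat region. A real implementation would replace the brute-force scan with something faster and take its depth frames from the 3D scanner via ROS.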