This programme is the result of the realization that most UAV manufacturers have concentrated on hardware and functionality, and not enough on post-processing of the acquired data. Conversely, some potential users operate advanced geographical information systems (GIS), but these assume a high input-image quality that is not easily obtained from vibrating, low-resolution, uncorrected UAV cameras, so even a sophisticated GIS cannot analyze raw UAV data. Moreover, civilian customers require advanced analysis tools to derive useful results such as fire predictions, hydrology maps and cartography. There is therefore a need for a software tool capable of performing these tasks, one that can be combined with existing software to offer an attractive product to potential civilian end-users.
We have developed a leading-edge ground control system for UAVs that includes a mission planning and re-tasking module, synthetic vision in a virtual-reality environment, an autopilot with autonomous take-off and landing using a single camera, a robust wind-compensation algorithm, camera correction and calibration, a photogrammetry module, and a complete geographical information system with modules for ortho-rectification, geo-referencing, image fusion, image mosaicing, ISO 19115-compliant metadata editing, etc. The software was first shown at Farnborough 2008, will also be shown at the Paris Air Show in 2009, and is being offered to UAV manufacturers worldwide. The station has been successfully employed to control fixed-wing, rotary-wing and blimp unmanned vehicles in applications such as power-line monitoring, surveillance and marine search and rescue (SAR). Current work includes enhancements such as multi-spectral image analysis, 3D reconstruction from real-time video, and persistent wide-area surveillance.
Dynamic mission planning and tasking
Today's high-altitude endurance (HAE) reconnaissance unmanned aerial vehicles (UAVs) are extremely complex and capable systems. They are only as good as the quality of their implementation, however. Mission planning is rapidly increasing in complexity to accommodate growing aircraft and information-control capabilities, and effective mission planning is the key to the effective use of airborne reconnaissance assets, which demand extremely intensive and detailed planning.
The mission plan must accommodate a range of possible emergencies and other unplanned in-flight events, such as pop-up threats or a critical aircraft-system failure. Although automation is commonly employed to reduce the high workload on human operators, current in-flight mission re-planning systems do not give operators sufficient capability to handle the full range of surprises commonly encountered in flight operations.
Our dynamic mission planning module addresses a variety of common operational situations in HAE UAV reconnaissance that necessitate more direct human involvement in the aircraft-control process than is currently acknowledged or allowed. A state-of-the-art mission planning software package, OPUS, can be used to demonstrate the current capability of conventional mission planning systems, and that capability can be extrapolated to depict the near-future capability of highly automated HAE reconnaissance UAV in-flight mission replanning. Many scenarios exist in which current in-flight replanning capabilities fall short.
Our dynamic mission planning module has been developed and implemented in “Raphael”. When the same problematic scenarios are revisited with it, improved replanning results can be demonstrated: in particular, the ability to reroute in the light of new information and threats, taking into account the available slack time, the interpretation-rating scale of points of interest, and a survivability estimate for each candidate route.
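As an illustration only (not the OPUS or Raphael implementation), rerouting around a pop-up threat can be modeled as a shortest-path search over a cost grid in which cells inside a threat zone carry a heavy survivability penalty:

```python
import heapq

def reroute(grid_cost, start, goal):
    """Dijkstra search over a 4-connected 2-D cost grid.

    grid_cost -- list of lists; a cell's cost models threat exposure there
    Returns the minimum-cost path from start to goal as a list of (row, col).
    """
    rows, cols = len(grid_cost), len(grid_cost[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + grid_cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk the predecessor chain back from the goal.
    path, cell = [], goal
    while cell != start:
        path.append(cell)
        cell = prev[cell]
    path.append(start)
    return path[::-1]
```

With a high-cost cell standing in for a pop-up threat, the returned route detours around it while remaining as short as possible.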
We have recently tested a sophisticated 80-parameter helicopter model, including a module that makes the algorithm robust and linearized; control is effected through a Kalman filter as standard.
The model used for atmospheric parameters such as temperature, pressure, density, speed of sound and gravity is the 1976 U.S. Standard Atmosphere, with a variation of less than 1% compared to tabulated values. The autopilot model includes models of the system response of the two types of servo used in our µUAV hardware.
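The tropospheric layer of the 1976 Standard Atmosphere can be sketched as follows; the constants are the published standard values, but this is an illustrative model, not the code used in the autopilot:

```python
import math

# Published constants of the 1976 U.S. Standard Atmosphere (troposphere layer).
G0 = 9.80665        # standard gravity, m/s^2
R_AIR = 287.053     # specific gas constant for dry air, J/(kg K)
GAMMA = 1.4         # ratio of specific heats for air
T0, P0 = 288.15, 101325.0   # sea-level temperature (K) and pressure (Pa)
LAPSE = 0.0065      # tropospheric temperature lapse rate, K/m

def standard_atmosphere(h_m):
    """Return (temperature K, pressure Pa, density kg/m^3, speed of sound m/s)
    for a geopotential altitude h_m in the troposphere (0..11000 m)."""
    if not 0.0 <= h_m <= 11000.0:
        raise ValueError("troposphere-only sketch: 0..11000 m")
    t = T0 - LAPSE * h_m
    p = P0 * (t / T0) ** (G0 / (R_AIR * LAPSE))
    rho = p / (R_AIR * t)           # ideal-gas law
    a = math.sqrt(GAMMA * R_AIR * t)  # speed of sound
    return t, p, rho, a
```

At sea level this reproduces the tabulated 288.15 K, 101 325 Pa and 1.225 kg/m³, and at 11 km it agrees with the tabulated tropopause values to well under 1%.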
The autopilot is able to navigate in three modes: GPS, inertial and stellar. For stellar navigation, a basic set of bright stars is used, based on the 5th revised edition of the Yale Bright Star Catalogue (1991). The star-tracking algorithm is a high-accuracy proprietary method able to resolve the position and time of a craft anywhere in the world to within 100 meters.
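The proprietary method is not published, but star trackers commonly identify catalog stars by matching the angular separations of observed star pairs against catalog pairs. The underlying separation computation is standard spherical geometry:

```python
import math

def radec_to_unit(ra_deg, dec_deg):
    """Convert catalog right ascension / declination (degrees) to a unit vector."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return (math.cos(dec) * math.cos(ra),
            math.cos(dec) * math.sin(ra),
            math.sin(dec))

def angular_separation_deg(star_a, star_b):
    """Angle between two stars given as (ra_deg, dec_deg) tuples, in degrees."""
    ua, ub = radec_to_unit(*star_a), radec_to_unit(*star_b)
    dot = sum(x * y for x, y in zip(ua, ub))
    dot = max(-1.0, min(1.0, dot))  # clamp against floating-point drift
    return math.degrees(math.acos(dot))
```

Matching a few such pair separations against the Bright Star Catalogue is enough to identify the stars in view, after which attitude (and, with time, position) can be solved for.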
We are currently testing our autopilot, using our 3D virtual-reality environment, with a robust wind-correction algorithm aimed at off-the-shelf unmanned aerial vehicles. The 3D environment includes stellar information and solar positioning, so that shading and illumination conditions can be predicted and accounted for, as well as an extensive library of 3D objects, textures, etc.
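The classic wind-triangle solution illustrates the kind of correction such an algorithm must perform (a textbook formulation, not necessarily the algorithm under test):

```python
import math

def wind_correction(track_deg, tas, wind_from_deg, wind_speed):
    """Solve the wind triangle for heading and ground speed.

    track_deg     -- desired ground track (degrees, 0 = north)
    tas           -- true airspeed (m/s)
    wind_from_deg -- direction the wind blows FROM (degrees)
    wind_speed    -- wind speed (m/s)
    Returns (heading_deg, ground_speed_m_s).
    """
    track = math.radians(track_deg)
    wind_to = math.radians((wind_from_deg + 180.0) % 360.0)  # direction wind blows TO
    cross = wind_speed * math.sin(wind_to - track)           # crosswind component
    if abs(cross) > tas:
        raise ValueError("wind exceeds airspeed; track cannot be held")
    wca = math.asin(-cross / tas)                            # wind correction (crab) angle
    gs = tas * math.cos(wca) + wind_speed * math.cos(wind_to - track)
    heading = (track_deg + math.degrees(wca)) % 360.0
    return heading, gs
```

For example, holding a northerly track at 20 m/s airspeed in a 5 m/s wind from the east requires crabbing about 14.5° into the wind, at a slightly reduced ground speed.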
In order to perform the more advanced tasks, such as cartography, ortho-rectification, target acquisition, DEM modeling and mosaicing, high-quality images must be produced from UAV-derived imagery. This requires modules such as a 6-degree pixel-calibration camera correction tool, capable of correcting for camera and lens distortion. Other functions available in this module include normalization, photo-triangulation, stereo plotting (anaglyph), rectification, and interior- and exterior-orientation correction.
[Figure: camera pixel correction (0–4 pixel correction range)]
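Lens-distortion correction of this kind is commonly based on the Brown–Conrady model; the sketch below inverts that model by fixed-point iteration for a single point (an illustrative formulation, not the product's calibration code):

```python
def undistort_point(xd, yd, k1, k2, p1, p2, iters=10):
    """Iteratively invert the Brown-Conrady distortion model for one point.

    (xd, yd) -- distorted, normalized image coordinates
    k1, k2   -- radial distortion coefficients
    p1, p2   -- tangential distortion coefficients
    Returns the undistorted normalized coordinates.
    """
    x, y = xd, yd  # initial guess: undistorted == distorted
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 * r2
        # Tangential (decentering) terms.
        dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        x = (xd - dx) / radial
        y = (yd - dy) / radial
    return x, y
```

Applying the forward model and then this inversion recovers the original point to sub-micron accuracy for typical distortion coefficients, which is what makes sub-pixel rectification of UAV imagery feasible.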
Our sophisticated photogrammetry module makes possible a number of image-restoration improvements unavailable in other products, such as automatic blur removal. It also addresses another common problem with UAV-derived imagery: we often have 5-10 low-quality images of a target but require a single high-resolution image. For this, an advanced sub-pixel super-resolution algorithm using wavelets was implemented.
[Figure: original image vs. super-resolution result]
[Figure: single-image zoom vs. super-resolution detail]
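The idea behind multi-frame super-resolution can be conveyed with a much simpler shift-and-add scheme (the product uses a wavelet-based method; this is only a stand-in sketch): each low-resolution frame is registered at sub-pixel accuracy, and its samples are accumulated onto a finer grid.

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Naive multi-frame super-resolution by shift-and-add.

    frames -- list of HxW low-resolution arrays of the same scene
    shifts -- per-frame (dy, dx) sub-pixel offsets, in low-res pixels
    scale  -- integer upsampling factor
    Each low-res sample is accumulated onto the nearest high-res cell,
    then the accumulator is normalized by the per-cell sample count.
    """
    h, w = frames[0].shape
    hi = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(hi)
    for frame, (dy, dx) in zip(frames, shifts):
        for r in range(h):
            for c in range(w):
                hr = round((r + dy) * scale)
                hc = round((c + dx) * scale)
                if 0 <= hr < h * scale and 0 <= hc < w * scale:
                    hi[hr, hc] += frame[r, c]
                    weight[hr, hc] += 1.0
    mask = weight > 0
    hi[mask] /= weight[mask]
    return hi
```

With four frames offset by half a pixel in each direction, a 2x grid is fully populated, which is the basic mechanism that lets 5-10 low-quality frames yield one higher-resolution image.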
Geographical Information Module
A problem often encountered by users is that unmanned-vehicle images are simply not of sufficient quality to be analyzed in a GIS. Moreover, interfacing UAV-derived images with a GIS is extremely difficult because of the image-quality and referencing requirements of a GIS. The image-quality restoration available in Raphael makes it possible to combine UAV aerial imagery with existing data such as geological, topological, land-use and rainfall statistics. The GIS module currently provides basic functions such as ortho-rectification, geo-referencing, image fusion and mosaicing.
However, final users such as aid agencies, search-and-rescue teams, agriculture planners and disaster-relief units often require more sophisticated reports, such as the following:
[Figure: annual solar-radiation exposure index]
[Figure: humidity (blue palette) and drainage path (red)]
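Derived reports of this kind can be illustrated as a weighted overlay of co-registered raster layers; the layer names and weights below are purely hypothetical:

```python
import numpy as np

def weighted_overlay(layers, weights):
    """Combine co-registered raster layers into a single index raster.

    layers  -- dict of name -> 2-D array, each layer normalized to 0..1
    weights -- dict of name -> relative weight (conventionally summing to 1)
    Returns a 0..1 index raster, e.g. a hypothetical fire-risk map.
    """
    names = sorted(layers)
    out = np.zeros_like(layers[names[0]], dtype=float)
    for name in names:
        out += weights[name] * layers[name]
    return np.clip(out, 0.0, 1.0)
```

In practice each input layer (rainfall, slope, land use, etc.) would first be restored, geo-referenced and resampled onto a common grid by the GIS module before being combined.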
We are currently working on two optional modules, of which only the “3D reconstruction from video” module is available so far. This module can recover a DEM or 3D shape file from any video sequence, and can also compare the recovered 3D information with CAD data, which makes it ideal for the verification of power-station chimneys, oil platforms, bridges and other large-scale structures. The following images show a typical application, in which the terrain details are derived from two aerial views (ortho-rectified in the two lower strips):
[Figure: digital elevation model reconstruction]
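The principle behind recovering elevation from two views can be sketched with the standard disparity-to-depth relation for rectified image pairs (a nadir-view simplification, not the module's actual pipeline):

```python
import numpy as np

def dem_from_disparity(disparity, focal_px, baseline_m, flight_alt_m):
    """Terrain elevation from a rectified disparity map (very simplified).

    disparity    -- 2-D array of pixel disparities between the two views (> 0)
    focal_px     -- camera focal length, in pixels
    baseline_m   -- distance between the two camera positions, in meters
    flight_alt_m -- aircraft altitude above the elevation datum, in meters
    Depth follows Z = f * B / d; elevation is flight altitude minus depth.
    This nadir-view sketch ignores camera tilt and lens geometry.
    """
    depth = focal_px * baseline_m / np.maximum(disparity, 1e-6)
    return flight_alt_m - depth
```

Larger disparities correspond to terrain closer to the camera, i.e. higher elevation, which is how two overlapping aerial strips yield the DEM shown above.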
Standard Raphael station:
The standard specification includes the following components:
• Basic Raphael software suite
• Manual radio-control
• Telemetry transmitter/receiver
• Directional antenna mounted on a gimbal, able to communicate with a vehicle at up to 30 km in direct line of sight
• Ruggedized laptop
• User manual