These days we’re all living in the clouds—or at least our data is. Cloud-stored data is estimated to exceed one exabyte—roughly one billion gigabytes. In the drone world, the potential of cloud-based data processing is an exciting proposition. Using the cloud, data processing can be automated to create beautiful data products with a fraction of the time and effort spent on local data processing.
Or can it?
The reality is there are pitfalls behind the promises of cloud processing. What assumptions is the cloud software making about your data when it creates data products? Will those data products meet the burden of proof for geospatial positioning accuracy standards? Do the available data products fulfill your end goal, or will you still need third-party software to complete the project?
We’re going to explore the benefits and the shortcomings of cloud-based data processing as it compares to local data processing. But first, let’s take a look at how photogrammetry software developed.
Drones are proving to be viable tools that augment traditional data collection tasks. Due to this success, there has been rising consumer demand for software to turn drone data into actionable information. Drones have always been represented as a package solution: data collection, data processing, and analytics. Thus, initially there was intense competition to produce inexpensive, efficient, and accurate software packages to automate data processing. This demand sparked a wide landscape of impressive software solutions that have taken unique and diverse approaches to the photogrammetric process:
• Feature Detection
• Robust Feature Matching
• Bundle Adjustment
• Dense Matching
• Feature Modeling
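As a rough illustration of how those five stages chain together, here is a minimal pipeline sketch. All function and field names are hypothetical (not from any particular package), and the stage bodies are placeholders that only show how data flows from one step to the next; real implementations rely on detectors such as SIFT or ORB, RANSAC-filtered matching, and sparse and dense solvers.

```python
# Hypothetical sketch of the five photogrammetric stages as a pipeline.
# Stage bodies are placeholders; they only show how data flows between steps.
from dataclasses import dataclass, field

@dataclass
class Project:
    images: list                                  # input image identifiers
    features: dict = field(default_factory=dict)  # image -> detected keypoints
    matches: list = field(default_factory=list)   # matched image pairs
    cameras: dict = field(default_factory=dict)   # adjusted camera poses
    sparse_points: list = field(default_factory=list)
    dense_points: list = field(default_factory=list)
    model: dict = field(default_factory=dict)

def detect_features(p):      # 1. Feature Detection (e.g. SIFT/ORB keypoints)
    p.features = {img: [(0.0, 0.0)] for img in p.images}
    return p

def match_features(p):       # 2. Robust Feature Matching (e.g. RANSAC-filtered)
    p.matches = list(zip(p.images, p.images[1:]))
    return p

def bundle_adjust(p):        # 3. Bundle Adjustment (refine poses + sparse points)
    p.cameras = {img: "adjusted_pose" for img in p.images}
    p.sparse_points = [(0.0, 0.0, 0.0)]
    return p

def dense_match(p):          # 4. Dense Matching (per-pixel depth / point cloud)
    p.dense_points = p.sparse_points * 100
    return p

def model_features(p):       # 5. Feature Modeling (mesh, DEM, orthomosaic)
    p.model = {"points": len(p.dense_points)}
    return p

def run_pipeline(images):
    p = Project(images=images)
    for stage in (detect_features, match_features, bundle_adjust,
                  dense_match, model_features):
        p = stage(p)
    return p

result = run_pipeline(["img_001.jpg", "img_002.jpg", "img_003.jpg"])
```

The point of the sketch is the shape of the workflow: each stage consumes the previous stage’s output, which is why an assumption made early (say, in matching) propagates into every product downstream.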
LOCAL PROCESSING SOFTWARE
However, there is an ongoing debate between traditional photogrammetrists and drone proponents regarding the user-friendly automation these programs provide, primarily stemming from different intended uses for the geospatial data being collected. Initially, the software packages offered real diversity in mechanisms and algorithms. Some providers followed traditional photogrammetric methods, while others drew on a background in computer vision (CV).
These varied approaches produced different results from similar datasets. In short, different software solutions could deliver different data products of differing quality. As these programs matured, more advanced processing options appeared, allowing consumers to fine-tune performance and trial the available options to decide which programs best fit their use and standards.
THE TRUE ‘LOCAL’ ADVANTAGE
Often, human intervention is required to achieve the desired results for drone data projects. Fortunately, local processing programs provide the user with the ability to manually process a dataset one step at a time. Additionally, each program offers verbose and wide-ranging reports, providing transparent insights to evaluate not only the drone data but also the processing software itself. This is a huge benefit of using local processing software.
DRONE DATA AND CLOUD PROCESSING
As with any computer-based technology, drone data processing has now begun expanding into the cloud. This expansion evokes a lot of excitement about infinitely scalable computational power, web analytics algorithms, and extended ‘end-to-end’ workflow automation.
Already, cloud-based platforms have expanded the availability of processing performance without the need to invest in server infrastructure or high-end computers. This is good value for many enterprises. However, the transparency and processing options available in local programs have been hidden behind a veil of automation or removed entirely in favor of ‘one button survey’ simplicity.
Finally, there are the questions of data security in the cloud, bandwidth management for data processing, and subsequent delays caused by retrieving your data from the cloud.
THE PROBLEM WITH AUTOMATION
Throughout the photogrammetric process, automated software must make a large series of assumptions about the data it encounters. The data processor’s job is to minimize the number of assumptions and ensure the outcome is as expected. For example, if a dataset includes downwelling-light measurements, camera positions, and INS-derived orientations, those values should always be introduced to the system before processing and their adjusted values examined afterward.
This process is made more rigorous if the a priori estimated standard deviations of the inputs can be included as well. Unfortunately, the approach taken in many cloud-based platforms is ‘one-size-fits-all’ automation. Often, the only broad processing options available are ‘terrain or structures’ or ‘RGB or CIR’. This condensed design forces the software to make unsupported assumptions from outside the data.
An example is globally assigning GPS positions derived from WAAS-corrected C/A-code data a single horizontal standard deviation of 3 m, regardless of the particular conditions under which the data was obtained. A ‘black box’ approach like this is designed for maximum throughput, often at the expense of maximum quality. This is not to say the approaches taken within such systems are not internally robust; rather, we cannot know what assumptions and methods are being employed.
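To see why a blanket standard deviation matters, here is a small inverse-variance weighting sketch with illustrative numbers (not from any real flight): in a least-squares adjustment, each observation’s weight is the inverse of its variance, so the sigma assigned to a GPS fix directly controls how strongly it constrains the solution.

```python
# Illustrative only: fuse two estimates of a camera's easting coordinate
# by inverse-variance weighting, as a least-squares adjustment does in effect.
def fuse(obs):
    """obs: list of (value, sigma). Returns the weighted least-squares estimate."""
    weights = [1.0 / s**2 for _, s in obs]
    return sum(v * w for (v, _), w in zip(obs, weights)) / sum(weights)

photo_obs = (100.90, 0.30)   # position implied by the image measurements

# An RTK-quality fix described honestly (sigma = 0.03 m) pulls the solution:
rtk = fuse([(100.00, 0.03), photo_obs])

# The same fix mislabeled with a blanket 3 m sigma is almost ignored:
blanket = fuse([(100.00, 3.00), photo_obs])

print(round(rtk, 3), round(blanket, 3))  # -> 100.009 100.891
```

With the honest sigma, the adjusted position sits essentially on the GPS fix; with the blanket 3 m sigma, the photo observations dominate and the GPS contributes almost nothing, which is exactly the kind of silent assumption a closed system can make on your behalf.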
With no ability to tweak the software to meet the requirements of the dataset beyond the automated processes designed by the provider, it’s a ‘what you see is what you get’ scenario. Simple data products such as elevation models and orthomosaics are generated quite well through these systems. Yet any subsequent manipulation of the processed data will require a third-party program after the data is downloaded from the processing provider.
This contrasts with most local processing programs that provide not only the ability to spot issues during each step of processing, but also have the tools necessary to resolve the issues before continuing the processing workflow.
DON’T FORGET STANDARDS
The lack of manual intervention also hinders the reporting necessary for projects that must meet positioning accuracy standards or provide quantifiable proof of performance. Some providers do offer ‘accuracy’ reports on request, but without visibility into the processing steps, these reports can be biased.
For example, in a local processing program, if a single image induces errors in the self-calibrating bundle adjustment, it is simple to notice and remove the outlier. In a fully automated, closed-off system, this outlier would go uncaught and the accuracy report would be biased.
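As a sketch of what that manual check looks like (the per-image error values below are synthetic, not output from a real solver), per-image mean reprojection errors can be compared against the overall RMSE to flag a problem image before it contaminates the final report:

```python
import math

# Synthetic per-image mean reprojection errors in pixels; img_07 is an
# outlier (e.g. motion blur or a bad exposure on that frame).
errors = {
    "img_01": 0.42, "img_02": 0.38, "img_03": 0.45,
    "img_04": 0.40, "img_05": 0.37, "img_06": 0.44,
    "img_07": 3.90,  # the outlier a closed system would silently average in
}

rmse = math.sqrt(sum(e**2 for e in errors.values()) / len(errors))

# Flag images whose error exceeds 2x the overall RMSE, then recompute.
outliers = [img for img, e in errors.items() if e > 2.0 * rmse]
kept = {img: e for img, e in errors.items() if img not in outliers}
rmse_clean = math.sqrt(sum(e**2 for e in kept.values()) / len(kept))

print(outliers, round(rmse, 2), round(rmse_clean, 2))
```

One bad image nearly quadruples the reported RMSE here; removing it is trivial when the per-image numbers are visible, and impossible when they are not.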
Often, a processor wants, at the very least, immediate access to the following information:
• Adjusted camera calibration values and their standard deviations
• Tie-point reprojection error (mean values and RMSE)
• Exterior orientation parameter (EOP) estimates and standard deviations
• Tie-point residuals and covariance of all parameters in the bundle adjustment
These all help an advanced analyst to evaluate the quality of the solution and find outliers the automated algorithms may have failed to remove.
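As a generic sketch of where those standard deviations and covariances come from (a toy linear least-squares system, not a real bundle adjustment), the parameter covariance falls out of the inverse of the adjustment’s normal matrix scaled by the a priori observation variance:

```python
import numpy as np

# Toy linear least-squares system A x = b (fit a line to four observations).
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([0.1, 1.1, 1.9, 3.1])
sigma0 = 0.1  # a priori standard deviation of a single observation

N = A.T @ A                           # normal matrix
x = np.linalg.solve(N, A.T @ b)       # adjusted parameters
cov = sigma0**2 * np.linalg.inv(N)    # parameter covariance matrix
std = np.sqrt(np.diag(cov))           # parameter standard deviations

residuals = b - A @ x
rmse = np.sqrt(np.mean(residuals**2))

print(x, std, round(float(rmse), 4))
```

A bundle adjustment does the same thing at much larger scale (its unknowns are camera poses, calibration parameters, and tie-point coordinates), which is why a report that omits the covariance hides most of what the adjustment actually learned about its own precision.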
MOVING FORWARD IN THE CLOUD
At the end of the day, competition between cloud processing and local processing is good. Every project calls for a specific data solution, and our job is to advise customers on which solution best fits their needs. In the end, does a ‘one-size-fits-all’ system meet your standards? The multitude of cloud and local processing solutions is creating a diverse and exciting landscape. New processing technologies introduced into a cloud-based environment will continue to spark new ideas and advance the industry.
The goal moving forward, however, should be creating processes that data processors can interact with as needed. By combining the power and potential of cloud processing with the interaction available in local processing, drone-collected data will begin to deliver on its promise to augment and impact data collection in the 21st century.