Eagle/Hawk Processing Guide

This guide describes how to process hyperspectral data using the apl suite. It should be used for 2011 data onwards. If you are processing earlier data, see here for instructions on processing with the az suite.

The apl suite consists of the following programs: aplcal, aplmask, aplnav, aplcorr, apltran, aplmap.

Before starting, make sure the navigation is processed and all raw data is present and correct.

Projects are located under ~arsf/arsf_data/<year>/flight_data/<area>/<project>. Processing files and deliveries will be generated under <project>/processing. You should be logged in as the airborne user when processing. Check here for the project layout and file name standards.

DEM

To return sensible results for all but very flat areas, you will need a DEM. You can create a DEM by running asterdem.sh or nextmapdem.sh and output it to the hyperspectral DEM directory; otherwise use the LiDAR ASTER DEM from the LiDAR processing.

Note: If you create a NextMap DEM then it cannot be included with the delivery, as there are restrictions on its distribution. Generate a separate ASTER DEM to be included with the delivery.

Creating config file

This file will be used to automatically generate the commands necessary to process your hyperspectral lines.

If no config file exists (in <proj_dir>/processing/hyperspectral), then, in the top project directory, run:

generate_apl_config.py

This should generate a config file based on the raw data and Applanix files, and output it to the processing/hyperspectral directory.
Go through it carefully and check that everything is correct. Most notably:

  • project_code
  • dem and dem_origin
  • transform_projection is correct for the data
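
As a reference point, the relevant lines in the generated config will look something like the sketch below. The key names come from the checklist above, but the example values and the layout implied here are assumptions; always check against the file generate_apl_config.py actually wrote.

  # Schematic excerpt only - values are placeholders, compare with your generated config
  project_code = <project_code>
  dem = <proj_dir>/processing/hyperspectral/dem/<dem_file>
  dem_origin = <aster | nextmap | ...>
  transform_projection = <projection appropriate for the data, e.g. UK National Grid>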

If using SBETs from IPAS to process the hyperspectral data, make sure to use these lever arm values (referenced from the PAV80, not the GPS antenna; they should be selected automatically according to the year, but it is best to check):

Eagle : 0.415 -0.014 -0.129
Hawk : 0.585 -0.014 -0.129

And use these boresight values (PRH: pitch, roll, heading):

Eagle : -0.322 0.175 0.38
Hawk : -0.345 0.29 0.35
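
A quick way to confirm that the generated config has picked up the expected values, without relying on particular key names, is to search for the numbers themselves, e.g.:

  grep -n '0.415\|-0.322' <config_file>   # first Eagle lever arm and boresight values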

Submitting processing to gridnodes

To submit jobs to the grid, run the following from the top-level directory: specim_qsub.py <config_file>

The actual script which does the processing of each job is: process_specim_apl_line.py

Once submitted, you can keep an eye on your jobs using qmon.
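
If you prefer the command line to qmon, the usual Grid Engine commands give the same information (assuming the grid is running Grid Engine, which qmon implies):

  qstat                   # list your queued and running jobs
  qstat -j <job_number>   # full details for a single job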

Individual processing stages

You shouldn't have to worry about this unless something goes wrong. However, something often does! A detailed explanation of each step is given here.

Problems

If you have any problems, check the files created in the logs directory, e.g.

EUFAR10-03_2010-196_eagle_-2.o293411

The last part of the name is the grid node job number.
Check these for errors (look for stars). Common problems are listed here, along with possible solutions.
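
Since errors are flagged with stars, grep can scan a whole directory of log files in one go; an illustrative pair of commands:

  grep -l '\*\*\*' *.o*                  # which log files contain errors
  grep -n -B1 -A3 '\*\*\*' <log_file>    # show each error with a little context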

SCTs

The script will have produced 21 iterations of each flightline, with a range of SCT values. The SCT is a timing offset which affects the position and geometry of the image; currently the values range from -0.1 to 0.1 seconds. A tiff will have been produced for each version and placed in <project>/processing/hyperspectral/flightlines/georeferencing/mapped. You will need to go through these using gtviewer, find the image that looks correct, and note down its SCT value. You usually determine the correct image by the amount of wobble in it: an incorrect offset causes kinks in straight features such as roads wherever the plane trajectory wobbles, so selecting the image with the straight road is usually what is required. You can use the .sync file created together with the config file to store the correct SCT values.
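
To see what has been produced, list the mapped directory; and since 21 iterations span -0.1 to 0.1 seconds, the candidate SCT values are spaced 0.01 s apart, which you can list with seq (both commands are only illustrative):

  ls <project>/processing/hyperspectral/flightlines/georeferencing/mapped/
  seq -0.10 0.01 0.10    # the 21 SCT values, 0.01 s apart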

Creating final files

The stage that creates the geolocated tiffs you use to find SCTs deletes the original level 1 files after it has finished. You therefore need to use the config file one more time to generate the full set of files for each flightline, using the correct SCT value. To do this, change the sctstart and sctend values so they are both the correct figure, or use mergeCFSync.py to put the values from your sync file into the config file. Then, in the global section, set slow_mode = true.

Also, to generate mapped files for the delivery, change the eagle and hawk 'bandlist' entries in the config file to 'ALL'. This will map all bands of the data.
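
Putting those changes together, the edited config will contain entries along these lines (sctstart, sctend, slow_mode and bandlist are the key names used in this guide; the layout shown here is only a sketch, so follow the structure of your own config file):

  [global]
  slow_mode = true

  # in the per-sensor / per-flightline entries:
  sctstart = <correct SCT value>
  sctend = <correct SCT value>
  bandlist = ALL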

Running this with specim_qsub.py will once again submit your lines to the grid nodes, and you should soon have all the files you require to make a delivery. Before doing so, make sure to delete the old files or move them to another directory.

Once you have the final files, run (if you will be creating a delivery with make_hyper_delivery.py, this step is not required here)

aplxml.py --meta_type=p --config_file=<config_file>

in the main project directory to get the XML project information.

OS vectors

If the project is in the UK, you will need to check the positional accuracy of the images against the OS line overlay. These should have been ordered beforehand and are located in ~arsf/vectors.

Making a delivery

Use the make_hyper_delivery.py script to make the delivery directory. Run it from within the main project directory. By default it runs in dry-run mode. Make sure the only lev3 files in the georeferencing/mapped directory are the versions mapped with the correct SCT values.

Use --final if you are happy with what it says it will do. Use -m <config> to generate screenshots and mosaics.
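
A typical run from the main project directory therefore looks something like the sketch below (the script may need additional arguments; the flags shown are the ones described above):

  make_hyper_delivery.py                            # dry run - review what it plans to do
  make_hyper_delivery.py --final -m <config_file>   # build the delivery, with screenshots and mosaics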

Note: this script automatically moves over the contents of the DEM directory. You will need to revert this if it is a NextMap DEM, as these shouldn't be delivered, and include an ASTER DEM instead.

Making the Readme

To make the readme, first generate the readme config file using:

generate_readme_config.py -d <delivery directory> -r hyper -c <config_file>

The readme config file will be saved as hyp_genreadme-airbone.cfg in the processing directory; do not delete this file, as it is required for delivery checking. Check that all the information in the readme config file is correct and change anything that is not.

Then create a readme tex file using: create_latex_hyperspectral_apl_readme.py -f <readme_config_file>

Finally, run latex <readme_tex_file> to create the PDF readme. This readme should be placed in the main delivery folder.

This page details the old manual way for creating the delivery and Readme.