Version 37 (modified by knpa, 13 years ago)

--

Eagle/Hawk Processing Guide

This guide describes how to process hyperspectral data using the apl suite. It should be used for 2011 data onwards. If you are processing earlier data, see here for instructions on processing with the az suite.

The apl suite consists of the following programs: aplcal, aplnav, apltran, aplmap.
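The four programs are normally run in that order for each flight line. As a rough illustration of the stage ordering only (the file-naming convention below is hypothetical; the real arguments come from the generated config file):

```python
# Sketch of the per-line apl pipeline ordering. The program names are from
# this guide (aplcal, aplnav, apltran, aplmap); the input/output file names
# are purely illustrative.

def pipeline_stages(line_name):
    """Return the apl programs, in order, with hypothetical in/out files."""
    return [
        ("aplcal",  f"{line_name}.raw",      f"{line_name}1b.bil"),      # radiometric calibration
        ("aplnav",  f"{line_name}1b.bil",    f"{line_name}_nav.bil"),    # attach per-scanline navigation
        ("apltran", f"{line_name}_nav.bil",  f"{line_name}_proj.bil"),   # transform/reproject navigation
        ("aplmap",  f"{line_name}_proj.bil", f"{line_name}_mapped.bil"), # map/grid the level-1b data
    ]

for prog, src, dst in pipeline_stages("e123011b"):
    print(prog, src, "->", dst)
```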

Before starting, make sure the navigation is processed and all raw data is present and correct.

DEM

To return sensible results for all but very flat areas, you will need a DEM. One should already have been created during the unpacking stage. If not, it will need to be created: for the UK, use NextMap; otherwise use ASTER. If you can't use ASTER for some reason, you can also create one from our own LiDAR using make_lidardem_or_intensity.sh.
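The order of preference above amounts to a simple decision rule. A minimal sketch of that rule (the function name and flags are illustrative, not part of any ARSF script):

```python
def choose_dem_source(in_uk, aster_available, lidar_available):
    """Pick a DEM source following the order of preference described above."""
    if in_uk:
        return "NextMap"   # UK projects: use NextMap
    if aster_available:
        return "ASTER"     # elsewhere: ASTER by default
    if lidar_available:
        return "LiDAR"     # fall back to our own LiDAR (make_lidardem_or_intensity.sh)
    raise ValueError("No suitable DEM source available")

print(choose_dem_source(in_uk=False, aster_available=True, lidar_available=True))
```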

Creating config file

This file will be used to automatically generate the commands necessary to process your hyperspectral lines.

If no config file exists (in <proj_dir>/processing/hyperspectral), then run the following from the top project directory:

generate_apl_runscripts.py -s s -n <numlines> -j <jday> -y <year>

This should generate a config file based on the raw data and Applanix files, and output it to the processing/hyperspectral directory. The file may need editing; the main parts to check are:

  • project_code
  • dem and dem_origin
  • transform_projection is correct for the data
  • If using SBETs from IPAS to process the hyperspectral data, make sure to use the lever arm values below (referenced from the PAV80, not the GPS antenna; they should be selected automatically according to the year, but it is best to check):

Eagle : 0.415 -0.014 -0.129

Hawk : 0.585 -0.014 -0.129

And use these boresight values (PRH):

Eagle : -0.322 0.175 0.38

Hawk : -0.345 0.29 0.35
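The lever arm is an offset (in metres) from the navigation reference point to the sensor, and the boresight angles (pitch, roll, heading) describe the sensor's angular misalignment. As a hedged sketch of how such values are combined (the rotation order and sign conventions below are assumptions for illustration; they are not aplnav's actual convention):

```python
import math

def prh_matrix(pitch, roll, heading):
    """Rotation matrix from pitch/roll/heading in degrees.
    Rotation order (heading about z, then pitch about x, then roll about y)
    is an assumed convention for illustration only."""
    p, r, h = (math.radians(a) for a in (pitch, roll, heading))
    Rz = [[math.cos(h), -math.sin(h), 0], [math.sin(h), math.cos(h), 0], [0, 0, 1]]
    Rx = [[1, 0, 0], [0, math.cos(p), -math.sin(p)], [0, math.sin(p), math.cos(p)]]
    Ry = [[math.cos(r), 0, math.sin(r)], [0, 1, 0], [-math.sin(r), 0, math.cos(r)]]
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
    return matmul(Rz, matmul(Rx, Ry))

def apply(R, v):
    return [sum(R[i][k] * v[k] for k in range(3)) for i in range(3)]

eagle_lever_arm = [0.415, -0.014, -0.129]   # Eagle values from the table above
identity = prh_matrix(0, 0, 0)
print(apply(identity, eagle_lever_arm))     # zero angles leave the vector unchanged
```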

Submitting processing to gridnodes

To submit jobs to the grid, from the top-level directory run:

specim_qsub.py <config_file>

The actual script which does the processing of each job is: process_specim_apl_line.py
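Conceptually, the submission step turns each flight line in the config file into one grid job running process_specim_apl_line.py. A minimal sketch of that idea (the queue command and option names here are hypothetical, not specim_qsub.py's real interface):

```python
def build_job_commands(config_file, line_numbers):
    """Build one hypothetical qsub-style command per flight line.
    Option names (-c, -l) and the job-name scheme are illustrative only."""
    return [
        f"qsub -N specim_line{n} process_specim_apl_line.py -c {config_file} -l {n}"
        for n in line_numbers
    ]

for cmd in build_job_commands("config_2011_172.cfg", [1, 2, 3]):
    print(cmd)
```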

Individual processing stages

You shouldn't have to worry about these unless something goes wrong. However, something often does! A detailed explanation of each step is given here

Problems

This page details common problems in hyperspectral processing, and how to resolve them.

Making a delivery

Use the make_hyper_delivery.py script to create the delivery directory. Run it from within the main project directory. By default it runs in dry-run mode.

Use --final if you are happy with what it says it will do. Use -m <config> to generate screenshots and mosaics.
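The dry-run/--final behaviour follows a common pattern: plan everything first, print the plan, and only touch the filesystem when explicitly confirmed. A generic sketch of the pattern (not make_hyper_delivery.py's actual code):

```python
import shutil

def make_delivery(files, dest, final=False):
    """Plan (and optionally perform) copies into a delivery directory.
    Returns the planned operations so a dry run can be inspected."""
    plan = [(src, f"{dest}/{src.rsplit('/', 1)[-1]}") for src in files]
    for src, dst in plan:
        if final:
            shutil.copy(src, dst)   # only act when --final is given
        else:
            print(f"[dry run] would copy {src} -> {dst}")
    return plan

# Dry run: nothing is copied, the plan is just printed and returned.
plan = make_delivery(["proc/line1_mapped.bil"], "delivery/flightlines", final=False)
```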

To make the readme file, use the script: create_latex_hyperspectral_apl_readme.py

This page details the old manual way of creating the delivery and Readme.