Changes between Version 29 and Version 30 of Procedures/EagleHawkProcessing
Timestamp: Jul 4, 2011, 10:53:12 AM
Procedures/EagleHawkProcessing
This guide describes how to process hyperspectral data using the apl suite. It should be used for 2011 data onwards. If you are processing earlier data, see [wiki:Procedures/AZEagleHawkProcessing here] for instructions on processing with the az suite.