= Eagle/Hawk Processing Guide =

This guide describes how to process hyperspectral data using the apl suite. It should be used for 2011 data onwards. If you are processing earlier data, see [wiki:Procedures/AZEagleHawkProcessing here] for instructions on processing with the az suite.

The apl suite consists of the following programs: '''aplcal''', '''aplnav''', '''aplcorr''', '''apltran''', '''aplmap''', '''aplsite'''.

Before starting, make sure the [wiki:Procedures/ProcessingChainInstructions/NavigationProcessing navigation is processed] and all raw data is present and correct. Projects are located under ~arsf/arsf_data//flight_data//. Processing files and deliveries will be generated under /processing. You should be logged in as the airborne user when doing processing. Check [wiki:Processing/FilenameConventions here] for the project layout and file naming standards.

=== DEM ===

To return sensible results for all but very flat areas, you will need a DEM. One should already have been completed during the unpacking stage; if not, it will need to be created. For the UK, use [wiki:Processing/NextMapDEMs NextMap]. Otherwise use [wiki:Processing/SRTMDEMs ASTER]. If you can't use ASTER for some reason, you can also create one from our own LiDAR using make_lidardem_or_intensity.sh.

=== Creating config file ===

This file will be used to automatically generate the commands necessary to process your hyperspectral lines. If no config file exists (in /processing/hyperspectral), then, in the top project directory, run:

`generate_apl_runscripts.py -s s -n -j -y `

This should generate a config file based on the raw data and Applanix files, and output it to the processing/hyperspectral directory.[[BR]]
Go through it carefully and check everything is correct. Most notably:
 * project_code
 * dem and dem_origin
 * transform_projection is correct for the data

If using SBETs from IPAS to process the hyperspectral data, make sure to use these lever arm values (referenced from the PAV80, not the GPS antenna; they should be selected automatically according to the year, but it is best to check):

Eagle : 0.415 -0.014 -0.129[[BR]]
Hawk : 0.585 -0.014 -0.129

And use these boresight values (PRH):

Eagle : -0.322 0.175 0.38[[BR]]
Hawk : -0.345 0.29 0.35

=== Submitting processing to gridnodes ===

To submit jobs to the grid, from the top level directory use:

specim_qsub.py

The actual script which does the processing of each job is:

process_specim_apl_line.py

Once submitted, you can keep an eye on your jobs using qmon.

=== Individual processing stages ===

You shouldn't have to worry about this unless something goes wrong. However, something often does! Each step is explained in detail [wiki:Procedures/AplSuiteDetails here].

=== Problems ===

If you have any problems, check the files created in logs, e.g. EUFAR10-03_2010-196_eagle_-2.o293411 (the last part of the name is the grid node job number).[[BR]]
Check these for errors (look for stars). Common problems are listed [wiki:Processing/problemsHS here], along with possible solutions.

=== SCTs ===

The script will have produced 21 iterations of each flightline, covering a range of SCT values. The SCT is a timing offset which affects the position and geometry of the image. Currently the values range from -0.1 to 0.1 seconds. A tiff will have been produced for each version and placed in /processing/hyperspectral/flightlines/georeferencing/mapped. You will need to go through these using gtviewer, find the image that looks correct, and note down the SCT value.

You usually determine the correct image by the amount of wobble in it: an incorrect offset causes kinks in straight features such as roads wherever the aircraft trajectory wobbles, so selecting the image in which the road stays straight is usually what is required. A quick way to flick through the candidates is sketched below.
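The loop below is only a minimal sketch, not part of the standard scripts: it assumes gtviewer can be given a file name on the command line, and the `*line1*` filename pattern is a hypothetical placeholder, so substitute whatever naming your project's mapped tiffs actually use.

{{{
# Minimal sketch: view each candidate SCT tiff for one flightline in turn.
# Run from the top project directory; the "*line1*" pattern is a
# hypothetical placeholder -- check the actual filenames for your project.
cd processing/hyperspectral/flightlines/georeferencing/mapped
for tif in *line1*.tif; do
    [ -e "$tif" ] || continue     # skip if the pattern matched nothing
    echo "Viewing $tif"           # note which file (and hence SCT) you are looking at
    gtviewer "$tif"
done
}}}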
=== Creating final files ===

The stage that creates the geolocated tiffs used to find SCTs deletes the original level 1 files once it has finished. You therefore need to use the config file one more time to generate the full set of files for each flightline, using the correct SCT value. To do this, change the sctstart and sctend values so they are both set to the correct figure and, in the global section, set `slow_mode = true`, then run again. Running this with specim_qsub.py will once again submit your lines to the gridnodes, and you should soon have all the files you require to make a delivery.

Once you have the final files, run `aplxml.py --meta_type=p --config_file= ` in the main processing directory to get the XML project information.

=== OS vectors ===

If the project is in the UK, you will need to check the positional accuracy of the images against the OS line overlay. These should have been ordered beforehand and are located in ~arsf/vectors.

=== Making a delivery ===

Use the make_hyper_delivery.py script to make the delivery directory. Run it from within the main project directory. By default it runs in dry run mode; use --final if you are happy with what it says it will do, and -m to generate screenshots and mosaics.

==== Making the Readme ====

To make the readme, first generate the readme config file using:

`generate_readme_config.py -d -r hyper -c `

The readme config file will be saved as /tmp/hyp_genreadme-airbone.cfg. Check that all the information in the readme config file is correct; if not, change it. Then create a readme tex file using:

`create_latex_hyperspectral_apl_readme.py `

Finally, run `latex ` to create the PDF readme. This readme should be placed in the main delivery folder.

[wiki:Procedures/DeliveryCreation/Hyperspectral This] page details the old manual way of creating the delivery and Readme.
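Put together, the delivery and readme steps above might look like the sketch below. This is only an illustration under stated assumptions: the project path, the values given to -d and -c, the readme config path passed to the tex script and the tex filename are all hypothetical placeholders, and the way the flags are combined is a guess, so check each script's usage before running.

{{{
# Hedged sketch of the delivery and readme steps; all argument values are
# hypothetical placeholders, not documented defaults.
cd ~arsf/arsf_data/2011/flight_data/example_project   # hypothetical project path

make_hyper_delivery.py              # dry run: review what it says it will do
make_hyper_delivery.py --final -m   # real run, with screenshots and mosaics

# Readme steps: the arguments given to -d and -c here are assumptions.
generate_readme_config.py -d delivery_directory -r hyper -c processing_config_file
create_latex_hyperspectral_apl_readme.py /tmp/hyp_genreadme-airbone.cfg
latex hyperspectral_readme.tex      # compile the generated tex file into the readme
}}}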