Eagle/Hawk Processing

Once the data have been unpacked and the nav data have been processed, the Eagle/Hawk data need to be run through the AZ Systems processing chain (primarily azspec, azimport, aznav and azgcorr).

Firstly, if it's not already there, copy the project into a directory in the ARSF workspace (~arsf/workspace/). Use the fastcopy_arsf_proj_dir.sh script rather than a plain cp, because it symlinks most of the things that don't need to be altered (such as the raw data files) rather than copying them unnecessarily.

su - airborne
cd ~arsf/workspace
fastcopy_arsf_proj_dir.sh ~arsf/<year>/flight_data/<campaign>/<project_directory> ./<project_directory>

Create the processing scripts

To use generate_runscripts.py, the navigation must first have been processed and an sbet file placed in the applanix/proc directory. There are now several ways to create run scripts for the Eagle and Hawk using the generate_runscripts.py script. If no logsheet is available, you can use keywords on the command line to specify the global variables, such as:

--PI="P.I. Name" --PROJCODE=EX09_01

using "" if requires a space within the name. For unknown variables (global or flight line specific) a "?" will be inserted in the script and should be replaced by hand. For a full list of keywords and their default values run: generate_runscripts.py -h

EG: generate_runscripts.py -s s -n 4 --JDAY=200 --YEAR=2009 --PI="P.I. Name" --SITE="Over There" --PROJCODE=EX01_99

If a logsheet is available you can run the script as before, but beware that the method is not robust and may fill in the wrong data details, especially for the per-line information.

EG: generate_runscripts.py -s s -n 4 --JDAY=200 --YEAR=2009 -l admin/logsheet.TXT

To use the logsheet just for the global variables, add --NOPERLINELOG to the command line. This will use the logsheet for global variables but not for per-flightline values; for those it will use liblogwriter.py with the raw Eagle/Hawk header and sbet files to extract average values for the speed/altitude/direction. Note that with this method the scripts are named by the filename of the raw data rather than by flightline order on the logsheet.

EG: generate_runscripts.py -s s -n 4 --JDAY=200 --YEAR=2009 -l admin/logsheet.TXT --NOPERLINELOG

NOTE:

If keywords are given on the command line and a logsheet is also used, the command line keywords take precedence over the logsheet values.

Always check for '?' within the script for missing information.
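
A quick way to do that check is to grep the generated run scripts for the placeholder; the rune/ directory name here is an assumption based on the example script path later on this page:

# list any lines in the generated run scripts that still contain a '?' placeholder
grep -n '?' rune/*.sh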

Easy way

You should now have a .cfg file in the root of the project directory, named with the year and Julian day of the project. To run the project on the gridengine with default settings, run:

specim_qsub.py <cfg_file>

By default this will do timing runs for each line on the gridengine using SCT offsets between -0.1 and 0.1.
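
To keep an eye on the submitted jobs you can use the standard gridengine commands (these are part of gridengine itself, not of the ARSF scripts):

qstat                # list pending/running gridengine jobs
qstat -u airborne    # only the jobs belonging to the airborne user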

This will run without a DEM by default, since none has been created yet. To run with a DEM, first create the DEM: if the site is in the UK you can use Nextmap data (try running nextmapdem.sh in the project directory to do it automatically, otherwise see the wiki page); if it's non-UK, you'll need to use LiDAR data (you may wish to use this anyway if it's available - see here on how to do this using a script), or failing that SRTM 90m data. Copy it into the dem directory. Once you've done this, include the DEM file in the config file by entering "dem=<dem_file_name>" under the DEFAULT section.
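
For example, after copying the DEM into the dem directory, the DEFAULT section of the config file might look like this (the dem= key is from the instructions above; the filename and the path relative to the project root are made-up placeholders):

[DEFAULT]
# ...existing global keywords (PI, project code, etc.)...
dem = dem/EX01_99_nextmap.dem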

Check against OS vectors.

Old-fashioned way

If you're unlucky and the automated script fails for some reason, you may need to do at least some of the processing the old-fashioned way.

  1. cd to the project directory in the workspace.
  2. Ensure that directories called "logs", "dem", "lev1" and "lev3" have been created.
  3. If there isn't one already, create a "calibration" symlink to the calibration data: ln -s ~arsf/calibration/<year> calibration.
  4. Create a DEM for the project. If it's in the UK you can use Nextmap data (try running nextmapdem.sh in the project directory to do it automatically, otherwise see the wiki page). If it's non-UK, you'll need to use LiDAR data (you may wish to use this anyway if it's available - see here? on how to do this using a script), or failing that SRTM 90m data. Copy it into the dem directory.
  5. Create a symlink to the SBET file if there isn't one already.
    1. cd applanix
    2. ln -s Proc/sbet_01.out apa<year><jday>.sbet
    3. cd ..
  6. Copy the sample scripts and config file from ~arsf/sample_scripts/<year> to the project directory in the workspace - you need specim_qsub.py, process_specim_line.py and template_specim_config.cfg.
  7. Comparing the .cfg file with the logsheet, replace the entries that need replacing. You should be able to see which these are in the sample scripts because they'll have keywords instead of values. You will need to create one cfg file section per flightline per sensor (see the sketch after this list).
    • Note that dates must be of the form DD/MM/YY or DD/MM/YYYY (must use / as a separator)
    • Note that times must be of the form HH:MM:SS (must use : as a separator)
  8. Run the processing scripts. You can either do this via the gridengine (recommended) by running specim_qsub.py, or you can run them on your own machine one line at a time by running process_specim_line.py with appropriate arguments for each line/sensor combination from the root of the project directory. If you do the latter you should pipe the output to tee to ensure a log file is generated: rune/e12301.sh 2>&1 | tee rune/e12301.log.
  9. If anything fails check Common or known problems. This is now a bit out-of-date - if your solution isn't on there then please add it once you find what it is.
  10. Check each set of flightlines to work out which has the best timing offset (i.e. has the straightest roads, etc.). Make a note of the timing offset values in the ticket.
  11. Check against OS vectors
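
As a rough illustration for step 7, a per-flightline config section might look something like the sketch below. This is only an assumption about the layout (an INI-style file read by the Python scripts); the section name and every key shown are hypothetical, so check the real names in template_specim_config.cfg before copying it.

[eagle_line1]
# hypothetical keys - confirm against template_specim_config.cfg
# dates must be DD/MM/YY or DD/MM/YYYY (must use / as a separator)
date = 19/07/2009
# times must be HH:MM:SS (must use : as a separator)
start_time = 10:15:30
stop_time = 10:18:45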

Once you're satisfied with the processed data, you need to create a delivery directory for it.