Changes between Initial Version and Version 1 of Procedures/EagleHawkProcessing


Timestamp: May 28, 2010 8:37:52 PM
Author: mggr

= Eagle/Hawk Processing =

Once the [wiki:Procedures/NewDataArrival data have been unpacked] and the [wiki:Procedures/ProcessingChainInstructions/NavigationProcessing nav data have been processed], the Eagle/Hawk data need to be run through the [wiki:Processing/Flow AZ Systems processing chain] (primarily azspec, azimport, aznav and azgcorr).

Firstly, if it's not already there, copy the project into a directory in the ARSF workspace (~arsf/workspace/). Use the `fastcopy_arsf_proj_dir.sh` script rather than a plain `cp`, because it symlinks most of the files that don't need to be altered (such as the raw data files) rather than copying them unnecessarily.

{{{
su - airborne
cd ~arsf/workspace
fastcopy_arsf_proj_dir.sh ~arsf/<year>/flight_data/<campaign>/<project_directory> ./<project_directory>
}}}

== Create the processing scripts ==

To use generate_runscripts.py, the navigation must first have been processed and an SBET file placed in the applanix/proc directory.
There are now some new ways to create run scripts for the Eagle and Hawk using the generate_runscripts.py script. If no logsheet is available, you can use keywords on the command line to specify the global variables, such as:

 --PI="P.I. Name" --PROJCODE=EX09_01

using quotes ("") if a value contains a space. For unknown variables (global or flight-line specific) a "?" will be inserted in the script and should be replaced by hand. For a full list of keywords and their default values, run `generate_runscripts.py -h`.

E.g.: generate_runscripts.py -s s -n 4 --JDAY=200 --YEAR=2009 --PI="P.I. Name" --SITE="Over There" --PROJCODE=EX01_99

If a logsheet is available you can run the script as before, but beware that this method is not robust and may fill in the wrong data details, especially for the per-line information.

E.g.: generate_runscripts.py -s s -n 4 --JDAY=200 --YEAR=2009 -l admin/logsheet.TXT

To use the logsheet only for the global variables, add --NOPERLINELOG to the command line. The logsheet will then be used for the global variables but not for the per-flight-line values; those are instead derived by liblogwriter.py from the raw Eagle/Hawk header and SBET files, which it uses to extract average values for the speed/altitude/direction. Note that with this method the scripts are named by the filename of the raw data, not by flight-line order on the logsheet.

E.g.: generate_runscripts.py -s s -n 4 --JDAY=200 --YEAR=2009 -l admin/logsheet.TXT --NOPERLINELOG

'''NOTE:'''

If keywords are added on the command line and a logsheet is also used, the command-line keywords take precedence over the logsheet values.

Always check the generated scripts for '?' marking missing information.

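As a quick way of checking for leftover placeholders, a small shell function like the following can be used. The function and its name are only a sketch, and the `rune` directory in the usage example is just one possible location for generated run scripts; point it at wherever generate_runscripts.py wrote yours.

```shell
# check_placeholders DIR: list any run scripts under DIR that still
# contain a '?' placeholder needing to be filled in by hand.
check_placeholders() {
  grep -l '?' "$1"/*.sh 2>/dev/null || true
}
```

Usage: `check_placeholders rune` prints the names of any scripts still needing attention, and prints nothing if all placeholders have been filled in.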
== Easy way ==

You should now have, in the root of the project directory, a .cfg file named with the year and julian day of the project. To run the project on the gridengine with default settings, run:

`specim_qsub.py <cfg_file>`

By default this will do timing runs for each line on the gridengine using SCT offsets between -0.1 and 0.1.

This will run without a DEM by default, since none has been created yet. To run with a DEM, first create the DEM: if the project is in the UK you can use [wiki:Processing/NextMapDEMs Nextmap data] (try running `nextmapdem.sh` in the project directory to do it automatically, otherwise see the [wiki:Processing/NextMapDEMs wiki page]). If it's non-UK, you'll need to use [wiki:Help/LeicaLidarDems LiDAR data] (you may wish to use this anyway if it's available - see [wiki:Processing/CreateTifs here] for how to do this using a script), or failing that [wiki:Processing/SRTMDEMs SRTM 90m data]. Copy the DEM into the dem directory, then reference it in the config file by entering "dem=<dem_file_name>" under the DEFAULT section.

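For example, the relevant part of the config file might then look like the following sketch (only the `dem` entry is taken from the instructions above; the surrounding contents of the DEFAULT section will be whatever your generated config already contains):

{{{
[DEFAULT]
# ...existing global settings from the generated config...
dem = <dem_file_name>
}}}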
Check against OS vectors.

== Old-fashioned way ==

If you're unlucky and the automated script fails for some reason, you may need to do at least some of the processing the old-fashioned way.

 1. cd to the project directory in the workspace.
 1. Ensure that directories called "logs", "dem", "lev1" and "lev3" have been created.
 1. If there isn't one already, create a "calibration" symlink to the calibration data: `ln -s ~arsf/calibration/<year> calibration`.
 1. Create a DEM for the project. If it's in the UK you can use [wiki:Processing/NextMapDEMs Nextmap data] (try running `nextmapdem.sh` in the project directory to do it automatically, otherwise see the [wiki:Processing/NextMapDEMs wiki page]). If it's non-UK, you'll need to use [wiki:Help/LeicaLidarDems LiDAR data] (you may wish to use this anyway if it's available - see [wiki:Processing/CreateTifs here] for how to do this using a script), or failing that [wiki:Processing/SRTMDEMs SRTM 90m data]. Copy it into the dem directory.
 1. Create a symlink to the SBET file if there isn't one already.
   1. `cd applanix`
   1. `ln -s Proc/sbet_01.out apa<year><jday>.sbet`
   1. `cd ..`
 1. Copy the sample scripts and config file from ~arsf/sample_scripts/<year> to the project directory in the workspace - you need specim_qsub.py, process_specim_line.py and template_specim_config.cfg.
 1. Comparing the .cfg file with the logsheet, replace the entries that need to be replaced as appropriate. You should be able to see which bits these are in the sample scripts because they'll have keywords instead of values. You will need to create one config-file section per flightline per sensor.
   * Note that dates must be of the form DD/MM/YY or DD/MM/YYYY (must use / as a separator)
   * Note that times must be of the form HH:MM:SS (must use : as a separator)
 1. Run the processing scripts. You can either do this via the gridengine (recommended) by running specim_qsub.py, or you can run them on your machine one line at a time by running process_specim_line.py with appropriate arguments for each line/sensor combination from the root of the project directory. If you do the latter you should pipe the output to tee to ensure a log file is generated: `rune/e12301.sh 2>&1 | tee rune/e12301.log`.
 1. If anything fails, check [wiki:Processing/KnownProblems Common or known problems]. This is now a bit out-of-date - if your solution isn't on there then please add it once you find what it is.
 1. Check each set of flightlines to work out which has the best timing offset (i.e. has the straightest roads, etc). Make a note of the timing offset values in the ticket.
 1. Check against OS vectors.

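The initial setup steps above (working directories, calibration symlink, SBET symlink) can be sketched as a small shell function. The function name is hypothetical, and the paths and naming follow the steps above; substitute your own project's year and julian day. Note that it cd's into the project directory.

```shell
# setup_project PROJ_DIR YEAR JDAY: create the standard processing
# directories and symlinks described in the steps above (sketch only).
setup_project() {
  proj_dir=$1 year=$2 jday=$3
  cd "$proj_dir" || return 1
  # working directories used by the processing chain
  mkdir -p logs dem lev1 lev3
  # calibration symlink (only created if missing)
  [ -e calibration ] || [ -L calibration ] || ln -s ~arsf/calibration/"$year" calibration
  # SBET symlink inside applanix/ (only created if missing)
  if [ -d applanix ] && [ ! -e "applanix/apa${year}${jday}.sbet" ] && [ ! -L "applanix/apa${year}${jday}.sbet" ]; then
    ln -s Proc/sbet_01.out "applanix/apa${year}${jday}.sbet"
  fi
}
```

Usage would be along the lines of `setup_project ~arsf/workspace/<project_directory> 2009 200`; the DEM creation and config editing steps still need doing by hand as described above.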
----

Once you're satisfied with the processed data, you need to [wiki:Procedures/DeliveryCreation create a delivery directory] for it.