Version 46 (modified by mark1, 10 years ago)

--

Arrival of new flight data

This procedure should be followed on receipt of new flight data from ARSF.

If in any doubt about something (e.g. a dataset has two project codes), contact Gary.

Permissions fixing

Ensure permissions on the SATA disk are correct and all files are readable (run "chmod -R a+rX /mnt/satadisk" as root)
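For reference, the capital 'X' matters: it adds execute permission to directories (so they can be traversed) without making plain data files executable. A throwaway demonstration, not run on the real SATA disk:

```shell
# Throwaway demo of chmod's 'X': directories become traversable,
# regular files stay non-executable.
d=$(mktemp -d)
mkdir "$d/subdir"
echo hi > "$d/file.raw"
chmod -R a+rX "$d"                  # same flags as used on the SATA disk
dperm=$(stat -c %A "$d/subdir")     # e.g. drwxr-xr-x
fperm=$(stat -c %A "$d/file.raw")   # e.g. -rw-r--r--
echo "$dperm $fperm"
rm -rf "$d"
```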

Copying to PML data storage

Copy to appropriate location on the filesystem (~arsf/arsf_data/in_progress/2007/flight_data/...)

  • when copying from the source media, preserve timestamps with cp -a or rsync -av
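The timestamp-preserving behaviour of cp -a (rsync -a behaves the same way) can be rehearsed in a throwaway directory; the paths and file names below are placeholders, not real flight data:

```shell
# Throwaway demo: 'cp -a' keeps the original modification time,
# which a plain 'cp' would reset to the time of copying.
src=$(mktemp -d); dest=$(mktemp -d)
echo "raw data" > "$src/line1.raw"
touch -t 200707120930 "$src/line1.raw"   # pretend acquisition time
cp -a "$src/line1.raw" "$dest/"
src_mtime=$(stat -c %Y "$src/line1.raw")
dest_mtime=$(stat -c %Y "$dest/line1.raw")
echo "$src_mtime $dest_mtime"
rm -rf "$src" "$dest"
```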

Scripting method

  • in the directory above the new project directories (e.g. '../flight_data/unpacking/') run 'unpack_folder_structure.py --dry-run'
    • using --dry-run will print the commands it would run to the terminal. Check that these look correct
    • if happy, either re-run without --dry-run or cut/paste the commands
    • Each project directory should be re-formatted to the current standard
  • in each project directory run 'unpack_file_check.py -l <admin/logsheet.doc(.txt)>'
    • This will convert a .doc logsheet to .txt, or use the .txt if one is available. NOTE: converting from .doc requires an OpenOffice (ooffice) macro
    • It will then run various checks of the data against the logsheet, as listed below. Information is output to the terminal, and important (error) messages are repeated at the end.
      • Check file sizes against a 'suitable' size and also against header file (Eagle + Hawk)
      • Check number of files against logsheet
      • Check number of logsheets
      • Check GPS start/stop times in header file (Eagle + Hawk)
      • Check .raw, .nav, .log, .hdr for each Eagle + Hawk line
      • Check for nav-sync issues - THIS IS BASED ON A FALSE ASSUMPTION AND PROBABLY IS USELESS
  • in each project directory run 'generate_runscripts.py -l <admin/logsheet.txt> -n <number_of_lines> -s <sensorid - e,h,a,c>'
    • This will generate run scripts for the project, which will still need to be edited by hand. It fills in azsite info, but not boresight or azgcorr parameters, etc.
    • Run once for each sensor present
    • The script should ignore blank lines on the logsheet and lines which are named test.
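The three scripted steps above can be sketched as one sequence. Everything here is illustrative: the project name and line count are made up, and 'run' just prints each command so the sequence can be reviewed rather than executed:

```shell
# Dry-run wrapper: prints each command instead of executing it.
# Replace the function body with "$@" to actually run the pipeline.
run() { echo "+ $*"; }

proj=GB07_07-2007_102a_Inverclyde    # hypothetical project directory

# 1. from the directory above the project dirs, preview the restructure
run unpack_folder_structure.py --dry-run

# 2. inside each project, check the data against the logsheet
run unpack_file_check.py -l "$proj/admin/logsheet.doc"

# 3. generate run scripts, once per sensor present (e=Eagle, h=Hawk)
for sensor in e h; do
  run generate_runscripts.py -l "$proj/admin/logsheet.txt" -n 12 -s "$sensor"
done
```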

Non-scripting method

  • ensure the directory name conforms to the standard - PROJECTCODE-YYYY_JJJxx_SITENAME, e.g. GB07_07-2007_102a_Inverclyde, boresight-2007_198, etc
  • prune empty directories, first checking they aren't stubs for later data (use rmdir * in the project directory; rmdir leaves non-empty directories alone)
  • rename any capitalised subdirectories
  • remove any spaces, brackets or other Unix-upsetting characters in filenames
    • use find -regex '.*[^-0-9a-zA-Z/._].*' | ~arsf/usr/bin/fix_naughty_chars.py
    • gives suggested commands, but check before pasting commands!
  • remove executable bit on all files
    • use find -type f -exec chmod a-x {} \;
  • move DCALM* files to applanix/Raw
  • bzip2 the lidar files
    • use bzip2 lidar/*
  • remove group & other write permission
    • use chmod -R go-w .
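The cleanup steps above can be rehearsed end-to-end in a throwaway directory before touching real data; the directory and file names below are invented for the demonstration:

```shell
# Throwaway rehearsal of the manual cleanup; real use is inside the
# actual project directory.
proj=$(mktemp -d)
mkdir -p "$proj/EMPTY_DIR" "$proj/lidar" "$proj/applanix/Raw"
touch "$proj/DCALM001.000" "$proj/lidar/line1.all"
chmod a+x "$proj/lidar/line1.all"          # stray executable bit

rmdir "$proj"/*/ 2>/dev/null               # prune empty dirs; non-empty ones survive
find "$proj" -type f -exec chmod a-x {} \; # strip executable bit from files
mv "$proj"/DCALM* "$proj/applanix/Raw/"    # DCALM* belongs under applanix/Raw
# bzip2 "$proj"/lidar/*                    # compress lidar lines (omitted here)
chmod -R go-w "$proj"                      # drop group/other write permission

empty_pruned=$([ -d "$proj/EMPTY_DIR" ] && echo no || echo yes)
dcalm_moved=$([ -f "$proj/applanix/Raw/DCALM001.000" ] && echo yes || echo no)
echo "empty dir pruned: $empty_pruned, DCALM moved: $dcalm_moved"
rm -rf "$proj"
```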

In some cases, the flight crew may fly two projects back-to-back but enter all the data on a single logsheet. If so, you may need to split the project directory into two, particularly if there is a large time gap (the navigation needs separate processing) or the PIs differ (different delivery addresses/tracking). If you do split a project, ensure both halves have copies of the common files (logsheets, rinex, etc.) but that non-common files are not duplicated (i.e. don't include Hawk data for part 1 in part 2). Also note in the ticket what was done, for tracking purposes.

Verification

Look at the logsheet and verify that we have copies of all relevant data mentioned there.

Verify the details on the logsheet (esp. PI) are correct by cross-referencing against the ARSF application form. If we do not have the application, call Kidlington to verify.

Check the filesizes of all data files (Eagle, Hawk, ATM, CASI) to make sure none are zero bytes (or obviously broken in some way).
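A quick way to spot empty files is find's -size 0 test. The demo below uses a throwaway directory with made-up file names; in practice, run the find from the top of the project directory:

```shell
# Throwaway demo: list zero-byte files, as a proxy for broken lines.
d=$(mktemp -d)
: > "$d/hawk_line1.raw"            # zero-byte file, simulating a bad copy
echo data > "$d/eagle_line1.raw"   # normal file
empties=$(find "$d" -type f -size 0)
echo "zero-byte files: $empties"
rm -rf "$d"
```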

Create subsidiary files

If the flight is a UK flight, run the DEM-generating script (nextmapdem.sh).

Create the calibration sym link.

Tickets and tracking

Raise a trac ticket (type 'flight processing') for the new data, then add details to the processing status page.

  • ticket summary should be of the form BGS07/02, flight day 172/2007, Keyworth
  • add scientific summary (check ARSF application in ~arsf/arsf_data/in_progress/2007/ARSF_Applications-GB_2007)
  • note arrival time of data
  • note any specific comments that might help with processing
  • owner should be blank
  • ticket body should contain:
    Data location: ~arsf/arsf_data/in_progress/2007/flight_data/uk/GB04_19-2007_102b_Nigg_Bay
    
    Data arrived from ARSF via SATA disk LETTER OR network transfer on DATE.
    
    Scientific details: FILL IN FROM APPLICATION
    
    Priority: FILL IN FROM APPLICATION (e.g. alpha-5 low), set ticket priority appropriately
    
    PI: A. N. Other
    
    Any other notes..
    
    == Sensors: ==
     * ATM
     * CASI
     * Eagle
     * Hawk
    
    == Quicklook ==
    
    TBD.
    

Finally..

Email the PI to inform them that their data has arrived for processing. Sample text:

  • fill in the 4 fields: <PI_NAME>, <PROJECT>, <DATE>, <TICKET_NO>
  • cc to arsf-processing
  • set reply-to to arsf-processing
    Dear <PI_NAME>,
    
    This is a notification that your ARSF data for <PROJECT> flown on <DATE>
    has arrived at the ARSF Data Analysis Node for processing.  
    
    We aim to deliver as quickly as possible and typically do so within a
    month, though we may take longer if we are in the peak season or there
    are problems with the processing.
    
    You can follow progress at the following webpages:
    
     http://www.npm.ac.uk/rsg/projects/arsf/status/status.php
      - general status page
    
     http://www.npm.ac.uk/rsg/projects/arsf/trac/ticket/<TICKETNO>
      - our notes during processing (may be technical)
    
    If you would like any more information, please feel free to contact us at arsf-processing@pml.ac.uk
    
    Regards,