Arrival of new flight data

This procedure should be followed on receipt of new flight data from ARSF.

If in any doubt about something (e.g. a dataset has two project codes), contact Gary.

Mounting the SATA disk

To mount the SATA disk, insert it into the computer, then ssh onto the machine. Run dmesg to show information on the device names; the kind of output we are looking for is:

mptsas: ioc0: attaching sata device, channel 0, id 1, phy 1
scsi 0:0:1:0: Direct-Access     ATA      ST3500630AS      E    PQ: 0 ANSI: 5
sd 0:0:1:0: [sdb] 976773168 512-byte hardware sectors (500108 MB)
sd 0:0:1:0: [sdb] Write Protect is off
sd 0:0:1:0: [sdb] Mode Sense: 73 00 00 08
sd 0:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sd 0:0:1:0: [sdb] 976773168 512-byte hardware sectors (500108 MB)
sd 0:0:1:0: [sdb] Write Protect is off
sd 0:0:1:0: [sdb] Mode Sense: 73 00 00 08
sd 0:0:1:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
sdb: sdb1

where the name of the disk is sdb (SATA disk b) and the partition is sdb1. SATA disks will be 500GB in size. If paranoid, or if you want to see a timestamped version of these messages, run

sudo less /var/log/messages

To mount the disk read-only, use sudo mount -o ro /dev/sdb1 /mnt/tmp, then check the data looks ok. If it does, remount the disk read/write with sudo mount -o remount,rw /mnt/tmp and run chmod a+rX -R /mnt/tmp to ensure we have full read permission. Finally, remount the SATA disk read-only with sudo mount -o remount,ro /mnt/tmp.
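
Putting those steps together, the whole sequence looks like this (assuming the disk appeared as /dev/sdb1, as in the dmesg output above; check for the actual device name):

  sudo mount -o ro /dev/sdb1 /mnt/tmp    # mount read-only and inspect the data
  sudo mount -o remount,rw /mnt/tmp      # remount read/write once the data looks ok
  chmod a+rX -R /mnt/tmp                 # ensure full read permission
  sudo mount -o remount,ro /mnt/tmp      # back to read-only for the copy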

Copying to PML data storage

Copy the data to the appropriate location on the filesystem (~arsf/arsf_data/in_progress/2008/flight_data/...)

  • when copying from the source media, preserve timestamps with cp -a or rsync -av (see the example after this list)
  • ensure the project directory names conform to the standard - PROJECTCODE-YYYY_JJJxx_SITENAME, e.g. GB07_07-2007_102a_Inverclyde, boresight-2007_198, etc
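
For example, a timestamp-preserving copy from the mounted disk might look like this (the paths are illustrative only):

  rsync -av /mnt/tmp/GB07_07-2007_102a_Inverclyde ~arsf/arsf_data/in_progress/2008/flight_data/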

The next stage can be manual or semi-automatic:

Semi-scripted method

  • in the directory above the new project directories (e.g. '../flight_data/unpacking/'), run 'unpack_folder_structure.py'
    • by default, it runs safely in dry-run mode and outputs the commands it would run to the terminal. Check these look ok.
    • if happy, re-run with --execute (and optionally --verbose)
    • each project directory should be re-formatted to the current standard
  • in each project directory run 'unpack_file_check.py -l <admin/logsheet.doc(.txt)>'
    • This will convert the .doc logsheet to .txt, or use the .txt if one is available. NOTE: converting the .doc requires an ooffice macro.
    • It will then run various checks of the data against the logsheet, as listed below. Information is output to the terminal, and important (error) messages are printed again at the end.
      • Check file sizes against a 'suitable' size and also against header file (Eagle + Hawk)
      • Check number of files against logsheet
      • Check number of logsheets
      • Check GPS start/stop times in header file (Eagle + Hawk)
      • Check .raw, .nav, .log, .hdr for each Eagle + Hawk line
      • Check for nav-sync issues - THIS IS BASED ON A FALSE ASSUMPTION AND PROBABLY IS USELESS
  • in each project directory run 'generate_runscripts.py -l <admin/logsheet.txt> -n <number_of_lines> -s <sensorid - e,h,a,c> -b <boresight JDAY> -B <boresight YEAR>'
    • This will generate run scripts for the project, which will still need editing. It fills in the azsite info and the boresight extraction line, and partially completes the azgcorr command.
    • Run once for each sensor present
    • The script should ignore blank lines on the logsheet and lines named 'test'. (A sample session is sketched below.)
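
A sample semi-scripted session might look like the following (the project name, line count and boresight day are illustrative values only):

  cd ~arsf/arsf_data/in_progress/2008/flight_data/unpacking/
  unpack_folder_structure.py              # dry run: prints the commands it would execute
  unpack_folder_structure.py --execute    # re-run to actually restructure the directories
  cd GB07_07-2007_102a_Inverclyde/
  unpack_file_check.py -l admin/logsheet.doc
  generate_runscripts.py -l admin/logsheet.txt -n 12 -s e -b 198 -B 2007   # run once per sensor present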

Non-scripted method

  • prune empty directories, first checking they aren't stubs for later data (use rmdir * in the project directory; rmdir refuses to remove non-empty directories, so nothing with content is lost)
  • rename any capitalised subdirectories
  • remove any spaces, brackets or other Unix-upsetting characters in filenames
    • use find -regex '.*[^-0-9a-zA-Z/._].*' | ~arsf/usr/bin/fix_naughty_chars.py
    • gives suggested commands, but check before pasting commands!
  • remove executable bit on all files
    • use find -type f -exec chmod a-x {} \;
  • move DCALM* files to applanix/Raw
  • bzip2 the lidar files
    • bzip2 lidar/*
  • remove group & other write permission
    • chmod -R go-w .
  • Convert the .doc logsheet to .pdf
    • ooffice -invisible "macro:///Standard.Module1.SaveAsPDF(FULL_PATH_TO_FILE)"

Verification

Look at the logsheet and verify that we have copies of all relevant data mentioned there.

  • In some cases, the flight crew may fly two projects back-to-back but enter all the data onto a single logsheet. If so, you may need to split the project directory into two, particularly if there's a large time gap (the navigation needs separate processing) or the PIs are different (different delivery addresses and tracking). If you do split a project, ensure both halves have copies of the common files (logsheets, rinex, etc.) but that non-common files are not duplicated (i.e. don't include hawk data for part 1 in part 2). A rough sketch follows this list. Also note in the ticket what was done, for tracking purposes.
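
As a rough sketch of such a split (the directory names and layout here are entirely hypothetical; adapt to the real project):

  mkdir PROJ2-2007_102b_SiteB
  cp -a PROJ1-2007_102a_SiteA/admin PROJ2-2007_102b_SiteB/   # common files (logsheets, rinex, etc.) copied to both
  mv PROJ1-2007_102a_SiteA/hawk PROJ2-2007_102b_SiteB/       # non-common data moved, not duplicated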

Verify the details on the logsheet (esp. PI) are correct by cross-referencing against the ARSF application form. If we do not have the application, call Kidlington to verify.

Check the filesizes of all data files (Eagle, Hawk, ATM, CASI) to make sure none are zero bytes (or obviously broken in some way).
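
A quick way to catch empty files is to run the following from the project directory (a minimal check; it won't spot files that are merely truncated):

  find . -type f -size 0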

Create subsidiary files

If it is a UK flight, run the DEM generation script, nextmapdem.sh.

Create the calibration sym link. This is done automatically if the unpacking scripts have been run.
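
If the unpacking scripts were not run, the link can be made by hand; a minimal sketch, assuming the calibration data lives under ~arsf/calibration/ (verify the real path first):

  cd <project directory>
  ln -s ~arsf/calibration/2008 calibration   # target path is an assumption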

Run the times4grafnav.py script to create a text file of time stamps, and put the output into the applanix directory. It may be of use for the GNSS processing.
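
A hypothetical invocation, assuming the script writes to stdout and needs no arguments (check its usage before running):

  times4grafnav.py > applanix/gps_times.txt   # output filename is an example only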

Tickets and tracking

Raise a trac ticket (type 'flight processing') for the new data, then add details to the processing status page.

  • ticket summary should be of the form BGS07/02, flight day 172/2007, Keyworth
  • add short version of scientific purpose to guide processing (check ARSF application in ~arsf/arsf_data/2009/ARSF_Applications)
  • note arrival time of data
  • set the priority of the ticket from the project grading (try the Internal/ProjectGradings wiki page, or the application, or hassle ARSF-Ops)
  • note any specific comments that might help with processing
  • owner should be blank
  • ticket body should contain:
    Data location: ~arsf/arsf_data/2009/flight_data/..... FILL IN
    
    Data arrived from ARSF via SATA disk LETTER OR network transfer on DATE.
    
    Scientific objective: FILL IN FROM APPLICATION (just enough to guide processing choices)
    
    Priority: FILL IN FROM APPLICATION/WIKI PAGE (e.g. alpha-5 low), set ticket priority appropriately
    
    PI: A. N. Other
    
    Any other notes..
    
    == Sensors: ==
     * Eagle
     * Hawk
     * Leica LIDAR
    

Vectors

Check the requested OS vectors wiki page to see if we have vectors for the site in question. If not, email ARSF-Ops (Gary) and ask for them, specifying either the lat/long range or the OS grid squares (get these from the DEM generation scripts). It can take a couple of weeks for the vectors to arrive.

Finally..

Email the PI to inform them that their data has arrived for processing. Sample text:

  • fill in the 3 fields: <PI_NAME>, <PROJECT>, <TICKETNO>
  • cc to arsf-processing
  • set reply-to to arsf-processing
  • subject: ARSF data arrival notification (<PROJECT>)
    Dear <PI_NAME>,
    
    This is a notification that your ARSF data for <PROJECT> is at the
    ARSF Data Analysis Node for processing.
    
    We aim to deliver as quickly as possible and typically do so within a
    month, though we may take longer if we are in the peak season or there
    are problems with the processing.
    
    You can follow progress at the following webpages:
    
     http://arsf-dan.nerc.ac.uk/status/status.php
      - general status page
    
     http://arsf-dan.nerc.ac.uk/trac/ticket/<TICKETNO>
      - our notes during processing (may be technical)
    
    If you would like any more information, please feel free to contact us at arsf-processing@pml.ac.uk
    
    Regards,