Arrival of new flight data

This procedure should be followed on receipt of new flight data from NERC-ARF.

If in any doubt about something (e.g. a dataset has two project codes), contact NERC-ARF Operations.

Project Set Up

A semi-automated procedure: see Procedures/NewDataArrival/ProjectSetUp

Tickets and tracking

Status Page

Add details to the processing status page. Under 'Data location' use the full path to the project and do NOT include the server name (e.g. use /users/rsg/arsf/... and not /data/visuyuan/...).
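The convention above can be sanity-checked with a small sketch. The prefixes here are taken from the example paths on this page; adjust them if the real mounts differ:

```python
# Hedged sketch: reject server-specific data locations before they go on the
# status page. The /users/rsg/arsf/ and /data/<server>/ prefixes are assumed
# from the example above.
def is_valid_data_location(path: str) -> bool:
    """Accept full project paths, reject paths that embed a server name."""
    return path.startswith("/users/rsg/arsf/") and not path.startswith("/data/")

print(is_valid_data_location("/users/rsg/arsf/arsf_data/2011/flight_data/x"))  # True
print(is_valid_data_location("/data/visuyuan/arsf_data/2011/flight_data/x"))   # False
```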

Ticket

Raise a new trac ticket (type 'flight processing') for the new data.

  • Ticket summary should be of the form EU10/03, flight day 172/2011, Dennys Wood
  • Add short version of scientific purpose to guide processing (check NERC-ARF application)
  • Note arrival time of data
  • Set priority of ticket from project grading (try the grades subpages on Projects)
  • Note any specific comments that might help with processing
  • Owner should be blank
  • Verify the sensors that were requested in the application (primary) and those that weren't (secondary). Note in the ticket which these were.
  • Ticket body should contain:
    Data location: ~arsf/arsf_data/2011/flight_data/..... FILL IN
    
    Data arrived from NERC-ARF via SATA disk LETTER OR network transfer on DATE.
    
    Scientific objective: FILL IN FROM APPLICATION (just enough to guide processing choices)
    
    Priority: FILL IN FROM APPLICATION/WIKI PAGE (e.g. alpha-5 low), set ticket priority appropriately
    
    PI: A. N. Other
    EUFAR Project ID:
    CAFAM code: 
    
    Any other notes..
    
    = Sensors: =
     * Fenix (requested/not requested, flown/not flown)
     * Leica LIDAR (requested/not requested, flown/not flown)
     * OWL (requested/not requested, flown/not flown)
     * RCD (requested/not requested, flown/not flown)
    


Create KML overview

When happy that the directory has been unpacked correctly, run make_kmloverview.py. This will generate the web page and the quicklooks of the data that are made available to the user.

make_kmloverview.py -l <top_level_proj_dir> --unpacking (add --final if happy with the output)

It will also generate a password, which should be included in the email below. Check that the username and password work before sending the email.
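The generated password can be pulled out of the kmlpasswords.csv file mentioned below. This sketch assumes a simple project,username,password layout, which may not match the real ~arsf/usr/share/kmlpasswords.csv; check the file first:

```python
import csv
import io

# Hedged sketch: look up the credentials generated by make_kmloverview.py.
# The column order (project_code, username, password) is an assumption.
def lookup_credentials(csv_text: str, project_code: str):
    for row in csv.reader(io.StringIO(csv_text)):
        if row and row[0] == project_code:
            return row[1], row[2]  # (username, password)
    return None

sample = "EU10_03,eu10_03,s3cret\nGB12_05,gb12_05,pa55word\n"
print(lookup_credentials(sample, "EU10_03"))  # ('eu10_03', 's3cret')
```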

E-mail PI

Email the PI to inform them that their data has arrived at the Data Analysis Node for processing. Sample text:

  • fill in the fields: <PI_NAME>, <PROJECT>, <EUFAR ID>, <TICKET_NO>, <DATE_OF_ARRIVAL>, <USERNAME>, <PASSWORD>, <INSERTWEBLINKHERE>
    • the date of arrival should be when the disks arrived or when the download began
    • the username and password are available in the .htaccess file in processing/kml_overview or ~arsf/usr/share/kmlpasswords.csv. Note that make_kmloverview.py will need to have been run to create a password first unless the project code already exists for previous data.
  • cc to nerc-arf-processing
  • set reply-to to nerc-arf-processing
  • subject: NERC-ARF data arrival notification (<PROJECT> [<EUFAR ID>])
    Dear <PI_NAME>,
    
    This is a notification that your NERC-ARF data for <PROJECT> [<EUFAR ID>], flown on
    <CALENDAR_DAY(S)>, are at the NERC-ARF Data Analysis Node for processing (data
    received from NERC-ARF Operations on <DATE_OF_ARRIVAL>).
    
    We aim to deliver as quickly as possible. You can follow progress at the following webpages:
    
     https://nerc-arf-dan.pml.ac.uk/status/progress/
      - general status page
    
     https://nerc-arf-dan.pml.ac.uk/trac/ticket/<TICKET_NO>
      - our notes during processing (may be technical)
    
    Also available is the PI page which will allow you to view flight data coverage and download quicklooks. This page will be updated as the data are processed and more information becomes available. Click the following link and enter username and password details as below:
    
    <INSERTWEBLINKHERE>
    
    username: <USERNAME>
    password: <PASSWORD>
    
    If there are any specific processing requirements, or if you have additional data that may help with the processing (e.g. GPS basestation data, ground or atmospheric measurements), please contact us directly so we can take account of this before beginning processing.
    
    If you would like any more information, please feel free to contact us at nerc-arf-processing@pml.ac.uk
    
    Regards,
    
    

Once the PI has been emailed, commit the updated KML password file to ~arsf/live_git_repos/config_files/ (no need to push changes).

Basestation

Check there is basestation data for the flight. If not, ask ops if they have some or if we need to download it.

Logsheet and Remaining Steps

  • Look at the logsheet included and verify that we have copies of all relevant data mentioned there.
    • In some cases the flight crew may fly two projects back-to-back but enter all the data on a single logsheet. If so, you may need to split the project directory into two, particularly if there is a large time gap (navigation needs separate processing) or the PIs are different (different delivery addresses/tracking). If you do split a project, ensure both halves have copies of the common files (logsheets, rinex, etc.) but that non-common files are not duplicated (i.e. don't include hawk data for part 1 in part 2). Also note in the ticket any changes made, for tracking purposes.
  • Verify all details on the logsheet (esp. the PI) by calling/emailing NERC-ARF Ops, making them aware of the projects that have been unpacked - the application form and logsheet are not sufficient proof, nor do they track any changes of PI over the lifetime of the application.

To use the 2021 logsheet generator, follow the steps below; instructions for the old version of the logsheet generator are further down the page.

Logsheet creation runs in three stages, to give the greatest control over input and output. The first stage generates CSV tables of the information to go into the logsheet. The second creates an image of the overview of flightlines, and the third renders HTML and PDF versions of the data.

Example commands:

To create the CSV tables and gather all the data together, pass the project directory, a list of sensors to include in the logsheet, and an output directory (e.g. processing/logsheet):

generate_logsheet.py -p /users/rsg/arsf/arsf_data/2021/flight_data/sweden/Magic-2021_236_Kiruna -o /tmp/logsheet -s fenix -v

Important note

The hyperspectral config file must be present as processing/hyperspectral/yyyyjjjs.cfg. You might need to remove the fenix1k entries when creating a readme file for the OWL sensor.
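The yyyyjjjs.cfg name appears to encode the flight date. A hedged sketch of how the expected filename could be derived, assuming yyyy is the year, jjj the zero-padded julian day, and s an optional sortie letter (the sortie-letter convention appears elsewhere on this page):

```python
from datetime import date

# Sketch: build the expected hyperspectral config filename from the flight
# date. The yyyy/jjj/s interpretation of the pattern is an assumption; verify
# against an existing processing/hyperspectral/ directory.
def hyperspectral_cfg_name(flight_date: date, sortie: str = "") -> str:
    return f"{flight_date.year}{flight_date.strftime('%j')}{sortie}.cfg"

# 2021-08-24 is julian day 236, matching the Magic-2021_236_Kiruna example.
print(hyperspectral_cfg_name(date(2021, 8, 24), "a"))  # 2021236a.cfg
```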

To create an overview, it is recommended to use the byline option (-c byline) to colour the flight lines, --inset to generate an inset in the image, and -A to create an animated GIF (for the website):

generate_overview_for_logsheet.py -p /users/rsg/arsf/arsf_data/2021/flight_data/sweden/Magic-2021_236_Kiruna -o /tmp/logsheet/overview.png -s fenix -c byline -A /tmp/logsheet/overview.gif --inset

To render the HTML and PDF versions, pass the pickled dict output by generate_logsheet.py and the image created by generate_overview_for_logsheet.py:

render_json_logsheet.py -d /tmp/logsheet/logsheet_dict.pickle -o /tmp/logsheet/ -I /tmp/logsheet/overview.png

To add a "Notes" section to the logsheet, pass an ASCII text file containing the notes, e.g. -n /tmp/logsheet/notes.txt

The outputs are:

  • report.pdf - the report generated from the HTML as a PDF file (using wkhtmltopdf). This is the version to put in admin and deliveries.
  • index.html - the report as an HTML file. This version should go in processing/kml_overviews/logsheet together with the animated GIF file.
  • rendered_template.json - the JSON file that the report is generated from. This can be edited with a text editor to easily amend or add details; it needs to remain understandable by the report generator. Hint: to move a table to a new page in the PDF, add "paging": "new-page" to the JSON for that CSV table section; likewise, remove it to make the section follow the previous one without a page break.
  • *.csv - the CSV files that contain the information for the various tables. These can easily be edited with LibreOffice or a text editor.
  • *dict - a pickled dict that contains the project information to pass into render_json_logsheet.py.
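Editing rendered_template.json by hand works fine, but the paging hint can also be toggled programmatically. The section structure in this sketch is a guess for illustration only; inspect a real rendered_template.json before relying on it:

```python
import json

# Hedged sketch: add the "paging": "new-page" hint to one CSV table section.
# The {"sections": [{"title": ...}]} layout is an assumption, not the real
# rendered_template.json schema.
def set_new_page(template: dict, section_title: str) -> dict:
    for section in template.get("sections", []):
        if section.get("title") == section_title:
            section["paging"] = "new-page"
    return template

doc = {"sections": [{"title": "Flightlines", "csv": "flightlines.csv"}]}
print(json.dumps(set_new_page(doc, "Flightlines")))
```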

Instructions for old version of logsheet scripts

Make a logsheet that is suitable for delivery to end users. Use the generateLogsheet.py script and follow the instructions below:

Using config file (recommended)

  • Manually fill out a config file with the example format below
  • Run generateLogsheet.py, passing the config file path with the -c option
  • Suppress the GUI with the option -m autoonly

  • For more advanced use, you may specify a particular logsheet template, e.g. -x .../logsheet_template_modular.xml
  • A different output directory may be specified with the -o option
  • Do not use the --guifree option unless you really know what you are doing, as this will suppress error messages such as missing sensors (which will terminate the script)
  • You may still use the GUI with the config file if desired; just don't use the -m flag

If sensors are missing a dialogue box will pop up. Select 'No' to continue without them if they are not expected.

Example command:

generateLogsheet.py -m autoonly -c ../pathto/configfile

proj_code =
proj_name =
pi_name =
aircraft =
pilot = 
copilot =
operator =
observer =

base =
log = 
engineon =
engineoff =
alignin =
alignout =
base_station =
weather =
takeoff_time =
land_time =
# leave blank or enter letter
sortie = 

flight_description =

Using GUI

  • Run from top level - writes into 'admin' dir. You may wish to rename this dir before running to prevent any accidental overwriting.
  • Fill in the yellow boxes with information from supplied logsheet.
  • Blue boxes should be automatically filled in when you start the program but check the values are correct.
  • Click 'fill post manual'. This will calculate the values for the green fields based on the values you have entered into the yellow fields. You should also check that these fields look correct. If the flight was flown over midnight, you may need to correct them.
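The midnight correction mentioned in the last step amounts to rolling the landing time forward a day when it appears to precede take-off. A minimal sketch, with illustrative times only:

```python
from datetime import datetime, timedelta

# Sketch: if a flight crosses midnight, the landing timestamp on the logsheet
# looks earlier than take-off; roll it forward a day before computing the
# flight duration.
def flight_duration(takeoff: datetime, landing: datetime) -> timedelta:
    if landing < takeoff:              # crossed midnight
        landing += timedelta(days=1)
    return landing - takeoff

t0 = datetime(2011, 6, 21, 23, 30)
t1 = datetime(2011, 6, 21, 1, 15)      # actually the following day
print(flight_duration(t0, t1))         # 1:45:00
```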


Last modified on Mar 5, 2024, 2:23:57 PM