Hyperspectral data delivery

Once the data has been processed, it needs to be put into a delivery directory. This is the final file structure in which it will be sent to the customer.

Prior to creating a delivery

You can run check_processed_hyperspectral_data_is_ready.py to check that the correct number of files is present before you try to run the delivery scripts. The script summarises what it finds against what it expects to be there.

Example: check_processed_hyperspectral_data_is_ready.py --fenixonly

Delivery Script

From the base directory of the project generate the structure using:

make_arsf_delivery.py --projectlocation $PWD \
                      --deliverytype hyperspectral --steps STRUCTURE

If everything looks OK, run again with --final.
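
For example, to write the structure for real:

make_arsf_delivery.py --projectlocation $PWD \
                      --deliverytype hyperspectral --steps STRUCTURE --final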

Once the structure has been generated run the other steps using:

make_arsf_delivery.py --projectlocation $PWD \
                      --deliverytype hyperspectral --notsteps STRUCTURE

Again pass in --final if the output all looks OK.
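That is, once the dry run looks correct:

make_arsf_delivery.py --projectlocation $PWD \
                      --deliverytype hyperspectral --notsteps STRUCTURE --final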

See the delivery library guide for more details on advanced options.

Create checksum list

We can create a list of checksums to help users verify that their downloads are complete. Run from within the delivery directory (or adjust the paths in the command accordingly):

find flightlines/ -type f -exec md5sum {} \; | tee doc/checksums
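
Users (and delivery checkers) can later verify the files against this list; with GNU coreutils, run from the same directory:

md5sum -c doc/checksums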

Readme Generation

To make the readme, first generate the readme config file using:

generate_readme_config.py -d <delivery directory> -r hyper -c <config_file>

If you get ImportError: No module named pyhdf.HDF, run source ~utils/python_venvs/pyhdf/bin/activate and try again.
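
That is:

source ~utils/python_venvs/pyhdf/bin/activate
generate_readme_config.py -d <delivery directory> -r hyper -c <config_file>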

The readme config file will be saved as hyp_genreadme-airbone.cfg in the processing directory; do not delete this file, as it is required for delivery checking. Check that all the information in the readme config file is correct and amend anything that is not.

Then create a readme TeX file using: create_latex_hyperspectral_apl_readme.py -f <readme_config_file> -o <output_dir>

Finally, run pdflatex <readme_tex_file> to create the PDF readme (you will probably need to run it twice so that cross-references resolve). Place the readme in the main delivery folder.
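
For example:

pdflatex <readme_tex_file>
pdflatex <readme_tex_file>    # second run resolves cross-references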

Manual procedure (if above fails)

The following is a more manual procedure that is only required if the scripts above fail.

  1. Create quicklook jpegs.
    1. Create a directory called 'jpgs' in the main project folder to hold the images that will be created.
    2. There are several ways to make the jpegs:
      • Use make_mosaic.sh:
        1. Open a terminal window in the lev3 directory.
        2. Ensure the only geotiffs in the lev3 directory are ones you want to convert to jpgs - either delete unwanted ones or move them to a subdirectory
        3. make_mosaic.sh will generate jpgs for each individual line and also a mosaic of all lines. If vectors are given then a mosaic with vector overlay will also be generated.
        4. Usage: make_mosaic.sh -d <tif-directory> -s <sensor-flag> -o <output-directory> [-v <vector-directory>] [-z <UTMZONE>]
        5. Example: make_mosaic.sh -d ./ -s e -o ../jpgs/ -v ~arsf/vectors/from_os/EX01_01/
      • Use convert:
        1. Steps 1 and 2 from above.
        2. for filename in *.tif; do convert "$filename" "../jpgs/${filename%.tif}.jpg"; done
        3. Use ENVI to create mosaics manually.
      • Convert doesn't always produce images that are scaled sensibly. If so, use the old script:
        1. Steps 1 and 2 from above.
        2. Run gtiff2jpg.py -d ./ -a. If this runs out of memory, try again without the -a. You can also run on individual files instead of on a directory by using -s <filename> instead of -d ./. (knpa: I don't think this script works anymore.)
        3. Create mosaics separately using ENVI (or whatever other method).
        4. Move images into jpgs directory.
      • If all else fails, open the tifs in ENVI, take screenshots manually, and crop them using GIMP. ENVI also has mosaicking functions. Move the images into the jpgs directory.
  2. Create the delivery directory: run make_delivery_folder.sh. Check that it has done all the steps below correctly and do any it missed yourself (a sketch of the expected layout follows this list):
    1. In the project directory, create a directory called "delivery". Within this create a directory named after the date as YYYYMMDD, and within this create one named after the project code.
    2. Copy the contents of ~arsf/arsf_data/<year>/delivery/template into your new delivery directory
      • No longer need to copy bin or COPYRIGHT.txt
    3. Ensure the copy of the data quality report in the doc directory is the most recent version from ~arsf/doc/
    4. Copy the pdf logsheet into the logsheet directory
    5. Move the level 1 files from the directory they were processed into (<project_dir>/lev1/) into the lev1 directory in the delivery directory.
    6. Ensure that the filenames of the level 1 files and flightline jpegs are correct - they should be [eh]JJJaff1b.*, where [eh] is e for eagle or h for hawk, JJJ is the julian day of the flight, a is a flight letter if appropriate (usually a, b or occasionally c), and ff is the flightline number (e.g. e123a011b.bil for Eagle line 01 of sortie a on day 123). There should be one level 1 file (or set of files, if there are .bil and .bil.hdr files in addition to the HDF) per flightline. If any are missing for particular sensors (e.g. because the sensor failed), this should be explained in the readme file later.
    7. If the DEM file was generated from non-NEXTMap data then make a dem directory in the delivery and copy the DEM into it.
    8. In the delivery directory create a directory called "misc". Copy files into it as follows:
      • For UK flights, copy in ~arsf/dems/geoid-spheroid/osgb02.cgrf
      • For non-UK flights, copy in ~arsf/dems/geoid-spheroid/sphsep15lx.grd, unless we supply a LIDAR DEM, in which case this file is not needed.
    9. Copy the mosaics and jpegs of flightlines created above into the screenshots directory.
    10. Run update_delivery_mask_headers.py to update the level-1 file name in the mask.bil.hdr files.
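
Assuming the steps above were followed (the template may contain further files; this sketch is illustrative, not exhaustive), the finished delivery should look roughly like:

<project_dir>/delivery/YYYYMMDD/<project_code>/
    dem/              only if a non-NEXTMap DEM was supplied
    doc/              data quality report, checksums
    lev1/             level 1 files
    logsheet/         PDF logsheet
    misc/             geoid-spheroid separation file
    screenshots/      flightline jpegs and mosaics
    <readme>.pdf      PDF readme, added in the next step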

NOTE: Ensure all files you place in the delivery are named to conform to the formats described here

  3. Create Readme:
    1. Create a config file for the readme using the generate_readme_config.py script. Use a command such as generate_readme_config.py -d <delivery_directory> -r hyper
    2. Edit the config file and check that all the items are filled in (an illustrative excerpt follows this list):
      1. If an instrument has no dark frames for all flight lines, enter the instrument name in "dark_frames"
      2. Any remarks about the data should be entered as a sentence in the "data_quality_remarks" section.
      3. If vectors have been used then the accuracy should be entered in "vectors" (e.g. '5-10' if they're within 5m to 10m)
      4. line_numbering should contain a space-separated list of line names linking the logsheet to the processed files.
      5. All "compulsory" items should contain data
    3. Use the config file to generate the readme TeX and PDF:
      1. Create a TeX file using the script: create_latex_hyperspectral_apl_readme.py -f <config filename> -o <output_dir>
      2. This file can be reviewed and edited in any text editor if needed.
      3. Create a PDF file by running pdflatex <tex_filename> (as above, run it twice).
      4. Review the readme and check carefully that it looks OK with all relevant information present.
      5. Copy it to the delivery directory and remove any temporary files. It is recommended to keep the TeX file until after delivery checking, in case any edits are required.
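
Purely as an illustration of the items above (the key names come from the steps; the layout and values here are invented and the real file generated by generate_readme_config.py may differ):

dark_frames = hawk
data_quality_remarks = Line 3 was partly affected by cloud cover.
vectors = 5-10
line_numbering = 1 2 3 4 5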

If you have LIDAR data to make into a delivery, go to the LIDAR delivery page.

If not, or if you've done that already, the delivery is ready for checking.
