Version 33 (modified by edfi, 11 years ago)


Hyperspectral data delivery

Once the data has been processed, it needs to be put into a delivery directory. This is the final file structure in which it will be sent to the customer.

Scripted procedure

Use the make_hyper_delivery.py script to create the delivery directory. Run it from within the main project directory. By default it runs in dry-run mode. Make sure the only level 3 files in the georeferencing/mapped directory are the SCT-corrected versions.

Use --final if you are happy with what it says it will do. Use -m <config> to generate screenshots and mosaics.

Note: this script automatically moves over the contents of the DEM directory. You will need to revert this if it is a NEXTMap DEM, as these should not be delivered; include an ASTER DEM instead.
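The dry-run-by-default behaviour described above can be pictured with a minimal sketch. This is hypothetical logic for illustration only (the real make_hyper_delivery.py internals are not shown here): planned copies are printed, and nothing is written unless a final flag is passed.

```python
import shutil
from pathlib import Path

def deliver(src_dir, dest_dir, final=False):
    """Plan (and, if final=True, perform) copies of processed files
    into a delivery directory. Hypothetical sketch, not the real script."""
    src_dir, dest_dir = Path(src_dir), Path(dest_dir)
    planned = []
    for f in sorted(src_dir.glob("*")):
        target = dest_dir / f.name
        planned.append((f, target))
        print(f"{'COPY' if final else 'DRY RUN: would copy'} {f} -> {target}")
        if final:
            # Only touch the filesystem once --final-style behaviour is requested
            dest_dir.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)
    return planned
```

Running it once without final lets you review the plan, mirroring the check-then---final workflow above.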

Making the Readme

To make the readme first generate the readme config file using

generate_readme_config.py -d <delivery directory> -r hyper -c <config_file>

The readme config file will be saved as hyp_genreadme-airbone.cfg in the processing directory; do not delete this file, as it is required for delivery checking. Check that all the information in the readme config file is correct and amend anything that is not.
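Checking the config file for unfilled fields can be partly automated. The sketch below uses Python's standard configparser; the section and key names in the test are purely illustrative, and the real config layout may differ.

```python
import configparser

def find_empty_fields(cfg_path):
    """Return (section, key) pairs whose values are empty in a
    readme-style config file. Illustrative helper, not an official tool."""
    cfg = configparser.ConfigParser()
    cfg.read(cfg_path)
    empty = []
    for section in cfg.sections():
        for key, value in cfg.items(section):
            if not value.strip():
                empty.append((section, key))
    return empty
```

Anything this reports should be filled in by hand before generating the readme.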

Then create a readme TeX file using: create_latex_hyperspectral_apl_readme.py -f <readme_config_file>

Finally, run latex <readme_tex_file> to create the PDF readme. This readme should be placed in the main delivery folder.

Manual procedure

The following is a more manual procedure that is only required if the scripts above fail.

  1. Create quicklook jpegs.
    1. Create a directory called 'jpgs' in the main project folder to hold the images that will be created.
    2. There are several ways to make the jpegs:
      • Use make_mosaic.sh:
        1. Open a terminal window in the lev3 directory.
        2. Ensure the only GeoTIFFs in the lev3 directory are ones you want to convert to jpgs: either delete unwanted ones or move them to a subdirectory.
        3. make_mosaic.sh will generate jpgs for each individual line and also a mosaic of all lines. If vectors are given then a mosaic with vector overlay will also be generated.
        4. Usage: make_mosaic.sh -d <tif-directory> -s <sensor-flag> -o <output-directory> [-v <vector-directory>] [-z <UTMZONE>]
        5. Example: make_mosaic.sh -d ./ -s e -o ../jpgs/ -v ~arsf/vectors/from_os/EX01_01/
      • Use convert:
        1. Steps 1 and 2 from above.
        2. for filename in *.tif; do convert "$filename" "../jpgs/${filename%.tif}.jpg"; done
        3. Use ENVI to create mosaics manually.
      • Convert doesn't always produce images that are scaled sensibly. If so, use the old script:
        1. Steps 1 and 2 from above.
        2. gtiff2jpg.py -d ./ -a (if this runs out of memory, try again without the -a). You can also run on individual files instead of on a directory by using -s <filename> instead of -d ./. (knpa: I don't think this script works anymore.)
        3. Create mosaics separately using ENVI (or whatever other method).
        4. Move images into jpgs directory.
      • If all else fails, open the TIFFs in ENVI, take screenshots manually, and crop them using GIMP. ENVI also has mosaicking functions. Move the images into the jpgs directory.
  2. Create the delivery directory: run make_delivery_folder.sh. Check that it has done all the steps below correctly and do any missed steps yourself:
    1. In the project directory, create a directory called "delivery". Within this create a directory named after the date as YYYYMMDD, and within this create one named after the project code.
    2. Copy the contents of ~arsf/arsf_data/<year>/delivery/template into your new delivery directory
      • No longer need to copy bin or COPYRIGHT.txt
    3. Ensure the copy of the data quality report in the doc directory is the most recent version from ~arsf/doc/
    4. Copy the pdf logsheet into the logsheet directory
    5. Move the level 1 files from the directory they were processed into (<project_dir>/lev1/) into the lev1 directory in the delivery directory.
    6. Ensure that the filenames of the level 1 files and flightline jpegs are correct. They should be [eh]JJJaff1b.*, where [eh] is e for Eagle or h for Hawk, JJJ is the Julian day of the flight, a is a flight letter if appropriate (usually a or b, occasionally c), and ff is the flightline number. There should be one level 1 file (or set of files, if there are .bil and .bil.hdr files in addition to the HDF) per flightline. If any are missing for particular sensors (e.g. because the sensor failed), this should be explained in the readme file later.
    7. If the DEM file was generated with non-NEXTMap data, make a dem directory in the delivery and copy the DEM into it.
    8. In the delivery directory create a directory called "misc". Copy files into it as follows:
      • For UK flights, copy in ~arsf/dems/geoid-spheroid/osgb02.cgrf
      • For non-UK flights, copy in ~arsf/dems/geoid-spheroid/sphsep15lx.grd, unless we supply a LiDAR DEM, in which case this file is not needed.
    9. Copy the mosaics and jpegs of flightlines created above into the screenshots directory.
    10. Run update_delivery_mask_headers.py to update the level-1 file name in the mask.bil.hdr files.

NOTE: Ensure all files you place in the delivery are named to conform to the formats described here
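The level 1 naming convention in step 6 can be checked mechanically. The regular expression below is my reading of the [eh]JJJaff1b.* pattern described above, not an official validator:

```python
import re

# [eh]: e = Eagle, h = Hawk; JJJ: 3-digit Julian day;
# optional flight letter (a-c); ff: 2-digit flightline number;
# "1b" suffix before the extension.
LEV1_PATTERN = re.compile(r"^[eh]\d{3}[a-c]?\d{2}1b\.\w+")

def is_valid_lev1_name(filename):
    """True if filename matches the [eh]JJJaff1b.* convention."""
    return bool(LEV1_PATTERN.match(filename))
```

For example, e123a011b.bil and h203021b.hdf match, while an arbitrary name like eagle_line1.tif does not.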

  3. Create the readme:
    1. Create a config file for the read me using the generate_readme_config.py script. Use a command such as generate_readme_config.py -d <delivery_directory> -r hyper
    2. Edit the config file and check all the items are filled in:
      1. If an instrument has no dark frames for all flight lines, enter the instrument name in "dark_frames".
      2. Any remarks about the data should be entered as a sentence in the "data_quality_remarks" section.
      3. If vectors have been used then the accuracy should be entered in "vectors" (e.g. '5-10' if they're within 5m to 10m)
      4. line_numbering should contain a space separated list of line names linking the logsheet to the processed files.
      5. All "compulsory" items should contain data
    3. Use the config file to generate the readme TeX and PDF:
      1. Create a TeX file. Use the script create_latex_hyperspectral_apl_readme.py -f <config filename>
      2. This file can be reviewed and edited in any text editor if needed.
      3. Create a PDF file by running latex <tex_filename>
      4. Review the readme and check carefully that it looks OK, with all relevant information present.
      5. Copy it to the delivery directory and remove any temporary files. It is recommended to keep the TeX file until after delivery checking in case any edits are required.
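Removing the LaTeX by-products while keeping the .tex file, as recommended in the last step, can be scripted. A minimal sketch, assuming the usual latex by-product extensions:

```python
from pathlib import Path

# Common LaTeX by-products to remove; the .tex source itself is kept
# until after delivery checking, as recommended above.
TEMP_EXTENSIONS = (".aux", ".log", ".dvi", ".toc", ".out")

def clean_latex_temp_files(directory):
    """Delete LaTeX temporary files in directory; return names removed."""
    removed = []
    for f in Path(directory).iterdir():
        if f.suffix in TEMP_EXTENSIONS:
            f.unlink()
            removed.append(f.name)
    return sorted(removed)
```

Run it in the directory where latex was invoked; the readme PDF and TeX file are left untouched.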

If you have LIDAR data to make into a delivery, go to the LIDAR delivery page.

If not, or if you've done that already, the delivery is ready for checking.