Version 1 (modified by mggr, 9 years ago)

--

Hyperspectral data delivery

Once the data has been processed, it needs to be put into a delivery directory. This is the final file structure in which it will be sent to the customer.

  1. Create quicklook jpgs. This is the new, improved scripted approach; if for any reason it fails, create them manually by using ENVI, taking screenshots and cropping them with GIMP.
    1. For UK flights only (because the script works on a British National Grid area)
      1. Open a terminal window in the lev3 directory under the project workspace
      2. Ensure the only geotiffs in the lev3 directory are ones you want to convert to jpgs - either delete unwanted ones or move them to a subdirectory
      3. Use the make_mosaic.sh script, which will generate jpgs for each individual line and also a mosaic of all lines. If vectors are given then a mosaic with vector overlay will also be generated.
      4. Usage: make_mosaic.sh -d <tif-directory> -s <sensor-flag> -o <output-directory> [-v <vector-directory>] [-z <UTMZONE>]
      5. Example from within lev3 dir to create eagle images: make_mosaic.sh -d ./ -s e -o ../jpgs/ -v ~arsf/vectors/from_os/EX01_01/
    2. Otherwise (or if that fails) you may be able to use convert
      1. for filename in *.tif; do convert "$filename" "${filename%.tif}.jpg"; done
      2. Use ENVI to create mosaics manually.
    3. Convert doesn't always produce images that are scaled sensibly. If so, use the old scripted method.
      1. Open a terminal window in the lev3 directory under the project workspace
      2. Ensure the only geotiffs in the lev3 directory are ones you want to convert to jpgs - either delete unwanted ones or move them to a subdirectory
      3. Run gtiff2jpg.py -d ./ -a. If this runs out of memory, try again without the -a. You can also run on individual files instead of a whole directory by using -s <filename> instead of -d ./
      4. Create mosaics separately using ENVI (or whatever other method).
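The fallback conversion in 2.1 can be wrapped with an output directory and a guard against an empty glob. A sketch, assuming you are in the lev3 directory and want the jpgs alongside it (the ../jpgs path is illustrative):

```shell
# Convert each GeoTIFF in the current (lev3) directory to a jpg in ../jpgs/.
# Globbing instead of `ls` copes with spaces in names, and ${tif%.tif}
# strips only the trailing .tif, unlike sed 's/.tif/.jpg/' whose
# unescaped dot matches any character.
mkdir -p ../jpgs
for tif in *.tif; do
    [ -e "$tif" ] || continue               # glob matched no files
    convert "$tif" "../jpgs/${tif%.tif}.jpg"
done
```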
  2. Create the delivery directory: run make_delivery_folder.sh and check that it has created everything you expect. If it fails, you can create the directory manually as follows:
    1. In the project directory in the workspace, create a directory called "delivery". Within this create a directory named after the date as YYYYMMDD, and within this create one named after the project code.
    2. Copy the contents of ~arsf/arsf_data/<year>/delivery/template into your new delivery directory
    3. Ensure the copy of the data quality report in the doc directory is the most recent version from ~arsf/doc/
    4. Copy the pdf logsheet into the logsheet directory
    5. Move the level 1 files from the directory they were processed into (<project_dir>/lev1/) into the lev1 directory in the delivery directory.
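The manual fallback in 2.1-2.5 might be scripted roughly as below. This is a sketch, not the real make_delivery_folder.sh: the project code and date in the usage example are placeholders, and the assumption that the pdf logsheet sits in admin/ is ours.

```shell
# Hypothetical helper: create the delivery skeleton for a given
# project code and YYYYMMDD flight date, run from the project workspace.
make_delivery_dir() {
    proj=$1; date=$2
    year=${date%????}                       # YYYYMMDD -> YYYY
    deliv="delivery/$date/$proj"
    mkdir -p "$deliv"
    cp -r ~arsf/arsf_data/"$year"/delivery/template/. "$deliv"/
    cp admin/*.pdf "$deliv"/logsheet/       # assumes the pdf logsheet is in admin/
    mv lev1/* "$deliv"/lev1/
    # Still to do by hand: refresh doc/ with the latest data
    # quality report from ~arsf/doc/
}
```

For example, `make_delivery_dir GB07_05 20070614` (both values placeholders) from the project workspace.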
  3. In the delivery directory create a directory called "misc". Copy files into it as follows:
    • For UK flights, copy in ~arsf/dems/geoid-spheroid/osgb02.cgrf
    • For non-UK flights, copy in ~arsf/dems/geoid-spheroid/sphsep15lx.grd, unless we supply a LIDAR DEM, in which case the customer will not need this file.
  4. Copy the mosaics and jpegs of flightlines created above into the screenshots directory
  5. Ensure that the filenames of the level 1 files and flightline jpegs are correct - they should be [eh]JJJaff1b.*, where [eh] is e for Eagle or h for Hawk, JJJ is the Julian day of the flight, a is a flight letter if appropriate (usually a or b, occasionally c), and ff is the flightline number. There should be one level 1 file (or set of files, if there are .bil and .bil.hdr files in addition to the HDF) per flightline. If any are missing for particular sensors (e.g. because the sensor failed), this should be explained in the readme file.
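The naming convention in step 5 can be checked mechanically rather than by eye. A sketch, run from the lev1 (or screenshots) directory; the regex is our reading of the [eh]JJJaff1b.* pattern, so adjust it if the convention differs:

```shell
# Flag any filenames that do not match [eh]JJJaff1b.*
# ([eh] = sensor, JJJ = Julian day, optional flight letter, ff = line number).
for f in *.hdf *.bil *.bil.hdr *.jpg; do
    [ -e "$f" ] || continue                 # glob matched no files
    echo "$f" | grep -Eq '^[eh][0-9]{3}[a-z]?[0-9]{2}1b\.' \
        || echo "suspect filename: $f"
done
```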
  6. Ensure that an ASCII version of the logsheet exists in the project admin/ directory. To create one, open the logsheet in OpenOffice and save it in .txt format. Also ensure no other file in the admin dir has a .txt extension.
  7. Generate the readme file. Run create_readme.py -d <project_workspace_dir> -D <dem_type>, where dem_type is one of LIDAR, SRTM or NEXTMAP. It is worth first running create_readme.py -h to check the script's usage. If using the new style config run scripts, you must add the -N switch to disable the azgcorr example command search (since runh/rune do not exist).
  8. Edit the readme file to ensure that it is correct and to add any desired notes (e.g. explanations of why there is no data for a particular sensor on a particular line).

Necessary comments to make in the readme file

  1. Update anything marked TODO in the template
  2. Appropriate pixel size
  3. Appropriate bands for the sensor
  4. Comments on the quality of the data (accuracy vs vectors, any bad bands, etc) and any specific problems encountered
  5. Include a tested example command to convert the data from level 1 to level 3
  6. Ensure the text file is in Windows format (run unix2dos on it if necessary)
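Item 6 can be verified without opening the file: every line of a Windows (CRLF) text file ends in a carriage return. A minimal sketch — the filename and content are placeholders, and the awk line is a portable stand-in for unix2dos:

```shell
# Demo file with Unix (LF-only) line endings; placeholder name and content.
printf 'line one\nline two\n' > Read_Me.txt

f=Read_Me.txt
cr=$(printf '\r')
if grep -q "$cr" "$f"; then
    echo "$f already contains CR characters"
else
    # Convert LF -> CRLF in place (what unix2dos would do).
    awk '{ printf "%s\r\n", $0 }' "$f" > "$f.tmp" && mv "$f.tmp" "$f"
fi
grep -q "$cr" "$f" && echo "$f is now in Windows (CRLF) format"
```

On the real delivery, just run unix2dos on the readme; the check above is only a way to confirm the result.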

This list may not be exhaustive, check the readme carefully.


If you have LIDAR data to make into a delivery, go to the LIDAR delivery page.

If not, or if you've done that already, the delivery is ready for checking.