Version 46 (modified by mark1, 16 years ago)
Data Delivery
Preparation
What should be included - Hyperspectral deliveries
- level 1 data that has been mapped to level 3 successfully and verified as correctly geolocated (as far as possible).
- a readme file describing the contents of the delivery
- flight logsheet
- screenshots (level 3)
- one of each line (no vectors)
- mosaic of all lines for a single sensor, one with and one without vectors overlaid
- each screenshot should try to show typical or noteworthy characteristics of the image - e.g. any distortion or error, or a typical part of the scene
- (2008+?) a DEM file where we have one and can give it to the user
- any ancillary files (spheroid separation grids for non-UK flights, OS GB correction factors, etc)
Scripted Approach
- Run the script to convert the level 3 TIFF files to JPEGs (the TIFFs are kept as well).
- To convert all TIFF files in a directory:
- gtiff2jpg.py -d <dir containing tiffs> -a
- To convert only the file named <filename>:
- gtiff2jpg.py -f <filename> -a
- Create the delivery directory using make_delivery_folder.sh
- make_delivery_folder.sh <main project dir> <year> <proj-code>
- e.g. make_delivery_folder.sh ~mark/scratch/GB08_35-2008_Example/ 2008 GB08-35
- This will create the delivery directory and populate it with your lev1 directory files and the other usual directories. It will not copy the misc directory, separation grids or a DEM, as these are not supplied in the delivery by default; they can be copied manually from the template directories below. The lev1 files will be renamed as follows: if the filename is xxx_gpt0.1.zzz, the renamed file will be xxx.zzz. This is based on the default filename structure, e.g. e129a121b_gpt0.0.bil. Filenames with more than one '_' will be renamed incorrectly. (I'll make this more robust at some point.)
- Copy the misc directory or other files (such as the jpgs) if required for the delivery.
- Ensure that an ASCII version of the logsheet exists in the project admin/ directory. To create one, open the logsheet in OpenOffice and save it in .txt format. Also ensure that no other file in the admin directory has a .txt extension. Then run the ReadMe generation script:
- create_readme.py -d <main project directory> -D <DEM>
- This should create a file named ReadMe.txt in your delivery folder. It WILL still need to be edited: flight line numbers will need to be added to the table at the top, and any references to vectors will have to be changed as required. Please still read through the final ReadMe to check for errors.
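The lev1 renaming rule above can be sketched as a small shell function. This version matches the trailing `_gpt<version>` tag explicitly, so (unlike the current script) it also copes with extra underscores earlier in the name; the function name is illustrative, not part of make_delivery_folder.sh:

```shell
# Strip the trailing "_gpt<version>" tag from a lev1 filename, so that
# xxx_gpt0.1.zzz becomes xxx.zzz. Matching the _gpt suffix (rather than
# splitting on the first '_') handles names with extra underscores.
rename_lev1() {
    echo "$1" | sed 's/_gpt[0-9.]*\(\.[A-Za-z0-9]*\)$/\1/'
}

rename_lev1 e129a121b_gpt0.0.bil    # -> e129a121b.bil
rename_lev1 site_a_e129_gpt0.1.hdr  # -> site_a_e129.hdr
```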
Manual Approach
There are delivery template directories available (copy these and populate):
- 2006
- ~arsf/arsf_data/in_progress/2006/delivery_template/
- 2007
- ~arsf/arsf_data/in_progress/2007/delivery/template/
- 2008
- ~arsf/arsf_data/in_progress/2008/delivery/template/
The delivery data should be created in the project directory named delivery/YYYYMMDD/PROJECTCODE/... (e.g. delivery/20071215/GB07-05 for a delivery of GB07-05 created on 15/Dec/2007).
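Concretely, the example delivery directory above would be created like this (paths are relative to the project directory):

```shell
# Create the delivery directory for project GB07-05, delivery date
# 15/Dec/2007, following the delivery/YYYYMMDD/PROJECTCODE convention.
DATE=20071215
PROJ=GB07-05
mkdir -p "delivery/${DATE}/${PROJ}"
```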
Necessary comments to make in the readme file
- Update anything marked TODO in the template
- Appropriate pixel size
- Appropriate bands for the sensor
- Comments on the quality of the data (accuracy vs vectors, any bad bands, etc) and any specific problems encountered
- Include a tested example command to convert the data from level 1 to level 3
- Ensure the text file is in Windows format (run unix2dos on it if necessary)
- (list of other things to change)
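The Windows line-ending requirement above can be checked and fixed in one step. This is a sketch using GNU sed in case unix2dos is not installed; the function name and the demo file are illustrative:

```shell
# Append a CR before each LF unless the file already uses CRLF
# terminators - a rough stand-in for unix2dos.
ensure_crlf() {
    if ! grep -q "$(printf '\r')$" "$1"; then
        sed -i 's/$/\r/' "$1"    # GNU sed: \r in the replacement is a CR
    fi
}

printf 'Delivery contents\nSee docs/\n' > demo.txt   # Unix LF endings
ensure_crlf demo.txt
```

Afterwards `file demo.txt` should report "ASCII text, with CRLF line terminators".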
What should be included - LIDAR deliveries
- ASCII files of the LIDAR data
- flight logsheet
- readme file describing the data set + copyright notice
- screenshot of mosaic of all lines (full resolution) and zoomed image with vector overlay
- data quality report and further processing scripts
- DEM of the LIDAR data - usually gridded to 2m resolution
Procedure for creating a LIDAR data set delivery
- Copy the template directory over to the project directory
- Convert the LAS binary files into ASCII files, ensuring to output all the appropriate information
- run las2txt --parse txyzicra <lidarfilename> <outputfilename> for each file, outputting to the ascii_laser directory
- Create a full resolution JPEG of a mosaic of all the LIDAR lines + a separate one with vector overlays and put in screenshot directory
- Go through the readme file and edit as required
- Enter all the project details into the table at the top
- Fill in the contents and change the line numbers as appropriate
- Enter the projection and datum details - get these from ALS PP output dialog when processing.
- UK flights should be Horizontal datum: ETRS89 Vertical Datum: Newlyn Projection: OSTN02 British National Grid
- Insert statistics for each line: lasinfo --check <lidarfilename> then cut/paste the required information
- Check the accuracy of data vs vectors and estimate the average offset, also the average elevation offset between neighbouring lines
- ensure readme file is in Windows format (run unix2dos on it)
- Make sure the correct, up-to-date data quality report (PDF version) is included in the docs directory
- Include the flight logsheet with the delivery
- If a DEM was created for processing, include it in the delivery; otherwise create one and include it
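The LAS-to-ASCII conversion step above can be wrapped in a loop. The output-name rule is testable on its own; the las2txt call is only attempted if the tool is on the PATH. Directory names are illustrative, and the reading of the parse codes (t=time, x/y/z=position, i=intensity, c=classification, r=return number, a=scan angle) is our interpretation:

```shell
# Derive the ASCII output name for a LAS file (illustrative helper).
ascii_name() {
    echo "ascii_laser/$(basename "${1%.LAS}").txt"
}

mkdir -p ascii_laser
for f in las1.0/*.LAS; do
    [ -e "$f" ] || continue                    # no LAS files present
    command -v las2txt >/dev/null || continue  # tool not installed here
    las2txt --parse txyzicra "$f" "$(ascii_name "$f")"
done
```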
Verification
Once a dataset has been prepared for burning and delivery, a second person should go over it and:
For Hyperspectral Data
- Verify we have the correct PI (check against application, call Kidlington if unsure) and address (application, google, Kidlington)
- Check against the logsheet that all data that should be present is.
- Check for typos, etc in the documentation.
- Ensure the readme and other text files are Windows compatible (use the 'file' command on the .txt files: they should be ASCII with CRLF terminators) (run unix2dos ReadMe.txt)
- Test out the level 3 processing command for one or more of the level 1 files (definitely one per sensor).
- All level 1 files should be looked at visually (ideally in all bands, though this is only practical for ATM and casi).
For LIDAR Data
- Verify we have the correct PI (check against application, call Kidlington if unsure) and address (application, google, Kidlington)
- Check against the logsheet that all data that should be present is.
- Check for typos, etc in the documentation.
- Ensure the readme and other text files are Windows compatible (use the 'file' command on the .txt files: they should be ASCII with CRLF terminators) (run unix2dos ReadMe.txt)
- Open all the ASCII LIDAR files into a GIS and check they open/load ok
- Check overlaps with neighbouring lines for obvious errors
- Check against vectors if available
- use tail/head to check the ASCII LIDAR files have all the required columns in the order specified in the readme.txt
- head <lidarfilename>
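The head/tail check above can be made systematic with awk: with the txyzicra parse string each record should have eight whitespace-separated columns. A sketch (the function name is illustrative):

```shell
# Report any record whose column count is not 8; exit non-zero if found.
check_columns() {
    awk 'NF != 8 { printf "%s: line %d has %d columns\n", FILENAME, FNR, NF; bad = 1 }
         END { exit bad }' "$1"
}
```

Run it over every file in the ascii_laser directory before burning.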
Final steps
Burning the data
- create an iso image with mkisofs -R -udf -verbose -o /tmp/dvd.iso -V <PROJECTCODE-JULIANDAY> <delivery_directory>
- test the iso with sudo mount -o ro,loop /tmp/dvd.iso /mnt/tmp - mounts to /mnt/tmp, go look at it, then unmount with sudo umount /mnt/tmp (yes, that is umount not unmount)
- if it's ok, burn to CD/DVD with sudo cdrecord dev=/dev/dvd -sao fs=100m driveropts=burnfree -v /tmp/dvd.iso
- clean up with rm /tmp/dvd.iso
- check the dvd on another machine (ideally Windows)
Lightscribe
- open the CD template from ~arsf/arsf_data/in_progress/2007/delivery/cd_covers/arsf_cd_label_gimp_editable.xcf in gimp
- click on the text button in the toolbox, then click the GB... text and edit to reflect the correct project code
- save as a .bmp file somewhere, e.g. ~/ipy0708214ef.bmp
- on the lightscribe machine (currently pmpc974), run 4L-cli print /dev/sr0 ~/ipy0708214ef.bmp
If it appears to be stuck on "Starting up", it may still be burning; give it time to complete. For a dense full-disc picture, wait at least 45 minutes!
If the drive seems wrong, try 4L-cli enumerate to list attached lightscribe compatible drives.
Preparing hard disks for delivery
You will need to have root permissions before you start.
To create a partition:
Unmount the disk, then run dmesg to find the device name (listed at the bottom if the disk has just been plugged in)
run fdisk /dev/<device> (probably sdb)
enter 'p' to print the partition table and check you have selected the correct disk
enter 'd' to delete the current partition
enter 'n' to create a new partition
enter 'p' to make the new partition a primary partition
enter 'p' to print the new partition table; if all seems fine, enter 'w' to write it
To format the disk:
ensure disk is unmounted again
to be on the safe side, run dmesg again to make sure device name hasn't changed
run mke2fs -j /dev/partition (probably sdb1)
Change permissions:
make writable for everyone: chmod a+rwx /media/disk
Finalising hard disk
Set permissions and owner:
chmod a+rX,a-w -R .
chown root.root -R .
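After running the two commands above, a quick find-based audit confirms nothing is left writable (the function name and path are illustrative; -perm /mode is GNU find syntax):

```shell
# Print anything under the given tree that still has a write bit set;
# after "chmod a+rX,a-w -R ." this should produce no output at all.
audit_readonly() {
    find "$1" -perm /222 -print
}
```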
Cover letter
There should be a cover letter with all deliveries. Templates can be found in ~arsf/arsf_data/in_progress/YYYY/delivery/
Trac and website updating
- Add a comment on the trac ticket for this project that this data has been delivered (describe what was sent and when).
- Also add a quicklook to the ticket.
- Update the status database to reflect the delivered data (http://www.npm.ac.uk/rsg/projects/arsf/status/addflight/editflight.php)
- Update the "who has our disks?" page (if sent on disk)
Email the PI
- email them that you're putting the disk in the post and ask for confirmation of receipt
Subject:
Notification of ARSF data delivery for <PROJECT CODE>
Body:
(note you need to fill in PI, DATE, NOTES, TICKETNO, also mention if you aren't sending all their data at once)
Dear <PI>,

--PICK ONE OF THE NEXT TWO PARAGRAPHS (HDD or DVD)--

This is to notify you that we have just dispatched your ARSF data on a USB hard disk formatted with the Linux ext3 filesystem. Please let us know when the data arrive so we know not to resend. We'd appreciate it if you could copy the data onto your system and return the disk to us (see below for address) for reuse.

This is to notify you that we have just dispatched your ARSF data on DVDs. Please let us know when the data arrive so we know not to resend.

The delivery comprises:
- ATM flight lines taken on <DATE>
- CASI flight lines taken on <DATE>
- Specim Eagle flight lines taken on <DATE>
- Specim Hawk flight lines taken on <DATE>

<NOTES - any other notes, including what data is held back>

Information about the processing of the project is included with the delivery, but our internal notes are also available at:
http://www.npm.ac.uk/rsg/projects/arsf/trac/ticket/<TICKETNO>

Regards,
ARSF Data Analysis Node.
Plymouth Marine Laboratory, Prospect Place, Plymouth. PL1 3DH. UK.
Email: arsf-processing@pml.ac.uk
Tel: +44 (0)1752 633432
Fax: +44 (0)1752 633101
Web (NERC): http://arsf.nerc.ac.uk/
Web (processing wiki): http://www.npm.ac.uk/rsg/projects/arsf/trac/