Version 49 (modified by knpa, 14 years ago)
NEODC Data Delivery
This page documents the procedure to follow when delivering data to NEODC.
First, we need data to send: these should be 'completed' datasets.
All sensors need to have been delivered and rsynced. Check with whoever processed each sensor if need be.
- Move workspace project into workspace/being_archived
- Prepare the repository version for archiving:
- Make sure everything is present and where it should be! (see Processing/FilenameConventions for the layout)
Things to look out for: Delivery folders, applanix/rinex data, las files, DEMs.
Use proj_tidy.sh to highlight any problems:
proj_tidy.sh -d <project directory> [-e]
Options: -e  Extended check (may be verbose)
- Add a copy of the relevant trac ticket(s) ; run:
mkdir -p admin/trac_ticket
pushd admin/trac_ticket
wget --recursive --level 1 --convert-links --html-extension http://arsf-dan.nerc.ac.uk/trac/ticket/TICKETNUMBER
popd
- Delete the contents of the lev1/ subdirectory, where these are duplicates of the delivery directory.
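A cautious way to do this is to verify that each lev1 file really is byte-identical to its delivery copy before removing it. The sketch below uses a throwaway directory and illustrative filenames, not the real repository layout:

```shell
# Sketch: only delete a lev1 file once cmp confirms it is identical to the
# delivery copy. Directory and file names here are a throwaway example.
proj=$(mktemp -d)
mkdir -p "$proj/lev1" "$proj/delivery"
echo "scene data" > "$proj/lev1/scene1.bil"
cp "$proj/lev1/scene1.bil" "$proj/delivery/scene1.bil"

# cmp --silent exits 0 only if the files are byte-identical
cmp --silent "$proj/lev1/scene1.bil" "$proj/delivery/scene1.bil" \
    && rm "$proj/lev1/scene1.bil"

[ -e "$proj/lev1/scene1.bil" ] && kept=yes || kept=no
rm -rf "$proj"   # clean up the example directory
```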
- Scan the filesystem for any 'bad' things and fix them:
- delete any unnecessary files - backups of DEMs that weren't used, temp files created by gedit (~ at end of filename), hidden files etc
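Before deleting anything, it can help to do a dry-run listing of the likely junk. This sketch (on a throwaway example directory, with illustrative filenames) lists editor backups (trailing ~) and hidden files without removing them:

```shell
# Sketch: dry-run listing of candidate junk files. Inspect the output
# before running any rm - this example only lists, it does not delete.
tmpdir=$(mktemp -d)
touch "$tmpdir/readme.txt" "$tmpdir/notes.txt~" "$tmpdir/.hidden_cache"

# Editor backups (name ending in ~) and hidden files (name starting with .)
junk=$(find "$tmpdir" -type f \( -name '*~' -o -name '.*' \))
echo "$junk"

rm -rf "$tmpdir"   # clean up the example directory
```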
- Remove the executable bit on all files (except the point cloud filter and the run[aceh] scripts):
find -type f -not -wholename '*pt_cloud_filter*' -and -not -regex '.*/run[aceh]/.*sh' -and -perm /a=x -exec chmod a-x {} \;
- Find all files/dirs with unusual characters (space, brackets, etc). To get suggested rename commands (but check them first), use:
find -regex '.*[^-0-9a-zA-Z/._].*' | ~arsf/usr/bin/fix_naughty_chars.py
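To see what that regex actually flags, here is a self-contained demonstration on throwaway filenames (in real use the find output is piped to fix_naughty_chars.py for suggested renames):

```shell
# Sketch: the regex matches any path containing a character outside
# [-0-9a-zA-Z/._], e.g. spaces or brackets. Filenames are illustrative.
tmpdir=$(mktemp -d)
touch "$tmpdir/good-file_1.txt" "$tmpdir/bad file (copy).txt"

bad=$(find "$tmpdir" -regex '.*[^-0-9a-zA-Z/._].*')
echo "$bad"

rm -rf "$tmpdir"   # clean up the example directory
```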
- Give everyone read permissions (and execute if it has user execute) for the current directory and below - chmod -R a+rX .
- Create an archive tarball for NEODC to download:
su - arsf
cd ~/arsf_data/archived/
./qsub_archiver.sh <path to project in repository>
  (e.g. ~arsf/arsf_data/2008/flight_data/uk/CEH08_01)
# Note you need to specify dirs at the project level
# To run the archiving locally rather than via the grid engine, use:
./archive_helper-justdoit.sh ~arsf/arsf_data/2008/flight_data/uk/CEH08_01
When complete, this will have dumped the data into ~arsf/arsf_data/archived/neodc_transfer_area/staging/. Check it looks ok then move it up one level so NEODC can rsync it.
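The final move up one level can be sketched as follows, using a throwaway directory in place of the real neodc_transfer_area (the tarball name is illustrative):

```shell
# Sketch: after checking the staged tarball looks OK, move it up one level
# from staging/ so NEODC can rsync it. Paths stand in for the real
# ~arsf/arsf_data/archived/neodc_transfer_area.
transfer=$(mktemp -d)
mkdir "$transfer/staging"
touch "$transfer/staging/CEH08_01.tar.gz"   # stand-in for the real tarball

mv "$transfer/staging/CEH08_01.tar.gz" "$transfer/"

[ -f "$transfer/CEH08_01.tar.gz" ] && moved=yes || moved=no
rm -rf "$transfer"   # clean up the example directory
```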
- Notify NEODC they can download the data.
- When NEODC have the data:
- Remove it from the transfer area
- Note on the ticket that it has been sent to NEODC (with the date).
- When NEODC confirm they have backed up the data:
- Move the repository project to non-backed-up space at: ~arsf/arsf_data/archived/<original path from ~arsf/arsf_data/>
e.g. mv ~arsf/arsf_data/2008/flight_data/uk/CEH08_01/ ~arsf/arsf_data/archived/2008/flight_data/uk/CEH08_01
You may need to create parent directories if they don't yet exist.
- Create a symlink to the project in its original location. Point the symlink through ~arsf/arsf_data/archived rather than directly to larsen.
e.g. ln -s ~arsf/arsf_data/archived/2008/flight_data/uk/CEH08_01 ~arsf/arsf_data/2008/flight_data/uk/CEH08_01
- Note in the ticket that it has been backed up by NEODC and give the new data location.
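The whole move-and-symlink sequence can be sketched with throwaway directories standing in for the live and archived trees (the project path and readme file are illustrative):

```shell
# Sketch: move a project to the archive tree, recreating parent dirs, then
# symlink it back into its original location. $live and $archived stand in
# for ~arsf/arsf_data and ~arsf/arsf_data/archived.
live=$(mktemp -d)
archived=$(mktemp -d)
mkdir -p "$live/2008/flight_data/uk/CEH08_01"
touch "$live/2008/flight_data/uk/CEH08_01/readme.txt"

# Create parent directories in the archive if they don't yet exist
mkdir -p "$archived/2008/flight_data/uk"
mv "$live/2008/flight_data/uk/CEH08_01" "$archived/2008/flight_data/uk/"
ln -s "$archived/2008/flight_data/uk/CEH08_01" "$live/2008/flight_data/uk/CEH08_01"

# The project is still reachable at its original path, via the symlink
[ -f "$live/2008/flight_data/uk/CEH08_01/readme.txt" ] && reachable=yes || reachable=no
rm -rf "$live" "$archived"   # clean up the example directories
```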
- When NEODC confirm that everything appears to be in order (maybe wait a week):
- Close the ticket
- Delete the workspace copy in being_archived