Archiving projects with NEODC
This page documents the procedure to follow when sending data to NEODC.
A project is ready to be archived when all sensors have been delivered. If it is 2010 or earlier then it will need to be fully rsynced from the workspace. Also, check the ticket and make sure there is nothing on it that suggests something still needs to be done with the dataset (ask the processor if need be).
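For example, a minimal sketch of the sync (both paths are placeholders; use the actual workspace and repository locations for the project):
rsync -av <workspace project directory>/ <repository project directory>/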
- If there is a workspace version, move it into workspace/being_archived
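For example (the paths are placeholders for the actual workspace locations):
mv <workspace>/<project directory> <workspace>/being_archived/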
- Prepare the repository version for archiving:
- Make sure everything is present and where it should be! (see Processing/FilenameConventions for the required layout and name formats)
Things to look out for: Delivery folders, applanix/rinex data, las files, DEMs
Use proj_tidy.sh to highlight any problems:
proj_tidy.sh -p <project directory> -c
- Add a copy of the relevant trac ticket(s); run:
mkdir -p admin/trac_ticket
pushd admin/trac_ticket
wget --recursive --level 1 --convert-links --html-extension http://arsf-dan.nerc.ac.uk/trac/ticket/TICKETNUMBER
popd
- Scan the filesystem for any 'bad' things and fix them:
- Delete any unnecessary files - backups of DEMs that weren't used, temp files created by gedit (~ at end of the filename), hidden files, duplicates in the lev1 dir, etc.
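For example, to list likely candidates before deleting anything (these patterns are suggestions only; review the output first):
find . -type f -name '*~'    # gedit backup files
find . -name '.*' -not -name '.'    # hidden files and directories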
- Find all files/dirs with unusual characters (space, brackets, etc), ignoring the admin/trac_ticket folder:
find -regex '.*[^-0-9a-zA-Z/._].*' -o -path './admin/trac_ticket' -prune | ~arsf/usr/bin/fix_naughty_chars.py
This will give suggested commands, but check first.
- Set permissions:
- Remove executable bit on all files (except the point cloud filter and the run[aceh] scripts):
find -type f -not -wholename '*pt_cloud_filter*' -and -not -regex '.*/run[aceh]/.*sh' -and -perm /a=x -exec chmod a-x {} \;
- Give everyone read permissions (and execute if it has user execute) for the current directory and below:
chmod -R a+rX .
- Create the tarballs for NEODC to download:
If AIMMS or GRIMM data is present then you will first need to separate it out and put it into its own tarball(s).
Use: tar czf <TARBALL NAME> <DIRECTORY TO TARBALL>
Tarball name should be in format: GB09_05-2009_278b_Leighton_moss-AIMMS.tar.gz
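For example, assuming the AIMMS data has already been separated into its own directory (the directory name here is hypothetical):
tar czf GB09_05-2009_278b_Leighton_moss-AIMMS.tar.gz GB09_05-2009_278b_Leighton_moss-AIMMS/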
Create an md5sum file for the AIMMS/GRIMM data as well.
Use: md5sum <TARBALL NAME> > <TARBALL NAME>-MD5SUM.txt
Then run the archiver to create the main tarball(s):
su - arsf
~/arsf_data/archived/qsub_archiver.sh <path to project in repository> <optional additional projects>
(e.g. ~arsf/arsf_data/2011/flight_data/spain_portugal/EU11_03-2011_142_Jimena/)
# To run the archiving locally rather than via the grid engine, use:
~arsf/usr/bin/archiving_tarballer.sh <path to project>
When complete, this will have dumped the data into ~arsf/arsf_data/archived/neodc_transfer_area/staging/. Check it looks OK, then move it up one level so NEODC can rsync it. Logs will be in ~arsf/arsf_data/archived/archiver_logs/.
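For example (the tarball name is a placeholder for whatever the archiver produced):
mv ~arsf/arsf_data/archived/neodc_transfer_area/staging/<tarball name>* ~arsf/arsf_data/archived/neodc_transfer_area/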
- Notify NEODC they can download the data (Current contact is: wendy.garland@…) and record the date in the ticket.
- When NEODC confirm they have backed up the data:
- Remove tarball from the transfer area
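For example (the tarball name is a placeholder; check it is the right file before deleting):
rm ~arsf/arsf_data/archived/neodc_transfer_area/<tarball name>*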
- Move the repository project to archive disk at: ~arsf/arsf_data/archived/<original path from ~arsf/arsf_data/>
e.g. mv ~arsf/arsf_data/2008/flight_data/uk/CEH08_01/ ~arsf/arsf_data/archived/2008/flight_data/uk/CEH08_01
You may need to create parent directories if they don't yet exist.
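e.g. for the CEH08_01 example above:
mkdir -p ~arsf/arsf_data/archived/2008/flight_data/uk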
- Create a symlink to the project in its original location. Point the symlink through ~arsf/arsf_data/archived rather than directly to a specific disk.
e.g. ln -s ~arsf/arsf_data/archived/2008/flight_data/uk/CEH08_01 ~arsf/arsf_data/2008/flight_data/uk/CEH08_01
- Note in the ticket that it has been backed up by NEODC and moved to the archive disk.
- Final steps - maybe wait a month:
- If a workspace version is present, delete it from workspace/being_archived.
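For example (the path is a placeholder; only do this once NEODC have confirmed the backup):
rm -r <workspace>/being_archived/<project directory>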
- Close the ticket