Changes between Version 104 and Version 105 of Procedures/EagleHawkProcessing
- Timestamp: Jan 30, 2024, 3:47:55 PM
Legend:
- Unmodified: no prefix
- Added: `+`
- Removed: `-`
- Modified: shown as a `-` line followed by a `+` line
Procedures/EagleHawkProcessing
--- v104
+++ v105

  * pixel size using [https://nerc-arf-dan.pml.ac.uk/pixelsize/pixelsize.html Pixel size (and swath width) calculator] but please round to the nearest 0.5m.
  * project metadata details such as pilot, co-pilot, etc.
- * mailto email address (the email address to send grid job emails to if you intend to use the `-m` option of `specim_qsub.py`)
  * change the logdir location to processing/hyperspectral/logfiles or processing/owl/logfiles

…

  === Submitting processing to gridnodes ===

- To submit jobs to the grid, from the top level directory use: {{{specim_qsub.py <config_file>}}}
+ To submit jobs to the grid, from the top level directory use: {{{specim_slurm.py -s <sensor> -c <config_file>}}}

- The actual script which does the processing of each job is: process_specim_apl_line.py
+ Add the `--final` flag to actually submit the jobs.

- Once submitted, you can keep an eye on your jobs using qmon.
+ `specim_slurm.py` will generate an sbatch file (by default into the logfiles directory). This sbatch file is used to submit the jobs to Slurm; it will run locally if you run it like a bash script. If more than one flightline needs processing (i.e. more than one line has `process_line = True` in the APL config), the sbatch file is configured to submit an array job (one job containing multiple tasks).

- The number of APL jobs which can run at once is limited using a complex consumable (apl_throttle) as the I/O usage can be high, particularly for mapping all bands. To increase / decrease the number of jobs run (as rsgcode):
- {{{
- qconf -mattr exechost complex_values apl_throttle=5 global
- }}}
- Replacing '5' with the maximum number of APL jobs allowed to run at once. For more details see [https://rsg.pml.ac.uk/intranet/trac/wiki/Processing/Grid_Engine/Maintenance#Throttlingyourself throttling yourself]
+ To interact with Slurm you need the Slurm client installed and configured. It is easier to just ssh to the host rsg-slurm-login-1.
+ You can submit jobs using `sbatch [PATH TO SBATCH SCRIPT]`.
+ Monitor jobs using `squeue --me` to view all your own jobs. A specific job can be monitored with `squeue -j [JOB ID]`.
+ By default, array jobs will display as a single job, with individual tasks only displayed once they are running rather than queuing; the `--array` flag will expand this.
+ Remove a job with `scancel [JOB ID]`.
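For the config edits listed in the first hunk, the fragment below sketches what the relevant entries could look like. Apart from `process_line` (named in the diff), the key and section names are assumptions; follow the existing config template for the real layout.

{{{
# Hypothetical APL config fragment: only process_line is confirmed by
# this page; logdir and the section name are illustrative guesses.
logdir = processing/hyperspectral/logfiles

[line1]
process_line = True
}}}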
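Putting the new submission command together, a sketch using the placeholders from the diff. That a run without `--final` stops after generating the sbatch file is an inference from the added lines, not stated explicitly:

{{{
# Generate the sbatch file only (inferred behaviour without --final):
specim_slurm.py -s <sensor> -c <config_file>

# Generate and actually submit the jobs:
specim_slurm.py -s <sensor> -c <config_file> --final
}}}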
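The page does not show the generated sbatch file itself, so the sketch below is only a guess at its shape, included to show how an array job maps one task to each flightline with `process_line = True`; inspect the real file in the logfiles directory.

{{{
#!/bin/bash
#SBATCH --job-name=specim_apl             # illustrative name
#SBATCH --output=logfiles/specim_%A_%a.o  # one log file per array task
#SBATCH --array=1-4                       # e.g. four flightlines to process

# Under Slurm each array task gets its own SLURM_ARRAY_TASK_ID and
# handles one flightline; run directly as a bash script the variable
# is unset, so this sketch falls back to processing every line.
for task in ${SLURM_ARRAY_TASK_ID:-1 2 3 4}; do
    echo "Processing flightline ${task}"
    # <APL processing command for flightline ${task}>
done
}}}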
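Submitting from the login host might look like the following; the sbatch file path is illustrative, though by default the file is written to the logfiles directory:

{{{
ssh rsg-slurm-login-1
sbatch processing/hyperspectral/logfiles/<sbatch_file>
}}}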
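The monitoring commands from the last hunk, collected in one place:

{{{
squeue --me           # all of your own jobs
squeue -j [JOB ID]    # a specific job
squeue --me --array   # one row per array task instead of one per job
scancel [JOB ID]      # remove a job (for an array job, cancels all tasks)
}}}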