Changes between Version 104 and Version 105 of Procedures/EagleHawkProcessing


Timestamp: Jan 30, 2024, 3:47:55 PM
Author: wja

  • Procedures/EagleHawkProcessing

    v104 v105  
    40 40 * pixel size using [https://nerc-arf-dan.pml.ac.uk/pixelsize/pixelsize.html Pixel size (and swath width) calculator] but please round to the nearest 0.5 m.
    41 41 * project metadata details such as pilot, co-pilot, etc.
    42    * mailto email address (the email address to send grid job emails to if you intend to use the `-m` option of `specim_qsub.py`)
    43 42 * change the logdir location to processing/hyperspectral/logfiles or processing/owl/logfiles
    44 43
     
    90 89 === Submitting processing to gridnodes ===
    91 90
    92 To submit jobs to the grid, from the top level directory use: {{{specim_qsub.py <config_file>}}}
       91 To submit jobs to the grid, from the top level directory use: {{{specim_slurm.py -s <sensor> -c <config_file>}}}
    93 92
    94 The actual script which does the processing of each job is: process_specim_apl_line.py
       93 Add the `--final` flag to actually submit the jobs.
    95 94
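As a concrete sketch of the two-step workflow above (the sensor name `fenix` and the config path are illustrative assumptions, not values from this page):

```shell
# Dry run: generate the sbatch file (by default in the logfiles directory)
# without submitting anything to Slurm. Sensor name and path are assumptions.
specim_slurm.py -s fenix -c config_files/flight.cfg

# The same command with --final to actually submit the jobs
specim_slurm.py -s fenix -c config_files/flight.cfg --final
```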
    96 Once submitted, you can keep an eye on your jobs using qmon.
       95 `specim_slurm.py` will generate an sbatch file (by default into the logfiles directory). This sbatch file is used to submit jobs to Slurm; it will run locally if you run it like a bash script. If more than one flightline needs processing (i.e. more than one line has `process_line = True` in the APL config), the sbatch file is configured to submit an array job (one job containing multiple tasks).
    97 96
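For orientation, an array-job sbatch file might look roughly like the sketch below; every directive value, path, and the worker command are illustrative assumptions, not the actual generated contents:

```shell
#!/bin/bash
#SBATCH --job-name=specim_apl            # illustrative job name
#SBATCH --output=logfiles/%x_%A_%a.out   # one log file per array task
#SBATCH --array=0-2                      # one task per flightline with process_line = True

# Hypothetical mapping from array index to flightline number;
# each array task processes a single flightline
LINES=(1 2 3)
echo "Processing flightline ${LINES[$SLURM_ARRAY_TASK_ID]}"
```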
    98 The number of APL jobs which can run at once is limited using a complex consumable (apl_throttle) as the I/O usage can be high, particularly for mapping all bands. To increase / decrease the number of jobs run (as rsgcode):
    99 {{{
    100 qconf -mattr exechost complex_values apl_throttle=5 global
    101 }}}
    102 Replacing '5' with the maximum number of APL jobs allowed to run at once. For more details see [https://rsg.pml.ac.uk/intranet/trac/wiki/Processing/Grid_Engine/Maintenance#Throttlingyourself throttling yourself]
       97 To interact with Slurm you need the Slurm client installed and configured. It is easier to just ssh to the host rsg-slurm-login-1.
     98
       99 You can submit jobs using `sbatch [PATH TO SBATCH SCRIPT]`.
     100
      101 Monitor jobs using `squeue --me` to view all your own jobs. A specific job can be monitored with `squeue -j [JOB ID]`.
     102
      103 By default, array jobs will display as a single job, with additional tasks only displaying once they are running rather than queuing; the `--array` flag will expand this.
     104
      105 Remove a job with `scancel [JOB ID]`.
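Putting the commands above together, a typical session on the login host might look like this (the job ID and the sbatch filename are placeholders):

```shell
ssh rsg-slurm-login-1

# Submit the generated sbatch file; Slurm prints the new job ID
# (the filename here is a placeholder, the directory is the logdir from above)
sbatch processing/hyperspectral/logfiles/specim_apl.sbatch

# List all of your own jobs; add --array to expand array tasks into one row each
squeue --me
squeue --me --array

# Watch a specific job, then cancel it if no longer needed
squeue -j 123456
scancel 123456
```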
    103 106
    104 107