Cluster Information
Details on how to request resources to run your jobs are available here.
The current usage policy (viewable by running qconf -ssconf) implements a balanced approach to job scheduling, with significant weight given to CPU usage and job priorities. Briefly, jobs are executed on a first-come, first-served basis: once you submit a job, the scheduler will run it as soon as the requested resources are available. If you submit several jobs, you can adjust their relative priorities (with qalter), and the scheduler will take this into account.
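As a sketch of this workflow, the commands below show a typical submission followed by a priority adjustment. The script name, parallel environment name, and job ID are placeholders; check `qconf -spl` for the parallel environments actually configured on this cluster.

```shell
# Submit a job script requesting 8 slots in the "smp" parallel
# environment on the all.q queue (names are illustrative).
qsub -pe smp 8 -q all.q my_job.sh

# Lower the priority of one of your queued jobs so your other jobs
# are scheduled first. User-adjustable priorities range from -1023
# to 0; only administrators can set values above 0. Job ID 5999 is
# a placeholder for a real job number from qstat.
qalter -p -100 5999
```

Note that qalter only affects jobs still waiting in the queue; a job that is already running is unaffected.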
Queue Information
Last update: Tue Sep 23 11:00:47 2025
- all.q@elysium.bioinfo.cluster: Slots Total: 80, Slots Used: 30, Load Avg: 12.36865
- all.q@figsrv.bioinfo.cluster: Slots Total: 56, Slots Used: 15, Load Avg: 1.00000
- all.q@neotera.bioinfo.cluster: Slots Total: 48, Slots Used: 0, Load Avg: 0.00342
- all.q@telura.bioinfo.cluster: Slots Total: 80, Slots Used: 40, Load Avg: 0.06543
- introns.q@elysium.bioinfo.cluster: Slots Total: 60, Slots Used: 0, Load Avg: 12.36865
Running Jobs Information
Queue | Job Number | Job Name | Job Owner | State | Slots | Start Time |
---|---|---|---|---|---|---|
all.q@elysium.bioinfo.cluster | 5844 | OR_blast | angie.quinhones | running | 15 | 2025-09-18T16:14:43 |
all.q@elysium.bioinfo.cluster | 5845 | BR_blast | angie.quinhones | running | 15 | 2025-09-18T16:16:13 |
all.q@figsrv.bioinfo.cluster | 5563 | TEdtll.sh | maria.camila | running | 15 | 2025-09-04T11:39:30 |
all.q@telura.bioinfo.cluster | 5818 | run_interpro_Shy.sh | gustavo.lelli | running | 30 | 2025-09-17T15:53:08 |
all.q@telura.bioinfo.cluster | 5923 | run_minimap2.sh | marilia.manuppella | running | 10 | 2025-09-23T11:00:13 |
Pending Jobs Information
Job Number | Job Name | Job Owner | State | Slots | Submission Time |
---|---|---|---|---|---|
Waiting and running times
The figure below shows the distribution of waiting/pending and running times in minutes. Waiting or pending time is the interval between when a job is submitted to the queueing system and when it actually starts running on a compute node. In our cluster, jobs wait on average less than 2.18 minutes (approximately 0.036 hours) to start running. Once started, jobs finish on average within 26.97 minutes, based on 3892 jobs.
