Slurm Accounting with slurmdbd
I set up a Slurm test cluster and want to enable accounting. I can submit and run jobs, but sacct
returns an empty table. The cluster_job_table table in my slurm_acct_db database is empty as well.
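A minimal sketch of the pieces that usually have to line up before sacct shows records, assuming a cluster named mycluster and slurmdbd running on a host called dbhost (both placeholders): the accounting settings in slurm.conf, the sacctmgr step that registers the cluster with slurmdbd, and a daemon restart.

    # slurm.conf — point accounting at slurmdbd (hostnames are placeholders)
    ClusterName=mycluster
    AccountingStorageType=accounting_storage/slurmdbd
    AccountingStorageHost=dbhost
    JobAcctGatherType=jobacct_gather/linux

    # Register the cluster with slurmdbd; if this step is skipped,
    # jobs run but no records ever reach the database
    sacctmgr add cluster mycluster

    # Restart the daemons so the settings take effect
    systemctl restart slurmdbd slurmctld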
Adding a node with different CPUs, Cores*ThreadsPerCore to SLURM (e.g. i9-1300 having 8P+16E cores)
I am building a test cluster with nodes that have i9-1300 processors, but I get an error saying the cores and CPUs don't match.
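On hybrid P+E processors the usual cause is that CPUs does not equal Boards x Sockets x CoresPerSocket x ThreadsPerCore, because only the P-cores run two threads. A common workaround, sketched below with placeholder values, is to copy what slurmd itself detects, or to declare only CPUs and leave the socket/core/thread breakdown out of the node line.

    # Run on the compute node: prints the topology slurmd detects
    slurmd -C

    # slurm.conf — declaring only CPUs avoids the Sockets*Cores*Threads
    # consistency check (node name and memory are placeholders)
    NodeName=node01 CPUs=32 RealMemory=64000 State=UNKNOWN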
How to make job distribution to nodes depend on partition
We have a heterogeneous cluster with some small nodes (64 cores) in partition_smallnodes and some larger nodes (256 cores) in partition_largenodes.
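As a baseline, a sketch of how partitions are normally tied to disjoint node sets, so that the partition chosen at submit time decides which nodes a job can land on; the node names and counts below are placeholders.

    # slurm.conf — each partition owns its own set of nodes
    NodeName=small[01-04] CPUs=64 State=UNKNOWN
    NodeName=big[01-02] CPUs=256 State=UNKNOWN
    PartitionName=partition_smallnodes Nodes=small[01-04] Default=YES State=UP
    PartitionName=partition_largenodes Nodes=big[01-02] State=UP

    # Jobs are then routed by the partition requested at submission
    sbatch --partition=partition_largenodes job.sh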
Multiple sequential tasks per node with slurm batch script
I’m using slurm on a shared compute cluster. I’d like to know if the following is possible:
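For reference, a minimal batch script that runs several programs one after another inside a single one-node allocation; the step binaries are placeholders.

    #!/bin/bash
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --time=02:00:00

    # Each srun is a separate job step; they run sequentially because
    # the script waits for one to finish before starting the next
    srun ./step1
    srun ./step2
    srun ./step3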
slurmd error: port already in use, resulting in slaves not being able to communicate with master slurmctld
I’m trying to set up a Slurm (version 22.05.8) cluster consisting of 3 nodes with these hostnames and local IP addresses:
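A quick way to see which process already owns the port, plus the slurm.conf settings involved (6817/6818 are the Slurm defaults; a typical culprit is a stale slurmd still running, or both daemons configured onto the same port):

    # See which process is already listening on the slurmd port
    ss -tlnp | grep 6818

    # slurm.conf — these must be consistent across every node
    SlurmctldPort=6817
    SlurmdPort=6818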
Slurm REST API, plugin 101 not found
Submitting or querying a job works fine, but when I want to hold a job I always get the error shown in the screenshot.
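Errors like this usually mean the OpenAPI plugin version the client asks for is not loaded in slurmrestd. A sketch for checking which plugins the build provides and starting slurmrestd with an explicit selection; the plugin names and listen address below are assumptions and differ between Slurm releases.

    # Dump the OpenAPI plugins this slurmrestd build can load
    slurmrestd -s list

    # Start slurmrestd with explicitly chosen plugins (older releases use
    # versioned names such as openapi/v0.0.39 and openapi/dbv0.0.39)
    slurmrestd -s openapi/slurmctld,openapi/slurmdbd 0.0.0.0:6820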
Can't submit GRES value from Slurm REST API
I have been trying to submit the Slurm GRES flag through the REST API, but I couldn't find a way to do it. I am using parser version 0.0.40.
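For what it's worth, older job-submit schemas exposed GRES through a tres_per_node string that mirrors sbatch --gres; I have not verified the exact field name against the 0.0.40 spec, so treat this curl sketch (endpoint, headers, and payload fields) as an assumption to check against your generated OpenAPI document.

    # Hypothetical submission requesting one GPU per node via tres_per_node
    curl -s -X POST http://localhost:6820/slurm/v0.0.40/job/submit \
         -H "X-SLURM-USER-NAME: $USER" \
         -H "X-SLURM-USER-TOKEN: $SLURM_JWT" \
         -H "Content-Type: application/json" \
         -d '{
               "script": "#!/bin/bash\nnvidia-smi",
               "job": {
                 "name": "gres-test",
                 "partition": "gpu",
                 "tres_per_node": "gres/gpu:1",
                 "current_working_directory": "/tmp",
                 "environment": ["PATH=/usr/bin:/bin"]
               }
             }'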
SLURM interactive job is assigned to a worker node but effectively runs on the login node
When I start an interactive job, it is allocated on one of the worker nodes (I can see this in the terminal output and in squeue), but when I then run commands in that terminal they use the RAM/CPUs of the login node. I checked this with both htop and glances.
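This is usually the difference between salloc and srun: by default salloc leaves the shell on the login node, and only commands wrapped in srun execute on the allocated worker. A sketch of the two usual fixes (resource numbers and program names are placeholders):

    # Option 1: ask srun for a pseudo-terminal directly on the compute node
    srun --nodes=1 --ntasks=1 --cpus-per-task=4 --pty bash -i

    # Option 2: keep salloc, but launch the real work through srun
    salloc --nodes=1 --ntasks=1 --cpus-per-task=4
    srun ./my_program

    # slurm.conf — on recent Slurm, this makes salloc itself open the
    # interactive shell on the allocated node instead of the login node
    LaunchParameters=use_interactive_step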