00-base

Every node has this in /etc/condor_config

01-negotiator-groups

The group-quota config, generated by a script we maintain

01-schedd-extra

Not strictly related to multicore, but it includes some scalability improvements

01-startd

Our local startd configuration that sets up partitionable slots

02-defrag

Our condor_defrag configuration

Notes

The base config references an attribute called RACF_Group that we inject into each job. It dates from before we used group quotas to manage queues, but we leave it in place because our monitoring scripts use it to classify jobs.
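
For illustration, one standard way such an attribute can be injected into every job ad at submit time; the mechanism and the placeholder value here are assumptions, only the attribute name RACF_Group comes from our config:

    # Sketch only: SUBMIT_ATTRS inserts the named config macro's value
    # into every job ClassAd at submit time. "default_queue" is a placeholder.
    SUBMIT_ATTRS = $(SUBMIT_ATTRS) RACF_Group
    RACF_Group = "default_queue"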

In the base config the most important parts for multicore are the last few lines: NEGOTIATOR_POST_JOB_RANK, which fills the farm depth-first; NEGOTIATOR_CONSIDER_PREEMPTION, which we disable because we don't want any preemption (group quotas handle resource allocation); and NEGOTIATOR_USE_WEIGHTED_DEMAND, which is necessary to make group quotas work with multicore slots.
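
A sketch of what those lines look like; the knob names come from the description above, but the rank expression is one plausible depth-first choice, not necessarily our exact one:

    # No preemption: group quotas handle resource allocation.
    NEGOTIATOR_CONSIDER_PREEMPTION = False
    # Weight each submitter's demand by the CPUs its jobs request, so
    # group quotas account correctly for multicore jobs.
    NEGOTIATOR_USE_WEIGHTED_DEMAND = True
    # Rank machines with fewer free CPUs higher, packing jobs depth-first
    # and keeping fragmentation low for large jobs.
    NEGOTIATOR_POST_JOB_RANK = -Cpus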

Our schedd config is entirely vanilla and needed no modifications to make multicore work.

The group-quota config is auto-generated, but the file here is a recent snapshot from our pool. Notice that quotas and usage are all in units of CPUs, so a quota of 2400 would allow 2400 single-core jobs, 300 eight-core jobs, or any mix of jobs such that the sum over job types of (number of jobs × CPUs per job) does not exceed 2400.
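
For concreteness, a hand-written sketch in the same shape; the group names and numbers are illustrative, not our real snapshot:

    GROUP_NAMES = group_atlas, group_atlas.mcore, group_grid
    # Quotas are in CPUs: group_atlas could run 2400 single-core jobs,
    # 300 eight-core jobs, or any mix weighing in at or under 2400.
    GROUP_QUOTA_group_atlas = 2400
    GROUP_QUOTA_group_atlas.mcore = 1600
    GROUP_QUOTA_group_grid = 800
    GROUP_ACCEPT_SURPLUS = True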

In the startd configuration the important lines are the ones I put on the meeting's twiki page: we define one slot per node that is partitionable and contains 100% of that node's resources.
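
Those lines amount to the standard partitionable-slot setup, something like:

    # One partitionable slot per node, holding 100% of its CPUs,
    # memory, disk, and any other resources.
    NUM_SLOTS = 1
    NUM_SLOTS_TYPE_1 = 1
    SLOT_TYPE_1 = 100%
    SLOT_TYPE_1_PARTITIONABLE = True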

Our defragmentation configuration is very modest: we allow no more than 4 machines per hour to drain, and we stop draining a machine once it has >= 10 CPUs available, enough for at least one 8-core job to grab in the next cycle. These parameters will need to change if/when the mix of job sizes and their relative quantities vary.
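
In condor_defrag terms that corresponds roughly to the following; the values match the description above, but treat the exact expressions as a sketch:

    # Drain at most 4 machines per hour.
    DEFRAG_DRAINING_MACHINES_PER_HOUR = 4.0
    # Consider a machine "whole" (and stop draining it) once it has 10 or
    # more free CPUs, enough for at least one 8-core job next cycle.
    DEFRAG_WHOLE_MACHINE_EXPR = Cpus >= 10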