meuser

Job priority and preemption issues


Hi everyone,


 


I have been experiencing ongoing challenges getting preemption to work the way I need for our site. I have set up soft limits to cap a particular user group at a maximum of 50% of the system resources at any given time.


 


----


set server scheduling = True
set server max_run_res_soft.mem = [g:PBS_GENERIC=15tb]
set server max_run_res_soft.ncpus += [g:PBS_GENERIC=992]
set server max_run_res_soft.mem += [g:group1=8tb]
set server max_run_res_soft.ncpus += [g:group1=491]
set server max_run_res_soft.mem += [g:group2=8tb]
set server max_run_res_soft.ncpus += [g:group2=491]

----

I have a job from a user in group1 that is preempting other work and exceeding these limits regardless. I have put that user's job on 'suspend', and other users' jobs begin to execute. If I release it, it grabs resources again and somehow manages to force the other users' jobs to suspend.

 

So, I took a look at pbsfs, and this user ...

 

------

I then ran tracejob on one of the individual array jobs that keeps getting preempted and I saw this. Notice the odd date?? Feb 22??

 

Where is this date coming from?

 

Thanks!!!

 

10/04/2015 11:10:26  L    Failed to update estimated attrs.
10/04/2015 11:10:26  L    Fairshare usage of entity dbodi increased due to job becoming a top job.
10/04/2015 11:10:26  L    Job is a top job and will run at Mon Feb 22 04:32:18 2027
10/04/2015 11:10:26  L    Host set host=itmiuv2 has too few free resources or is too small


 

The system date is correct

Sun Oct  4 11:16:17 EDT 2015

 

 


Hello,


 


It may simply be a misunderstanding about what soft limits are for. Soft limits are used only to make jobs eligible for preemption, and this depends on some settings in sched_config to make it so.
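To give a concrete sketch (the values here are illustrative only; please check them against your site's sched_config and the admin guide for your PBS version), the pieces that make over-the-soft-limit jobs preemptable look something like this:

----
# Illustrative sched_config entries only -- verify against your version.
# Enable preemptive scheduling.
preemptive_sched: true ALL
# Rank jobs over the server soft limits below normal jobs, so they can be preempted.
preempt_prio: "express_queue, normal_jobs, server_softlimits"
# Preempt by suspending (S); other letters allow checkpoint/requeue.
preempt_order: "S"
----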


 


This directive, "set server max_run_res_soft.ncpus += [g:group1=491]", effectively says that jobs from this group will be eligible to be preempted (suspended) once the group exceeds 491 CPUs.


 


If you really want to limit total usage instead of affecting preemption behavior, then the proper limit for this is:


"set server max_run_res.ncpus += [g:group1=491]"


(remove soft)


 


That will cap group1's total CPU usage at 491.
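In other words, a hard-limit version of what you posted would look something like this (a sketch only, reusing your numbers; note the first assignment for each resource uses "=" and the later ones "+="):

----
set server max_run_res.mem = [g:PBS_GENERIC=15tb]
set server max_run_res.ncpus = [g:PBS_GENERIC=992]
set server max_run_res.mem += [g:group1=8tb]
set server max_run_res.ncpus += [g:group1=491]
set server max_run_res.mem += [g:group2=8tb]
set server max_run_res.ncpus += [g:group2=491]
----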


 


Moving on to the next topic, "top job" status is granted to jobs in two scenarios:


1. They've been "starving" for too long


2. Strict ordering is on and the job can't run immediately.


 


Starving is on by default and strict ordering is not.  Both are specified in sched_config.
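These are the sched_config lines to look for; the values below are what I believe ship as the defaults, so double-check your copy:

----
help_starving_jobs: true ALL
max_starve: 24:00:00
strict_ordering: false ALL
----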


 


The strange date message is likely due to the way the scheduler predicts when a job will eventually be able to run. It looks at all the jobs in the system and finds the jobs which, when finished, will free up enough resources to allow the top job to run. It makes this prediction based on the walltime requested by those jobs. If a job requests no walltime, PBS uses an implicit walltime of five years. What you're likely seeing is a build-up of jobs without a walltime specified, so PBS assumes those jobs will end five years from now.
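One common way to avoid the five-year assumption is to give jobs a default walltime when they don't request one, for example (a sketch only -- pick a value and queue name that fit your site; "workq" here is just a placeholder):

----
qmgr -c "set server resources_default.walltime = 24:00:00"
qmgr -c "set queue workq resources_default.walltime = 24:00:00"
----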


 


You may not want strict ordering or starving-job handling. If you turn them both off, you won't see these messages about top jobs and far-future dates. It will also make the scheduling cycle quicker, since calendaring top jobs is an expensive operation.
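If you decide to go that route, it would be something like this in sched_config, followed by a HUP of pbs_sched so it rereads the file (again, a sketch -- confirm against your version's admin guide):

----
help_starving_jobs: false ALL
strict_ordering: false ALL
----

kill -HUP <pbs_sched PID>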


 


Hope that helps.  If I've misunderstood anything, let me know.


 


Thanks


Steve
