  • Announcements

    • admin

      PBS Forum Has Closed   06/12/17

      The PBS Works Support Forum is no longer active.  For PBS community-oriented questions and support, please join the discussion at http://community.pbspro.org.  Any new security advisories related to commercially-licensed products will be posted in the PBS User Area (https://secure.altair.com/UserArea/). 



  1. iptables and interactive jobs

    Okies. What I am trying to do is allow interactive jobs via PBS, but not allow users to use ssh to log in to other execution nodes. For an admin to log in to an execution node, they log in to the login node and then, via the head node, they can log in to an exec node.
  2. iptables and interactive jobs

    Hi, I have been tightening down our cluster. Our login node accepts ssh from anywhere; execution nodes accept ssh only from the head node. That all seems to work fine for submitting normal non-interactive jobs.

    Exec nodes, from the head node, allow: proto (tcp udp) mod state state NEW dport 15001:15004 ACCEPT
    Login node, from the head node, allow: proto (tcp udp) mod state state NEW dport 15001:15004 ACCEPT

    This allows normal jobs to be submitted, but interactive jobs fail with "Job cannot be executed" and an exit status of -1. If I add a rule on the login node to allow NEW connections from one specific exec node, then interactive jobs will work on that node. I thought all PBS communications went via the head node, not directly node-to-node (e.g. exec node to/from login node). Are there ports that need to be allowed to let interactive jobs run? A netstat during an interactive job showed "login node:33796 to exec node:39424 ESTABLISHED".

    Mike
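    The netstat line in the post suggests the answer: for an interactive job the exec node connects directly back to the submitting host on an ephemeral port, which the head-node-only rules block. A minimal sketch of a login-node rule that would permit this, assuming iptables and a hypothetical exec-node subnet of 10.0.0.0/24 (substitute your own):

    ```shell
    # Sketch, not a definitive fix. Assumption: exec nodes live on
    # 10.0.0.0/24. For an interactive job, the MoM on the exec node
    # opens a connection straight back to the qsub process on the
    # submission (login) node on an ephemeral port, so the login node
    # must accept NEW connections from the exec nodes, not only from
    # the head node.
    iptables -A INPUT -p tcp -s 10.0.0.0/24 \
             -m state --state NEW -j ACCEPT
    ```

    A tighter variant would restrict this to the ephemeral port range rather than all ports, at the cost of tracking what range your kernel uses.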
  3. How to prevent new jobs from starting on specific nodes

    Thank you, Scott. I have now done that for the node. It was better to check here before doing something wrong and ending users' jobs :-) Thanks
  4. How to prevent new jobs from starting on specific nodes

    If I put a node offline, what will happen to existing jobs? I thought that would stop existing jobs when they tried to communicate back to the head node. Also, by keeping the node online I can add a comment, "Note: not accepting new jobs.", and users with jobs on that node won't worry about their jobs, as they won't see the node as down.
  5. How to prevent new jobs from starting on specific nodes

    Hi, I have some nodes that need a reboot. I want to stop new jobs from starting on those specific nodes, so that when the currently running jobs have finished there will be no jobs on those nodes and I can reboot them. So I don't want existing jobs to be ended, just new jobs not started. The manual for qhold suggests it might be what I should use, but it's not clear. Also, what's the best way to stop ALL new jobs from starting while allowing current jobs to continue? Setting dedicated_time to a short time in the future still allows new jobs to start. Thanks, Mike
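    A minimal sketch of the node-draining approach discussed in this thread, assuming PBS Pro's pbsnodes and qmgr commands and a hypothetical node name node07:

    ```shell
    # Sketch under stated assumptions; run as a PBS manager/operator.
    # Marking the node offline stops the scheduler from placing new
    # jobs on it; jobs already running there continue to completion.
    pbsnodes -o node07

    # Optionally record why, so users see the node is only draining
    # (node comment set via qmgr):
    qmgr -c "set node node07 comment = 'Draining for reboot; running jobs unaffected'"

    # After the reboot, clear the offline state:
    pbsnodes -r node07
    ```

    Once `pbsnodes` reports no jobs on the node, it is safe to reboot.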
  6. Not getting resources_used from array jobs

    Hi, okies, now I understand. I can see that the E record is in that file (for example server_priv/accounting/20150730). The PBS Pro manual says "The tracejob command can read both event logs and accounting logs," so that's why I could see the resources_used from tracejob, which reads that file, but not from qstat. Thanks Scott
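    Since the E (end-of-job) record lives in the accounting log, it can also be pulled out directly with grep. A sketch, using the subjob ID and date-stamped filename from this thread and assuming PBS_HOME is /var/spool/pbs (adjust for your install):

    ```shell
    # Sketch: extract the end-of-job (E) accounting record, which
    # carries Exit_status and the resources_used.* values, for one
    # array subjob. Path and date are assumptions from the thread.
    grep ';E;326959\[1\]' /var/spool/pbs/server_priv/accounting/20150730
    ```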
  7. Not getting resources_used from array jobs

    Hi all, when a job runs to completion users can get the memory used from qstat:

    $ qstat -fx 326958.hpcnode1 | grep resources_used
    resources_used.cpupercent = 0
    resources_used.cput = 00:00:29
    resources_used.mem = 10620kb

    but it looks like for array jobs no resources_used.* attributes are shown by any of the following:

    $ qstat -fx 326959[].hpcnode1   <-- I don't expect to see resources_used.* here
    $ qstat -fx 326959[1].hpcnode1  <-- but for the actual subjobs it should be available
    $ qstat -fx 326959[2].hpcnode1  <-- ditto

    I can get the info from tracejob for each job in the array:

    $ tracejob 326959[1].hpcnode1
    ... blah ...
    07/28/2015 15:53:02 S Exit_status=0 resources_used.cpupercent=0 resources_used.cput=00:00:00 resources_used.mem=3772kb resources_used.ncpus=1 resources_used.vmem=432572kb resources_used.walltime=00:00:01

    So why doesn't qstat show resources_used for array jobs?

    Mike
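    As a workaround until qstat reports these, the tracejob calls above can be looped over the subjobs. A sketch, using the array ID from this post and assuming subjob indices 1 and 2:

    ```shell
    # Sketch: qstat -fx shows no resources_used.* for these subjobs,
    # so fall back to tracejob for each one and pull out just the
    # resources_used fields. Brackets are quoted to stop shell globbing.
    for i in 1 2; do
      echo "--- subjob $i ---"
      tracejob "326959[$i].hpcnode1" | grep -o 'resources_used[^ ]*'
    done
    ```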