[[Quickstart/Interactive|Back to Interactive Quickstart]]

[[FurtherTopics/FurtherTopics#Interactive Sessions|Back to Further Topics]]

==Node Resources==
By default, the ''interactive'' command allocates a single compute core on one node for 12 hours, with a standard 4GB of RAM.
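Running ''interactive'' with no options accepts these defaults. A typical session looks like the following (the job ID and node name are placeholders and will differ on your system):

<pre style="background-color: #000000; color: white; border: 2px solid black; font-family: monospace, sans-serif;">
[username@login01 ~]$ interactive
salloc: Granted job allocation <jobid>
Job ID <jobid> connecting to <node>, please wait...
</pre>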
===Increase CPU resources===
To increase CPU resources, use the flag ''-n<number of cores>''.
<pre style="background-color: #000000; color: white; border: 2px solid black; font-family: monospace, sans-serif;">
[username@login01 ~]$ interactive -n24
salloc: Granted job allocation 306849
Job ID 306849 connecting to c174, please wait...
</pre>
===Increase Memory===
For more memory, use the flag ''--mem=<amount>G''. If a job exceeds the requested amount of memory, it will be terminated with an error message.
<pre style="background-color: #000000; color: white; border: 2px solid black; font-family: monospace, sans-serif;">
[username@login01 ~]$ interactive --mem=24G
salloc: Granted job allocation 306852
Job ID 306852 connecting to c068, please wait...
</pre>
 
==Node Reservations==
This example uses reservation ''327889'' on the '''gpu''' partition (queue); if the partition name is omitted, the job defaults to the '''compute''' queue.
<pre style="background-color: #000000; color: white; border: 2px solid black; font-family: monospace, sans-serif;">
[username@login01 ~]$ interactive -pgpu --reservation=327889
salloc: Granted job allocation 306353
Job ID 306353 connecting to gpu04, please wait...
</pre>
==Exclude a node==

Use ''-x<node list>'' to exclude one or more nodes (comma-separated).
<pre style="background-color: #000000; color: white; border: 2px solid black; font-family: monospace, sans-serif;">
[username@login01 ~]$ interactive -xc100,c136
</pre>
This excludes nodes c100 and c136.

==Request a node==

Use ''-w<node>'' to request a specific node.
<pre style="background-color: #000000; color: white; border: 2px solid black; font-family: monospace, sans-serif;">
[username@login01 ~]$ interactive -wc100
</pre>
This requests node c100.
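The underlying option is ''-w, --nodelist'', which accepts a list of hosts, so more than one node can be requested at once. An illustrative example (node names depend on your cluster):

<pre style="background-color: #000000; color: white; border: 2px solid black; font-family: monospace, sans-serif;">
[username@login01 ~]$ interactive -wc100,c101
</pre>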

==More Information==

More information can be found by typing the following:
<pre style="background-color: #000000; color: white; border: 2px solid black; font-family: monospace, sans-serif;">
[username@login01 ~]$ interactive --help

Parallel run options:
  -A, --account=name          charge job to specified account
      --begin=time            defer job until HH:MM MM/DD/YY
      --bell                  ring the terminal bell when the job is allocated
      --bb=<spec>             burst buffer specifications
      --bbf=<file_name>       burst buffer specification file
  -c, --cpus-per-task=ncpus   number of cpus required per task
      --comment=name          arbitrary comment
      --cpu-freq=min[-max[:gov]] requested cpu frequency (and governor)
  -d, --dependency=type:jobid defer job until condition on jobid is satisfied
  -D, --chdir=path            change working directory
      --get-user-env          used by Moab.  See srun man page.
      --gid=group_id          group ID to run job as (user root only)
      --gres=list             required generic resources
  -H, --hold                  submit job in held state
  -I, --immediate[=secs]      exit if resources not available in "secs"
      --jobid=id              specify jobid to use
  -J, --job-name=jobname      name of job
  -k, --no-kill               do not kill job on node failure
  -K, --kill-command[=signal] signal to send terminating job
  -L, --licenses=names        required license, comma separated
  -m, --distribution=type     distribution method for processes to nodes
                              (type = block|cyclic|arbitrary)
      --mail-type=type        notify on state change: BEGIN, END, FAIL or ALL
      --mail-user=user        who to send email notification for job state
                              changes
  -n, --tasks=N               number of processors required
      --nice[=value]          decrease scheduling priority by value
      --no-bell               do NOT ring the terminal bell
      --ntasks-per-node=n     number of tasks to invoke on each node
  -N, --nodes=N               number of nodes on which to run (N = min[-max])
  -O, --overcommit            overcommit resources
      --power=flags           power management options
      --priority=value        set the priority of the job to value
      --profile=value         enable acct_gather_profile for detailed data
                              value is all or none or any combination of
                              energy, lustre, network or task
  -p, --partition=partition   partition requested
      --qos=qos               quality of service
  -Q, --quiet                 quiet mode (suppress informational messages)
      --reboot                reboot compute nodes before starting job
  -s, --share                 share nodes with other jobs
      --sicp                  If specified, signifies job is to receive
                              job id from the incluster reserve range.
      --signal=[B:]num[@time] send signal when time limit within time seconds
      --switches=max-switches{@max-time-to-wait}
                              Optimum switches and max time to wait for optimum
  -S, --core-spec=cores       count of reserved cores
      --thread-spec=threads   count of reserved threads
  -t, --time=minutes          time limit
      --time-min=minutes      minimum time limit (if distinct)
      --uid=user_id           user ID to run job as (user root only)
  -v, --verbose               verbose mode (multiple -v's increase verbosity)
      --wckey=wckey           wckey to run job under

Constraint options:
      --contiguous            demand a contiguous range of nodes
  -C, --constraint=list       specify a list of constraints
  -F, --nodefile=filename     request a specific list of hosts
      --mem=MB                minimum amount of real memory
      --mincpus=n             minimum number of logical processors (threads)
                              per node
      --reservation=name      allocate resources from named reservation
      --tmp=MB                minimum amount of temporary disk
  -w, --nodelist=hosts...     request a specific list of hosts
  -x, --exclude=hosts...      exclude a specific list of hosts

Consumable resources related options:
      --exclusive[=user]      allocate nodes in exclusive mode when
                              cpu consumable resource is enabled
      --mem-per-cpu=MB        maximum amount of real memory per allocated
                              cpu required by the job.
                              --mem >= --mem-per-cpu if --mem is specified.

Affinity/Multi-core options: (when the task/affinity plugin is enabled)
  -B  --extra-node-info=S[:C[:T]]            Expands to:
       --sockets-per-node=S   number of sockets per node to allocate
       --cores-per-socket=C   number of cores per socket to allocate
       --threads-per-core=T   number of threads per core to allocate
                              each field can be 'min' or wildcard '*'
                              total cpus requested = (N x S x C x T)

      --ntasks-per-core=n     number of tasks to invoke on each core
      --ntasks-per-socket=n   number of tasks to invoke on each socket


Help options:
  -h, --help                  show this help message
  -u, --usage                 display brief usage message

Other options:
  -V, --version               output version information and exit
</pre>
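These options can be combined in a single request. For example, a hypothetical request for 8 cores, 16GB of RAM and a 60-minute time limit (using the ''-t'' time flag listed above) might look like:

<pre style="background-color: #000000; color: white; border: 2px solid black; font-family: monospace, sans-serif;">
[username@login01 ~]$ interactive -n8 --mem=16G -t 60
</pre>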
[[FurtherTopics/FurtherTopics#Interactive Sessions|Back to Further Topics]] / [[Quickstart/Interactive|Back to Interactive Sessions Quickstart]] / [[Main Page#Quickstart|Main Page]]

Latest revision as of 10:24, 29 November 2022
