
Institute of Mathematics - PublicMathWiki:


Compute Resources

  • In general, these offers are limited to members of I-MATH and their students.
  • Other UZH members are welcome to contact support@math.uzh.ch.
  • First-time users: please contact support@math.uzh.ch before starting.

Reservations

  • Short computations (< 1 day) don't need a reservation.
  • For longer computations (> 1 day), a reservation is recommended.

  • With a reservation: if maintenance becomes necessary, we typically schedule it around the needs of the user.
  • Without a reservation: we try to contact the user and wait several hours for an answer.
  • In general: if we see running computations, we always contact the user first (independent of whether or not a reservation exists). If the user doesn't answer (weekend, holiday, ...), we decide on our own what to do.

Offers

  • For small / ad hoc computations, just start the compute software via the ThinLinc environment.
  • For medium computations, please contact support@math.uzh.ch to ask for the best-fitting possibility and to reserve nodes.

  • For large computations and professional support, please check http://www.s3it.uzh.ch/.

Resources at IMATH

| host     | CPU type                           | cores            | RAM    | local disk  | date    | purpose                                              | CPU specs |
|----------|------------------------------------|------------------|--------|-------------|---------|------------------------------------------------------|-----------|
| asprey   | 2 x Intel Xeon 4C E5-2643 3.30 GHz | 16 (HT enabled)  | 128 GB | 73 GB       | 12/2012 | student: matlab, mathematica, maple, R               | E5-2643   |
| crous    | 8 x Intel Xeon 10C E7-2850 2.0 GHz | 80 (HT disabled) | 2 TB   | 300 GB      | 05/2013 | research                                             | E7-2850   |
| david    | 8 x Intel Xeon 10C E7-2850 2.0 GHz | 80 (HT disabled) | 2 TB   | 300 GB      | 05/2013 | magma host (temporary)                               | E7-2850   |
| estonia0 | 2 x Intel Xeon 6C E5-2640 2.50 GHz | 12 (HT disabled) | 256 GB | 2 TB        | 05/2013 | research                                             | E5-2640   |
| estonia1 | 2 x Intel Xeon 6C E5-2640 2.50 GHz | 12 (HT disabled) | 256 GB | 2 TB        | 05/2013 | research                                             | E5-2640   |
| estonia2 | 2 x Intel Xeon 6C E5-2640 2.50 GHz | 12 (HT disabled) | 256 GB | 2 TB        | 05/2013 | research                                             | E5-2640   |
| estonia3 | 2 x Intel Xeon 6C E5-2640 2.50 GHz | 12 (HT disabled) | 256 GB | 2 TB        | 05/2013 | research                                             | E5-2640   |
| georgia0 | 2 x Intel Xeon 6C E5-2640 2.50 GHz | 12 (HT disabled) | 256 GB | 2 TB        | 05/2013 | research                                             | E5-2640   |
| georgia1 | 2 x Intel Xeon 6C E5-2640 2.50 GHz | 12 (HT disabled) | 256 GB | 2 TB        | 05/2013 | research                                             | E5-2640   |
| georgia2 | 2 x Intel Xeon 6C E5-2640 2.50 GHz | 12 (HT disabled) | 256 GB | 2 TB        | 05/2013 | research                                             | E5-2640   |
| georgia3 | 2 x Intel Xeon 6C E5-2640 2.50 GHz | 12 (HT disabled) | 256 GB | 2 TB        | 05/2013 | research                                             | E5-2640   |
| iran0    | 2 x Intel Xeon 6C E5-2640 2.50 GHz | 12 (HT disabled) | 256 GB | 2 TB        | 05/2013 | research                                             | E5-2640   |
| iran1    | 2 x Intel Xeon 6C E5-2640 2.50 GHz | 12 (HT disabled) | 256 GB | 2 TB        | 05/2013 | research                                             | E5-2640   |
| iran2    | 2 x Intel Xeon 6C E5-2640 2.50 GHz | 12 (HT disabled) | 256 GB | 2 TB        | 05/2013 | research                                             | E5-2640   |
| iran3    | 2 x Intel Xeon 6C E5-2640 2.50 GHz | 12 (HT disabled) | 256 GB | 2 TB        | 05/2013 | research                                             | E5-2640   |
| jordan0  | 2 x Intel Xeon 6C E5-2640 2.50 GHz | 12 (HT disabled) | 256 GB | 2 TB        | 05/2013 | professor & assistant: matlab, mathematica, maple, R | E5-2640   |
| jordan1  | 2 x Intel Xeon 6C E5-2640 2.50 GHz | 12 (HT disabled) | 256 GB | 2 TB        | 05/2013 | professor & assistant: matlab, mathematica, maple, R | E5-2640   |
| jordan2  | 2 x Intel Xeon 6C E5-2640 2.50 GHz | 12 (HT disabled) | 256 GB | 2 TB        | 05/2013 | courses                                              | E5-2640   |
| jordan3  | 2 x Intel Xeon 6C E5-2640 2.50 GHz | 12 (HT disabled) | 256 GB | 2 TB        | 05/2013 | courses                                              | E5-2640   |
| brady    | 1 x Intel Core 6C i7-8700K 3.70GHz | 12 (HT enabled)  | 64 GB  | 80 GB, 2 TB | 02/2018 |                                                      | i7-8700K  |

Concept

Access

  • The compute nodes can be accessed via SSH only from inside I-MATH.

  • Use ThinLinc:

    • matlab, mathematica, maple, R, rstudio: just start the application via the menu 'Applications > Science > ...' - you'll automatically be redirected to the appropriate compute server.

    • For non-standard compute software, please first log onto the compute node via 'ssh', then start the program.
  • SSH: log onto 'ssh.math.uzh.ch' and jump to the compute node via 'ssh' again.
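The two-hop login (gateway first, compute node second) can be wired into a single command with OpenSSH's ProxyJump directive. A minimal sketch of an ~/.ssh/config fragment - the account name 'alice' is a placeholder; the host names are the gateway and one compute node from the table above:

    # ~/.ssh/config -- sketch; replace "alice" with your I-MATH account
    Host imath-gw
        HostName ssh.math.uzh.ch
        User alice

    Host estonia0
        ProxyJump imath-gw
        User alice

With this in place, 'ssh estonia0' from outside connects to ssh.math.uzh.ch first and then jumps to the node in one step.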

Operating System

  • Ubuntu LTS (Linux) on all compute servers.
  • Very limited software might be installed on some Windows virtual machines.

Storage

  • If you need more disk space, please contact support@math.uzh.ch.

  • Remote (NFS)
    • Personal Home directory
    • /compute/<account>

  • Local
    • /export/user/<account>
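To see how much space is left in each of these locations, 'df -h' on the respective path is enough. A small sketch - the local path only exists on the compute nodes, so it is guarded here:

```shell
#!/bin/sh
# Free space on the (NFS) home directory
df -h "$HOME"

# Free space on the node-local disk, if this machine has one
account=${USER:-$(id -un)}
if [ -d "/export/user/$account" ]; then
    df -h "/export/user/$account"
else
    echo "/export/user/$account not present on this machine"
fi
```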

Programs / Software

  • All compute servers use the same applications and versions as installed on the ThinLinc terminals.
  • Please report missing or outdated software, or other wishes, to support@math.uzh.ch.

  • There is no Intel C or Intel Fortran compiler.
  • If you need a program version other than the default: open a terminal and type the program name followed by two 'TAB' presses; this lists all installed versions of the specified program.
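The double-TAB trick relies on bash completion; the same list can be produced non-interactively with the 'compgen' builtin. A sketch, using 'python' as the example program name:

```shell
#!/bin/bash
# List every command on the PATH whose name starts with "python"
# (e.g. python3, python3.10) -- the same set double-TAB would show.
compgen -c python | sort -u
```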

Parallel computing

  • Cluster software
    • Not installed / offered at I-MATH.
    • Various MPI packages and libraries are installed; compiling and preparing programs to run later on a distributed compute cluster is possible.
  • Some programs, like matlab, offer limited built-in auto-parallelization.

  • All compute nodes are shared-memory machines. It is reasonable to start as many programs in parallel as you like, as long as the load stays at or below the number of cores.

    • Determine the load with the command uptime in a terminal. The last three numbers are the average number of jobs (= load) in the run queue over the last 1, 5 and 15 minutes.

      $ uptime
       15:55:20 up 22 days, 24 min,  1 user,  load average: 11.97, 11.73, 11.62
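The rule of thumb above (load at or below the number of cores) can be checked in one short script. A sketch: it reads the 1-minute load average from /proc/loadavg and compares it to the core count reported by nproc:

```shell
#!/bin/sh
cores=$(nproc)                        # number of cores on this node
load=$(cut -d' ' -f1 /proc/loadavg)   # 1-minute load average
echo "load=${load} cores=${cores}"

# awk does the floating-point comparison; exit status 0 means "room left"
if awk -v l="$load" -v c="$cores" 'BEGIN { exit !(l < c) }'; then
    echo "load below core count - OK to start another job"
else
    echo "node is fully loaded - better wait or pick another node"
fi
```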

Limitations

  • Memory
    • Soft: 64GB
    • Hard: unlimited
  • Number of processes per user
    • Soft: 2048
    • Hard: 4096
  • Change limitations

$ ulimit -m unlimited
$ ulimit -u 4096
  • Note that a hard limit can be raised only once per session. If you try to raise it a second time, you'll get an error; you have to log out and log in again to get a new chance.
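Because of this one-shot behaviour, it is worth checking both values before changing anything. A sketch; raising the soft limit up to the current hard limit, as done here, is always safe:

```shell
#!/bin/sh
echo "hard process limit: $(ulimit -H -u)"
echo "soft process limit: $(ulimit -S -u)"

# Raise the soft limit to the hard limit -- allowed any number of times
ulimit -S -u "$(ulimit -H -u)"
echo "soft limit now:     $(ulimit -S -u)"
```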