location: Diff for "compute_resources"

Institute of Mathematics - PublicMathWiki:

Differences between revisions 18 and 19
Revision 18 as of 2015-04-24 17:07:18
Size: 5948
Editor: crose
Comment:
Revision 19 as of 2015-04-27 12:55:09
Size: 5934
Editor: crose
Comment:
Deletions and additions at line 17 of the page source:

Revision 18 (old):
|| estonia0 || 2 x Intel Xeon 6C E5-2640 2.50 GHz || 24 (HT enabled) || 256 GB || 2TB || 05/2013 || research || [[http://www.intel.com/buy/us/en/product/components/intel-xeon-processor-e5-2640-15m-cache-250-ghz-720-gts-intel-qpi-250678?wapkw=e5-2640#tech_specs|E5-2640]] ||
|| estonia1 || " || 12 (HT disabled) || " || " || " || " ||

Revision 19 (new):
|| estonia0 || 2 x Intel Xeon 6C E5-2640 2.50 GHz || 12 (HT disabled) || 256 GB || 2TB || 05/2013 || research || [[http://www.intel.com/buy/us/en/product/components/intel-xeon-processor-e5-2640-15m-cache-250-ghz-720-gts-intel-qpi-250678?wapkw=e5-2640#tech_specs|E5-2640]] ||
|| estonia1 || " || " || " || " || " || " ||

Compute Resources

  • In general, the offers are limited to members of I-MATH and their students.
  • Other UZH members are welcome to contact support@math.uzh.ch.

Offers

  • For small / ad hoc computations, just start the compute software via the thinlinc environment.
  • For medium computations, please contact support@math.uzh.ch to ask about the best-fitting option and to reserve nodes.

  • For large computations and professional support please check http://www.s3it.uzh.ch/.

Resources at IMATH

|| host || CPU type || cores || RAM || local disk || date || purpose || CPU specs ||
|| asprey || 2 x Intel Xeon 4C E5-2643 3.30 GHz || 16 (HT enabled) || 128 GB || 73 GB || 12/2012 || student: matlab, mathematica, maple, R || E5-2643 ||
|| baxter || 8 x Intel Xeon 8C X7550 2.00 GHz || 64 (HT disabled) || 512 GB || 300 GB || 10/2010 || research, magma || X7550 ||
|| crous || 8 x Intel Xeon 10C E7-2850 2.0 GHz || 80 (HT disabled) || 2 TB || 300 GB || 05/2013 || research || E7-2850 ||
|| david || " || " || " || " || " || " || " ||
|| estonia0 || 2 x Intel Xeon 6C E5-2640 2.50 GHz || 12 (HT disabled) || 256 GB || 2 TB || 05/2013 || research || E5-2640 ||
|| estonia1 || " || " || " || " || " || " || " ||
|| estonia2 || " || " || " || " || " || " || " ||
|| estonia3 || " || " || " || " || " || " || " ||
|| georgia0 || " || " || " || " || " || " || " ||
|| georgia1 || " || " || " || " || " || " || " ||
|| georgia2 || " || " || " || " || " || " || " ||
|| georg3 || " || " || " || " || " || " || " ||
|| iran0 || " || " || " || " || " || " || " ||
|| iran1 || " || " || " || " || " || " || " ||
|| iran2 || " || " || " || " || " || " || " ||
|| iran3 || " || " || " || " || " || " || " ||
|| jordan0 || " || " || " || " || " || professor & assistant: matlab, mathematica, maple, R || " ||
|| jordan1 || " || " || " || " || " || " || " ||
|| jordan2 || " || " || " || " || " || courses || " ||
|| jordan3 || " || " || " || " || " || courses || " ||

(A " in a cell means: same value as the row above.)

Concept

Access

  • The compute nodes can be accessed by SSH only from inside I-MATH.

  • Use thinlinc

    • matlab, mathematica, maple, R, rstudio: just start the application via menu 'Applications > Science > ...' - you'll automatically be redirected to the appropriate compute server.

    • For non-standard compute software, first log on to the compute node via 'ssh', then start the program.
  • SSH: Log on to 'ssh.math.uzh.ch' and jump from there to the compute node via 'ssh' (see the example below).
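A minimal example of the access route described above; 'estonia0' is just one node name taken from the table, and the extra hop via 'ssh.math.uzh.ch' is only needed when starting from outside I-MATH:

    # from outside I-MATH: log on to the SSH gateway first, then jump to a compute node
    $ ssh <account>@ssh.math.uzh.ch
    $ ssh estonia0

    # from inside I-MATH: connect to the compute node directly
    $ ssh estonia0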

Operating System

  • Ubuntu LTS (Linux) - all compute servers; the exact release can be checked as shown below.
  • Only a very limited set of software might be installed on some Windows virtual machines.
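To see which Ubuntu LTS release the server you are logged on to runs, a standard command such as the one below should work (a sketch, assuming the usual Ubuntu tools are present):

    # show the installed Ubuntu release
    $ lsb_release -a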

Storage

  • If you need more than 1GB disk space, please contact support@math.uzh.ch.

  • Remote (NFS)
    • <1GB disk space: Personal Home directory

    • >1GB disk space: /compute/<account>

  • Local
    • /export/user/<account>
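The commands below give a quick overview of how much of that space you are using; they assume the listed directories already exist for your account:

    # usage of the remote (NFS) compute share and the local scratch directory
    $ du -sh /compute/<account> /export/user/<account>

    # free space on the filesystem holding your home directory
    $ df -h ~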

Programs / Software

  • All compute servers use the same applications and versions as installed on the thinlinc terminals.
  • Please report missing or outdated software, or other software requests, to support@math.uzh.ch.

  • There is no Intel C or Fortran Compiler.
  • If you need a program version other than the default: open a terminal and type the program name followed by two 'TAB' presses; this will show all available versions of the specified program (see the example below).
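A hypothetical terminal session illustrating the 'TAB' trick from the last point; the version suffixes shown here are placeholders, the real list depends on what is currently installed:

    $ matlab<TAB><TAB>
    matlab          matlab-R2013b   matlab-R2014a
    $ matlab-R2013b        # start a specific version instead of the default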

Parallel computing

  • Cluster software
    • Not installed / offered at I-MATH.
    • Various MPI packages and libraries are installed, so compiling and preparing programs to run later on a distributed compute cluster is possible.
  • Some programs, like matlab, offer limited built-in auto-parallelization.

  • All of the compute nodes are shared-memory machines. It is reasonable to start several programs in parallel, as long as the load stays less than or equal to the number of cores.

    • Determine the load by running the command uptime in a terminal. The last three numbers are the average number of jobs (= load) in the run queue over the last 1, 5 and 15 minutes; the sketch below shows how to match the number of jobs to the number of cores.

      $ uptime
       15:55:20 up 22 days, 24 min,  1 user,  load average: 11.97, 11.73, 11.62
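A minimal sketch of keeping the load within the core count when starting independent jobs by hand; './my_job' is a placeholder for your own program, and 'nproc' reports the number of cores of the node:

      # check the current load and the number of available cores first
      $ uptime
      $ nproc

      # start at most one background job per core, then wait for them to finish
      $ for i in $(seq 1 $(nproc)); do ./my_job "$i" & done
      $ wait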

PublicMathWiki: compute_resources (last edited 2024-01-24 14:32:47 by alrutz)