Compute Resources
- In general, the offers are limited to members of I-MATH and their students.
- Other UZH members are welcome to contact support@math.uzh.ch.
- First-time users: please contact support@math.uzh.ch first.
Reservations
- Short computations (<1 day) don't need a reservation.
- For longer computations (>1 day) a reservation is recommended.
- With reservation: if maintenance becomes necessary, we typically schedule it according to the needs of the user.
- Without reservation: we try to contact the user and wait several hours for an answer.
- General: if we see running computations, we always contact the user first (whether or not a reservation exists). If the user doesn't answer (weekend, holiday, ...), we decide on our own what to do.
Offers
- For small / ad hoc computations, just start the compute software via the thinlinc environment.
- For medium computations, please contact support@math.uzh.ch to ask for the best-fitting possibility and to reserve nodes.
- For large computations and professional support, please check http://www.s3it.uzh.ch/.
Resources at IMATH
|| host || CPU type || cores || RAM || local disk || date || purpose || CPU specs ||
|| asprey || 2 x Intel Xeon 4C E5-2643 3.30 GHz || 16 (HT enabled) || 128 GB || 73 GB || 12/2012 || student: matlab, mathematica, maple, R || ||
|| crous || 8 x Intel Xeon 10C E7-2850 2.0 GHz || 80 (HT disabled) || 2 TB || 300 GB || 05/2013 || research || ||
|| david || " || " || " || " || " || " || " ||
|| estonia0 || 2 x Intel Xeon 6C E5-2640 2.50 GHz || 12 (HT disabled) || 256 GB || 2 TB || 05/2013 || research || ||
|| estonia1 || " || " || " || " || " || " || " ||
|| estonia2 || " || " || " || " || " || " || " ||
|| estonia3 || " || " || " || " || " || " || " ||
|| georgia0 || " || " || " || " || " || " || " ||
|| georgia1 || " || " || " || " || " || " || " ||
|| georgia2 || " || " || " || " || " || " || " ||
|| georgia3 || " || " || " || " || " || " || " ||
|| iran0 || " || " || " || " || " || " || " ||
|| iran1 || " || " || " || " || " || " || " ||
|| iran2 || " || " || " || " || " || " || " ||
|| iran3 || " || " || " || " || " || " || " ||
|| jordan0 || " || " || " || " || " || professor & assistant: matlab, mathematica, maple, R || " ||
|| jordan1 || " || " || " || " || " || " || " ||
|| jordan2 || " || " || " || " || " || courses || " ||
|| jordan3 || " || " || " || " || " || courses || " ||
Concept
Access
- The compute nodes can be accessed via SSH only from inside I-MATH.
- Thinlinc: for matlab, mathematica, maple, R and rstudio, just start the application via the menu 'Applications > Science > ...' - you'll automatically be redirected to the appropriate compute server.
- For non-standard compute software, please first log on to the compute node via 'ssh', then start the program.
- SSH: log on to 'ssh.math.uzh.ch' and jump to the compute node via 'ssh' again.
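The two-hop login can be put into the SSH client configuration so that a single command reaches a node. A minimal sketch, assuming OpenSSH 7.3 or newer; the node 'estonia0' and account name 'myaccount' are placeholders, not prescribed values:

{{{
# ~/.ssh/config (sketch -- account name and target node are examples)
Host imath-gw
    HostName ssh.math.uzh.ch
    User myaccount

Host estonia0
    User myaccount
    ProxyJump imath-gw    # older OpenSSH: ProxyCommand ssh -W %h:%p imath-gw
}}}

With this in place, 'ssh estonia0' from outside I-MATH jumps through the login host automatically.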
Operating System
- Ubuntu LTS (Linux) on all compute servers.
- Very limited software may be installed on some Windows virtual machines.
Storage
- Remote (NFS)
  - personal home directory
  - /compute/<account>
- Local
  - /export/user/<account>
- If you need more disk space, please contact support@math.uzh.ch.
Programs / Software
- All compute servers use the same applications and versions as installed on the thinlinc terminals.
- Please report missing or outdated software, or wishes, to support@math.uzh.ch.
- There is no Intel C or Fortran compiler.
- If you need a program version other than the default: open a terminal and type the program name followed by two 'TAB' presses; this shows all available versions of the specified program.
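The same list can be obtained non-interactively; a sketch using the bash builtin 'compgen' (the 'matlab' prefix is just an example, and which version names it prints depends entirely on what is installed):

```shell
#!/bin/bash
# List every command whose name starts with a given prefix --
# the scriptable equivalent of typing the name plus TAB TAB.
prefix="matlab"                 # example prefix; any program name works
compgen -c "$prefix" | sort -u
```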
Parallel computing
- Cluster software
- Not installed / offered at I-MATH.
- Various MPI packages and libraries are installed. Compiling and preparing programs to run later on a distributed compute cluster is possible.
- Some programs, like matlab, offer limited built-in auto-parallelization.
- All compute nodes are shared-memory machines. It's reasonable to run programs in parallel as long as the load stays less than or equal to the number of cores.
- Determine the load with the command 'uptime' in a terminal. The last three numbers are the average number of jobs (= load) in the run queue over the last 1, 5 and 15 minutes.
{{{
$ uptime
 15:55:20 up 22 days, 24 min, 1 user, load average: 11.97, 11.73, 11.62
}}}
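The rule of thumb above - keep the load at or below the core count - can be checked before starting another job. A small sketch using only standard Linux tools ('nproc' and '/proc/loadavg'); the wording of the messages is of course an assumption:

```shell
#!/bin/bash
# Compare the 1-minute load average against the number of cores.
cores=$(nproc)                        # cores available on this node
load1=$(cut -d' ' -f1 /proc/loadavg)  # 1-minute load average
# Integer comparison: truncate the fractional part of the load.
if [ "${load1%.*}" -lt "$cores" ]; then
    echo "free capacity: load $load1 on $cores cores"
else
    echo "busy: load $load1 on $cores cores -- consider another node"
fi
```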
Limitations
- Memory
- Soft: 64GB
- Hard: unlimited
- Number of processes per user
- Soft: 2048
- Hard: 4096
- Change limitations
{{{
$ ulimit -m unlimited
$ ulimit -u 4096
}}}
- Be aware that a hard limit can only be increased once per session. If you try to increase it a second time, you'll get an error; log out and log in again to get another chance.
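Because of the once-per-session behaviour, it is safer to check the current value before touching the limit at all. A sketch that adjusts only the soft process limit (which, unlike the hard limit, may be changed repeatedly up to the hard limit); the target of 4096 mirrors the hard limit listed above:

```shell
#!/bin/bash
# Raise the soft process limit only when it is actually below the target,
# avoiding a failed attempt against the hard limit.
target=4096                     # hard limit from the list above
current=$(ulimit -Su)           # current soft limit for processes
if [ "$current" != "unlimited" ] && [ "$current" -lt "$target" ]; then
    if ulimit -Su "$target" 2>/dev/null; then
        echo "soft process limit raised to $target"
    else
        echo "could not raise limit (above the hard limit?)"
    fi
else
    echo "soft process limit is $current -- nothing to do"
fi
```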