<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.sternwarte.uni-erlangen.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Kreuzer</id>
	<title>Remeis-Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.sternwarte.uni-erlangen.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Kreuzer"/>
	<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php/Special:Contributions/Kreuzer"/>
	<updated>2026-04-09T14:18:16Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.7</generator>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=General_organisation_of_the_lab&amp;diff=1818</id>
		<title>General organisation of the lab</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=General_organisation_of_the_lab&amp;diff=1818"/>
		<updated>2019-02-21T12:03:43Z</updated>

		<summary type="html">&lt;p&gt;Kreuzer: /* Who is involved in which experiment? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:internal]]&lt;br /&gt;
This page is up to date for the Spring Lab 2019.&lt;br /&gt;
=== Who is involved in which experiment? ===&lt;br /&gt;
'''Azimuth''': Katya, Philipp (T.), Thomas, Ole &amp;lt;br/&amp;gt;&lt;br /&gt;
'''CCD''': Maria, Matthias, Dominic, Melanie, Katrin, Christian &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Error propagation''': Max, Basti &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Imaging''': Simon, Roberto &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Observing''': David &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Radio''': Ralf, Jonathan, (Jakob), Andrea, (Konstantin, Florian), Stefan &amp;lt;br/&amp;gt;&lt;br /&gt;
'''Spectroscopy''': Andreas, Matti, Markus D. &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Labtable ===&lt;br /&gt;
This is the link to our labtable: https://www.sternwarte.uni-erlangen.de/internal-area/labtable/ &amp;lt;br/&amp;gt;&lt;br /&gt;
Please insert the dates of both the experiment and the final protocol, as well as your scores, in this table. You can also leave comments. &amp;lt;br/&amp;gt;&lt;br /&gt;
This is a permanent link, so you might want to save it as a bookmark in your browser.&lt;br /&gt;
&lt;br /&gt;
=== Attendance ===&lt;br /&gt;
In order to have an overview of everyone's attendance during the lab, there is a Doodle poll where you have to sign in for the days you will be at the observatory. On the days marked with 'yes', you are expected to be in Bamberg. &amp;lt;br/&amp;gt;&lt;br /&gt;
First block: &amp;lt;br/&amp;gt;&lt;br /&gt;
Second block: &amp;lt;br/&amp;gt; &amp;lt;br/&amp;gt;&lt;br /&gt;
Additionally, it is mandatory for everyone to help during the observing night(s) in the garden. In order to have enough people around and to distribute this task fairly, you need to sign into additional Doodle polls, whose outcome Ingo uses to create a list of the night shifts for everyone. &amp;lt;br/&amp;gt;&lt;br /&gt;
The decision whether observing is planned for a given night will be made by the profs by 5 pm. &amp;lt;br/&amp;gt;&lt;br /&gt;
Night Doodle first block: &amp;lt;br/&amp;gt;&lt;br /&gt;
Night Doodle second block: &amp;lt;br/&amp;gt;&lt;/div&gt;</summary>
		<author><name>Kreuzer</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Slurm&amp;diff=1793</id>
		<title>Slurm</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Slurm&amp;diff=1793"/>
		<updated>2019-02-12T13:00:54Z</updated>

		<summary type="html">&lt;p&gt;Kreuzer: /* About */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Slurm]]&lt;br /&gt;
&lt;br /&gt;
= About =&lt;br /&gt;
&lt;br /&gt;
In order to spread the workload of scientific computations across our compute nodes, the resource manager SLURM is used.&lt;br /&gt;
&lt;br /&gt;
From [https://slurm.schedmd.com/overview.html official SLURM website]:&lt;br /&gt;
&lt;br /&gt;
''Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters. Slurm requires no kernel modifications for its operation and is relatively self-contained. As a cluster workload manager, Slurm has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.''&lt;br /&gt;
[[File:entities_slurm.gif|frame|Different entities of Slurm]]&lt;br /&gt;
From [https://slurm.schedmd.com/overview.html official SLURM website]:&lt;br /&gt;
&lt;br /&gt;
''SLURM manages the cluster in partitions, which are a set of compute nodes. Note, that partitions may overlap, e.g. one compute node may be in two or more partitions. A node is a physical computer which provides consumable resources: CPUs and Memory. A CPU does not necessarily have to be a physical processor but is more like a virtual CPU to run one single task on. A dual core with hyper threading technology, for instance, would show up as a node with 4 CPUs consisting of two cores with the capability of running two threads on each core. Physical memory is defined in MB.''&lt;br /&gt;
&lt;br /&gt;
The following partitions exist in the current setup:&lt;br /&gt;
&lt;br /&gt;
* remeis: default partition, all machines, timelimit: 7days&lt;br /&gt;
* erosita: only available for selected people involved in the project, timelimit: infinite&lt;br /&gt;
* debug: very high priority partition for software development, timelimit: 1h&lt;br /&gt;
&lt;br /&gt;
= Quick users tutorial =&lt;br /&gt;
&lt;br /&gt;
This tutorial will give you a quick overview over the most important commands. The [https://slurm.schedmd.com/overview.html official SLURM website] provides more detailed information.&lt;br /&gt;
&lt;br /&gt;
== Get cluster status ==&lt;br /&gt;
In order to get an overview of the cluster, type&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  sinfo&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This command offers a variety of options for formatting the output. In order to get a detailed output focusing on the nodes rather than the partitions, type&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  sinfo -N -l&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
An overview over the available partitions can be shown with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  scontrol show partition&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and the currently queued and running jobs can be displayed using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  squeue&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Executing a job in real-time ==&lt;br /&gt;
In order to allocate, for instance, 1 CPU and 100 MB of memory for real-time work, type &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  salloc --mem-per-cpu=100 -n1 bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Your bash session is now connected to the allocated compute nodes. In order to execute a script, use the srun command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  srun my_script.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Each srun command you execute now is interpreted as a job step. The currently running job steps of submitted jobs can be displayed using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  squeue -s&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can also start a job by simply using the srun command and specifying your requirements. In the following case, srun will allocate 100 MB of memory and 1 CPU for 1 task, only for the duration of the execution.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  srun --mem-per-cpu=100 -n1 my_script.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If resources are available your job will start immediately.&lt;br /&gt;
&lt;br /&gt;
  &lt;br /&gt;
== Submitting a job for later execution ==&lt;br /&gt;
&lt;br /&gt;
The most convenient way is to submit a job script for later execution. The top part of the script contains scheduling information for SLURM; the more information you provide here, the better.&lt;br /&gt;
&lt;br /&gt;
First of all, a job name is specified, followed by a maximum time. If your job exceeds this time, it will be killed. However, do not overestimate too much because short jobs might start earlier. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;. The output file is set to be test-job(jobID).out and the partition to run the job on is &amp;quot;remeis&amp;quot;.&lt;br /&gt;
The sbatch script itself will not initiate any job but only allocate the resources. The ''ntasks'' and ''mem-per-cpu'' options advise the SLURM controller that job steps run within the allocation will launch at most that number of tasks, and to provide sufficient resources for them.&lt;br /&gt;
&lt;br /&gt;
The ''srun'' commands in the job script launch the job steps. The example below thus consists of two job steps. Each of the ''srun'' commands may have its own memory requirements and may also spawn fewer tasks than given in the header of the script file. However, the values in the header may never be exceeded!&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  #SBATCH --job-name my_first_job&lt;br /&gt;
  #SBATCH --time 05:00&lt;br /&gt;
  #SBATCH --output test-job_%A_%a.out&lt;br /&gt;
  #SBATCH --error test-job_%A_%a.err&lt;br /&gt;
  #SBATCH --partition=remeis&lt;br /&gt;
  #SBATCH --ntasks=4&lt;br /&gt;
  #SBATCH --mem-per-cpu=100&lt;br /&gt;
  srun -l my_script1.sh&lt;br /&gt;
  srun -l my_script2.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The ''-l'' parameter of ''srun'' will print the task number in front of each line of stdout/stderr. You can submit this script by saving it in a file, e.g. ''my_first_job.slurm'', and submitting it using &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  cepheus:~&amp;gt; sbatch my_first_job.slurm &lt;br /&gt;
  Submitted batch job 144&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can check the estimated starting time of your job using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  squeue --start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Submitting a job array ==&lt;br /&gt;
&lt;br /&gt;
In order to submit an array of jobs with the same requirements you have to modify your script file. &lt;br /&gt;
The following script is going to spawn 4 jobs, each consisting of one srun command. Note the presence of the new environment variable ''${SLURM_ARRAY_TASK_ID}'', which might be useful for your work. In this example we start an ISIS script with different input values. You can also simply use different scripts.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  #!/bin/bash                     &lt;br /&gt;
  #SBATCH --partition remeis&lt;br /&gt;
  #SBATCH --job-name important_job                                                                   &lt;br /&gt;
  #SBATCH --ntasks=1&lt;br /&gt;
  #SBATCH --time 00:05:00                                                         &lt;br /&gt;
  #SBATCH --output /home/dauser/tmp/jobscript_beta.%A_%a.out          &lt;br /&gt;
  #SBATCH --error /home/dauser/tmp/jobscript_beta.%A_%a.err          &lt;br /&gt;
  #SBATCH --array 0-3&lt;br /&gt;
  &lt;br /&gt;
  cd /home/user/script/&lt;br /&gt;
  &lt;br /&gt;
  COMMAND[0]=&amp;quot;./sim_script.sl 0.00&amp;quot;                                       &lt;br /&gt;
  COMMAND[1]=&amp;quot;./sim_script.sl 0.10&amp;quot;                                       &lt;br /&gt;
  COMMAND[2]=&amp;quot;./sim_script.sl 0.20&amp;quot;                                       &lt;br /&gt;
  COMMAND[3]=&amp;quot;./sim_script.sl 0.30&amp;quot;                                       &lt;br /&gt;
  &lt;br /&gt;
  srun /usr/bin/nice -n +19 ${COMMAND[$SLURM_ARRAY_TASK_ID]} &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As above, this code can be saved in a file, e.g. ''job.slurm'', and executed using &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  sbatch job.slurm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
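The array indexing in the script above can be sanity-checked locally by setting ''SLURM_ARRAY_TASK_ID'' by hand; in a real job Slurm exports it as 0..3 for ''--array 0-3''. This is an illustration only, not a Slurm submission:

```shell
# Simulate the lookup that a single array task performs.
COMMAND[0]="./sim_script.sl 0.00"
COMMAND[1]="./sim_script.sl 0.10"
COMMAND[2]="./sim_script.sl 0.20"
COMMAND[3]="./sim_script.sl 0.30"

SLURM_ARRAY_TASK_ID=2   # pretend this shell is array task 2
echo "${COMMAND[$SLURM_ARRAY_TASK_ID]}"
```

Task 2 resolves to ''./sim_script.sl 0.20'', so each array task runs exactly one of the four commands.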
&lt;br /&gt;
If you need a specific machine to run your job on, you can use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  #SBATCH --nodelist=leo,draco&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
SLURM will then only allocate resources on the given nodes. However, if the nodes in 'nodelist' cannot fulfill the job requirements, SLURM will also allocate other machines.&lt;br /&gt;
If you have a job with high I/O and/or traffic on the network, you can limit the number of jobs running simultaneously (to 2 in this example) by&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  #SBATCH --array 0-3%2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you would like to cancel jobs 1, 2 and 3 from job array 20 use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  scancel 20_[1-3]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that you might have to escape the brackets when using the above command, e.g.,&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  tcsh:~&amp;gt; scancel 20_\[1-3\]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
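The reason for the escaping is ordinary shell globbing: ''[1-3]'' is a filename pattern, and if a matching file happens to exist in the current directory, the shell rewrites the argument before ''scancel'' ever sees it. This can be demonstrated without Slurm:

```shell
# A file matching the pattern makes the unquoted form expand.
demo=$(mktemp -d)
cd "$demo"
touch 20_2
echo 20_[1-3]     # the glob matches the file: prints 20_2
echo '20_[1-3]'   # the quoted form stays literal
```

In bash a non-matching glob is passed through literally, which is why the unescaped command often works anyway; quoting or escaping makes it reliable.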
If you want to cancel the whole array, ''scancel'' works as usual&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  scancel 20&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that there is also the option to modify the requirements of single jobs later using ''scontrol update job=101_1 ...''.&lt;br /&gt;
&lt;br /&gt;
If you have jobs which depend on the results of others, or if you want a more detailed description of job arrays, see the official SLURM manual: [https://slurm.schedmd.com/job_array.html job array support]&lt;br /&gt;
&lt;br /&gt;
== Submitting a job array where each command needs to change into a different directory ==&lt;br /&gt;
&lt;br /&gt;
In order to allow each command of the job array to change into an individual directory (as opposed to all into the same directory as above), modify the script as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  #!/bin/bash                     &lt;br /&gt;
  #SBATCH --partition remeis&lt;br /&gt;
  #SBATCH --job-name important_job                                                                   &lt;br /&gt;
  #SBATCH --ntasks=1&lt;br /&gt;
  #SBATCH --time 00:05:00                                                         &lt;br /&gt;
  #SBATCH --output /home/dauser/tmp/jobscript_beta.%A_%a.out          &lt;br /&gt;
  #SBATCH --error /home/dauser/tmp/jobscript_beta.%A_%a.err          &lt;br /&gt;
  #SBATCH --array 0-3&lt;br /&gt;
  &lt;br /&gt;
  DIR[0]=&amp;quot;/home/user/dir1&amp;quot;&lt;br /&gt;
  DIR[1]=&amp;quot;/home/user/dir2&amp;quot;&lt;br /&gt;
  DIR[2]=&amp;quot;/userdata/user/dir3&amp;quot;&lt;br /&gt;
  DIR[3]=&amp;quot;/userdata/user/dir4&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  cd ${DIR[$SLURM_ARRAY_TASK_ID]}&lt;br /&gt;
  &lt;br /&gt;
  COMMAND[0]=&amp;quot;./sim_script.sl 0.00&amp;quot;                                       &lt;br /&gt;
  COMMAND[1]=&amp;quot;./sim_script.sl 0.10&amp;quot;                                       &lt;br /&gt;
  COMMAND[2]=&amp;quot;./sim_script.sl 0.20&amp;quot;                                       &lt;br /&gt;
  COMMAND[3]=&amp;quot;./sim_script.sl 0.30&amp;quot;                                       &lt;br /&gt;
  &lt;br /&gt;
  srun /usr/bin/nice -n +19 ${COMMAND[$SLURM_ARRAY_TASK_ID]} &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This also works with paths relative to the directory from which the Slurm script was submitted.&lt;br /&gt;
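A cheap safeguard is to resolve the per-task directory first and only change into it if it actually exists, instead of letting ''cd'' fail and the commands run in the wrong place. The directory names below are the hypothetical ones from the script above:

```shell
# Resolve the working directory for this array task and verify it
# before changing into it.
DIR[0]="/home/user/dir1"
DIR[1]="/home/user/dir2"
task=${SLURM_ARRAY_TASK_ID:-0}   # set by Slurm in a real array job
target=${DIR[$task]}
if [ -d "$target" ]; then
  cd "$target"
else
  echo "task $task: directory $target is missing"
fi
```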
&lt;br /&gt;
&lt;br /&gt;
== Submitting a job array with varying number of tasks ==&lt;br /&gt;
&lt;br /&gt;
This is not really a job &amp;quot;array&amp;quot;, but to execute multiple jobs with different numbers of tasks, one can use multiple&lt;br /&gt;
srun calls chained with an '&amp;amp;'. This submits the jobs at once but allows the job parameters to be specified individually&lt;br /&gt;
for each job.&lt;br /&gt;
&lt;br /&gt;
Example: Simultaneous fit of multiple datasets with different functions&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line='line'&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name my_simultaneous_fit_%n&lt;br /&gt;
#SBATCH --time 05:00&lt;br /&gt;
#SBATCH --output test-job_%A_%a.out&lt;br /&gt;
#SBATCH --error test-job_%A_%a.err&lt;br /&gt;
#SBATCH --partition=remeis&lt;br /&gt;
#SBATCH --mem-per-cpu=100&lt;br /&gt;
#SBATCH --ntasks=6&lt;br /&gt;
srun -l --ntasks=2 my_complicated_fit.sh 2 &amp;amp; # my_complicated_fit fits 2 line centers -&amp;gt; needs 2 tasks&lt;br /&gt;
srun -l --ntasks=4 my_complicated_fit.sh 4 &amp;amp; # my_complicated_fit fits 4 line centers -&amp;gt; needs 4 tasks&lt;br /&gt;
wait # wait for both steps to finish before the job exits&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Graphical jobs (srun.x11) ==&lt;br /&gt;
Not all applications run purely on the command line. Slurm does not support graphical applications natively, but there is a wrapper script available which allocates the resources on the cluster and then provides a [[screen]] session inside a running [[SSH|SSH session]] to the host on which the resources have been allocated. For example&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[12:06]weber@lynx:~$ srun.x11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
results in a new shell:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[12:06]weber@messier15:~$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which forwards the window of any graphical program you start. For example,&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[12:06]weber@messier15:~$ kate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
would open the text editor [https://de.wikipedia.org/wiki/Kate_(KDE) kate]. However, this only uses the standard resources set for the remeis partition. If you have other requirements, you can specify these in exactly the same way as for ''srun'':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[12:06]weber@lynx:~$ srun.x11 --mem=2G&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
would allocate 2GB of memory for the application.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using the ''erosita'' partition (serpens)==&lt;br /&gt;
&lt;br /&gt;
If you are allowed to use the erosita partition, contact a SLURM admin (e.g. [mailto:simon.kreuzer@fau.de simon.kreuzer@fau.de]). Once your username has been added to the list of privileged users, you just have to add &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  #SBATCH --partition=erosita&lt;br /&gt;
  #SBATCH --account=erosita&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to your jobfiles.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Other useful commands ==&lt;br /&gt;
&lt;br /&gt;
* ''sstat'' Real-time status information of your running jobs&lt;br /&gt;
&lt;br /&gt;
* ''sattach &amp;lt;jobid.stepid&amp;gt;'' Attach to the standard I/O of one of your running jobs &lt;br /&gt;
&lt;br /&gt;
* ''scancel [OPTIONS...] [job_id[_array_id][.step_id]] [job_id[_array_id][.step_id]...]'' Cancel the execution of one of your job arrays/jobs/job steps.&lt;br /&gt;
&lt;br /&gt;
* ''scontrol'' Administration tool; you can use this, for example, to modify the requirements of your jobs: show your jobs with ''show jobs'' or update the time limit with ''update JobId= TimeLimit=2''.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* ''smap'' graphically view information about Slurm jobs, partitions, and configuration parameters.&lt;br /&gt;
&lt;br /&gt;
* ''sview'' graphical user interface for those who prefer clicking over typing. X-Server required. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= SLURM and MPI =&lt;br /&gt;
== About MPI ==&lt;br /&gt;
MPI (the Message Passing Interface) makes it possible to run parallel processes on CPUs of different hosts. To do so, it uses TCP packets to communicate via the normal network connection. Some tasks can profit a lot from using more cores for the computation.&lt;br /&gt;
At Remeis, MPICH2 is used for the initialisation of MPI tasks, which is well supported within Slurm. The process manager is called '''pmi2''' and is set as the default for srun. If an older MPI process manager is needed, for example for older MPI applications used with '''torque''', it can be set with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  #SBATCH --mpi=&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
in the submission script.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  srun --mpi=list&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
provides a list of supported MPI process managers. &lt;br /&gt;
&lt;br /&gt;
The implementation of MPI for SLang/ISIS is called '''SLMPI'''.&lt;br /&gt;
&lt;br /&gt;
== Best practice for MPI tasks ==&lt;br /&gt;
&lt;br /&gt;
The usage of MPI might cause continuously high network traffic, especially on the host which holds the master process. Please consider this when deciding which nodes are used for the job. It's a good idea to provide servers (e.g. leo or lupus) via the ''--nodelist='' option, one of which is then used to hold the master process, since nobody is sitting in front of a server trying to use a browser. Additional nodes are allocated automatically by Slurm if required to fit the ''--ntasks'' / ''-n'' option.&lt;br /&gt;
&lt;br /&gt;
MPI jobs depend on all allocated nodes being up and running properly, so this is a good opportunity for a reminder: shutting down or rebooting PCs on your own, without permission, can abort a whole MPI job.&lt;br /&gt;
&lt;br /&gt;
== Requirements and Tips ==&lt;br /&gt;
To use MPI, the application or function in question obviously has to support MPI. Examples range from programs written in C using some MPI features and compiled with the ''mpicc'' compiler to common ISIS functions such as ''mpi_emcee'' or ''mpi_fit_pars''.&lt;br /&gt;
&lt;br /&gt;
Keep in mind that everything in the compiled programs/scripts which is not an MPI-compatible function is executed independently on each node. For example, in ISIS with ''-n 20'':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  fit_counts;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
would fit the defined function to the dataset 20 times at once. That's not very helpful, so think about which tasks should be performed in the actual MPI process. Special care has to be taken if something has to be saved to a file. Consider:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  save_par(&amp;quot;test.par&amp;quot;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
with ''-n 20''. This would save the current fit parameters to '''test.par''' in the working directory 20 times at exactly the same time. This might be helpful if the file is needed on the scratch disk of each node, but doing this on, for example, ''/userdata'' can cause serious trouble. The function ''mpi_master_only'' can be used to perform a user-defined task only once within an MPI job. The best way is to submit only those parts as an MPI job to Slurm which contain actual MPI functions. If models are used in ISIS which output something to stdout or stderr while loading, these messages are also generated 20 times, since the model is loaded in each process individually.&lt;br /&gt;
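For job steps that are plain shell scripts rather than ISIS code, the same "run it only once" idea can be sketched with the ''SLURM_PROCID'' variable, which Slurm exports as 0..ntasks-1 for the tasks of a step. This is a sketch of the concept, not the ISIS ''mpi_master_only'' mechanism itself:

```shell
# Run a command only in task 0 of a parallel step, so that e.g. a
# parameter file is written once instead of ntasks times.
run_once() {
  if [ "${SLURM_PROCID:-0}" -eq 0 ]; then
    "$@"
  fi
}

SLURM_PROCID=0 run_once echo "task 0 writes the file"
SLURM_PROCID=5 run_once echo "other tasks stay silent"
```

Only the first call produces output; every task with a non-zero ''SLURM_PROCID'' skips the guarded command.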
&lt;br /&gt;
== Usage ==&lt;br /&gt;
If the job is a valid MPI process then the submission works exactly like for any other job:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  #SBATCH --job-name my_first_mpi_job&lt;br /&gt;
  #SBATCH ...&lt;br /&gt;
  #SBATCH --ntasks=20&lt;br /&gt;
  cd /my/working/dir&lt;br /&gt;
  srun /usr/bin/nice -n +15 ./my_mpi_script&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It might be necessary to set a higher memory limit than for the corresponding non-MPI job, since some applications try to limit the network traffic by copying the required data to each node up front.&lt;br /&gt;
&lt;br /&gt;
Also, if the number of child processes has to be specified in the application itself, make sure to set it to the same value as the ''--ntasks'' / ''-n'' option in the submission. An example would be the ''num_slaves'' qualifier in ''mpi_emcee''.&lt;br /&gt;
&lt;br /&gt;
Note that the ''srun'' command does not involve ''mpiexec'' or ''mpirun'', which were used in older versions of MPI to launch the processes. The process manager ''pmi2'' is built into Slurm and allows Slurm itself to initialize the network communication with the ''srun'' command alone.&lt;br /&gt;
&lt;br /&gt;
Of course it's also possible to run an MPI process directly from the command line. As an example, let's have a look at the calculation of pi with the MPI program ''cpi''. The program comes with the MPICH2 source code and is compiled by the ''check'' rule. It is located in&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /data/system/software/mpich/mpich-3.2/examples&lt;br /&gt;
&amp;lt;/pre&amp;gt;  &lt;br /&gt;
To run the calculation in 10 parallel processes directly from the commandline use:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  [1:11]weber@lynx:/data/system/software/mpich/mpich-3.2/examples&amp;gt; srun -n 10 ./cpi&lt;br /&gt;
  Process 0 of 10 is on aquarius&lt;br /&gt;
  Process 1 of 10 is on ara&lt;br /&gt;
  Process 6 of 10 is on asterion&lt;br /&gt;
  Process 2 of 10 is on ara&lt;br /&gt;
  Process 8 of 10 is on asterion&lt;br /&gt;
  Process 7 of 10 is on asterion&lt;br /&gt;
  Process 3 of 10 is on aranea&lt;br /&gt;
  Process 5 of 10 is on aranea&lt;br /&gt;
  Process 4 of 10 is on aranea&lt;br /&gt;
  Process 9 of 10 is on cancer&lt;br /&gt;
  pi is approximately 3.1415926544231256, Error is 0.0000000008333325&lt;br /&gt;
  wall clock time = 0.010601&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As we can see, Slurm launched 10 processes distributed across aquarius, ara, asterion, aranea and cancer. Keep in mind that running MPI interactively doesn't really make sense; the best way is to write a submission script as explained above and let Slurm handle the initialisation.&lt;/div&gt;</summary>
		<author><name>Kreuzer</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Slurm&amp;diff=1792</id>
		<title>Slurm</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Slurm&amp;diff=1792"/>
		<updated>2019-02-12T13:00:37Z</updated>

		<summary type="html">&lt;p&gt;Kreuzer: /* About */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Slurm]]&lt;br /&gt;
&lt;br /&gt;
= About =&lt;br /&gt;
&lt;br /&gt;
In order to spread the workload of scientific computations across our compute nodes, the resource manager SLURM is used.&lt;br /&gt;
&lt;br /&gt;
From [https://slurm.schedmd.com/overview.html official SLURM website]:&lt;br /&gt;
&lt;br /&gt;
''Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters. Slurm requires no kernel modifications for its operation and is relatively self-contained. As a cluster workload manager, Slurm has three key functions. First, it allocates exclusive and/or non-exclusive access to resources (compute nodes) to users for some duration of time so they can perform work. Second, it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes. Finally, it arbitrates contention for resources by managing a queue of pending work.''&lt;br /&gt;
[[File:entities_slurm.gif|frame|Different entities of Slurm]]&lt;br /&gt;
From [https://slurm.schedmd.com/overview.html official SLURM website]:&lt;br /&gt;
&lt;br /&gt;
''SLURM manages the cluster in partitions, which are a set of compute nodes. Note, that partitions may overlap, e.g. one compute node may be in two or more partitions. A node is a physical computer which provides consumable resources: CPUs and Memory. A CPU does not necessarily have to be a physical processor but is more like a virtual CPU to run one single task on. A dual core with hyper threading technology, for instance, would show up as a node with 4 CPUs consisting of two cores with the capability of running two threads on each core. Physical memory is defined in MB.''&lt;br /&gt;
&lt;br /&gt;
The following partitions exist in the current setup:&lt;br /&gt;
&lt;br /&gt;
* remeis: default partition, all machines, timelimit: 7days&lt;br /&gt;
* erosita: only available for selected people involved in the project, timelimit: infinite&lt;br /&gt;
* power: only the newest machines and servers, higher priority than 'remeis' (e.g. if you submit via power you will get the power machines as soon as possible and not compete with 'remeis' jobs but only other jobs submitted to 'power'), timelimit: 1day&lt;br /&gt;
* messier: only messier cluster, also higher priority than 'remeis', timelimit: 7 days &lt;br /&gt;
* debug: very high priority partition for software development, timelimit: 1h&lt;br /&gt;
&lt;br /&gt;
= Quick users tutorial =&lt;br /&gt;
&lt;br /&gt;
This tutorial will give you a quick overview over the most important commands. The [https://slurm.schedmd.com/overview.html official SLURM website] provides more detailed information.&lt;br /&gt;
&lt;br /&gt;
== Get cluster status ==&lt;br /&gt;
In order to get an overview of the cluster, type&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  sinfo&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This command offers a variety of options for formatting the output. In order to get a detailed output focusing on the nodes rather than the partitions, type&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  sinfo -N -l&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
An overview over the available partitions can be shown with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  scontrol show partition&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and the currently queued and running jobs can be displayed using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  squeue&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Executing a job in real-time ==&lt;br /&gt;
In order to allocate, for instance, 1 CPU and 100 MB of memory for real-time work, type &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  salloc --mem-per-cpu=100 -n1 bash&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Your bash session is now connected to the allocated compute nodes. In order to execute a script, use the srun command&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  srun my_script.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Each srun command you execute now is interpreted as a job step. The currently running job steps of submitted jobs can be displayed using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  squeue -s&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
You can also start a job by simply using the srun command and specifying your requirements. In the following case, srun will allocate 100 MB of memory and 1 CPU for 1 task, only for the duration of the execution.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  srun --mem-per-cpu=100 -n1 my_script.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If resources are available your job will start immediately.&lt;br /&gt;
&lt;br /&gt;
  &lt;br /&gt;
== Submitting a job for later execution ==&lt;br /&gt;
&lt;br /&gt;
The most convenient way is to submit a job script for later execution. The top part of the script contains scheduling information for SLURM; the more information you provide here, the better.&lt;br /&gt;
&lt;br /&gt;
First of all, a job name is specified, followed by a maximum time. If your job exceeds this time, it will be killed. However, do not overestimate too much because short jobs might start earlier. Acceptable time formats include &amp;quot;minutes&amp;quot;, &amp;quot;minutes:seconds&amp;quot;, &amp;quot;hours:minutes:seconds&amp;quot;, &amp;quot;days-hours&amp;quot;, &amp;quot;days-hours:minutes&amp;quot; and &amp;quot;days-hours:minutes:seconds&amp;quot;. The output file is set to be test-job(jobID).out and the partition to run the job on is &amp;quot;remeis&amp;quot;.&lt;br /&gt;
The sbatch script itself does not initiate any job but only allocates the resources. The ''ntasks'' and ''mem-per-cpu'' options advise the SLURM controller that the job steps run within the allocation will launch at most that number of tasks, and to provide sufficient resources for them.&lt;br /&gt;
&lt;br /&gt;
The ''srun'' commands in the job script launch the job steps. The example below thus consists of two job steps. Each ''srun'' command may have its own memory requirements and may also spawn fewer tasks than given in the header of the script file. However, the values in the header must never be exceeded!&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  #SBATCH --job-name my_first_job&lt;br /&gt;
  #SBATCH --time 05:00&lt;br /&gt;
  #SBATCH --output test-job_%A_%a.out&lt;br /&gt;
  #SBATCH --error test-job_%A_%a.err&lt;br /&gt;
  #SBATCH --partition=remeis&lt;br /&gt;
  #SBATCH --ntasks=4&lt;br /&gt;
  #SBATCH --mem-per-cpu=100&lt;br /&gt;
  srun -l my_script1.sh&lt;br /&gt;
  srun -l my_script2.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The ''-l'' parameter of ''srun'' prints the task number at the beginning of each line of stdout/stderr. You can submit this script by saving it in a file, e.g. ''my_first_job.slurm'', and submitting it using &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  cepheus:~&amp;gt; sbatch my_first_job.slurm &lt;br /&gt;
  Submitted batch job 144&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can check the estimated starting time of your job using&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  squeue --start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
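When wrapping sbatch in shell scripts, it is handy to capture the job ID from the &amp;quot;Submitted batch job&amp;quot; message. A minimal sketch, using a canned sample line instead of a real submission so it runs anywhere:&lt;br /&gt;

```shell
#!/bin/bash
# Parse the job ID out of sbatch's confirmation message. A canned sample
# line stands in for a real sbatch call here.
sample="Submitted batch job 144"
job_id=${sample##* }      # keep only the text after the last space
echo "$job_id"            # prints 144
```

On a real cluster, ''sbatch --parsable my_first_job.slurm'' prints the bare job ID directly, which avoids the parsing step altogether.&lt;br /&gt;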
&lt;br /&gt;
== Submitting a job array ==&lt;br /&gt;
&lt;br /&gt;
In order to submit an array of jobs with the same requirements, you have to modify your script file.&lt;br /&gt;
The script below spawns 4 jobs, each consisting of one srun command. Note the new environment variable ''${SLURM_ARRAY_TASK_ID}'', which might be useful for your work. In this example we start an ISIS script with different input values. You can also simply use different scripts.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  #!/bin/bash                     &lt;br /&gt;
  #SBATCH --partition remeis&lt;br /&gt;
  #SBATCH --job-name important_job                                                                   &lt;br /&gt;
  #SBATCH --ntasks=1&lt;br /&gt;
  #SBATCH --time 00:05:00                                                         &lt;br /&gt;
  #SBATCH --output /home/dauser/tmp/jobscript_beta.%A_%a.out          &lt;br /&gt;
  #SBATCH --error /home/dauser/tmp/jobscript_beta.%A_%a.err          &lt;br /&gt;
  #SBATCH --array 0-3&lt;br /&gt;
  &lt;br /&gt;
  cd /home/user/script/&lt;br /&gt;
  &lt;br /&gt;
  COMMAND[0]=&amp;quot;./sim_script.sl 0.00&amp;quot;                                       &lt;br /&gt;
  COMMAND[1]=&amp;quot;./sim_script.sl 0.10&amp;quot;                                       &lt;br /&gt;
  COMMAND[2]=&amp;quot;./sim_script.sl 0.20&amp;quot;                                       &lt;br /&gt;
  COMMAND[3]=&amp;quot;./sim_script.sl 0.30&amp;quot;                                       &lt;br /&gt;
  &lt;br /&gt;
  srun /usr/bin/nice -n +19 ${COMMAND[$SLURM_ARRAY_TASK_ID]} &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As above, this code can be saved in a file, e.g. ''job.slurm'', and executed using &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  sbatch job.slurm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you need specific machines to run your job on, you can use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  #SBATCH --nodelist=leo,draco&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
SLURM will then allocate resources only on the given nodes. However, if the nodes in ''nodelist'' cannot fulfill the job requirements, SLURM will also allocate other machines.&lt;br /&gt;
&lt;br /&gt;
If you have a job with high I/O and/or heavy traffic on the network, you can limit the number of jobs running simultaneously (to 2 in this example) with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  #SBATCH --array 0-3%2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you would like to cancel jobs 1, 2 and 3 from job array 20, use&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  scancel 20_[1-3]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that you might have to escape the brackets when using the above command, e.g.,&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  tcsh:~&amp;gt; scancel 20_\[1-3\]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you want to cancel the whole array, ''scancel'' works as usual&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  scancel 20&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that you can also modify the requirements of individual jobs later, using ''scontrol update job=101_1 ...''.&lt;br /&gt;
&lt;br /&gt;
If you have jobs that depend on the results of others, or if you want a more detailed description of job arrays, see the official SLURM manual: [https://slurm.schedmd.com/job_array.html]&lt;br /&gt;
&lt;br /&gt;
== Submitting a job array where each command needs to change into a different directory ==&lt;br /&gt;
&lt;br /&gt;
In order to allow each command of the job array to change into an individual directory (as opposed to all into the same directory as above), modify the script as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  #!/bin/bash                     &lt;br /&gt;
  #SBATCH --partition remeis&lt;br /&gt;
  #SBATCH --job-name important_job                                                                   &lt;br /&gt;
  #SBATCH --ntasks=1&lt;br /&gt;
  #SBATCH --time 00:05:00                                                         &lt;br /&gt;
  #SBATCH --output /home/dauser/tmp/jobscript_beta.%A_%a.out          &lt;br /&gt;
  #SBATCH --error /home/dauser/tmp/jobscript_beta.%A_%a.err          &lt;br /&gt;
  #SBATCH --array 0-3&lt;br /&gt;
  &lt;br /&gt;
  DIR[0]=&amp;quot;/home/user/dir1&amp;quot;&lt;br /&gt;
  DIR[1]=&amp;quot;/home/user/dir2&amp;quot;&lt;br /&gt;
  DIR[2]=&amp;quot;/userdata/user/dir3&amp;quot;&lt;br /&gt;
  DIR[3]=&amp;quot;/userdata/user/dir4&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  cd ${DIR[$SLURM_ARRAY_TASK_ID]}&lt;br /&gt;
  &lt;br /&gt;
  COMMAND[0]=&amp;quot;./sim_script.sl 0.00&amp;quot;                                       &lt;br /&gt;
  COMMAND[1]=&amp;quot;./sim_script.sl 0.10&amp;quot;                                       &lt;br /&gt;
  COMMAND[2]=&amp;quot;./sim_script.sl 0.20&amp;quot;                                       &lt;br /&gt;
  COMMAND[3]=&amp;quot;./sim_script.sl 0.30&amp;quot;                                       &lt;br /&gt;
  &lt;br /&gt;
  srun /usr/bin/nice -n +19 ${COMMAND[$SLURM_ARRAY_TASK_ID]} &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This also works with paths relative to the directory where the slurm script was submitted.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Submitting a job array with varying number of tasks ==&lt;br /&gt;
&lt;br /&gt;
This is not really a job &amp;quot;array&amp;quot;, but to execute multiple jobs with different numbers of tasks, one can chain multiple&lt;br /&gt;
srun calls with '&amp;amp;'. This submits the job steps at once while allowing the parameters to be specified individually&lt;br /&gt;
for each step. Note that srun options have to be given before the script name; otherwise they are passed to the script itself.&lt;br /&gt;
&lt;br /&gt;
Example: Simultaneous fit of multiple datasets with different functions&lt;br /&gt;
&lt;br /&gt;
&amp;lt;syntaxhighlight lang=&amp;quot;bash&amp;quot; line='line'&amp;gt;&lt;br /&gt;
#!/bin/bash&lt;br /&gt;
#SBATCH --job-name my_simultaneous_fit&lt;br /&gt;
#SBATCH --time 05:00&lt;br /&gt;
#SBATCH --output test-job_%A_%a.out&lt;br /&gt;
#SBATCH --error test-job_%A_%a.err&lt;br /&gt;
#SBATCH --partition=remeis&lt;br /&gt;
#SBATCH --mem-per-cpu=100&lt;br /&gt;
#SBATCH --ntasks=6&lt;br /&gt;
srun -l --ntasks=2 my_complicated_fit.sh 2 &amp;amp; # my_complicated_fit fits 2 line centers -&amp;gt; needs 2 tasks&lt;br /&gt;
srun -l --ntasks=4 my_complicated_fit.sh 4   # my_complicated_fit fits 4 line centers -&amp;gt; needs 4 tasks&lt;br /&gt;
wait # do not exit before the backgrounded step has finished as well&lt;br /&gt;
&amp;lt;/syntaxhighlight&amp;gt;&lt;br /&gt;
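The '&amp;amp;' chaining behaves like ordinary shell job control: the first step runs in the background, the second in the foreground, and a final ''wait'' keeps the script alive until both have finished. A local sketch, with ''sleep'' standing in for the fits:&lt;br /&gt;

```shell
#!/bin/bash
# Two "job steps" started together: the first backgrounded with '&',
# the second in the foreground, then wait for both to finish.
start=$SECONDS
sleep 2 &      # stands in for the first, backgrounded srun call
sleep 2        # stands in for the second, foreground srun call
wait           # without this, the script could exit before the first step ends
echo $(( SECONDS - start ))   # roughly 2, not 4: the steps overlapped
```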
&lt;br /&gt;
== Graphical jobs (srun.x11) ==&lt;br /&gt;
Not all applications run on the command line alone. Slurm does not support graphical applications natively, but a wrapper script is available that allocates the resources on the cluster and then provides a [[screen]] session inside a running [[SSH|SSH session]] to the host on which the resources have been allocated. For example,&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[12:06]weber@lynx:~$ srun.x11&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
results in a new shell:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[12:06]weber@messier15:~$&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
which forwards the window of any graphical program you start. For example,&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[12:06]weber@messier15:~$ kate&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
would open the text editor [https://de.wikipedia.org/wiki/Kate_(KDE) kate]. However, this only uses the standard resources set for the remeis partition. If you have other requirements, you can specify them in exactly the same way as for ''srun'':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[12:06]weber@lynx:~$ srun.x11 --mem=2G&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
would allocate 2 GB of memory for the application.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Using the ''erosita'' partition (serpens)==&lt;br /&gt;
&lt;br /&gt;
If you are allowed to use the eRosita partition, contact a SLURM admin (e.g. [mailto:simon.kreuzer@fau.de simon.kreuzer@fau.de]). Once your username has been added to the list of privileged users, you just have to add &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  #SBATCH --partition=erosita&lt;br /&gt;
  #SBATCH --account=erosita&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
to your jobfiles.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Other useful commands ==&lt;br /&gt;
&lt;br /&gt;
* ''sstat'' Real-time status information of your running jobs&lt;br /&gt;
&lt;br /&gt;
* ''sattach &amp;lt;jobid.stepid&amp;gt;'' Attach to the standard I/O of one of your running jobs &lt;br /&gt;
&lt;br /&gt;
* ''scancel [OPTIONS...] [job_id[_array_id][.step_id]] [job_id[_array_id][.step_id]...]'' Cancel the execution of one of your job arrays/jobs/job steps.&lt;br /&gt;
&lt;br /&gt;
* ''scontrol'' Administration tool; you can use it, for example, to modify the requirements of your jobs: show your jobs with ''show jobs'' or update the time limit with ''update JobId= TimeLimit=2''.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* ''smap'' Graphically view information about Slurm jobs and partitions, and set configuration parameters.&lt;br /&gt;
&lt;br /&gt;
* ''sview'' Graphical user interface for those who prefer clicking over typing. X server required. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= SLURM and MPI =&lt;br /&gt;
== About MPI ==&lt;br /&gt;
MPI (the Message Passing Interface) makes it possible to run parallel processes on the CPUs of different hosts. To do so, it uses TCP packets to communicate over the normal network connection. Some tasks can profit a lot from using more cores for computation.&lt;br /&gt;
At Remeis, MPICH2 is used for the initialisation of MPI tasks, which is well supported within Slurm. The process manager is called '''pmi2''' and is the default for srun. If an older MPI process manager is needed, for example for older MPI applications used with '''torque''', it can be selected with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  srun --mpi= ...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
on the ''srun'' calls in the submission script (''--mpi'' is an option of ''srun'', not an #SBATCH directive).&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  srun --mpi=list&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
provides a list of supported MPI process managers. &lt;br /&gt;
&lt;br /&gt;
The implementation of MPI for SLang/ISIS is called '''SLMPI'''.&lt;br /&gt;
&lt;br /&gt;
== Best practice for MPI tasks ==&lt;br /&gt;
&lt;br /&gt;
The usage of MPI can cause continuously high network traffic, especially on the host which holds the master process. Please consider this when deciding which nodes are used for the job. It is a good idea to specify servers (e.g. leo or lupus) with the ''--nodelist='' option; one of them is then used to hold the master process, since nobody is sitting in front of it trying to use a browser. Additional nodes are allocated automatically by Slurm if required to satisfy the ''--ntasks'' / ''-n'' option.&lt;br /&gt;
&lt;br /&gt;
MPI jobs depend on all allocated nodes being up and running properly, so please remember that shutting down or rebooting PCs on your own, without permission, can abort a whole MPI job.&lt;br /&gt;
&lt;br /&gt;
== Requirements and Tips ==&lt;br /&gt;
To use MPI, the application or function obviously has to support it. Examples range from programs written in C using MPI features and compiled with the ''mpicc'' compiler, to common ISIS functions such as ''mpi_emcee'' or ''mpi_fit_pars''.&lt;br /&gt;
&lt;br /&gt;
Keep in mind that everything in the compiled programs/scripts which is not an MPI-aware function is executed by each task on its own. For example, in ISIS with ''-n 20'':&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  fit_counts;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
would fit the defined function to the dataset 20 times at once. That is not very helpful, so think about which tasks should be performed in the actual MPI process. Special care has to be taken if something has to be saved to a file. Consider&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  save_par(&amp;quot;test.par&amp;quot;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
with ''-n 20''. This would save the current fit parameters to '''test.par''' in the working directory 20 times at exactly the same time. This might be helpful if the file is needed on the scratch disk of each node, but doing it on, for example, ''/userdata'' can cause serious trouble. The function ''mpi_master_only'' can be used to perform a user-defined task only once within an MPI job. The best approach is to submit only jobs to Slurm that contain actual MPI functions. If ISIS models are used which print something to stdout or stderr while loading, these messages are also generated 20 times, since the model is loaded in each process individually.&lt;br /&gt;
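At the shell level, the same write-once idea can be expressed by guarding on the task rank, which SLURM exports as ''SLURM_PROCID'' (0 for the first task); inside ISIS one would use ''mpi_master_only'' instead. A sketch with the rank set by hand, since the variable only exists inside a job:&lt;br /&gt;

```shell
#!/bin/bash
# All tasks run this script, but only rank 0 performs the save.
# SLURM would set SLURM_PROCID per task; we set it by hand for the demo.
SLURM_PROCID=0
if [ "${SLURM_PROCID:-0}" -eq 0 ]; then
    echo "writing test.par (rank 0 only)"    # stands in for save_par("test.par");
else
    echo "rank ${SLURM_PROCID}: skipping the save"
fi
```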
&lt;br /&gt;
== Usage ==&lt;br /&gt;
If the job is a valid MPI process then the submission works exactly like for any other job:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  #!/bin/bash&lt;br /&gt;
  #SBATCH --job-name my_first_mpi_job&lt;br /&gt;
  #SBATCH ...&lt;br /&gt;
  #SBATCH --ntasks=20&lt;br /&gt;
  cd /my/working/dir&lt;br /&gt;
  srun /usr/bin/nice -n +15 ./my_mpi_script&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
It might be necessary to request more memory than for the corresponding non-MPI job, since some applications try to limit the network traffic by copying the required data to each node in the first place.&lt;br /&gt;
&lt;br /&gt;
Also, if the number of child processes has to be specified in the application itself, make sure it matches the ''--ntasks'' / ''-n'' option of the submission. An example is the ''num_slaves'' qualifier of ''mpi_emcee''.&lt;br /&gt;
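To keep the two numbers in sync, the job script can read ''SLURM_NTASKS'' (which SLURM sets inside a job) instead of hard-coding the count twice. A sketch with the variable simulated; ''my_mpi_app'' and its ''--slaves'' flag are made-up placeholders:&lt;br /&gt;

```shell
#!/bin/bash
# Derive the worker count from the allocation instead of repeating it.
# SLURM sets SLURM_NTASKS inside a job; simulated here for the demo.
SLURM_NTASKS=20
nworkers=${SLURM_NTASKS:-1}   # fall back to 1 outside of SLURM
echo "$nworkers"              # prints 20
# Hypothetical real use:
#   srun ./my_mpi_app --slaves="$nworkers"
```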
&lt;br /&gt;
Note that the ''srun'' command does not involve ''mpiexec'' or ''mpirun'', which were used in older versions of MPI to launch the processes. The process manager ''pmi2'' is built into Slurm, which makes it possible for Slurm itself to initialize the network communication with the ''srun'' command alone.&lt;br /&gt;
&lt;br /&gt;
Of course, it is also possible to run an MPI process directly from the command line. As an example, let's look at the calculation of pi with the MPI program ''cpi''. The program comes with the source code of MPICH2, is compiled by the ''check'' rule, and is located in&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  /data/system/software/mpich/mpich-3.2/examples&lt;br /&gt;
&amp;lt;/pre&amp;gt;  &lt;br /&gt;
To run the calculation in 10 parallel processes directly from the command line, use:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
  [1:11]weber@lynx:/data/system/software/mpich/mpich-3.2/examples&amp;gt; srun -n 10 ./cpi&lt;br /&gt;
  Process 0 of 10 is on aquarius&lt;br /&gt;
  Process 1 of 10 is on ara&lt;br /&gt;
  Process 6 of 10 is on asterion&lt;br /&gt;
  Process 2 of 10 is on ara&lt;br /&gt;
  Process 8 of 10 is on asterion&lt;br /&gt;
  Process 7 of 10 is on asterion&lt;br /&gt;
  Process 3 of 10 is on aranea&lt;br /&gt;
  Process 5 of 10 is on aranea&lt;br /&gt;
  Process 4 of 10 is on aranea&lt;br /&gt;
  Process 9 of 10 is on cancer&lt;br /&gt;
  pi is approximately 3.1415926544231256, Error is 0.0000000008333325&lt;br /&gt;
  wall clock time = 0.010601&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
As we can see, Slurm launched 10 processes distributed over aquarius, ara, asterion, aranea and cancer. Keep in mind that running MPI interactively does not make much sense; the best way is to write a submission script as explained above and let Slurm handle the initialisation.&lt;/div&gt;</summary>
		<author><name>Kreuzer</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Skivergnuegen_2019&amp;diff=1768</id>
		<title>Skivergnuegen 2019</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Skivergnuegen_2019&amp;diff=1768"/>
		<updated>2019-01-10T16:42:39Z</updated>

		<summary type="html">&lt;p&gt;Kreuzer: /* Allgemeines */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Remeis Skiing 2019''' The same procedure as every year ;-) &lt;br /&gt;
The most important organisational points are collected here. Everyone is welcome to edit this page; if you do not have a wiki account, please contact someone who does.&lt;br /&gt;
&lt;br /&gt;
[[File:154854 silvretta-montafon-panorama.jpeg|right|700px]]&lt;br /&gt;
&lt;br /&gt;
== General information ==&lt;br /&gt;
*When: 12.01.19 (Sat) to 19.01.19 (Sat)&lt;br /&gt;
&lt;br /&gt;
*Holiday apartment(s):&lt;br /&gt;
**We are again staying in the [https://www.ferienwohnungen-montafon.com/ferienhaus-enzian/ Haus Enzian] of the [https://www.grandau.at/ Sporthotel Grandau], as well as in the 5-person apartment [https://www.grandau.at/zimmer/ferienwohnungen/Ferienwohnung-Edelweiss-Top-5/ TOP5] next door.&lt;br /&gt;
**Address of the Sporthotel: Montafonerstraße 274a, 6791 St. Gallenkirch, Austria. Our accommodation is on the Türkeiweg.&lt;br /&gt;
&lt;br /&gt;
*Ski area: Montafon (Vorarlberg)&lt;br /&gt;
**Ski area map: https://winter.intermaps.com/montafon?lang=de&lt;br /&gt;
**Ski pass prices: https://www.montafon.at/de/Service/Bergbahn-Preise-Tickets/Mehrtageskarte-Winter&lt;br /&gt;
**If you do not want to ski on all 7 days: there are offers such as &amp;quot;5 out of 7&amp;quot;, a ski pass that lets you ski on any 5 days within a 7-day period.&lt;br /&gt;
&lt;br /&gt;
*Ski rental: [http://www.sportharry.at/ Sport Harry], right at the valley station. The promotion code ALPISKI on alpinresorts.com saves up to 35% on the prices.&lt;br /&gt;
&lt;br /&gt;
== Accommodation ==&lt;br /&gt;
=== Room assignment ===&lt;br /&gt;
This year, in addition to the chalet, there is also the 5-person apartment (TOP5).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Room&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Occupants&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | 2-person&lt;br /&gt;
| Basti, Andrea &lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | 3-person&lt;br /&gt;
| Ralf, Flo, Thomas&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | 4-person&lt;br /&gt;
| Nela, Ohle, Eva, Katya&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | 5-person&lt;br /&gt;
| Christian, Person 2, Person 3, Person 4, Person 5&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | Apartment (5)&lt;br /&gt;
| Eugenia, Michi, Johannes Veh, Katrin, Person 5&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Bed linen and towels are provided there and do not need to be brought along. A coffee machine and a waffle iron are also available.&lt;br /&gt;
We can pre-order bread rolls at the hotel. To do so, we have to order them the day before and pick them up from the owner or at the hotel at around 7:15 the next morning.&lt;br /&gt;
&lt;br /&gt;
== Drivers ==&lt;br /&gt;
Drivers can sign up here and give their key details. Those who would like a ride should coordinate with the drivers and sign up as well. Please also sort out the luggage situation; if necessary, another driver can take e.g. ski equipment along.&lt;br /&gt;
&lt;br /&gt;
Looking for a ride: Katya, Fritz&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Driver&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | # Seats&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Arrival&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Passengers (arrival)&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Departure&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Passengers (departure)&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Ski transport&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Comment&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | Eugenia&lt;br /&gt;
| 2(3)&lt;br /&gt;
| 12.1., leaving Fürth in the morning / around noon&lt;br /&gt;
| Michi, little Johannes&lt;br /&gt;
| 19.1. in the morning, probably no room for passengers&lt;br /&gt;
| Michi, little Johannes&lt;br /&gt;
| 2 pairs&lt;br /&gt;
| Possibly one seat for a passenger if it does not work out otherwise&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | Basti&lt;br /&gt;
| 3 (4)&lt;br /&gt;
| Sat, noon, Erlangen&lt;br /&gt;
| Andrea, Veh&lt;br /&gt;
| Sat, morning&lt;br /&gt;
| Andrea, Veh&lt;br /&gt;
| no&lt;br /&gt;
| 2&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | Ralf&lt;br /&gt;
| 3&lt;br /&gt;
| Sat, noon&lt;br /&gt;
| Katya, Max&lt;br /&gt;
| Departure car 3&lt;br /&gt;
| Passengers (departure) car 3&lt;br /&gt;
| Ski transport car 3&lt;br /&gt;
| Comment car 3&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | Simon&lt;br /&gt;
| 3&lt;br /&gt;
| Sat noon&lt;br /&gt;
| Christian H., Katrin&lt;br /&gt;
| Departure car 4&lt;br /&gt;
| Passengers (departure) car 4&lt;br /&gt;
| Ski transport car 4&lt;br /&gt;
| Comment car 4&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | Johannes H (???)&lt;br /&gt;
| 4&lt;br /&gt;
| Sat noon&lt;br /&gt;
| Flo, Eva, Fritz&lt;br /&gt;
| Departure car 5&lt;br /&gt;
| Passengers (departure) car 5&lt;br /&gt;
| Ski transport car 5&lt;br /&gt;
| Comment car 5&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Notice board ==&lt;br /&gt;
&lt;br /&gt;
=== Food ===&lt;br /&gt;
Here is the meal plan, largely carried over from previous years. Since we are quite a lot of people, we should again roughly plan in advance what we want to cook. We can buy some things in Germany and bring them along, but there is also a supermarket nearby. General items, such as spices or spreads for the breakfast rolls, could also be brought from home.&lt;br /&gt;
&lt;br /&gt;
If you have cooking suggestions or other ideas, feel free to add and comment on them below. Keep in mind that the preparation should be (relatively) simple. I (Thomas) have drafted a list below, but especially the order/days can certainly still be moved around. Some (fewer) things we will still have to buy at the Spar on site, since the fridge only has limited space.&lt;br /&gt;
&lt;br /&gt;
''VEGAN options for Christian: vegan cream, quark, linseed oil''&lt;br /&gt;
&lt;br /&gt;
*Saturday: spaghetti Bolognese + veg. Bolognese&lt;br /&gt;
*Sunday: cheese Spätzle&lt;br /&gt;
*Monday: burritos &lt;br /&gt;
*Tuesday: risotto&lt;br /&gt;
*Wednesday: goulash &lt;br /&gt;
*Thursday: potatoes with herb quark&lt;br /&gt;
*Friday: curry with sweet potatoes &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Shopping list: dinner ====&lt;br /&gt;
'' To be bought by X &amp;amp; Y''&lt;br /&gt;
*2.5 kg minced meat &lt;br /&gt;
*3.5 kg spaghetti&lt;br /&gt;
*2.5 kg risotto rice &lt;br /&gt;
*4.0 kg basmati rice (for burritos and curry)&lt;br /&gt;
*12 cans of chopped tomatoes &lt;br /&gt;
*6.0 kg Spätzle (pre-cooked, from the chilled section) &lt;br /&gt;
*1 bottle of olive oil (extra virgin) &lt;br /&gt;
*3 kg grated cheese &lt;br /&gt;
*1 kg Emmental in one piece &lt;br /&gt;
*1 kg Parmesan / Grana Padano&lt;br /&gt;
*1 bottle of white wine (2-3 € per bottle is fine) &lt;br /&gt;
*2 bottles of red wine (2-3 € per bottle is fine)&lt;br /&gt;
*2 celeriac bulbs (or 4 bunches of celery)&lt;br /&gt;
*6 kg carrots&lt;br /&gt;
*3 nets of onions &lt;br /&gt;
*2 kg mushrooms &lt;br /&gt;
*3 kg bell peppers&lt;br /&gt;
*1.5 kg zucchini (2-3 packs)&lt;br /&gt;
*10 kg (large) waxy potatoes&lt;br /&gt;
*ca. 50 larger tortillas (2-3 per person)&lt;br /&gt;
*12 cans of black beans&lt;br /&gt;
*1-2 jars of jalapeños (depending on size)&lt;br /&gt;
*2 kg quark (half-fat, 20% should do)&lt;br /&gt;
*taco salsa sauce for the burritos (spicy is fine)&lt;br /&gt;
*10 avocados&lt;br /&gt;
*garlic&lt;br /&gt;
*salt&lt;br /&gt;
*pepper&lt;br /&gt;
*rosemary&lt;br /&gt;
*basil&lt;br /&gt;
*paprika powder (hot)&lt;br /&gt;
*red chilli (for the goulash)&lt;br /&gt;
*ingredients for the curry???&lt;br /&gt;
&lt;br /&gt;
==== Breakfast and lunch ====&lt;br /&gt;
'' To be bought by X &amp;amp; Y''&lt;br /&gt;
*fruit (bananas, apples, oranges, mandarins) &lt;br /&gt;
*5 cucumbers &lt;br /&gt;
*some tomatoes (for the Brotzeit) &lt;br /&gt;
*5 packs of butter &lt;br /&gt;
*3 kg cold cuts &amp;amp; sliced cheese (a selection; good quality, ideally from the counter) &lt;br /&gt;
*fruit bars &lt;br /&gt;
*cream cheese (plain and herb)&lt;br /&gt;
*2 jars of gherkins &lt;br /&gt;
*more jam (raspberry and rose hip) &lt;br /&gt;
*Nutella &lt;br /&gt;
*rolled oats (for porridge)&lt;br /&gt;
*honey &lt;br /&gt;
*40 eggs (free-range!!!)&lt;br /&gt;
*a selection of muesli bars / Snickers&lt;br /&gt;
&lt;br /&gt;
==== General ====&lt;br /&gt;
'' To be bought by X &amp;amp; Y''&lt;br /&gt;
*washing-up liquid &lt;br /&gt;
*dish sponges &lt;br /&gt;
*soap &lt;br /&gt;
*1 pack of coffee (1 pound of medium-strength coffee)&lt;br /&gt;
*[tea (Ceylon &amp;amp; fruit tea) still in stock!] &lt;br /&gt;
*orange juice &amp;amp; other juices (&amp;gt; 10 litres)&lt;br /&gt;
*24 litres of milk &lt;br /&gt;
*2 litres of soy milk (plain)&lt;br /&gt;
*[coffee filters still left over from last year]&lt;br /&gt;
*sugar&lt;br /&gt;
*[toilet paper already in stock!]&lt;br /&gt;
*rubbish bags &lt;br /&gt;
*cloths &lt;br /&gt;
*dishwasher tabs&lt;br /&gt;
*shot glasses for B52s&lt;br /&gt;
*straws&lt;br /&gt;
&lt;br /&gt;
==== Personal items ====&lt;br /&gt;
Please bring along whatever you like to eat yourself, but still list it here (as far as I'm concerned, it can also be settled up at the end)&lt;br /&gt;
* '' please list personal items here, e.g. sweets, etc.''&lt;br /&gt;
&lt;br /&gt;
=== Alcohol/party ===&lt;br /&gt;
We will surely celebrate again this time, and we should stock up for it :)&lt;br /&gt;
Suggestion: everyone brings what they would like, with a note in the wiki?!&lt;br /&gt;
* '' please list alcohol here''&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
=== Games ===&lt;br /&gt;
 &lt;br /&gt;
Shared games nights are always fun. So if you own any board games, feel free to bring them along! To get an overview, please also list them here:&lt;br /&gt;
*''please list games here''&lt;br /&gt;
&lt;br /&gt;
=== Miscellaneous ===&lt;br /&gt;
* Audio cable for connecting a laptop to the stereo's aux input. '''Who has one?''' I think Michi had one (Eugenia)&lt;br /&gt;
&lt;br /&gt;
=== Alternative programme ===&lt;br /&gt;
* Tobogganing (http://www.montafon.at/de/urlaubswelten/echte_naturliebhaber/rodeln)&lt;br /&gt;
* Snowshoe hiking (http://www.montafon.at/schneeschuhwanderungen)&lt;br /&gt;
* Thermal spa (http://www.montafon.at/schwimmen), e.g. http://www.aqua-dome.at/de (ca. 130 km away!)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Internal]]&lt;/div&gt;</summary>
		<author><name>Kreuzer</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Skivergnuegen_2019&amp;diff=1767</id>
		<title>Skivergnuegen 2019</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Skivergnuegen_2019&amp;diff=1767"/>
		<updated>2019-01-10T16:42:22Z</updated>

		<summary type="html">&lt;p&gt;Kreuzer: /* Allgemeines */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Remeis Skifahren 2019''' The same procedure as every year ;-) &lt;br /&gt;
Hier werden die wichtigsten organisatorischen Dinge festgehalten. Jeder darf hier gerne editieren; wer keinen Account für das Wiki hat, soll sich bitte an jemanden wenden der Zugang hat.&lt;br /&gt;
&lt;br /&gt;
[[File:154854 silvretta-montafon-panorama.jpeg|right|700px]]&lt;br /&gt;
&lt;br /&gt;
== Allgemeine Infos ==&lt;br /&gt;
*Wann: 12.01.19 (Sa) bis 19.01.19 (Sa)&lt;br /&gt;
&lt;br /&gt;
*Ferienwohnung(en):&lt;br /&gt;
**Wir kommen wieder unter im [https://www.ferienwohnungen-montafon.com/ferienhaus-enzian/ Haus Enzian] des [https://www.grandau.at/ Sporthotels Grandau], sowie dem 5-er Apartment [https://www.grandau.at/zimmer/ferienwohnungen/Ferienwohnung-Edelweiss-Top-5/ TOP5] nebenan.&lt;br /&gt;
**Adresse des Sporthotel: Montafonerstraße 274a, 6791 St. Gallenkirch, Österreich. Unsere Unterkünfte liegen im Türkeiweg.&lt;br /&gt;
&lt;br /&gt;
*Skigebiet: Montafon (Vorarlberg)&lt;br /&gt;
**Skigebiet Karte: https://winter.intermaps.com/montafon?lang=de&lt;br /&gt;
**Skipass Preise: https://www.montafon.at/de/Service/Bergbahn-Preise-Tickets/Mehrtageskarte-Winter&lt;br /&gt;
**Wer nicht alle 7 Tage Skifahren möchte: Es gibt Angebote wie z.B. 5 aus 7, mit einem solchen Skipass kann man innerhalb von 7 Tagen an 5 beliebigen Tagen skifahren.&lt;br /&gt;
&lt;br /&gt;
*Skiverleih: [http://www.sportharry.at/ Sport Harry], direkt an der Talstation. Promotioncode ALPISKI auf alpinresorts.com spart bis zu 35% auf die Preise.&lt;br /&gt;
&lt;br /&gt;
== Unterkunft ==&lt;br /&gt;
=== Aufteilung ===&lt;br /&gt;
Dieses Jahr gibt es neben der Hütte noch zusätzlich das 5-er Appartment (TOP5).&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Zimmer&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Insassen&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | 2-er&lt;br /&gt;
| Basti, Andrea &lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | 3-er&lt;br /&gt;
| Ralf, Flo, Thomas&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | 4-er&lt;br /&gt;
| Nela, Ohle, Eva, Katya&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | 5-er&lt;br /&gt;
| Christian, Person 2, Person 3, Person 4, Person 5&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | Apartment (5)&lt;br /&gt;
| Eugenia, Michi, Johannes Veh, Katrin, Person 5&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
Bettwäsche/Handtücher sind dort vorhanden und müssen nicht mitgenommen werden. Kaffeemaschine, Waffeleisen sind ebenfalls vorhanden.&lt;br /&gt;
Brötchen können wir vorbestellen im Hotel. Dazu müssen wir sie am Vortag vorbestellen und am nächsten Morgen bei der Besitzerin bzw. im Hotel gegen 7:15 Uhr abholen.&lt;br /&gt;
&lt;br /&gt;
== Fahrer ==&lt;br /&gt;
Fahrer können sich hier eintragen und Eckdaten angeben. Diejenige, die mitfahren möchten, sprechen sich mit den Fahrern ab und tragen sich ebenfalls ein. Klärt bitte auch die Gepäcklage, ggf. kann ein anderer Fahrer z.B. Skiausrüstung mitnehmen.&lt;br /&gt;
&lt;br /&gt;
Auf der Suche nach einer Mitfahrgelegenheit: Katya, Fritz&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Driver&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | # Seats&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Arrival&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Passengers (arrival)&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Departure&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Passengers (departure)&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Ski transport&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Comment&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | Eugenia&lt;br /&gt;
| 2(3)&lt;br /&gt;
| 12 Jan, leaving Fürth in the morning/around noon&lt;br /&gt;
| Michi, kleiner Johannes&lt;br /&gt;
| 19 Jan in the morning, probably no room for passengers&lt;br /&gt;
| Michi, kleiner Johannes&lt;br /&gt;
| 2 pairs&lt;br /&gt;
| Possibly one seat for an extra passenger if it does not work out otherwise&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | Basti&lt;br /&gt;
| 3 (4)&lt;br /&gt;
| Sat, noon, Erlangen&lt;br /&gt;
| Andrea, Veh&lt;br /&gt;
| Sat, morning&lt;br /&gt;
| Andrea, Veh&lt;br /&gt;
| no&lt;br /&gt;
| 2&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | Ralf&lt;br /&gt;
| 3&lt;br /&gt;
| Sat, noon&lt;br /&gt;
| Katya, Max&lt;br /&gt;
| Departure (car 3)&lt;br /&gt;
| Passengers, departure (car 3)&lt;br /&gt;
| Ski transport (car 3)&lt;br /&gt;
| Comment (car 3)&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | Simon&lt;br /&gt;
| 3&lt;br /&gt;
| Sat noon&lt;br /&gt;
| Christian H., Katrin&lt;br /&gt;
| Departure (car 4)&lt;br /&gt;
| Passengers, departure (car 4)&lt;br /&gt;
| Ski transport (car 4)&lt;br /&gt;
| Comment (car 4)&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | Johannes H (???)&lt;br /&gt;
| 4&lt;br /&gt;
| Sat noon&lt;br /&gt;
| Flo, Eva, Fritz&lt;br /&gt;
| Departure (car 5)&lt;br /&gt;
| Passengers, departure (car 5)&lt;br /&gt;
| Ski transport (car 5)&lt;br /&gt;
| Comment (car 5)&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Notice board ==&lt;br /&gt;
&lt;br /&gt;
=== Food ===&lt;br /&gt;
Here is the meal plan, largely taken over from previous years. Since there are a lot of us, we should again roughly plan in advance what we want to cook. We can buy some things in Germany and bring them along, but there is also a supermarket nearby. General items such as spices or spreads for the breakfast rolls could also be brought from home.&lt;br /&gt;
&lt;br /&gt;
If you have cooking suggestions or other ideas, feel free to add and comment on them below. Keep in mind that the preparation should be (relatively) simple. I (Thomas) have drafted a list below, but the order/days in particular can certainly still be shuffled around. We will still have to buy a few (smaller) things at the Spar on site, since the fridge only has limited space.&lt;br /&gt;
&lt;br /&gt;
''VEGAN options for Christian: vegan cream, quark, linseed oil''&lt;br /&gt;
&lt;br /&gt;
*Saturday: spaghetti Bolognese + veg. Bolognese&lt;br /&gt;
*Sunday: cheese Spätzle&lt;br /&gt;
*Monday: burritos&lt;br /&gt;
*Tuesday: risotto&lt;br /&gt;
*Wednesday: goulash&lt;br /&gt;
*Thursday: potatoes with herb quark&lt;br /&gt;
*Friday: curry with sweet potatoes&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Shopping list: dinner ====&lt;br /&gt;
'' To be bought by X &amp;amp; Y''&lt;br /&gt;
*2.5kg minced meat&lt;br /&gt;
*3.5kg spaghetti&lt;br /&gt;
*2.5kg risotto rice&lt;br /&gt;
*4.0kg basmati rice (for the burritos and the curry)&lt;br /&gt;
*12 cans of chopped tomatoes&lt;br /&gt;
*6.0kg Spätzle (pre-cooked, from the chilled section)&lt;br /&gt;
*1 bottle of olive oil (extra virgin)&lt;br /&gt;
*3kg grated cheese&lt;br /&gt;
*1kg Emmental, in one piece&lt;br /&gt;
*1kg Parmesan / Grana Padano&lt;br /&gt;
*1 bottle of white wine (2-3€ per bottle is fine)&lt;br /&gt;
*2 bottles of red wine (2-3€ per bottle is fine)&lt;br /&gt;
*2 celeriac bulbs (or 4 bunches of celery)&lt;br /&gt;
*6kg carrots&lt;br /&gt;
*3 nets of onions&lt;br /&gt;
*2kg mushrooms&lt;br /&gt;
*3kg bell peppers&lt;br /&gt;
*1.5kg zucchini (2-3 packs)&lt;br /&gt;
*10kg (large) waxy potatoes&lt;br /&gt;
*approx. 50 larger tortillas (2-3 per person)&lt;br /&gt;
*12 cans of black beans&lt;br /&gt;
*1-2 jars of jalapeños (depending on size)&lt;br /&gt;
*2kg quark (low-fat, 20% should do)&lt;br /&gt;
*taco salsa sauce for the burritos (hot is fine)&lt;br /&gt;
*10 avocados&lt;br /&gt;
*garlic&lt;br /&gt;
*salt&lt;br /&gt;
*pepper&lt;br /&gt;
*rosemary&lt;br /&gt;
*basil&lt;br /&gt;
*paprika powder (hot)&lt;br /&gt;
*red chilli (for the goulash)&lt;br /&gt;
*ingredients for the curry???&lt;br /&gt;
&lt;br /&gt;
==== Breakfast and lunch ====&lt;br /&gt;
'' To be bought by X &amp;amp; Y''&lt;br /&gt;
*fruit (bananas, apples, oranges, mandarins)&lt;br /&gt;
*5 cucumbers&lt;br /&gt;
*some tomatoes (for the cold snacks)&lt;br /&gt;
*5 packs of butter&lt;br /&gt;
*3kg sliced sausage &amp;amp; cheese (a selection, good quality and from the counter preferred)&lt;br /&gt;
*fruit bars&lt;br /&gt;
*cream cheese (plain and herb)&lt;br /&gt;
*2 jars of pickled gherkins&lt;br /&gt;
*more jam (raspberry and rosehip)&lt;br /&gt;
*Nutella&lt;br /&gt;
*rolled oats (for porridge)&lt;br /&gt;
*honey&lt;br /&gt;
*40 eggs (free-range!!!)&lt;br /&gt;
*a selection of muesli bars / Snickers&lt;br /&gt;
&lt;br /&gt;
==== General ====&lt;br /&gt;
'' To be bought by X &amp;amp; Y''&lt;br /&gt;
*washing-up liquid&lt;br /&gt;
*dish sponges&lt;br /&gt;
*soap&lt;br /&gt;
*1 pack of coffee (1 pound of medium-strength coffee)&lt;br /&gt;
*[tea (Ceylon &amp;amp; fruit tea) still in stock!]&lt;br /&gt;
*orange juice &amp;amp; other juice (&amp;gt; 10 litres)&lt;br /&gt;
*24 litres of milk&lt;br /&gt;
*2 litres of soy milk (plain)&lt;br /&gt;
*[coffee filters still left over from last year]&lt;br /&gt;
*sugar&lt;br /&gt;
*[toilet paper already there!]&lt;br /&gt;
*bin bags&lt;br /&gt;
*cleaning cloths&lt;br /&gt;
*dishwasher tabs&lt;br /&gt;
*shot glasses for B52s&lt;br /&gt;
&lt;br /&gt;
==== Personal items ====&lt;br /&gt;
Please bring along whatever you like to eat yourself, but still list it here (as far as I am concerned it can also be settled up at the end).&lt;br /&gt;
* '' please list personal items here, e.g. sweets etc.''&lt;br /&gt;
&lt;br /&gt;
=== Alcohol/Party ===&lt;br /&gt;
We will certainly party again this time, so we should stock up accordingly :)&lt;br /&gt;
Suggestion: everyone brings what they would like, with a note in the wiki?!&lt;br /&gt;
* '' please list alcohol here''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Games ===&lt;br /&gt;
&lt;br /&gt;
Shared game nights are always fun. So if you own any board games, feel free to bring them along! To get an overview, please also list them here:&lt;br /&gt;
*''please list games here''&lt;br /&gt;
&lt;br /&gt;
=== Miscellaneous ===&lt;br /&gt;
* Audio cable to connect a laptop to the stereo's aux input. '''Who has one?''' I think Michi had one (Eugenia)&lt;br /&gt;
&lt;br /&gt;
=== Alternative activities ===&lt;br /&gt;
* Sledding (http://www.montafon.at/de/urlaubswelten/echte_naturliebhaber/rodeln)&lt;br /&gt;
* Snowshoe hiking (http://www.montafon.at/schneeschuhwanderungen)&lt;br /&gt;
* Thermal baths (http://www.montafon.at/schwimmen), e.g. http://www.aqua-dome.at/de (approx. 130km away!)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Internal]]&lt;/div&gt;</summary>
		<author><name>Kreuzer</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=CUPS&amp;diff=1709</id>
		<title>CUPS</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=CUPS&amp;diff=1709"/>
		<updated>2018-10-23T09:52:35Z</updated>

		<summary type="html">&lt;p&gt;Kreuzer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==== Printing Service CUPS ====&lt;br /&gt;
&lt;br /&gt;
Printing at the observatory is handled by [https://www.cups.org/index.html CUPS].&lt;br /&gt;
CUPS is installed in the virtual box auriga (which lives on taurus) and can be administered through a web interface running on port 631: http://auriga:631/ (accessible only from within the institute).&lt;br /&gt;
&lt;br /&gt;
Through this web interface, printers can be added or modified, queues can be stopped, jobs can be cancelled, and other maintenance tasks can be performed. To perform such tasks, CUPS requires you to log in: use '''ubuntuadmin''' and the corresponding password.&lt;br /&gt;
&lt;br /&gt;
When adding new printers, do not forget to enable '''sharing''' to make the printer visible across the entire cluster.&lt;br /&gt;
&lt;br /&gt;
See the [https://www.cups.org/documentation.html CUPS documentation] for how to work with CUPS.&lt;br /&gt;
&lt;br /&gt;
=== Troubleshooting ===&lt;br /&gt;
&lt;br /&gt;
If a printer is not responding or is not accessible:&lt;br /&gt;
* make sure that the printer is online (also check the network cable!), working properly, and not just busy.&lt;br /&gt;
* check the web interface of that printer and try to print a test page from there.&lt;br /&gt;
* check the CUPS web interface for jobs in that queue and remove failed jobs.&lt;br /&gt;
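The same checks can also be done from the command line of any machine in the cluster using the standard CUPS client tools (the queue name &amp;quot;myprinter&amp;quot; and job ID 42 below are just placeholders — substitute a real queue name from the web interface):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lpstat -p             # list all printers and their current state&lt;br /&gt;
lpstat -o             # list queued jobs (job IDs in the first column)&lt;br /&gt;
cancel myprinter-42   # cancel a single job by its ID&lt;br /&gt;
cancel -a myprinter   # remove all jobs from the queue &amp;quot;myprinter&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;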
&lt;br /&gt;
If no printers are available (or only a generic printer shows up), CUPS is probably down:&lt;br /&gt;
* log in to auriga as ubuntuadmin&lt;br /&gt;
* restart the CUPS service:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo service cups stop&lt;br /&gt;
sudo service cups start&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
* sometimes it might be necessary to reboot the entire virtual box (CUPS will be started automatically after the reboot):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo shutdown -r now&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== List of Printers on the Cluster ===&lt;br /&gt;
&lt;br /&gt;
[[Category:Admin]]&lt;/div&gt;</summary>
		<author><name>Kreuzer</name></author>
	</entry>
</feed>