<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.sternwarte.uni-erlangen.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Koenig</id>
	<title>Remeis-Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://www.sternwarte.uni-erlangen.de/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Koenig"/>
	<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php/Special:Contributions/Koenig"/>
	<updated>2026-05-17T10:44:39Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.7</generator>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Install_ISIS_on_Linux&amp;diff=3554</id>
		<title>Install ISIS on Linux</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Install_ISIS_on_Linux&amp;diff=3554"/>
		<updated>2025-01-14T14:35:36Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;These instructions were tested on Debian 10, Ubuntu 20.04, and openSUSE 15.3. They should help you install both the X-ray isisscripts and the &amp;quot;stellar_isisscripts&amp;quot;, with a focus on the latter. The instructions also assume that you use the bash shell; using csh requires minor changes.&lt;br /&gt;
&lt;br /&gt;
If you prefer an easy way to run the X-ray isisscripts (without the &amp;quot;stellar_isisscripts&amp;quot;), you can use the singularity container:&lt;br /&gt;
&lt;br /&gt;
https://www.sternwarte.uni-erlangen.de/wiki/index.php/Isis_tutorial_installing&lt;br /&gt;
&lt;br /&gt;
== 1. Install basic dependencies and general things ==&lt;br /&gt;
&lt;br /&gt;
If you don't have root access, you might have to build some things from source, e.g. pgplot, cfitsio, or libpng. &lt;br /&gt;
However, it might be easier to rely on HEASOFT in this case.  &lt;br /&gt;
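If you do end up building packages from source without root access, the usual pattern is to give each package the same user-writable prefix and make it visible to later builds. A minimal sketch, assuming a hypothetical prefix of ~/local (the actual configure/make calls are left commented out, since they depend on the package):&lt;br /&gt;

```shell
# Hypothetical prefix for from-source installs without root access
PREFIX="$HOME/local"
mkdir -p "$PREFIX"
# inside each source tree you would then run (commented out here):
#   ./configure --prefix="$PREFIX" && make && make install
# make the results visible to later builds and to your shell
export PATH="$PREFIX/bin:$PATH"
export LD_LIBRARY_PATH="$PREFIX/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```

The two export lines belong in your ~/.bashrc so that later shells pick them up as well.&lt;br /&gt;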
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo apt-get -y install libreadline-dev&lt;br /&gt;
sudo apt-get -y install libcurl4&lt;br /&gt;
sudo apt-get -y install libcurl4-gnutls-dev&lt;br /&gt;
sudo apt-get -y install libncurses5-dev&lt;br /&gt;
sudo apt-get -y install xorg-dev&lt;br /&gt;
sudo apt-get -y install gcc g++ gfortran&lt;br /&gt;
sudo apt-get -y install perl-modules-5*&lt;br /&gt;
sudo apt-get -y install libfile-slurp-perl&lt;br /&gt;
sudo apt-get -y install python3-dev&lt;br /&gt;
sudo apt-get -y install fig2dev&lt;br /&gt;
sudo apt-get -y install libpng-dev &lt;br /&gt;
sudo apt-get -y install zlib1g-dev&lt;br /&gt;
sudo apt-get -y install libpcre3-dev&lt;br /&gt;
sudo apt-get -y install libonig-dev&lt;br /&gt;
sudo apt-get -y install libgsl-dev&lt;br /&gt;
sudo apt-get -y install pgplot5&lt;br /&gt;
sudo apt-get -y install libcfitsio-dev&lt;br /&gt;
sudo apt-get -y install libx11-dev&lt;br /&gt;
sudo apt-get -y install wget&lt;br /&gt;
sudo apt-get -y install git&lt;br /&gt;
sudo apt-get -y install make&lt;br /&gt;
sudo apt-get -y install curl&lt;br /&gt;
sudo apt-get -y install texlive&lt;br /&gt;
sudo apt-get -y install texlive-latex-extra&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
or on Arch Linux:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo pacman -S --noconfirm readline&lt;br /&gt;
sudo pacman -S --noconfirm curl&lt;br /&gt;
sudo pacman -S --noconfirm curl-gnutls&lt;br /&gt;
sudo pacman -S --noconfirm ncurses&lt;br /&gt;
sudo pacman -S --noconfirm libx11&lt;br /&gt;
sudo pacman -S --noconfirm gcc&lt;br /&gt;
sudo pacman -S --noconfirm gcc-fortran&lt;br /&gt;
sudo pacman -S --noconfirm perl&lt;br /&gt;
sudo pacman -S --noconfirm python&lt;br /&gt;
sudo pacman -S --noconfirm fig2dev&lt;br /&gt;
sudo pacman -S --noconfirm libpng&lt;br /&gt;
sudo pacman -S --noconfirm zlib&lt;br /&gt;
sudo pacman -S --noconfirm pcre&lt;br /&gt;
sudo pacman -S --noconfirm oniguruma&lt;br /&gt;
sudo pacman -S --noconfirm gsl&lt;br /&gt;
sudo pacman -S --noconfirm pgplot&lt;br /&gt;
sudo pacman -S --noconfirm cfitsio&lt;br /&gt;
sudo pacman -S --noconfirm wget&lt;br /&gt;
sudo pacman -S --noconfirm git&lt;br /&gt;
sudo pacman -S --noconfirm texlive-most&lt;br /&gt;
sudo pacman -S --noconfirm texlive-latexextra&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
If pgplot5 fails to install, you may have to install it manually:&lt;br /&gt;
&lt;br /&gt;
https://guaix.fis.ucm.es/~ncl/howto/howto-pgplot&lt;br /&gt;
&lt;br /&gt;
You have to define some things in your ~/.bashrc:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export CC=/usr/bin/gcc&lt;br /&gt;
export CXX=/usr/bin/g++&lt;br /&gt;
export FC=/usr/bin/gfortran&lt;br /&gt;
export PERL=/usr/bin/perl&lt;br /&gt;
export PYTHON=/usr/bin/python3&lt;br /&gt;
# if installing in linux using su&lt;br /&gt;
# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/slang/v2/modules/&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Then do &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
source ~/.bashrc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
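As a quick sanity check (not part of the original instructions), you can verify that the tools referenced above are actually present in your $PATH:&lt;br /&gt;

```shell
# Report which of the required build tools are available in $PATH
check_tools() {
    for tool in "$@"; do
        if command -v "$tool" >/dev/null 2>&1; then
            echo "$tool: found"
        else
            echo "$tool: MISSING"
        fi
    done
}
check_tools gcc g++ gfortran perl python3
```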
&lt;br /&gt;
== 2. Install HEASOFT ==&lt;br /&gt;
&lt;br /&gt;
This is ''not'' required for the &amp;quot;stellar_isisscripts&amp;quot; and can be skipped if you only want to install those, unless HEASOFT is your only way to obtain pgplot and cfitsio. HEASOFT seems to be required to run the X-ray isisscripts properly. To download HEASOFT, go to:&lt;br /&gt;
&lt;br /&gt;
https://heasarc.gsfc.nasa.gov/lheasoft/download.html&lt;br /&gt;
&lt;br /&gt;
unpack the download, and then:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd heasoft-6.29/BUILD_DIR&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before I could run the configure script I had to make many files executable, &lt;br /&gt;
so it seems easiest to make all of them executable (this is not good practice):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
find .. -print0 | xargs -0 chmod +x&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Building HEASOFT will take a while (30 min or more). If you don't want to install as root, set a different installation location by passing &amp;lt;tt&amp;gt;'''--prefix=/path/to/installation/folder/'''&amp;lt;/tt&amp;gt; to ./configure. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
./configure&lt;br /&gt;
make&lt;br /&gt;
sudo make install&lt;br /&gt;
find .. -name 'headas-init.sh' | xargs chmod +x&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
On HPCs without root access, configure may fail to find some libraries. In case they are not installed, you should install them from source and then link them. Typical examples are &amp;quot;ncurses&amp;quot;, &amp;quot;cfitsio&amp;quot;, &amp;quot;pgplot&amp;quot;, and &amp;quot;gsl&amp;quot;. Add to your ~/.bashrc:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export LDFLAGS=-L/path/to/installation/folder/lib/&lt;br /&gt;
export CPPFLAGS=-I/path/to/installation/folder/include/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If no X window system is available and you cannot install it, use the &amp;quot;--disable-x&amp;quot; configure flag.&lt;br /&gt;
&lt;br /&gt;
Finally, you need to define $HEADAS. For me, this meant editing ~/.bashrc to &lt;br /&gt;
include the following two lines. You need to adjust the path, of course:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
export HEADAS=/home/matti/programs/heasoft-6.29/x86_64-pc-linux-gnu-libc2.31&lt;br /&gt;
. $HEADAS/headas-init.sh&lt;br /&gt;
# or&lt;br /&gt;
# source $HEADAS/headas-init.sh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
for csh, use instead:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
setenv HEADAS /home/volans/mdorsch/isis/heasoft/installation/x86_64-pc-linux-gnu-libc2.31&lt;br /&gt;
source $HEADAS/headas-init.csh&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then, in the terminal:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
source ~/.bashrc&lt;br /&gt;
# or source ~/.cshrc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
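As a small sanity check (not part of the original instructions), $HEADAS should now be set and point at an existing directory:&lt;br /&gt;

```shell
# Check whether HEADAS is set and points at an existing directory
if [ -n "${HEADAS:-}" ] && [ -d "$HEADAS" ]; then
    HEADAS_STATUS="ok"
else
    HEADAS_STATUS="not set"
fi
echo "HEADAS: $HEADAS_STATUS"
```

If this prints &amp;quot;not set&amp;quot;, re-check the path you put into ~/.bashrc.&lt;br /&gt;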
&lt;br /&gt;
== 3. Install slang / slang modules / isis / isisscripts ==&lt;br /&gt;
&lt;br /&gt;
First get the latest versions of slang / jed / slgsl / slxfig / slirp / isis / isisscripts. &lt;br /&gt;
A scripted installation of slang and isis is possible, following the MIT guide:&lt;br /&gt;
&lt;br /&gt;
https://space.mit.edu/cxc/isis/install.html#XI&lt;br /&gt;
&lt;br /&gt;
However, I recommend using the git repositories. Go to your software directory in the terminal and clone the repositories:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
git clone https://github.com/houckj/isis&lt;br /&gt;
git clone git://git.jedsoft.org/git/slang.git&lt;br /&gt;
git clone git://git.jedsoft.org/git/slxfig.git&lt;br /&gt;
git clone git://git.jedsoft.org/git/slgsl.git&lt;br /&gt;
git clone git://git.jedsoft.org/git/jed.git&lt;br /&gt;
git clone git://git.jedsoft.org/git/slirp.git&lt;br /&gt;
git clone http://www.sternwarte.uni-erlangen.de/git.public/isisscripts &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
As of July 2023, there are some issues with slang 2.3.4 and isis; you may get this error when running isis: &lt;br /&gt;
&lt;br /&gt;
***Warning: Executable compiled against S-Lang 20303 but linked to 20304&lt;br /&gt;
&lt;br /&gt;
In that case you should use slang 2.3.3 instead:&lt;br /&gt;
&lt;br /&gt;
https://www.jedsoft.org/releases/slang/&lt;br /&gt;
&lt;br /&gt;
If you download slang as an archive, you may have to apply the correct permissions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
find . -name '*.sh' | xargs chmod +x&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The following will assume you have root access. If that is not the case, you can use the option&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
--prefix=/path/to/installation/folder/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
with the ./configure commands. It seems to work to install all of them with the same &amp;quot;--prefix&amp;quot;. See also &amp;quot;./configure --help&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
Install slang. Go to the slang directory.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
bash ./configure&lt;br /&gt;
make&lt;br /&gt;
sudo make install&lt;br /&gt;
make clean&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Make sure that all necessary standard modules are installed, including &amp;quot;png&amp;quot;. This requires &amp;quot;libpng16&amp;quot; to be installed. &lt;br /&gt;
If &amp;quot;libpng16&amp;quot; is not installed yet (try &amp;lt;tt&amp;gt; find /usr/lib/ -name &amp;quot;*libpng*&amp;quot;&amp;lt;/tt&amp;gt;), you can easily build it from source:&lt;br /&gt;
&lt;br /&gt;
https://sourceforge.net/projects/libpng/files/&lt;br /&gt;
&lt;br /&gt;
and then add its installation folder to $PATH or pass it to the configure step with --with-png.&lt;br /&gt;
&lt;br /&gt;
Install isis. Go to the isis directory, here with HEADAS:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
bash ./configure --with-headas=$HEADAS&lt;br /&gt;
make&lt;br /&gt;
make check&lt;br /&gt;
sudo make install&lt;br /&gt;
make clean&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If your aim is to install the &amp;quot;stellar_isisscripts&amp;quot;, you don't need HEADAS, but may have to install fitsio:&lt;br /&gt;
&lt;br /&gt;
https://heasarc.gsfc.nasa.gov/fitsio/&lt;br /&gt;
&lt;br /&gt;
Then go to the isis directory and:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
bash ./configure&lt;br /&gt;
make&lt;br /&gt;
sudo make install&lt;br /&gt;
make clean&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If &amp;quot;make&amp;quot; has an issue with pgplot, try &amp;lt;tt&amp;gt;./configure --with-pgplot=/usr/&amp;lt;/tt&amp;gt;, or wherever pgplot is installed.&lt;br /&gt;
Add &amp;lt;tt&amp;gt;'''--with-slang=/path/to/slang'''&amp;lt;/tt&amp;gt; to the following configure calls in case you installed slang somewhere other than &amp;lt;tt&amp;gt;/usr/local/&amp;lt;/tt&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Install jed. Go to the jed directory&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
bash ./configure&lt;br /&gt;
make&lt;br /&gt;
make xjed&lt;br /&gt;
sudo make install&lt;br /&gt;
make clean&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Install slgsl. Go to the slgsl directory&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
bash ./configure&lt;br /&gt;
make&lt;br /&gt;
sudo make install&lt;br /&gt;
make clean&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Install slxfig. Go to the slxfig directory&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
bash ./configure&lt;br /&gt;
make&lt;br /&gt;
sudo make install&lt;br /&gt;
make clean&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Install slirp. Go to the slirp directory&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
bash ./configure&lt;br /&gt;
make&lt;br /&gt;
sudo make install&lt;br /&gt;
make clean&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before building the isisscripts, install the &amp;quot;File::Slurp&amp;quot; perl module. This is not necessary if &amp;quot;libfile-slurp-perl&amp;quot; was already installed with apt. &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo perl -MCPAN -e shell&lt;br /&gt;
install CPAN&lt;br /&gt;
reload cpan&lt;br /&gt;
install File::Slurp&lt;br /&gt;
quit&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
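Either way, you can quickly check whether perl can actually load the module before building the isisscripts (a sanity check, not part of the original instructions):&lt;br /&gt;

```shell
# Check whether the File::Slurp module is already available to perl
if perl -MFile::Slurp -e 1 2>/dev/null; then
    SLURP_STATUS="installed"
else
    SLURP_STATUS="missing"
fi
echo "File::Slurp: $SLURP_STATUS"
```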
&lt;br /&gt;
Then go to the isisscripts directory and&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
make&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create / modify the [[ISIS auto start (.isisrc)|.isisrc]] file in your &amp;lt;tt&amp;gt;$HOME&amp;lt;/tt&amp;gt; directory and add the following lines&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
add_to_isis_load_path(&amp;quot;/Path/To/isisscripts/share/&amp;quot;);&lt;br /&gt;
define ris() { require(&amp;quot;isisscripts&amp;quot;); };&lt;br /&gt;
% if s-lang is not in your $PATH&lt;br /&gt;
% add_to_isis_load_path(&amp;quot;/usr/local/share/slsh/local-packages:/usr/local/share/slsh&amp;quot;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
This allows easy access to &amp;lt;tt&amp;gt;require(&amp;quot;isisscripts&amp;quot;)&amp;lt;/tt&amp;gt; in isis.&lt;br /&gt;
&lt;br /&gt;
== 4. Install stellar_isisscripts ==&lt;br /&gt;
&lt;br /&gt;
Get the newest version of the stellar_isisscripts from git.&lt;br /&gt;
If you have a Remeis/gitlab account and are at the Remeis observatory, do:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
git clone git@serpens.sternwarte.uni-erlangen.de:irrgang/stellar.git&lt;br /&gt;
mv stellar stellar_isisscripts&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
If you have a Remeis/gitlab account and are not at the Remeis observatory, do:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
git clone http://www.sternwarte.uni-erlangen.de/gitlab/irrgang/stellar.git&lt;br /&gt;
mv stellar stellar_isisscripts&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Otherwise ask Matti Dorsch. Then:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd stellar_isisscripts&lt;br /&gt;
make &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now compile some necessary C functions:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
cd stellar_isisscripts/slirp&lt;br /&gt;
slirp -make -lm -lgsl -lgslcblas -lpthread c_functions.h c_functions.o&lt;br /&gt;
sed -i -e 's/^CFLAGS[[:space:]]*= -g -O2/CFLAGS = -g -Ofast -Wall -Wextra/g' Makefile &lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
make&lt;br /&gt;
make test&lt;br /&gt;
rm c_functions-test.sl c_functions.o c_functions_glue.o c_functions_glue.c Makefile&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Create or modify the [[ISIS auto start (.isisrc)|.isisrc]] file in your &amp;lt;tt&amp;gt;$HOME&amp;lt;/tt&amp;gt; directory and add the following lines&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
add_to_isis_load_path(&amp;quot;/Path/To/stellar_isisscripts/share/&amp;quot;);&lt;br /&gt;
define rmy() { require(&amp;quot;stellar_isisscripts&amp;quot;); };&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To use all functions, you need the stilts tool, which is related to TOPCAT. You can install it as a package:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
sudo apt-get install stilts&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Only if this does not work:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
mkdir stilts; cd stilts&lt;br /&gt;
wget http://www.star.bris.ac.uk/~mbt/stilts/stilts.jar&lt;br /&gt;
wget 'http://www.star.bris.ac.uk/~mbt/stilts/stilts'&lt;br /&gt;
chmod +x stilts&lt;br /&gt;
sudo cp * /usr/bin/&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Without root, you can add the folder that contains stilts to your $PATH in ~/.bashrc or ~/.cshrc.&lt;br /&gt;
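For example, assuming a hypothetical download location of ~/software/stilts:&lt;br /&gt;

```shell
# Hypothetical directory where the stilts wrapper and stilts.jar were downloaded
STILTS_DIR="$HOME/software/stilts"
# make the stilts wrapper findable without root access
export PATH="$PATH:$STILTS_DIR"
```

Put the export line into your ~/.bashrc (or the csh equivalent into ~/.cshrc) to make it permanent.&lt;br /&gt;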
&lt;br /&gt;
To test if the scripts work, try for example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
isis&lt;br /&gt;
rmy;&lt;br /&gt;
s = query_photometry (&amp;quot;HZ44&amp;quot;);&lt;br /&gt;
s.print();&lt;br /&gt;
help query_photometry&lt;br /&gt;
q&lt;br /&gt;
exit;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Known issues:&lt;br /&gt;
1) On Ubuntu 22, xfig/fig2dev 3.2.8b is the default. It contains a bug that prevents correct labels. Fix this by compiling xfig/fig2dev 3.2.9 from source.&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Skivergnuegen_2024&amp;diff=3199</id>
		<title>Skivergnuegen 2024</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Skivergnuegen_2024&amp;diff=3199"/>
		<updated>2024-01-06T19:35:43Z</updated>

		<summary type="html">&lt;p&gt;Koenig: /* Aufteilung */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Remeis Skiing 2024, 13.01.24-20.01.24''' The same procedure as every year ;-) &lt;br /&gt;
The most important organizational details are recorded here. Everyone is welcome to edit; anyone without a wiki account should please contact someone who has access.&lt;br /&gt;
&lt;br /&gt;
[[File:154854 silvretta-montafon-panorama.jpeg|right|700px]]&lt;br /&gt;
&lt;br /&gt;
== General Info ==&lt;br /&gt;
&lt;br /&gt;
*Holiday apartment(s):&lt;br /&gt;
**We will again be staying in the [https://www.grandau.at/montafon/zimmer/ferienhaus-enzian Haus Enzian] of the [https://www.grandau.at/ Sporthotel Grandau].&lt;br /&gt;
**Address of the Sporthotel: Montafonerstraße 274a, 6791 St. Gallenkirch, Austria. Our accommodations are located on Türkeiweg.&lt;br /&gt;
&lt;br /&gt;
*Ski area: Montafon (Vorarlberg)&lt;br /&gt;
**Ski area map: https://winter.intermaps.com/montafon?lang=de&lt;br /&gt;
**Ski pass prices:  https://www.silvretta-montafon.at/de/onlineshop/ticket-uebersicht&lt;br /&gt;
**If you don't want to ski all 7 days: there are offers such as 5 out of 6, where such a ski pass lets you ski on any 5 days within a 6-day window.&lt;br /&gt;
&lt;br /&gt;
*Ski rental: [http://www.sportharry.at/ Sport Harry], right at the valley station.&lt;br /&gt;
&lt;br /&gt;
== Accommodation ==&lt;br /&gt;
=== Room Allocation ===&lt;br /&gt;
&lt;br /&gt;
This year the lodge has the following rooms (we are 13-15 people in total, so no beds will be left free).&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Room&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Occupants&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | 2-bed&lt;br /&gt;
| Aafia, Amy&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | 3-bed&lt;br /&gt;
| Jakob, Katya, Flo&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | 4-bed&lt;br /&gt;
| Eugenia (from Sun, single bed), Sebastian W. (double bed), Marina B. (from Tue, double bed), Ole (from Sunday)&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | 5-bed&lt;br /&gt;
| Christian H., Bastien G&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
There is also an apartment for the families with children  ;-): Eva+Steffen+Tom+Lorenz, Nela+Ohle+Sophia&lt;br /&gt;
&lt;br /&gt;
== Drivers ==&lt;br /&gt;
Drivers can sign up here and list their key details. Those who would like a ride should coordinate with the drivers and sign up as well. Please also sort out the luggage situation; if necessary, another driver can take e.g. ski equipment.&lt;br /&gt;
&lt;br /&gt;
'''Looking for a ride: '''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Driver&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | # Seats&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Arrival&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Passengers (arrival)&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Departure&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Passengers (departure)&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Ski transport&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Comment&lt;br /&gt;
|-&lt;br /&gt;
|possibly Christian H.? &lt;br /&gt;
|&lt;br /&gt;
|Sat, Fürth&lt;br /&gt;
|&lt;br /&gt;
|Sat, Fürth&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|no car, but could borrow one?&lt;br /&gt;
|-&lt;br /&gt;
|Marina B.&lt;br /&gt;
|3 excluding driver&lt;br /&gt;
|Tue, Geretsried, via Munich&lt;br /&gt;
|&lt;br /&gt;
|Sat, Erlangen or Geretsried&lt;br /&gt;
|Sebastian W. ?&lt;br /&gt;
|yes&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|Flo&lt;br /&gt;
|3 including driver&lt;br /&gt;
|Sat from Nuremberg via Munich&lt;br /&gt;
|Tobi&lt;br /&gt;
|Sat to Munich&lt;br /&gt;
|Eugenia (not sure)&lt;br /&gt;
|yes&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Bulletin Board ==&lt;br /&gt;
&lt;br /&gt;
=== Food ===&lt;br /&gt;
Here is the meal plan, largely carried over from previous years. Since we are a lot of people, we should again roughly plan in advance what we want to cook. We can buy some things in Germany and bring them along, but there is also a supermarket nearby. General items, such as spices or spreads for breakfast rolls, could also be brought from home.&lt;br /&gt;
&lt;br /&gt;
If you have cooking suggestions or other ideas, feel free to add and comment on them below. Keep in mind that preparation should be (relatively) simple. Since the list has worked well in recent years, we are keeping it, but the order/days can certainly still be shuffled around. A few things will sometimes still have to be bought at the Spar, since the fridge only has limited space.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Meal plan:'''&lt;br /&gt;
*Saturday: Spaghetti Bolognese + veg. Bolognese&lt;br /&gt;
*Sunday: Käsespätzle&lt;br /&gt;
*Monday: Burritos&lt;br /&gt;
*Tuesday: Risotto&lt;br /&gt;
*Wednesday: Goulash&lt;br /&gt;
*Thursday: Potatoes with herb quark&lt;br /&gt;
*Friday: Dal with rice (or curry with sweet potatoes)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Shopping list from 2022 (the year before last): https://docs.google.com/document/d/166b_mIXxmoAgcv1upIGremd_36E1fmwwxsQADP2uEv8/&lt;br /&gt;
&lt;br /&gt;
@Christian: either bring a vegan option yourself or add it here (if it is easy to get)&lt;br /&gt;
&lt;br /&gt;
=== Alcohol/Party ===&lt;br /&gt;
&lt;br /&gt;
Stiegl (Johannes?)&lt;br /&gt;
&lt;br /&gt;
=== Games ===&lt;br /&gt;
&lt;br /&gt;
Shared game nights are always fun. So if you own any board games, feel free to bring them along! To get an overview, please also list them here:&lt;br /&gt;
&lt;br /&gt;
''* Secret Hitler (Johannes?)''&lt;br /&gt;
&lt;br /&gt;
6 nimmt, Uno, Kniffel (Marina B.)&lt;br /&gt;
&lt;br /&gt;
=== Miscellaneous ===&lt;br /&gt;
&lt;br /&gt;
I (Marina B.) really enjoy cross-country skiing (skating style). Can anyone else get excited about cross-country? :)&lt;br /&gt;
&lt;br /&gt;
I will bring my cross-country equipment and would be delighted if one or two of you tried cross-country skiing, whether classic or skating.&lt;br /&gt;
&lt;br /&gt;
=== Alternative Activities ===&lt;br /&gt;
  *  Sledding (http://www.montafon.at/de/urlaubswelten/echte_naturliebhaber/rodeln)&lt;br /&gt;
  *  Snowshoe hiking (http://www.montafon.at/schneeschuhwanderungen)&lt;br /&gt;
  *  Cross-country skiing (https://www.montafon.at/de/Bergerlebnisse/Schnee/Langlaufen)&lt;br /&gt;
  *  Thermal spa (http://www.montafon.at/schwimmen), e.g. http://www.aqua-dome.at/de (about 130 km away!)&lt;br /&gt;
&lt;br /&gt;
[[Category:Internal]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Remeis_English_Checklist&amp;diff=3143</id>
		<title>Remeis English Checklist</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Remeis_English_Checklist&amp;diff=3143"/>
		<updated>2023-12-01T10:37:39Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====== The Remeis English Checklist ======&lt;br /&gt;
&lt;br /&gt;
(by J. Wilms and K. Pottschmidt)&lt;br /&gt;
&lt;br /&gt;
'''''First of all and most importantly:'''''&lt;br /&gt;
&lt;br /&gt;
* did you read the instructions to authors of the journal? If you are working on a thesis, did you read those of [http://www.aanda.org/doc_journal/instructions/aadoc.pdf Astronomy and Astrophysics]?&lt;br /&gt;
* did you read appendix A of the [https://journals.aps.org/files/rmpguide.pdf instructions to authors] of Rev Mod Phys?&lt;br /&gt;
&lt;br /&gt;
==== Punctuation ====&lt;br /&gt;
&lt;br /&gt;
* did you remove all commas before &amp;quot;that&amp;quot;?&lt;br /&gt;
* did you end your footnotes and captions with a full stop (&amp;quot;.&amp;quot;)?&lt;br /&gt;
* did you make sure that your use of &amp;quot;data&amp;quot; is correct and uses plural verbs?&lt;br /&gt;
* did you make sure that you do not have a &amp;quot;:&amp;quot; anywhere before an equation, but that your equations are seen as part of your sentences?&lt;br /&gt;
* did you make sure that you have commas surrounding &amp;quot;i.e.&amp;quot; and &amp;quot;e.g.&amp;quot;?&lt;br /&gt;
&lt;br /&gt;
==== Spelling and Word usage ====&lt;br /&gt;
&lt;br /&gt;
* do you consistently use either British or American spelling?&lt;br /&gt;
* did you run a spell checker over your manuscript? For TeX, use &amp;quot;ispell&amp;quot; or the built-in spell checker in emacs.&lt;br /&gt;
* did you make sure not to use country prefixes in addresses in the author list?&lt;br /&gt;
* did you avoid passive voice as much as possible?&lt;br /&gt;
* did you make sure that you are ''not'' using &amp;quot;The found results are...&amp;quot; and similar German constructs in your text?&lt;br /&gt;
* did you make sure that everything in your text that is not your original result is accompanied by proper citations?&lt;br /&gt;
* did you make sure that you distinguish between &amp;quot;estimate&amp;quot; and &amp;quot;estimation&amp;quot; by replacing all &amp;quot;estimation&amp;quot; with &amp;quot;estimate&amp;quot;?&lt;br /&gt;
* did you replace all uses of &amp;quot;exemplary&amp;quot; by &amp;quot;example&amp;quot;?&lt;br /&gt;
* did you use &amp;quot;short&amp;quot; for length intervals and &amp;quot;brief&amp;quot; for time intervals? (but note that &amp;quot;short of duration&amp;quot; is correct) &lt;br /&gt;
* did you replace all uses of &amp;quot;the actual value&amp;quot; by &amp;quot;the real value&amp;quot;? (if you are German, &amp;quot;actual&amp;quot; does not mean &amp;quot;aktuell&amp;quot;!)&lt;br /&gt;
* did you remove all uses of &amp;quot;hence&amp;quot; and &amp;quot;thereby&amp;quot;?&lt;br /&gt;
* did you make sure that you use &amp;quot;however&amp;quot; as sparingly as possible?&lt;br /&gt;
* did you make sure that you did not use &amp;quot;the equation reads..&amp;quot;, but rather used &amp;quot;the equation is...&amp;quot; or &amp;quot;the equation is given by...&amp;quot;?&lt;br /&gt;
* did you avoid split infinitives? (&amp;quot;to boldly go...&amp;quot; is wrong; yes, in many cases split infinitives are ok in current English, but they tend to be so often used wrongly by non-native speakers that it is best to avoid them)&lt;br /&gt;
* did you make sure that you distinguish properly between &amp;quot;this&amp;quot; and &amp;quot;these&amp;quot;?&lt;br /&gt;
* did you make use of the &amp;quot;Oxford comma&amp;quot;, i.e., do you have a comma before &amp;quot;and&amp;quot; in lists?&lt;br /&gt;
* did you use &amp;quot;i.e.&amp;quot; and &amp;quot;e.g.&amp;quot; correctly, i.e., using &amp;quot;i.e.&amp;quot; for a specific clarification or definition and &amp;quot;e.g.&amp;quot; where you would otherwise use &amp;quot;for example&amp;quot;?&lt;br /&gt;
* did you use the IAU recommended year - month - day sequences (2016 March 15)?&lt;br /&gt;
* did you make sure that you did not use contractions such as &amp;quot;didn't&amp;quot; or &amp;quot;you're&amp;quot;?&lt;br /&gt;
* did you replace &amp;quot;cf.&amp;quot; with &amp;quot;see&amp;quot; everywhere since you know that &amp;quot;cf.&amp;quot; means &amp;quot;compare&amp;quot;?&lt;br /&gt;
* did you ensure that you use &amp;quot;opportunity&amp;quot; where in German you would be using &amp;quot;Chance&amp;quot; or &amp;quot;Gelegenheit&amp;quot; (and did not use &amp;quot;chance&amp;quot;...)?&lt;br /&gt;
* did you make sure that the reader will understand what thing you refer to when using &amp;quot;it&amp;quot; rather than naming it? &lt;br /&gt;
* did you make sure that all uses of &amp;quot;this&amp;quot; are followed by the object you are referring to? &lt;br /&gt;
* did you use &amp;quot;led&amp;quot; rather than &amp;quot;lead&amp;quot; when using the past tense of the verb &amp;quot;to lead&amp;quot;?&lt;br /&gt;
* did you reread the manuscript for internal consistency after you added comments from your coauthors?&lt;br /&gt;
* did you make sure that your sentences are short (rule of thumb: if a sentence goes over more than three lines it is probably too long)?&lt;br /&gt;
* did you check that you did not combine two sentences that could be separate sentences with &amp;quot;and&amp;quot;?&lt;br /&gt;
* did you avoid abbreviations as much as possible and only used them when they are really, really common (HST, AGN, XMM,...)?&lt;br /&gt;
* did you check that you defined all abbreviations that you used at their first usage? (&amp;quot;...Active Galactic Nucleus (AGN)...&amp;quot; ''not'' &amp;quot;...AGN (Active Galactic Nucleus)...&amp;quot; )&lt;br /&gt;
&lt;br /&gt;
==== Citations ====&lt;br /&gt;
&lt;br /&gt;
* did you add the journal to all publications where you list the arXiv-reference and not just blindly copy the erroneous ADS bibtex entry?&lt;br /&gt;
* did you make sure that you distinguished between arXiv references where a paper is submitted and references where a paper is already accepted by checking the paper author's comment on the arXiv-page for that article?&lt;br /&gt;
* did you remove the page number from all ATEL references downloaded from ADS and change the journal name to &amp;quot;Astron. Tel.&amp;quot; or &amp;quot;ATEL&amp;quot;? (and similarly for IAU telegrams)&lt;br /&gt;
* did you add editors and the title of the conference publication to all conference publications?&lt;br /&gt;
* did you add the publisher and place information (city only) to all books, conference publications, and other book-like publications that you are citing?&lt;br /&gt;
* did you check that your references are correct in that you are using ''\citet{biblabel}'' for references in the text and ''\citep{biblabel}'' for references in parentheses?&lt;br /&gt;
* did you make sure that none of your ''\citet{..}'' commands refer to more than one biblabel?&lt;br /&gt;
* [added by O. Koenig: did you make sure all SPIE references have an address? (you may want to follow this procedure: go to NASA ADS to get the BibTeX entry (&amp;lt;code&amp;gt;@inproceedings&amp;lt;/code&amp;gt;!), put the content of &amp;quot;booktitle&amp;quot; into &amp;quot;series&amp;quot;, put &amp;lt;code&amp;gt;booktitle = procspie&amp;lt;/code&amp;gt; (there should be a &amp;lt;code&amp;gt;@STRING{procspie = &amp;quot;Proc. SPIE.&amp;quot;}&amp;lt;/code&amp;gt; in &amp;lt;code&amp;gt;mnemonic.bib&amp;lt;/code&amp;gt;), go to the SPIE webpage of the paper, get the address, and insert it by hand. A MWE could be &amp;lt;code&amp;gt;@INPROCEEDINGS{Doehring2015a, author = {{D{\&amp;quot;o}hring}, T. and {...}}, title = &amp;quot;{The challenge of developing thin mirror shells for future x-ray telescopes}&amp;quot;, series = {Optical Systems Design 2015: Optical Fabrication, Testing, and Metrology V}, year = 2015, editor = {{Duparr}, A. and {Geyl}, R.}, booktitle = procspie, volume = {9628}, address = {Jena, Germany}, pages = {962809}}&amp;lt;/code&amp;gt;)]&lt;br /&gt;
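For readability, the one-line MWE in the checklist item above can be laid out as a regular BibTeX record. This simply reformats that example; the field values are illustrative and should still be checked against ADS and the SPIE page:

```bibtex
% Reformatted version of the inline MWE (illustrative values):
@STRING{procspie = "Proc. SPIE"}

@INPROCEEDINGS{Doehring2015a,
  author    = {{D{\"o}hring}, T. and {...}},
  title     = "{The challenge of developing thin mirror shells
               for future x-ray telescopes}",
  series    = {Optical Systems Design 2015: Optical Fabrication,
               Testing, and Metrology V},
  year      = 2015,
  editor    = {{Duparr}, A. and {Geyl}, R.},
  booktitle = procspie,        % expands via the @STRING above
  volume    = {9628},
  address   = {Jena, Germany}, % looked up by hand on the SPIE page
  pages     = {962809}
}
```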
&lt;br /&gt;
==== Typesetting (mainly in TeX) ====&lt;br /&gt;
&lt;br /&gt;
* did you check for missing spaces between values and units?&lt;br /&gt;
* did you make sure that all scientific units are typeset in &amp;lt;code&amp;gt;\mathrm&amp;lt;/code&amp;gt;?&lt;br /&gt;
* did you make sure not to use constructs such as &amp;lt;code&amp;gt;$\mathrm{m}/\mathrm{s}$&amp;lt;/code&amp;gt; by using &amp;lt;code&amp;gt;$\mathrm{m}\,\mathrm{s}^{-1}$&amp;lt;/code&amp;gt; instead?&lt;br /&gt;
* did you make sure that almost all of your error bars are rounded up to only one significant digit rather than following the DIN-norm (which is not applied in astronomical journals)?&lt;br /&gt;
* did you make sure that you are not using any positioning commands for the table or figure environment such as &amp;lt;code&amp;gt;\begin{table}[htpb]&amp;lt;/code&amp;gt;?&lt;br /&gt;
* did you make sure that your tables have captions above the table, and figures have captions below the figure or next to it (where allowed by the style)?&lt;br /&gt;
* did you make sure that you use empty lines to denote the start of a new paragraph rather than the ''\\''-command? (use &amp;lt;code&amp;gt;\setlength{\parindent}{0pt}&amp;lt;/code&amp;gt; if you do not want to indent paragraphs)&lt;br /&gt;
* did you make sure that there are no paragraph endings above or below &amp;lt;code&amp;gt;\begin{equation}...\end{equation}&amp;lt;/code&amp;gt; by ensuring that there is no empty line above or below the ''equation''-environment?&lt;br /&gt;
* did you make sure that you are not using &amp;lt;code&amp;gt;$\frac{a}{b}$&amp;lt;/code&amp;gt; in normal text, but use &amp;lt;code&amp;gt;$a/b$&amp;lt;/code&amp;gt; instead?&lt;br /&gt;
* did you make sure that you are not using the &amp;lt;code&amp;gt;displaymath&amp;lt;/code&amp;gt;-environment and that all equations are numbered?&lt;br /&gt;
* did you make sure that all of your sections, subsections, paragraphs and so on are numbered?&lt;br /&gt;
* did you avoid any and all uses of &amp;lt;code&amp;gt;\bf&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;\it&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;\sl&amp;lt;/code&amp;gt;, or &amp;lt;code&amp;gt;\em&amp;lt;/code&amp;gt; and use the proper commands &amp;lt;code&amp;gt;\textbf&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;\textit&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;\textsl&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;\emph&amp;lt;/code&amp;gt; instead?&lt;br /&gt;
* did you use the en-dash of TeX for ranges, even if they occur in math, by using &amp;lt;code&amp;gt;--&amp;lt;/code&amp;gt; in text mode rather than a minus sign? (that is, did you typeset a range in an equation as &amp;lt;code&amp;gt;$3x$--$5x$&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;$3x\mbox{--}5x$&amp;lt;/code&amp;gt; rather than, erroneously, &amp;lt;code&amp;gt;$3x-5x$&amp;lt;/code&amp;gt;?)&lt;br /&gt;
* did you correctly use the minus-sign and dashes in astronomical source names, where the name contains coordinates and the &amp;lt;q&amp;gt;dash&amp;lt;/q&amp;gt; really is a southern declination or Galactic latitude, that is, did you typeset &amp;lt;code&amp;gt;Her X-1&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;LMC X-3&amp;lt;/code&amp;gt;, but &amp;lt;code&amp;gt;GX\,339$-$4&amp;lt;/code&amp;gt; or &amp;lt;code&amp;gt;IGR J16318$-$4848&amp;lt;/code&amp;gt; (and as a really difficult one: &amp;lt;code&amp;gt;MCG$-$6-30-15&amp;lt;/code&amp;gt;)?&lt;br /&gt;
* did you make sure to typeset hydrogen equivalent columns as &amp;lt;code&amp;gt;$N_\mathrm{H}$&amp;lt;/code&amp;gt; rather than $n_H$ or $n_\mathrm{H}$? (note: in astronomy, $n$ denotes a particle density, so it has units of particles per cubic centimeter, while $N$ is a column density with units of particles per square centimeter; a certain analysis program uses &amp;lt;code&amp;gt;nH&amp;lt;/code&amp;gt; for this parameter, but this does not mean that $n$ should be used in papers).&lt;br /&gt;
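Several of the typesetting rules above can be collected into one short LaTeX sketch (the numbers and values are illustrative only; the source names are the ones quoted in the checklist):

```latex
% Units: thin space between value and unit, units upright in \mathrm,
% negative exponents instead of a slash:
$v = 200\,\mathrm{km}\,\mathrm{s}^{-1}$

% Ranges use the text-mode en-dash, never a math minus:
fluxes of $3$--$5\,\mathrm{mJy}$

% Hydrogen equivalent column: capital $N$, upright subscript:
$N_\mathrm{H} = 2\times10^{22}\,\mathrm{cm}^{-2}$

% Source names: hyphens vs. explicit minus signs for declinations:
Her X-1, GX\,339$-$4, IGR J16318$-$4848
```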
&lt;br /&gt;
[[Category:Current Members]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Guided_Tours_and_Public_Outreach&amp;diff=3052</id>
		<title>Guided Tours and Public Outreach</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Guided_Tours_and_Public_Outreach&amp;diff=3052"/>
		<updated>2023-11-10T14:59:36Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;On this page you can find all the information needed to guide a tour through our observatory for the general public.&lt;br /&gt;
&lt;br /&gt;
At least, that's the aim of this page, so please add information if you think it's missing!&lt;br /&gt;
&lt;br /&gt;
= Platform =&lt;br /&gt;
&lt;br /&gt;
'''The following is important:'''&amp;lt;br&amp;gt;&lt;br /&gt;
Tours are officially '''organized''' by the [http://foerderverein-sternwarte-bamberg.de Foerderverein of the observatory], but '''operated''' by the observatory staff to be insured through the university!&lt;br /&gt;
&lt;br /&gt;
The main platform for the general public is&amp;lt;br&amp;gt;[http://foerderverein-sternwarte-bamberg.de/fuehrungen http://foerderverein-sternwarte-bamberg.de/fuehrungen].&lt;br /&gt;
&lt;br /&gt;
Here, tours can be ordered in two different ways:&lt;br /&gt;
* single persons and smaller groups can directly book public and regular guided tours via our webform at&amp;lt;br&amp;gt;https://www.sternwarte.uni-erlangen.de/guided-tours/&lt;br /&gt;
* larger groups may directly ask for a tour at a separate date either by email to&amp;lt;br&amp;gt;[mailto:fuehrungen@foerderverein-sternwarte-bamberg.de fuehrungen@foerderverein-sternwarte-bamberg.de]&amp;lt;br&amp;gt;or by phone. The email address is a forwarding to a certain person of the mailing list (see below).&lt;br /&gt;
&lt;br /&gt;
The source code of the webform for booking a public tour can be found in the GIT repository at&amp;lt;br&amp;gt;&lt;br /&gt;
[https://www.sternwarte.uni-erlangen.de/gitlab/webmaster/guided-tours https://www.sternwarte.uni-erlangen.de/gitlab/webmaster/guided-tours].&amp;lt;br&amp;gt;&lt;br /&gt;
All pushes to the master branch are automatically deployed at the official URL above!&lt;br /&gt;
&lt;br /&gt;
== Mailing List / Current &amp;quot;staff&amp;quot; ==&lt;br /&gt;
&lt;br /&gt;
All people interested in giving guided tours are member of the mailing list&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:astro-fuehrungen@lists.fau.de astro-fuehrungen@lists.fau.de]&amp;lt;br&amp;gt;&lt;br /&gt;
Please '''do not share''' this address with anybody outside of the observatory!&lt;br /&gt;
If you're not on the list but interested in giving tours, just ask Max to add you :-)&lt;br /&gt;
&lt;br /&gt;
New tours without a guide yet are usually announced via this list. See [[Mailing lists]] for the list administrator.&lt;br /&gt;
&lt;br /&gt;
In charge of organization:&lt;br /&gt;
* Katrin&lt;br /&gt;
&lt;br /&gt;
Currently, the following people give tours on a regular basis:&lt;br /&gt;
* Steven&lt;br /&gt;
* Christian&lt;br /&gt;
* Aafia&lt;br /&gt;
* Amy&lt;br /&gt;
* Julia&lt;br /&gt;
* Philipp T.&lt;br /&gt;
* Alexey&lt;br /&gt;
* Katharina&lt;br /&gt;
&lt;br /&gt;
= Tip policy =&lt;br /&gt;
&lt;br /&gt;
The group is informed beforehand about the tipping policy. &lt;br /&gt;
For guides, the following applies to the money you receive during the tour:&lt;br /&gt;
During the week (Monday to Friday) the donation factor, which goes directly to the Foerderverein, is 1/3 (33%); the rest (2/3) is your personal tip.&lt;br /&gt;
For weekend tours (Saturday, Sunday) the donation factor is 1/4 (25%).&lt;br /&gt;
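The split is simple arithmetic; a short Python sketch (the function name is hypothetical, not part of any observatory tooling) makes the weekday/weekend factors explicit:

```python
def split_tip(amount_eur, weekend=False):
    """Split tour money into (donation to the Foerderverein, guide's tip).

    Weekday tours donate 1/3 of the amount, weekend tours 1/4,
    per the policy above.
    """
    factor = 0.25 if weekend else 1 / 3
    donation = round(amount_eur * factor, 2)
    return donation, round(amount_eur - donation, 2)

# Example: 60 EUR collected on a Wednesday
print(split_tip(60))                # (20.0, 40.0)
print(split_tip(60, weekend=True))  # (15.0, 45.0)
```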
&lt;br /&gt;
Make sure to hand in the donations immediately after your tour. Put the money on Ingo's desk with a note stating which tour you gave, how much you received, and how many people were there.&lt;br /&gt;
&lt;br /&gt;
= Offering Tours =&lt;br /&gt;
&lt;br /&gt;
At some point you may be asked to give a tour, or whether it is possible to have one on a specific date. The latter might occur if Edith is out of her office and her telephone is redirected. In those cases you should know ''and'' check the following things:&lt;br /&gt;
* '''Who is asking''' for a tour? &amp;lt;br&amp;gt; If you're on the phone ask for a name and write down the phone number where the person can be called!&lt;br /&gt;
* The '''number of people should be between 15 and 25'''. &amp;lt;br&amp;gt; Of course, the boundary conditions depend on the person guiding the tour. In case of doubt, ask the guide or, if none has been found yet, simply don't promise that there will be a tour! In that case you may offer to look for a guide anyway (and call the person back), but again, don't make any promises! A list of people offering tours can be found below.&lt;br /&gt;
* What is the '''average age of the persons attending''' the tour? &amp;lt;br&amp;gt; That's an important point since the content of a tour differs between children and adults, of course. Furthermore some guides have favorite age groups!&lt;br /&gt;
* Carefully check the '''date and time of the tour'''!&lt;br /&gt;
** A guide is most likely to be found if the weekday of the tour is '''between Monday and Thursday'''. Friday might work if it is before noon; otherwise people might already be on their way into the weekend. In all other cases you may offer to look for a guide anyway (see the point about the group size above).&lt;br /&gt;
** Make sure that there is '''no tour scheduled already'''! For that look into our online calendar (the link can be found below) and on the whiteboard in the copying machine office.&lt;br /&gt;
** Related to the last point, ensure that there is '''no lab course''' running on that date! The lab course is also marked in our online calendar.&lt;br /&gt;
** Usually, a tour will take '''about 1.5 to 2 hours'''.&lt;br /&gt;
* Clarify the '''costs for a tour''' beforehand! &amp;lt;br&amp;gt; Unfortunately, we are '''not allowed to ask for money''' for giving a tour! The only way to receive money is to tell the people that they are welcome to '''donate money to the [http://foerderverein-sternwarte-bamberg.de/ Foerderverein der Sternwarte]'''. In nearly every case the people or the one asking for a tour will pay something. A rule of thumb is that '''every person donates 3 EUR''' (for children, 2 EUR is alright as well). &amp;lt;br&amp;gt; __For your ears only__: 1 EUR per attendee of the donated money will be transferred to the Förderverein (the current treasurer is Ingo; leave the money on his desk with a short note stating the number of people and what kind of tour this was), the rest is for the guide! A contact phone number will be available; it is best to call the contact person before the tour to pass on this information.&lt;br /&gt;
* '''Observing with our telescopes''' is possible, of course, but only at '''night and in cloudless weather'''! No kidding, some people are not aware of these facts... &amp;lt;br&amp;gt; (&amp;quot;The day is too bright to see stars.&amp;quot;, &amp;quot;I'm sorry, we can't look through the clouds.&amp;quot;, &amp;quot;The weather forecast is too uncertain, we can't promise.&amp;quot;) Please check if someone else has already signed up for one of the telescopes on the day/night of the guided tour! '''IMPORTANT:''' Be aware that you are not allowed to operate the telescopes on your own; there ALWAYS has to be at least a second person around. To ensure that, you either have to have a second person from the observatory present (at least in the main building) or you have to set up AND disassemble everything while the people from the tour are there. Otherwise no observations are allowed to take place!&lt;br /&gt;
* If all the above points are clarified, '''mark the tour''' &amp;lt;br&amp;gt; in our [https://www.sternwarte.uni-erlangen.de/calendar/ online calendar] and on the whiteboard in the copying machine office. You have to be at a machine at the observatory to have access to the online calendar (more precisely: your machine has to have an internal IP address). &amp;lt;br&amp;gt; The entry in the online calendar, besides the date and time, should include: for whom the tour is, the group size, the contact person and phone number, and the name of the guide giving the tour. &amp;lt;br&amp;gt; The whiteboard template is as follows: &amp;quot;date, for whom the tour is, time, guide&amp;quot;&lt;br /&gt;
&lt;br /&gt;
= Giving a tour =&lt;br /&gt;
&lt;br /&gt;
Each tour has several stations where you can show things or talk to the people. Useful information and experiences about them are listed below. It might also be helpful to read the article about [[intern:popular_science:start|popular science]] in case you want to, or have to (if somebody asks), explain scientific aspects.&lt;br /&gt;
&lt;br /&gt;
== Preparation ==&lt;br /&gt;
&lt;br /&gt;
'''Before the tour'''&lt;br /&gt;
* Shut the doors of all offices in the main building (at least on the ground level) for several reasons: people are not allowed to walk in there and some of us are still working there during tours!&lt;br /&gt;
* Remove the northern hemisphere from the model of the inner solar system located in the entrance hall (you may put it into the Knigge room)&lt;br /&gt;
* Switch on the lights you need:&lt;br /&gt;
** entrance hall&lt;br /&gt;
** meteorite showcase (switch is on the right next to it)&lt;br /&gt;
** hallway to the meridian building; there are multiple switches&lt;br /&gt;
*** main light: right next to the door of the hallway or the backdoor of the main building (labeled &amp;quot;Meridian&amp;quot;)&lt;br /&gt;
*** info panels: on the opposite wall to the door next to the blue cupboard&lt;br /&gt;
*** showcases and Blinkkomparator: in the cutout box up the few stairs on the right wall close to the library, labeled &amp;quot;Linke Steckdosen&amp;quot;&lt;br /&gt;
* In case of observations&lt;br /&gt;
** make sure that you are '''never''' alone in the domes! There always have to be at least two people present.&lt;br /&gt;
** carry the needed oculars into the domes (for observations with the naked eye, the 40cm telescope is strongly recommended)&lt;br /&gt;
** do __not__ open the domes (first, it might start raining while nobody is there, and it's more fun for you to let the people do it)&lt;br /&gt;
** do __not__ remove the dust covers of the telescopes (wait until the dome has been opened)&lt;br /&gt;
** read the [[remeis:start|guide how to operate the telescopes and mountings]]&lt;br /&gt;
** after observing: shut down the telescope mounting, put the dust covers back on and close the domes&lt;br /&gt;
* In case of showing nice pictures&lt;br /&gt;
** make sure enough chairs are available&lt;br /&gt;
** switch on the beamer in the library, log into the machine and start your presentation, picture-viewer, ...&lt;br /&gt;
 &lt;br /&gt;
'''After the tour'''&lt;br /&gt;
* Switch the lights off&lt;br /&gt;
* Logout of the used machines and switch the beamer off&lt;br /&gt;
* Put on the dust covers of the telescopes in the hallway&lt;br /&gt;
* Protect the photo plates on the Blinkkomparator from getting dusty&lt;br /&gt;
* '''Move the entry from the whiteboard''' in the copying machine office '''to the list attached to the pin-board''' next to it&lt;br /&gt;
&lt;br /&gt;
== Material and Presentations ==&lt;br /&gt;
*Collected information valuable for guided tours: &amp;lt;br&amp;gt; ''/data/media/presentations_for_guided_tours'' and ''/userdata/data/beuchert/work/teaching/guidedtour/presentation/guidedtour.pdf''&lt;br /&gt;
&lt;br /&gt;
*A merger of (picture/video) presentations given at guided tours: &amp;lt;br&amp;gt; ''/userdata/data/beuchert/work/teaching/guidedtour/presentation/guidedtour.odp'' &lt;br /&gt;
&lt;br /&gt;
*and according videos in  &amp;lt;br&amp;gt; ''/userdata/data/beuchert/work/teaching/guidedtour/presentation/videos/''&lt;br /&gt;
&lt;br /&gt;
*Research and activities at Remeis from the radio to the gamma-rays (Tobi Beuchert): &amp;lt;br&amp;gt; ''/data/media/presentations_for_guided_tours/Andrea_tour_material/observations_allwavelength_remeisresearch_beuchert.pdf''&lt;br /&gt;
&lt;br /&gt;
*A presentation showing the astrophotography done at the observatory: &amp;lt;br&amp;gt; ''/data/media/slideshow_astrophotographie/optische_Quellen.pdf''&lt;br /&gt;
&lt;br /&gt;
*A video tour with Joern: &amp;lt;br&amp;gt; ''/data/media/presentations_for_guided_tours/Joerns_example_tour''&lt;br /&gt;
&lt;br /&gt;
*A video tour with Uli (only hallway): &amp;lt;br&amp;gt; ''/data/media/presentations_for_guided_tours/Ulis_example_tour_hall_2019''&lt;br /&gt;
&lt;br /&gt;
*A complete tour by Sebastian Müller: &amp;lt;br&amp;gt; ''/data/media/presentations_for_guided_tours/Sebastians_example_tour''&lt;br /&gt;
&lt;br /&gt;
*Presentations for tours by Andrea Gokus: &amp;lt;br&amp;gt; ''/data/media/presentations_for_guided_tours/Andrea_tour_material'' &amp;lt;br&amp;gt; Contains presentations about the solar system (solar_system.pdf), neutrinos (neutrino_talk.pdf), and an overview of different object types (impressions_universe.pdf)&lt;br /&gt;
&lt;br /&gt;
*Andrea's tour notes: &amp;lt;br&amp;gt; ''/data/media/presentations_for_guided_tours/Andrea_tour_material/Sternwarten_Fuehrungsnotizen.pdf''&lt;br /&gt;
&lt;br /&gt;
*Amy's tour notes: &amp;lt;br&amp;gt; ''/data/media/presentations_for_guided_tours/Guided_Tour_Information_Amy.pdf''&lt;br /&gt;
&lt;br /&gt;
*Tobi's tour notes: &amp;lt;br&amp;gt; ''/userdata/data/beuchert/work/teaching/guidedtour/presentation/guidedtour.pdf''&lt;br /&gt;
&lt;br /&gt;
*Information on clocks: &amp;lt;br&amp;gt; ''/data/media/history/2009-01_KlassikUhren_Poehlmann_Max_Ort.pdf''&lt;br /&gt;
&lt;br /&gt;
*Information on history of the observatory and the collection: &amp;lt;br&amp;gt; ''/data/media/history/AstronomischeSammlung.pdf''&lt;br /&gt;
&lt;br /&gt;
*History of the observatory (Litten): &amp;lt;br&amp;gt; ''/data/media/history/Litten-1.pdf'' &amp;lt;br&amp;gt; ''/data/media/history/Litten-2.pdf''&lt;br /&gt;
&lt;br /&gt;
*Last will of Dr. Karl Remeis: &amp;lt;br&amp;gt; ''/data/media/history/Testament_Karl_Remeis.pdf''&lt;br /&gt;
&lt;br /&gt;
*Die Remeis-Sternwarte zu Bamberg 1889-1939, E. Zinner: &amp;lt;br&amp;gt; [https://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1939VeBam...4....1Z&amp;amp;amp;data_type=PDF_HIGH&amp;amp;amp;whole_paper=YES&amp;amp;amp;type=PRINTER&amp;amp;amp;filetype=.pdf ADS]&lt;br /&gt;
&lt;br /&gt;
== Picture presentation ==&lt;br /&gt;
&lt;br /&gt;
There are different modes for the beamer (e.g. presentation, videos, graphics, etc.), which result in a different appearance of the images. Make sure to select a mode in which the pictures appear at an appropriate brightness on the screen. (I think &amp;quot;graphics&amp;quot; was a good one.)&lt;br /&gt;
&lt;br /&gt;
== Telescopes in the domes ==&lt;br /&gt;
It's nice to show the main mirror to the people so that they can see themselves in it. To do so, just turn the telescope down in declination. This is only possible for the 40cm, as the 50cm mount has a mechanical stop.&lt;br /&gt;
&lt;br /&gt;
Sometimes people want to know what we do with the tiny little telescopes attached to the side of the big ones. Those are guiding scopes. You can attach a camera to them and let it take pictures of one star. If the star moves (which it shouldn't), a correction signal is sent to the mount to compensate for the movement. Stars can &amp;quot;move&amp;quot; in an image, for example, due to imprecision of the mount. &lt;br /&gt;
&lt;br /&gt;
The (dusty) plate in front of the 40cm mirror is called a &amp;quot;Schmidt plate&amp;quot;. It is used to improve the imaging of the telescope and to correct several aberrations of the mirrors, such as coma and field curvature. As it is very sensitive to scratching, it is not cleaned and is left dusty. The dust has hardly any effect on the images, as it lies far outside the focal plane. The optical design is similar to a Ritchey-Chrétien type in terms of performance, but has a spherical main mirror, an aspherical secondary mirror, and a lens corrector (the Schmidt plate). The price of the telescope and the mount together is roughly 25,000 EUR.&lt;br /&gt;
&lt;br /&gt;
The 50cm doesn't have a Schmidt plate but a lens corrector placed in the focuser (OAZ). It has an elliptical main mirror and a spherical secondary mirror (Corrected Dall-Kirkham). The price of the telescope and the mount is roughly 65,000 EUR. The secondary mirror is, compared to the size of the main mirror, significantly larger than that of the 40cm. This results in better light-gathering power and a larger flat field, which is important for taking images. We can achieve tracking accuracies of about 10 minutes without guiding.&lt;br /&gt;
&lt;br /&gt;
== Monthly public tours ==&lt;br /&gt;
NOTE: even if there is already someone listed for a tour, please enter your availability&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date &lt;br /&gt;
! Language&lt;br /&gt;
! Time&lt;br /&gt;
! Type&lt;br /&gt;
! Available Guides&lt;br /&gt;
! Backup&lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2023/11/14 &lt;br /&gt;
| German&lt;br /&gt;
| 6 pm &lt;br /&gt;
| Tour for children &lt;br /&gt;
| &lt;br /&gt;
| Katharina&lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2023/11/28&lt;br /&gt;
| German&lt;br /&gt;
| 6 pm&lt;br /&gt;
| normal tour&lt;br /&gt;
| Katharina&lt;br /&gt;
| Philipp&lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2023/12/05&lt;br /&gt;
| English&lt;br /&gt;
| 6 pm&lt;br /&gt;
| normal tour&lt;br /&gt;
| Federico&lt;br /&gt;
| Philipp, Katharina&lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2023/12/12&lt;br /&gt;
| German&lt;br /&gt;
| 6 pm&lt;br /&gt;
| Tour for children&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2023/12/19&lt;br /&gt;
| German&lt;br /&gt;
| 6 pm&lt;br /&gt;
| normal tour&lt;br /&gt;
| &lt;br /&gt;
| Philipp&lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2024/01/09&lt;br /&gt;
| English&lt;br /&gt;
| 6 pm&lt;br /&gt;
| normal tour&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2024/01/19&lt;br /&gt;
| German&lt;br /&gt;
| 6 pm&lt;br /&gt;
| Tour for children (option 1)&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2024/01/23&lt;br /&gt;
| German&lt;br /&gt;
| 6 pm&lt;br /&gt;
| Tour for children (option 2)&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2024/01/30&lt;br /&gt;
| German&lt;br /&gt;
| 6 pm&lt;br /&gt;
| normal tour&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2024/02/06&lt;br /&gt;
| English&lt;br /&gt;
| 6 pm&lt;br /&gt;
| normal tour&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2024/02/13&lt;br /&gt;
| German&lt;br /&gt;
| 6 pm&lt;br /&gt;
| Tour for children&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2024/02/27&lt;br /&gt;
| German&lt;br /&gt;
| 6 pm&lt;br /&gt;
| normal tour&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Tours looking for a guide ==&lt;br /&gt;
ATTENTION: tours are sorted not by the date of the tour, but by the time when they were requested&lt;br /&gt;
&lt;br /&gt;
NOTE: please enter your availability even if there is already another name listed.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date &lt;br /&gt;
! Language&lt;br /&gt;
! Time&lt;br /&gt;
! Organizer &lt;br /&gt;
! Additional information &lt;br /&gt;
! # People&lt;br /&gt;
! Observing&lt;br /&gt;
! Available Guides&lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2023/.../..&lt;br /&gt;
| German&lt;br /&gt;
| 11:30am to 1pm&lt;br /&gt;
| DG school&lt;br /&gt;
| 5th grade&lt;br /&gt;
| ? &lt;br /&gt;
| no&lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Observing ==&lt;br /&gt;
&lt;br /&gt;
[[Category:Internal]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Guided_Tours_and_Public_Outreach&amp;diff=3051</id>
		<title>Guided Tours and Public Outreach</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Guided_Tours_and_Public_Outreach&amp;diff=3051"/>
		<updated>2023-11-10T14:59:10Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;On this page you can find all the information needed to guide a tour through our observatory for the general public.&lt;br /&gt;
&lt;br /&gt;
At least, that's the aim of this page, so please add information if you think it's missing!&lt;br /&gt;
&lt;br /&gt;
= Platform =&lt;br /&gt;
&lt;br /&gt;
'''The following is important:'''&amp;lt;br&amp;gt;&lt;br /&gt;
Tours are officially '''organized''' by the [http://foerderverein-sternwarte-bamberg.de Foerderverein of the observatory], but '''operated''' by the observatory staff to be insured through the university!&lt;br /&gt;
&lt;br /&gt;
The main platform for the general public is&amp;lt;br&amp;gt;[http://foerderverein-sternwarte-bamberg.de/fuehrungen http://foerderverein-sternwarte-bamberg.de/fuehrungen].&lt;br /&gt;
&lt;br /&gt;
Here, tours can be ordered in two different ways:&lt;br /&gt;
* single persons and smaller groups can directly book public and regular guided tours via our webform at&amp;lt;br&amp;gt;https://www.sternwarte.uni-erlangen.de/guided-tours/&lt;br /&gt;
* larger groups may directly ask for a tour at a separate date either by email to&amp;lt;br&amp;gt;[mailto:fuehrungen@foerderverein-sternwarte-bamberg.de fuehrungen@foerderverein-sternwarte-bamberg.de]&amp;lt;br&amp;gt;or by phone. The email address is a forwarding to a certain person of the mailing list (see below).&lt;br /&gt;
&lt;br /&gt;
The source code of the webform for booking a public tour can be found in the GIT repository at&amp;lt;br&amp;gt;&lt;br /&gt;
[https://www.sternwarte.uni-erlangen.de/gitlab/webmaster/guided-tours https://www.sternwarte.uni-erlangen.de/gitlab/webmaster/guided-tours].&amp;lt;br&amp;gt;&lt;br /&gt;
All pushes to the master branch are automatically deployed at the official URL above!&lt;br /&gt;
&lt;br /&gt;
== Mailing List / Current &amp;quot;staff&amp;quot; ==&lt;br /&gt;
&lt;br /&gt;
All people interested in giving guided tours are member of the mailing list&amp;lt;br&amp;gt;&lt;br /&gt;
[mailto:astro-fuehrungen@lists.fau.de astro-fuehrungen@lists.fau.de]&amp;lt;br&amp;gt;&lt;br /&gt;
Please '''do not share''' this address with anybody outside of the observatory!&lt;br /&gt;
If you're not on the list but interested in giving tours, just ask Max to add you :-)&lt;br /&gt;
&lt;br /&gt;
New tours without a guide yet are usually announced via this list. See [[Mailing lists]] for the list administrator.&lt;br /&gt;
&lt;br /&gt;
In charge of organization:&lt;br /&gt;
* Katrin&lt;br /&gt;
&lt;br /&gt;
Currently, the following people give tours on a regular basis:&lt;br /&gt;
* Steven&lt;br /&gt;
* Christian&lt;br /&gt;
* Aafia&lt;br /&gt;
* Amy&lt;br /&gt;
* Julia&lt;br /&gt;
* Philipp T.&lt;br /&gt;
* Alexey&lt;br /&gt;
* Katharina&lt;br /&gt;
&lt;br /&gt;
= Tip policy =&lt;br /&gt;
&lt;br /&gt;
The group is informed beforehand about the tipping policy. &lt;br /&gt;
For guides, the following applies to the money you receive during the tour:&lt;br /&gt;
During the week (Monday to Friday) the donation factor, which goes directly to the Foerderverein, is 1/3 (33%); the rest (2/3) is your personal tip.&lt;br /&gt;
For weekend tours (Saturday, Sunday) the donation factor is 1/4 (25%).&lt;br /&gt;
&lt;br /&gt;
Make sure to hand in the donations immediately after your tour. Put the money on Ingo's desk with a note stating which tour you gave, how much you received, and how many people were there.&lt;br /&gt;
&lt;br /&gt;
= Offering Tours =&lt;br /&gt;
&lt;br /&gt;
At some point you may be asked to give a tour, or whether it is possible to have one on a specific date. The latter might occur if Edith is out of her office and her telephone is redirected. In those cases you should know ''and'' check the following things:&lt;br /&gt;
* '''Who is asking''' for a tour? &amp;lt;br&amp;gt; If you're on the phone ask for a name and write down the phone number where the person can be called!&lt;br /&gt;
* The '''number of people should be between 15 and 25'''. &amp;lt;br&amp;gt; Of course, the boundary conditions depend on the person guiding the tour. In case of doubt, ask the guide or, if none has been found yet, simply don't promise that there will be a tour! In that case you may offer to look for a guide anyway (and call the person back), but again, don't make any promises! A list of people offering tours can be found below.&lt;br /&gt;
* What is the '''average age of the persons attending''' the tour? &amp;lt;br&amp;gt; That's an important point since the content of a tour differs between children and adults, of course. Furthermore some guides have favorite age groups!&lt;br /&gt;
* Carefully check the '''date and time of the tour'''!&lt;br /&gt;
** A guide is most likely to be found if the weekday of the tour is '''between Monday and Thursday'''. Friday might work if it is before noon; otherwise people might already be on their way into the weekend. In all other cases you may offer to look for a guide anyway (see the point about the group size above).&lt;br /&gt;
** Make sure that there is '''no tour scheduled already'''! For that look into our online calendar (the link can be found below) and on the whiteboard in the copying machine office.&lt;br /&gt;
** Related to the last point, ensure that there is '''no lab course''' running on that date! The lab course is also marked in our online calendar.&lt;br /&gt;
** Usually, a tour will take '''about 1.5 to 2 hours'''.&lt;br /&gt;
* Clarify the '''costs for a tour''' beforehand! &amp;lt;br&amp;gt; Unfortunately, we are '''not allowed to ask for money''' for giving a tour! The only way to receive money is to tell the people that they are welcome to '''donate money to the [http://foerderverein-sternwarte-bamberg.de/ Foerderverein der Sternwarte]'''. In nearly every case the people or the one asking for a tour will pay something. A rule of thumb is that '''every person donates 3 EUR''' (for children, 2 EUR is alright as well). &amp;lt;br&amp;gt; __For your ears only__: 1 EUR per attendee of the donated money will be transferred to the Förderverein (the current treasurer is Ingo; leave the money on his desk with a short note stating the number of people and what kind of tour this was), the rest is for the guide! A contact phone number will be available; it is best to call the contact person before the tour to pass on this information.&lt;br /&gt;
* '''Observing with our telescopes''' is possible, of course, but only at '''night and in cloudless weather'''! No kidding, some people are not aware of these facts... &amp;lt;br&amp;gt; (&amp;quot;The day is too bright to see stars.&amp;quot;, &amp;quot;I'm sorry, we can't look through the clouds.&amp;quot;, &amp;quot;The weather forecast is too uncertain, we can't promise anything.&amp;quot;) Please check whether someone else has already signed up for one of the telescopes on the day/night of the guided tour!  '''IMPORTANT:'''  Be aware that you are not allowed to operate the telescopes on your own; there ALWAYS has to be at least a second person around. To ensure that, you either have to have a second person from the observatory present (at least in the main building) or you have to set up AND disassemble everything while the people from the tour are there. Otherwise, no observations are allowed to take place!&lt;br /&gt;
* If all the above points are clarified, '''mark the tour''' &amp;lt;br&amp;gt; in our [https://www.sternwarte.uni-erlangen.de/calendar/ online calendar] and on the whiteboard in the copying machine office. You have to be at a machine at the observatory to have access to the online calendar (to be more precise: your machine has to have an internal IP address). &amp;lt;br&amp;gt; The entry in the online calendar should include, besides the date and time: &amp;quot;for whom the tour is and the group size, contact person and phone number, name of the guide giving the tour&amp;quot;. &amp;lt;br&amp;gt; The whiteboard template is as follows: &amp;quot;date, for whom the tour is, time, guide&amp;quot;&lt;br /&gt;
&lt;br /&gt;
= Giving a tour =&lt;br /&gt;
&lt;br /&gt;
Each tour has several stations where you can show things or talk to the people. Useful information and experiences about them are listed below. It might also be helpful to read the article about [[intern:popular_science:start|popular science]] in case you want or have to (if somebody asks) explain scientific aspects.&lt;br /&gt;
&lt;br /&gt;
== Preparation ==&lt;br /&gt;
&lt;br /&gt;
'''Before the tour'''&lt;br /&gt;
* Shut the doors of all offices in the main building (at least on the ground level) for several reasons: people are not allowed to walk in there and some of us are still working there during tours!&lt;br /&gt;
* Remove the northern hemisphere from the model of the inner solar system located in the entrance hall (you may put it into the Knigge room)&lt;br /&gt;
* Switch on the lights you need:&lt;br /&gt;
** entrance hall&lt;br /&gt;
** meteorite showcase (switch is on the right next to it)&lt;br /&gt;
** hallway to the meridian building; here are multiple switches&lt;br /&gt;
*** main light: right next to the door of the hallway or the backdoor of the main building (labeled &amp;quot;Meridian&amp;quot;)&lt;br /&gt;
*** info panels: on the opposite wall to the door next to the blue cupboard&lt;br /&gt;
*** showcases and Blinkkomparator: in the cutout box up the few stairs on the right wall close to the library, labeled &amp;quot;Linke Steckdosen&amp;quot;&lt;br /&gt;
* In case of observations&lt;br /&gt;
** make sure that you are '''never''' alone in the domes! There always have to be at least two people present.&lt;br /&gt;
** carry the needed eyepieces into the domes (for observations with the naked eye, the 40cm telescope is strongly recommended)&lt;br /&gt;
** do __not__ open the domes yourself (first, it might start raining while nobody is there; second, it's more fun to let the visitors do it)&lt;br /&gt;
** do __not__ remove the dust covers of the telescopes (wait until the dome has been opened)&lt;br /&gt;
** read the [[remeis:start|guide how to operate the telescopes and mountings]]&lt;br /&gt;
** after observing: shut down the telescope mounting, put the dust covers back on and close the domes&lt;br /&gt;
* In case of showing nice pictures&lt;br /&gt;
** make sure enough chairs are available&lt;br /&gt;
** switch on the beamer in the library, log into the machine and start your presentation, picture-viewer, ...&lt;br /&gt;
 &lt;br /&gt;
'''After the tour'''&lt;br /&gt;
* Switch the lights off&lt;br /&gt;
* Logout of the used machines and switch the beamer off&lt;br /&gt;
* Put on the dust covers of the telescopes in the hallway&lt;br /&gt;
* Protect the photo plates on the Blinkkomparator from getting dusty&lt;br /&gt;
* '''Move the entry from the whiteboard''' in the copying machine office '''to the list attached to the pin-board''' next to it&lt;br /&gt;
&lt;br /&gt;
== Material and Presentations ==&lt;br /&gt;
*Collected information valuable for guided tours: &amp;lt;br&amp;gt; ''/data/media/presentations_for_guided_tours'' and ''/userdata/data/beuchert/work/teaching/guidedtour/presentation/guidedtour.pdf''&lt;br /&gt;
&lt;br /&gt;
*A merger of (picture/video) presentations given at guided tours: &amp;lt;br&amp;gt; ''/userdata/data/beuchert/work/teaching/guidedtour/presentation/guidedtour.odp'' &lt;br /&gt;
&lt;br /&gt;
*and according videos in  &amp;lt;br&amp;gt; ''/userdata/data/beuchert/work/teaching/guidedtour/presentation/videos/''&lt;br /&gt;
&lt;br /&gt;
*Research and activities at Remeis from the radio to the gamma-rays (Tobi Beuchert): &amp;lt;br&amp;gt; ''/data/media/presentations_for_guided_tours/Andrea_tour_material/observations_allwavelength_remeisresearch_beuchert.pdf''&lt;br /&gt;
&lt;br /&gt;
*A presentation showing the astrophotography done at the observatory: &amp;lt;br&amp;gt; ''/data/media/slideshow_astrophotographie/optische_Quellen.pdf''&lt;br /&gt;
&lt;br /&gt;
*A video tour with Joern: &amp;lt;br&amp;gt; ''/data/media/presentations_for_guided_tours/Joerns_example_tour''&lt;br /&gt;
&lt;br /&gt;
*A video tour with Uli (only hallway): &amp;lt;br&amp;gt; ''/data/media/presentations_for_guided_tours/Ulis_example_tour_hall_2019''&lt;br /&gt;
&lt;br /&gt;
*A complete tour by Sebastian Müller: &amp;lt;br&amp;gt; ''/data/media/presentations_for_guided_tours/Sebastians_example_tour''&lt;br /&gt;
&lt;br /&gt;
*Presentations for tours by Andrea Gokus: &amp;lt;br&amp;gt; ''/data/media/presentations_for_guided_tours/Andrea_tour_material'' &amp;lt;br&amp;gt; Contains presentations about the solar system (solar_system.pdf), neutrinos (neutrino_talk.pdf), and an overview of different object types (impressions_universe.pdf)&lt;br /&gt;
&lt;br /&gt;
*Andrea's tour notes: &amp;lt;br&amp;gt; ''/data/media/presentations_for_guided_tours/Andrea_tour_material/Sternwarten_Fuehrungsnotizen.pdf''&lt;br /&gt;
&lt;br /&gt;
*Amy's tour notes: &amp;lt;br&amp;gt; ''/data/media/presentations_for_guided_tours/Guided_Tour_Information_Amy.pdf''&lt;br /&gt;
&lt;br /&gt;
*Tobi's tour notes: &amp;lt;br&amp;gt; ''/userdata/data/beuchert/work/teaching/guidedtour/presentation/guidedtour.pdf''&lt;br /&gt;
&lt;br /&gt;
*Information on clocks: &amp;lt;br&amp;gt; ''/data/media/history/2009-01_KlassikUhren_Poehlmann_Max_Ort.pdf''&lt;br /&gt;
&lt;br /&gt;
*Information on history of the observatory and the collection: &amp;lt;br&amp;gt; ''/data/media/history/AstronomischeSammlung.pdf''&lt;br /&gt;
&lt;br /&gt;
*History of the observatory (Litten): &amp;lt;br&amp;gt; ''/data/media/history/Litten-1.pdf'' &amp;lt;br&amp;gt; ''/data/media/history/Litten-2.pdf''&lt;br /&gt;
&lt;br /&gt;
*Last will of Dr. Karl Remeis: &amp;lt;br&amp;gt; ''/data/media/history/Testament_Karl_Remeis.pdf''&lt;br /&gt;
&lt;br /&gt;
*Die Remeis-Sternwarte zu Bamberg 1889-1939, E. Zinner: &amp;lt;br&amp;gt; [https://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1939VeBam...4....1Z&amp;amp;amp;data_type=PDF_HIGH&amp;amp;amp;whole_paper=YES&amp;amp;amp;type=PRINTER&amp;amp;amp;filetype=.pdf ADS]&lt;br /&gt;
&lt;br /&gt;
== Picture presentation ==&lt;br /&gt;
&lt;br /&gt;
The beamer has different modes (e.g. presentation, videos, graphics, etc.) which result in a different appearance of the images. Make sure to select a mode in which the pictures appear with an appropriate brightness on the screen (I think &amp;quot;graphics&amp;quot; was a good one).&lt;br /&gt;
&lt;br /&gt;
== Telescopes in the domes ==&lt;br /&gt;
It's nice to show the main mirror to the people so that they can see themselves in it. To do so, just turn the telescope down in declination. This is only possible for the 40cm, as the 50cm mount has a mechanical stop.&lt;br /&gt;
&lt;br /&gt;
Sometimes people want to know what we do with the small telescopes attached to the side of the big ones. Those are guide scopes. You can attach a camera to them and let it take pictures of one star. If the star moves (which it shouldn't), a correction signal is sent to the mount to compensate for the movement. Stars can &amp;quot;move&amp;quot; in an image, for example, due to imprecision of the mount. &lt;br /&gt;
&lt;br /&gt;
The (dusty) plate in front of the 40cm mirror is called a &amp;quot;Schmidt plate&amp;quot;. It is used to improve the imaging of the telescope and to correct several errors of the mirrors such as coma and field curvature. As it is very sensitive to scratching, it is left dusty rather than cleaned. The dust has hardly any effect on the images, as it lies far outside the focal plane. The optical design is similar in capability to a Ritchey-Chrétien type, but has a spherical main mirror, an aspherical secondary mirror, and a lens corrector (= the Schmidt plate). The price of the telescope and the mount is roughly 25,000 EUR altogether.&lt;br /&gt;
&lt;br /&gt;
The 50cm doesn't have a Schmidt plate but a lens corrector placed in the focuser (OAZ). It has an elliptical main mirror and a spherical secondary mirror (Corrected Dall-Kirkham). The price of the telescope and the mount is roughly 65,000 EUR. The secondary mirror is, compared to the size of the main mirror, significantly larger than that of the 40cm. This results in better light-gathering and a larger flat field, which is important for taking images. We can achieve tracking accuracies of about 10 min without guiding.&lt;br /&gt;
&lt;br /&gt;
== Monthly public tours ==&lt;br /&gt;
NOTE: even if there is already someone listed for a tour, please enter your availability.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date &lt;br /&gt;
! Language&lt;br /&gt;
! Time&lt;br /&gt;
! Type&lt;br /&gt;
! Available Guides&lt;br /&gt;
! Backup&lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2023/11/14 &lt;br /&gt;
| German&lt;br /&gt;
| 6 pm &lt;br /&gt;
| Tour for children &lt;br /&gt;
| Ole&lt;br /&gt;
| Katharina&lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2023/11/28&lt;br /&gt;
| German&lt;br /&gt;
| 6 pm&lt;br /&gt;
| normal tour&lt;br /&gt;
| Katharina&lt;br /&gt;
| Philipp&lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2023/12/05&lt;br /&gt;
| English&lt;br /&gt;
| 6 pm&lt;br /&gt;
| normal tour&lt;br /&gt;
| Federico&lt;br /&gt;
| Philipp, Katharina&lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2023/12/12&lt;br /&gt;
| German&lt;br /&gt;
| 6 pm&lt;br /&gt;
| Tour for children&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2023/12/19&lt;br /&gt;
| German&lt;br /&gt;
| 6 pm&lt;br /&gt;
| normal tour&lt;br /&gt;
| &lt;br /&gt;
| Philipp&lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2024/01/09&lt;br /&gt;
| English&lt;br /&gt;
| 6 pm&lt;br /&gt;
| normal tour&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2024/01/19&lt;br /&gt;
| German&lt;br /&gt;
| 6 pm&lt;br /&gt;
| Tour for children (option 1)&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2024/01/23&lt;br /&gt;
| German&lt;br /&gt;
| 6 pm&lt;br /&gt;
| Tour for children (option 2)&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2024/01/30&lt;br /&gt;
| German&lt;br /&gt;
| 6 pm&lt;br /&gt;
| normal tour&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2024/02/06&lt;br /&gt;
| English&lt;br /&gt;
| 6 pm&lt;br /&gt;
| normal tour&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2024/02/13&lt;br /&gt;
| German&lt;br /&gt;
| 6 pm&lt;br /&gt;
| Tour for children&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2024/02/27&lt;br /&gt;
| German&lt;br /&gt;
| 6 pm&lt;br /&gt;
| normal tour&lt;br /&gt;
| &lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Tours looking for a guide ==&lt;br /&gt;
ATTENTION: tours are not sorted by the date of the tour, but by the time they were requested.&lt;br /&gt;
&lt;br /&gt;
NOTE: please enter your availability even if there is already another name listed.&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! Date &lt;br /&gt;
! Language&lt;br /&gt;
! Time&lt;br /&gt;
! Organizer &lt;br /&gt;
! Additional information &lt;br /&gt;
! # People&lt;br /&gt;
! Observing&lt;br /&gt;
! Available Guides&lt;br /&gt;
&lt;br /&gt;
|- &lt;br /&gt;
| 2023/.../..&lt;br /&gt;
| German&lt;br /&gt;
| 11:30am to 1pm&lt;br /&gt;
| DG school&lt;br /&gt;
| 5th grade&lt;br /&gt;
| ? &lt;br /&gt;
| no&lt;br /&gt;
| &lt;br /&gt;
&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Observing ==&lt;br /&gt;
&lt;br /&gt;
[[Category:Internal]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Skivergnuegen_2023&amp;diff=2761</id>
		<title>Skivergnuegen 2023</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Skivergnuegen_2023&amp;diff=2761"/>
		<updated>2023-01-13T05:29:52Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Remeis Skiing 2023''' The same procedure as every year ;-) &lt;br /&gt;
The most important organisational details are recorded here. Everyone is welcome to edit; anyone without a wiki account should please contact someone who has access.&lt;br /&gt;
&lt;br /&gt;
[[File:154854 silvretta-montafon-panorama.jpeg|right|700px]]&lt;br /&gt;
&lt;br /&gt;
== General information ==&lt;br /&gt;
&lt;br /&gt;
*Holiday apartment(s):&lt;br /&gt;
**We will again stay in the [https://www.grandau.at/montafon/zimmer/ferienhaus-enzian Haus Enzian] of the [https://www.grandau.at/ Sporthotel Grandau].&lt;br /&gt;
**Address of the Sporthotel: Montafonerstraße 274a, 6791 St. Gallenkirch, Austria. Our accommodation is on Türkeiweg.&lt;br /&gt;
&lt;br /&gt;
*Ski area: Montafon (Vorarlberg)&lt;br /&gt;
**Ski area map: https://winter.intermaps.com/montafon?lang=de&lt;br /&gt;
**Ski pass prices:  https://www.silvretta-montafon.at/de/onlineshop/ticket-uebersicht&lt;br /&gt;
**If you don't want to ski all 7 days: there are offers such as 5 out of 6, where the ski pass lets you ski on any 5 days within a period of 6 days.&lt;br /&gt;
&lt;br /&gt;
*Ski rental: [http://www.sportharry.at/ Sport Harry], right at the valley station.&lt;br /&gt;
&lt;br /&gt;
== Accommodation ==&lt;br /&gt;
=== Room assignment ===&lt;br /&gt;
&lt;br /&gt;
This year there is only the hut with the following rooms (we are 14 in total, so no beds remain free).&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Room&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Occupants&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | 2-bed&lt;br /&gt;
| Amy, Aafia&lt;br /&gt;
&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | 3-bed&lt;br /&gt;
| Jakob, Max, Katya&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | 4-bed&lt;br /&gt;
| Eva, Steffen, Flo, Christian&lt;br /&gt;
|-&lt;br /&gt;
! scope=&amp;quot;row&amp;quot; | 5-bed&lt;br /&gt;
| Basti, Philipp, Ole&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Drivers ==&lt;br /&gt;
Drivers can sign up here and provide key details. Anyone who wants a ride should coordinate with the drivers and sign up as well. Please also sort out the luggage situation; if necessary, another driver can take ski equipment, for example.&lt;br /&gt;
&lt;br /&gt;
'''Looking for a ride: '''&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Driver&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | # Seats&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Arrival&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Passengers (arrival)&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Departure&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Passengers (departure)&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Ski transport&lt;br /&gt;
! scope=&amp;quot;col&amp;quot; | Comment&lt;br /&gt;
|-&lt;br /&gt;
|Eva&lt;br /&gt;
|0(3)&lt;br /&gt;
|Saturday&lt;br /&gt;
|Steffen+Christian&lt;br /&gt;
|Saturday?&lt;br /&gt;
|Steffen+Christian&lt;br /&gt;
|own skis, yes&lt;br /&gt;
|Car is full&lt;br /&gt;
|-&lt;br /&gt;
|Max&lt;br /&gt;
|0(3)&lt;br /&gt;
|Saturday (from Arlberg)&lt;br /&gt;
|Jakob+Katya&lt;br /&gt;
|Saturday&lt;br /&gt;
|Jakob+Katya&lt;br /&gt;
|own skis, yes&lt;br /&gt;
|Car is full&lt;br /&gt;
|-&lt;br /&gt;
|Basti&lt;br /&gt;
|1(2)&lt;br /&gt;
|Sunday (early morning)&lt;br /&gt;
|Alexey&lt;br /&gt;
|Friday (lunch time)&lt;br /&gt;
|Alexey&lt;br /&gt;
|Yes&lt;br /&gt;
|I will be at the airport NUE (an alternative meeting point would be Nuremberg train station)&lt;br /&gt;
|-&lt;br /&gt;
|Johannes&lt;br /&gt;
|3&lt;br /&gt;
|Saturday from Munich&lt;br /&gt;
|?&lt;br /&gt;
|Saturday until Crailsheim&lt;br /&gt;
|?&lt;br /&gt;
|Yes&lt;br /&gt;
|On the way to Montafon I could pick up people and material in Munich. On the way back I could take people to Crailsheim station (direct train to Nürnberg). The decision about skiing on Saturday, and therefore the departure time from Montafon, depends on local conditions.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Notice board ==&lt;br /&gt;
&lt;br /&gt;
=== Food ===&lt;br /&gt;
Here is the meal plan, largely carried over from previous years. Since we are a lot of people, we should again roughly plan in advance what we want to cook. We can buy some things in Germany and bring them along, but there is also a supermarket nearby. General items such as spices or spreads for breakfast rolls could also be brought along.&lt;br /&gt;
&lt;br /&gt;
Anyone with cooking suggestions or other ideas is welcome to add and comment below. Keep in mind that preparation should be (relatively) simple. Since the list has proven itself in recent years, we will keep it, but the order/days can certainly still be shuffled around. A few things will sometimes still have to be bought at the Spar anyway, since the fridge only has limited space.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Meal plan:'''&lt;br /&gt;
*Saturday: spaghetti Bolognese + veg. Bolognese&lt;br /&gt;
*Sunday: cheese Spätzle&lt;br /&gt;
*Monday: burritos&lt;br /&gt;
*Tuesday: risotto&lt;br /&gt;
*Wednesday: goulash&lt;br /&gt;
*Thursday: potatoes with herb quark&lt;br /&gt;
*Friday: dal with rice (or sweet potato curry)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Shopping list from 2022 (last year): https://docs.google.com/document/d/166b_mIXxmoAgcv1upIGremd_36E1fmwwxsQADP2uEv8/&lt;br /&gt;
&lt;br /&gt;
@Christian: either bring the vegan option yourself or add it to the list here (if it is easy to get)&lt;br /&gt;
&lt;br /&gt;
=== Alcohol/Party ===&lt;br /&gt;
&lt;br /&gt;
Stiegl (Johannes)&lt;br /&gt;
&lt;br /&gt;
=== Games ===&lt;br /&gt;
&lt;br /&gt;
Shared game nights are always fun. So if you own any board games, feel free to bring them along! To get an overview, please also list them here:&lt;br /&gt;
&lt;br /&gt;
''* Secret Hitler (Johannes)''&lt;br /&gt;
&lt;br /&gt;
=== Miscellaneous ===&lt;br /&gt;
&lt;br /&gt;
=== Alternative activities ===&lt;br /&gt;
* Sledding (http://www.montafon.at/de/urlaubswelten/echte_naturliebhaber/rodeln)&lt;br /&gt;
* Snowshoe hiking (http://www.montafon.at/schneeschuhwanderungen)&lt;br /&gt;
* Thermal baths (http://www.montafon.at/schwimmen), e.g. http://www.aqua-dome.at/de (about 130 km away!)&lt;br /&gt;
&lt;br /&gt;
[[Category:Internal]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Andrea%27s_PhD_Hat&amp;diff=2524</id>
		<title>Andrea's PhD Hat</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Andrea%27s_PhD_Hat&amp;diff=2524"/>
		<updated>2022-05-31T15:27:12Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Hat:''' &lt;br /&gt;
&lt;br /&gt;
30x30cm platform (top), 10-15cm  tube (base) &lt;br /&gt;
&lt;br /&gt;
'''General'''&lt;br /&gt;
*Since she was half in Würzburg and half in Bamberg, one could make a split hat in two colors or so (only the top in Franconian colors + logos) [DOMINIC]&lt;br /&gt;
&lt;br /&gt;
'''Science:'''&lt;br /&gt;
*FERMI [Christian]&lt;br /&gt;
*eROSITA, XMM sunglasses (print 3D model) [Ole, Philipp]&lt;br /&gt;
*Radio/Effelsberg [Würzburg, Florian]&lt;br /&gt;
*blazars model with jet [Steven]&lt;br /&gt;
*Fermi pipeline, data (problems with fermipy), Lego figure with Lego &amp;quot;paper&amp;quot; standing in front of a maze (representing the Fermi collaboration) with the journal/published article at the end of the maze [Ole]&lt;br /&gt;
*Bayesian block representation of PKS2004: either print this light curve, or build physical steps analogous to the Bayesian blocks on the side of the hat (a Lego-block light curve) [Dominic]&lt;br /&gt;
*An AGN model would also be super cool (baby, teen, middle-aged, and senior AGN, as she discusses AGN evolution) (cartoon) [Aafia]&lt;br /&gt;
*INTEGRAL, NuSTAR proposal&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''SM:''' [Dominic, Steven, Aafia]&lt;br /&gt;
*behind the scenes (do we have some material?) - picture&lt;br /&gt;
*SM-picture (SM-buddies) - pictures, photoshop&lt;br /&gt;
*Interviews for WhoIsDrRemeis&lt;br /&gt;
*SM-outing at Dominic's place&lt;br /&gt;
*instawalk&lt;br /&gt;
*fixing Dominic's hair&lt;br /&gt;
*changing Steven's wallpaper to Wendler and Laura Müller&lt;br /&gt;
&lt;br /&gt;
'''FRANCI:'''&lt;br /&gt;
*Logo [Max]&lt;br /&gt;
*FRANCIball referee (whistle) [Max]&lt;br /&gt;
*playground shape (?)&lt;br /&gt;
&lt;br /&gt;
'''Outreach:'''&lt;br /&gt;
*Hashtags&lt;br /&gt;
*Starlink interview [Dominic, Steven]&lt;br /&gt;
*girls day (LOGO) [Dominic]&lt;br /&gt;
*public talks/guided tours - group photos [Steven]&lt;br /&gt;
*astronomers for earth [Ole]&lt;br /&gt;
*Lange Nacht der Wissenschaften (LOGO, balloon with helium) [Dominic]&lt;br /&gt;
*Twitter (most successful tweet) [Dominic, Steven]&lt;br /&gt;
&lt;br /&gt;
'''Personal:'''&lt;br /&gt;
*black dog ears like Amos' because she became a dog mum during her PhD (+ drawing a nose) [Amy?, Dominic] &lt;br /&gt;
*countless hours after Zoom meetings talking about how we don't have time due to Zoom meetings [Steven]&lt;br /&gt;
*Memes: summon a random PhD student: &amp;quot;Gokus Gokus Fidibus&amp;quot;; Gokus Gokus; I know Thomas Rauch, he was in my class; overly motivated Master student; Franci meeting a real conference; phd student is free real estate; several eROSITA memes, NRTA [Dominic]&lt;br /&gt;
*likes science fiction (The Expanse - the name Amos comes from Amos Burton)&lt;br /&gt;
*very good fairy costume - picture [Ole]&lt;br /&gt;
*The Kangaroo Chronicles (Känguru-Chroniken)&lt;br /&gt;
*Remeis skiing (pictures) [Ole]&lt;br /&gt;
*certainly St. Louis and USA references! Maybe a Mississippi going around her hat? (river from bottom to top, cities along the river, at the top St. Louis, the Gateway Arch, something typical for St. Louis) [Würzburg, Florian]&lt;br /&gt;
*Andrea hanging &lt;br /&gt;
*Amos (the almost-new Remeis puppy, meeting with Manami's dog), runs after the dog&lt;br /&gt;
&lt;br /&gt;
'''Remeis Merch:'''&lt;br /&gt;
*Hoody&lt;br /&gt;
*maybe: explaining to the professors what a Jersey Beanie is &amp;quot;modern type of hat&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''Guided Tours:'''&lt;br /&gt;
*Guided Tours for children&lt;br /&gt;
&lt;br /&gt;
[[Category:Internal]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Andrea%27s_PhD_Hat&amp;diff=2519</id>
		<title>Andrea's PhD Hat</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Andrea%27s_PhD_Hat&amp;diff=2519"/>
		<updated>2022-05-24T16:10:27Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;'''Hat:''' &lt;br /&gt;
&lt;br /&gt;
30x30cm platform (top), 10-15cm  tube (base) &lt;br /&gt;
&lt;br /&gt;
'''Science:'''&lt;br /&gt;
*eROSITA, FERMI, XMM, Radio/Effelsberg&lt;br /&gt;
*blazars&lt;br /&gt;
*Fermi pipeline, data&lt;br /&gt;
*INTEGRAL, NuSTAR proposal &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''SM:'''&lt;br /&gt;
*behind the scenes (do we have some material?)&lt;br /&gt;
*SM-picture&lt;br /&gt;
*SM-buddies (meme(?): Steven &amp;amp; Dominic as her ****)&lt;br /&gt;
*Interviews for WhoIsDrRemeis&lt;br /&gt;
*SM-outing at Dominic's place&lt;br /&gt;
*instawalk&lt;br /&gt;
*fixing Dominic's hair&lt;br /&gt;
*changing Steven's wallpaper to Wendler and Laura Müller&lt;br /&gt;
&lt;br /&gt;
'''FRANCI:'''&lt;br /&gt;
*Logo&lt;br /&gt;
*FRANCIball referee (?)&lt;br /&gt;
&lt;br /&gt;
'''Outreach:'''&lt;br /&gt;
*Starlink interview&lt;br /&gt;
*girls day&lt;br /&gt;
*public talks&lt;br /&gt;
*Lange Nacht der Wissenschaften&lt;br /&gt;
&lt;br /&gt;
'''Remeis Merch:'''&lt;br /&gt;
*Hoody (pic, meme)&lt;br /&gt;
&lt;br /&gt;
'''Guided Tours:'''&lt;br /&gt;
*Guided Tours for children&lt;br /&gt;
&lt;br /&gt;
'''Personal:'''&lt;br /&gt;
*Amos (the almost-new Remeis puppy, meeting with Manami's dog), runs after the dog&lt;br /&gt;
*countless hours after Zoom meetings talking about how we don't have time due to Zoom meetings&lt;br /&gt;
*Memes: summon a random PhD student: &amp;quot;Gokus Gokus Fidibus&amp;quot;; Gokus Gokus; I know Thomas Rauch, he was in my class; overly motivated Master student; Franci meeting a real conference; phd student is free real estate; several eROSITA memes, NRTA&lt;br /&gt;
*likes science fiction; very good fairy costume&lt;br /&gt;
*Remeis skiing&lt;br /&gt;
&lt;br /&gt;
[[Category:Internal]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=PhD_hat_for_Jonathan&amp;diff=2425</id>
		<title>PhD hat for Jonathan</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=PhD_hat_for_Jonathan&amp;diff=2425"/>
		<updated>2022-01-04T14:07:48Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Jonathan handed in his PhD thesis in '''mid-December 2021'''. Therefore, his defense will most likely take place in '''February/March 2022'''.&lt;br /&gt;
&lt;br /&gt;
'''Timeline''':&lt;br /&gt;
&lt;br /&gt;
'''Mid-January 2022''': first virtual hat meeting (the date will be determined by means of a Doodle poll that will be sent around at the beginning of 2022); during this first meeting, we will come up with (more) ideas for the hat and distribute the work among the different contributors.&lt;br /&gt;
&lt;br /&gt;
'''Two weeks later (probably in the first week of February 2022)''': second virtual hat meeting (date will be determined during the first meeting); all items should be ready/bought by this date; status update and clarifying last organisational questions&lt;br /&gt;
&lt;br /&gt;
'''Probably in the second week of February 2022''': transport to Bamberg/Erlangen and construction of the hat; personally, I would like to store the hat somewhere at the observatory so that everything can be done there; the handover will also be done in Bamberg since Jonathan lives there  &lt;br /&gt;
&lt;br /&gt;
I (David) have already built the hat. The hat platform is about 30 x 30 cm, and the cylinder is about 11.5 cm high.&lt;br /&gt;
&lt;br /&gt;
Biggest problem (as in the case of Simon Kreuzer's and Ralf Ballhausen's hats): how to gather all the things to put on the hat.&lt;br /&gt;
Idea: people who commute between Bamberg &amp;amp; Erlangen can deliver things if the hat is stored at the observatory.&lt;br /&gt;
&lt;br /&gt;
Below you can find a list of ideas (work related, office related, private) for the construction of the hat (feel free to update anything you like, also before the 1st hat meeting!):&lt;br /&gt;
&lt;br /&gt;
'''Work things''':&lt;br /&gt;
* PhD related stuff -&amp;gt; Supernova remnants &amp;amp; large scale collisions of molecular clouds of the ISM&lt;br /&gt;
* Work for the astronomical lab course -&amp;gt; one of the main, if not the main, people responsible for the new radio telescope&lt;br /&gt;
* Various eROSITA meetings -&amp;gt; Pictures/Events? We [Jonathan, Katja, Ole,...] stayed at the Hopfen Hotel in Wolnzach during one consortium meeting &lt;br /&gt;
* eROSITA work&lt;br /&gt;
* Was responsible for the XMM archive at Remeis -&amp;gt; don't know yet what to make out of it ['''Ole''']&lt;br /&gt;
* Frequently used the observatory pool -&amp;gt; Pictures?&lt;br /&gt;
* Working abroad (Japan) for several weeks in May &amp;amp; June 2018 -&amp;gt; Pictures/Events?&lt;br /&gt;
* PhD student aka 'Wunderwuzzi':&lt;br /&gt;
** Jonathan and I (David) were responsible for hanging up several whiteboards at the observatory, including the one in the conference room; the 'new' astronomical pictures in the hallway to the meridian building were also mainly hung up by us -&amp;gt; don't know yet what to make out of it ['''David''']&lt;br /&gt;
** Jonathan, Ingo, and I (David) cleaned the meridian building after the visit (internship) of a student from India who did not clean up after their stay (a really 'special' experience that we would have gladly done without -&amp;gt; mold on dishes and in cups as well as fingernails,...) -&amp;gt; don't know yet what to make out of it ['''David''']&lt;br /&gt;
** Removal of the fluorescent tubes in the Drechsel office -&amp;gt; in the end, we (Jonathan, Steven, and I (David)) used a paper clip to fix them&lt;br /&gt;
&lt;br /&gt;
'''Drechsel room (crazy office) stuff''':&lt;br /&gt;
* Basketball basket in the office -&amp;gt; 'days with and without working accidents' (for instance, Sara's MacBook episode with Dominik M.; you are only allowed to go home once you have hit the basket,...)&lt;br /&gt;
* Contributor to the Remeis meme wall and to the 'black hole' of the office (in the wall)&lt;br /&gt;
* 'Zoo phenomenon': people come by and stare into the office, then just go away -&amp;gt; we as 'Drechsel inhabitants' felt like animals in a zoo&lt;br /&gt;
* 'Bermuda Triangle' behind the roll containers/heating where stuff disappears ['''David''']&lt;br /&gt;
* Inventor of the neologism 'papern', which describes everything related to papers (writing, talks, submission, language editing, etc.)&lt;br /&gt;
&lt;br /&gt;
'''Private things''':&lt;br /&gt;
* AoE II-DE (and IV) stuff: Picture where a villager is killed by an elephant/boar/lion/wolf etc., picture of a Daut castle/Daut monastery, TheViper's Snake logo, Nili's hippo logo, GL logo, Wololo V logo, pictures/memes from Wololo V in Heidelberg (Nili as a jester, etc.) -&amp;gt; not everything of this has to be placed on the hat! I (David) will make up my mind. ['''David''']&lt;br /&gt;
* SAO fan: SAO logo, pictures, etc. ['''David''']&lt;br /&gt;
* Trine fan: Maybe some nice screenshots? ['''David''']&lt;br /&gt;
* Fan of chilis, hot food, Japanese noodles (Ramen)&lt;br /&gt;
* Likes to go to Croatia and Japan for holidays -&amp;gt; Croatia/Japan-related stuff? &lt;br /&gt;
* Fan of the Game of Thrones and the Dune franchises&lt;br /&gt;
* Dream car: Ford Mustang Ecoboost (white with blue stripes or black with red stripes) -&amp;gt; Matchbox car? ['''David''']&lt;br /&gt;
* Big fan of honey (but a special one); favorite honey eaten thus far: Spanish honey with orange flavor&lt;br /&gt;
* Jonathan had several issues with orders at Amazon (orders for the observatory as well as orders for himself got delivered really late and only after several (un)successful attempts to contact the responsible people...)&lt;br /&gt;
* Maybe wants to emigrate to Canada and live there in a penthouse at some point in his life&lt;br /&gt;
* Jonathan has to sneeze 1x after having eaten a piece of chocolate or when he goes outside and the Sun is shining; after that everything is okay -&amp;gt; maybe piece of chocolate &amp;amp; picture of the Sun &amp;amp; handkerchief? ['''David''']&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Internal]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=PhD_hat_for_Jonathan&amp;diff=2424</id>
		<title>PhD hat for Jonathan</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=PhD_hat_for_Jonathan&amp;diff=2424"/>
		<updated>2022-01-04T14:05:31Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Jonathan handed in his PhD thesis in '''mid-December 2021'''. Therefore, his defense will most likely take place in '''February/March 2022'''.&lt;br /&gt;
&lt;br /&gt;
'''Timeline''':&lt;br /&gt;
&lt;br /&gt;
'''Mid-January 2022''': first virtual hat meeting (the date will be determined by means of a doodle poll that will be sent around at the beginning of 2022); during this first meeting, we will come up with (more) ideas for the hat and distribute the work among the different contributors.&lt;br /&gt;
&lt;br /&gt;
'''Two weeks later (probably in the first week of February 2022)''': second virtual hat meeting (date to be determined during the first meeting); all items should be ready/bought by this date; status update and clarification of the last organisational questions&lt;br /&gt;
&lt;br /&gt;
'''Probably in the second week of February 2022''': transport to Bamberg/Erlangen and construction of the hat; personally, I would like to store the hat somewhere at the observatory so that everything can be done there; the handover will also be done in Bamberg since Jonathan lives there  &lt;br /&gt;
&lt;br /&gt;
The hat has already been built by me (David). The hat platform is about 30 x 30 cm, the cylinder is about 11.5 cm high.&lt;br /&gt;
&lt;br /&gt;
Biggest problem (as in the case of Simon Kreuzer's and Ralf Ballhausen's hats): How to gather all things to put on the hat.&lt;br /&gt;
Idea: People that commute between Bamberg &amp;amp; Erlangen can deliver things, if the hat is stored at the observatory.&lt;br /&gt;
&lt;br /&gt;
Below you can find a list of ideas (work related, office related, private) for the construction of the hat (feel free to update anything you like, also before the 1st hat meeting!):&lt;br /&gt;
&lt;br /&gt;
'''Work things''':&lt;br /&gt;
* PhD related stuff -&amp;gt; Supernova remnants &amp;amp; large scale collisions of molecular clouds of the ISM&lt;br /&gt;
* Work for the astronomical lab course -&amp;gt; one of the main people responsible (if not the main one) for the new radio telescope&lt;br /&gt;
* Various different EROSITA Meetings -&amp;gt; Pictures/Events?&lt;br /&gt;
* EROSITA work&lt;br /&gt;
* Was responsible for the XMM archive at Remeis -&amp;gt; don't know yet what to make out of it ['''Ole''']&lt;br /&gt;
* Frequently used the observatory pool -&amp;gt; Pictures?&lt;br /&gt;
* Working abroad (Japan) for several weeks in May &amp;amp; June 2018 -&amp;gt; Pictures/Events?&lt;br /&gt;
* PhD student aka 'Wunderwuzzi':&lt;br /&gt;
** Jonathan and me (David) were responsible for hanging up several whiteboards at the observatory including the one in the conference room, also the 'new' astronomical pictures in the hallway to the meridian building were mainly hung up by us -&amp;gt; don't know yet what to make out of it ['''David''']&lt;br /&gt;
** Jonathan, Ingo, and me (David) cleaned the meridian building after the visit (internship) of a student from India who did not clean after their stay (a really 'special' experience that we would have gladly done without -&amp;gt; mold on dishes and in cups as well as fingernails,...) -&amp;gt; don't know yet what to make out of it ['''David''']&lt;br /&gt;
** Removal of the fluorescent tubes in the Drechsel office -&amp;gt; at the end, we (Jonathan, Steven, and me (David)) used a paper clip to fix them&lt;br /&gt;
&lt;br /&gt;
'''Drechsel room (crazy office) stuff''':&lt;br /&gt;
* Basketball basket in the office -&amp;gt; 'days with and without working accidents' (for instance, Sara's macbook episode with Dominik M.; you are only allowed to go home if you have hit the basket once,...)&lt;br /&gt;
* Contributor to the Remeis meme wall and to the 'black hole' of the office (in the wall)&lt;br /&gt;
* 'Zoo phenomenon': people come by and stare into the office, then just go away -&amp;gt; we as 'Drechsel inhabitants' felt like animals in a zoo&lt;br /&gt;
* 'Bermuda Triangle' behind the roll containers/heating where stuff disappears ['''David''']&lt;br /&gt;
* Inventor of the neologism 'papern', which describes everything related to papers (writing, talks, submission, language editing, etc.)&lt;br /&gt;
&lt;br /&gt;
'''Private things''':&lt;br /&gt;
* AoE II-DE (and IV) stuff: Picture where a villager is killed by an elephant/boar/lion/wolf etc., picture of a Daut castle/Daut monastery, TheViper's Snake logo, Nili's hippo logo, GL logo, Wololo V logo, pictures/memes from Wololo V in Heidelberg (Nili as a jester, etc.) -&amp;gt; not everything of this has to be placed on the hat! I (David) will make up my mind. ['''David''']&lt;br /&gt;
* SAO fan: SAO logo, pictures, etc. ['''David''']&lt;br /&gt;
* Trine fan: Maybe some nice screenshots? ['''David''']&lt;br /&gt;
* Fan of chilis, hot food, Japanese noodles (Ramen)&lt;br /&gt;
* Likes to go to Croatia and Japan for holidays -&amp;gt; Croatia/Japan-related stuff? &lt;br /&gt;
* Fan of the Game of Thrones and the Dune franchises&lt;br /&gt;
* Dream car: Ford Mustang Ecoboost (white with blue stripes or black with red stripes) -&amp;gt; Matchbox car? ['''David''']&lt;br /&gt;
* Big fan of honey (but a special one); favorite honey eaten thus far: Spanish honey with orange flavor&lt;br /&gt;
* Jonathan had several issues with orders at Amazon (orders for the observatory as well as orders for himself got delivered really late and only after several (un)successful attempts to contact the responsible people...)&lt;br /&gt;
* Maybe wants to emigrate to Canada and live there in a penthouse at some point in his life&lt;br /&gt;
* Jonathan has to sneeze 1x after having eaten a piece of chocolate or when he goes outside and the Sun is shining; after that everything is okay -&amp;gt; maybe piece of chocolate &amp;amp; picture of the Sun &amp;amp; handkerchief? ['''David''']&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Internal]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Litclub&amp;diff=2420</id>
		<title>Litclub</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Litclub&amp;diff=2420"/>
		<updated>2021-12-20T11:13:49Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Litclub Paper List ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== X-ray astronomy ===&lt;br /&gt;
&lt;br /&gt;
- Electromagnetic extraction of energy from Kerr black holes (Blandford-Znajek mechanism): Blandford, Znajek, MNRAS 179, 433, 1977 &amp;lt;ref&amp;gt;[https://ui.adsabs.harvard.edu/abs/1977MNRAS.179..433B/abstract]Blandford, Znajek, MNRAS 179, 433, 1977&amp;lt;/ref&amp;gt; (although, astrophysically, the Blandford-Payne mechanism likely contributes more to jet launching, e.g., Livio, Ogilvie, Pringle, ApJ 512, 100, 1999)&lt;br /&gt;
&lt;br /&gt;
- &amp;quot;The corona contracts in a black-hole transient&amp;quot;: Kara et al., Nature 565 198, 2019 &amp;lt;ref&amp;gt;[https://ui.adsabs.harvard.edu/abs/2019Natur.565..198K/abstract]Kara et al., Nature 565 198, 2019&amp;lt;/ref&amp;gt;, although this result is disputed (e.g., arxiv/2112.08116)&lt;br /&gt;
&lt;br /&gt;
- Connection of FRBs and magnetars: &amp;quot;INTEGRAL Discovery of a Burst with Associated Radio Emission from the Magnetar SGR 1935+2154&amp;quot; (Mereghetti, 2020ApJ...898L..29M)&lt;br /&gt;
&lt;br /&gt;
- &amp;quot;Discovery of oscillations above 200 keV in a black hole X-ray binary with Insight-HXMT&amp;quot; (Ma et al., 2021NatAs...5...94M)&lt;br /&gt;
&lt;br /&gt;
- &amp;quot;Cygnus X-1 contains a 21-solar mass black hole—Implications for massive star winds&amp;quot; (Miller-Jones 2021Sci...371.1046M)&lt;br /&gt;
&lt;br /&gt;
- EuCAPT White Paper: Opportunities and Challenges for Theoretical Astroparticle Physics in the Next Decade (Batista et al., arxiv.org/abs/2110.10074) - perhaps too long but really interesting summary&lt;br /&gt;
&lt;br /&gt;
- &amp;quot;Hydrodynamical backflow in X-shaped radio galaxy PKS 2014-55&amp;quot;, Cotton et al., MNRAS 495, 1271, 2020 (2020MNRAS.495.1271C)&lt;br /&gt;
&lt;br /&gt;
- Dark Energy Survey 3 Year Results (e.g., http://adsabs.harvard.edu/abs/2021arXiv210513549D)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Statistics ===&lt;br /&gt;
&lt;br /&gt;
- &amp;quot;STATISTICS, HANDLE WITH CARE: DETECTING MULTIPLE MODEL COMPONENTS WITH THE LIKELIHOOD RATIO TEST&amp;quot; (Protassov, 2002ApJ...571..545P)&lt;br /&gt;
&lt;br /&gt;
=== Some more extravagant stuff ===&lt;br /&gt;
&lt;br /&gt;
- The Breakthrough Listen Search For Intelligent Life Near the Galactic Center (Gajjar et al., AJ 162, 33, 2021)&lt;br /&gt;
&lt;br /&gt;
- An Objective Bayesian Analysis of Life's Early Start and Our Late Arrival (Kipping, 2020, https://arxiv.org/abs/2005.09008)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Non-astrophysics ===&lt;br /&gt;
&lt;br /&gt;
- creating unique random numbers using the quantum noise of the vacuum: Gabriel et al., Nature Photonics 4, 711, 2010&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Litclub&amp;diff=2419</id>
		<title>Litclub</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Litclub&amp;diff=2419"/>
		<updated>2021-12-20T11:13:16Z</updated>

		<summary type="html">&lt;p&gt;Koenig: Created page with &amp;quot;== Litclub Paper List ==   === X-ray astronomy ===  - Electromagnetic extraction of energy from Kerr black holes (Blandford-Znajek mechanism): Blandford, Znajek, MNRAS 179, 43...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Litclub Paper List ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== X-ray astronomy ===&lt;br /&gt;
&lt;br /&gt;
- Electromagnetic extraction of energy from Kerr black holes (Blandford-Znajek mechanism): Blandford, Znajek, MNRAS 179, 433, 1977 &amp;lt;ref&amp;gt;[https://ui.adsabs.harvard.edu/abs/1977MNRAS.179..433B/abstract]Blandford, Znajek, MNRAS 179, 433, 1977&amp;lt;/ref&amp;gt; (although, astrophysically, the Blandford-Payne mechanism likely contributes more to jet launching, e.g., Livio, Ogilvie, Pringle, ApJ 512, 100, 1999)&lt;br /&gt;
&lt;br /&gt;
- &amp;quot;The corona contracts in a black-hole transient&amp;quot;: Kara et al., Nature 565 198, 2019 &amp;lt;ref&amp;gt;[https://ui.adsabs.harvard.edu/abs/2019Natur.565..198K/abstract]Kara et al., Nature 565 198, 2019&amp;lt;/ref&amp;gt;, although this result is disputed (e.g., arxiv/2112.08116)&lt;br /&gt;
&lt;br /&gt;
- Connection of FRBs and magnetars: &amp;quot;INTEGRAL Discovery of a Burst with Associated Radio Emission from the Magnetar SGR 1935+2154&amp;quot; (Mereghetti, 2020ApJ...898L..29M)&lt;br /&gt;
&lt;br /&gt;
- &amp;quot;Discovery of oscillations above 200 keV in a black hole X-ray binary with Insight-HXMT&amp;quot; (Ma et al., 2021NatAs...5...94M)&lt;br /&gt;
&lt;br /&gt;
- &amp;quot;Cygnus X-1 contains a 21-solar mass black hole—Implications for massive star winds&amp;quot; (Miller-Jones 2021Sci...371.1046M)&lt;br /&gt;
&lt;br /&gt;
- EuCAPT White Paper: Opportunities and Challenges for Theoretical Astroparticle Physics in the Next Decade (Batista et al., arxiv.org/abs/2110.10074) - perhaps too long but really interesting summary&lt;br /&gt;
&lt;br /&gt;
- &amp;quot;Hydrodynamical backflow in X-shaped radio galaxy PKS 2014-55&amp;quot;, Cotton et al., MNRAS 495, 1271, 2020 (2020MNRAS.495.1271C)&lt;br /&gt;
&lt;br /&gt;
- Dark Energy Survey 3 Year Results (e.g., http://adsabs.harvard.edu/abs/2021arXiv210513549D)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Statistics ===&lt;br /&gt;
&lt;br /&gt;
- &amp;quot;STATISTICS, HANDLE WITH CARE: DETECTING MULTIPLE MODEL COMPONENTS WITH THE LIKELIHOOD RATIO TEST&amp;quot; (Protassov, 2002ApJ...571..545P)&lt;br /&gt;
&lt;br /&gt;
=== Some more extravagant stuff ===&lt;br /&gt;
&lt;br /&gt;
- The Breakthrough Listen Search For Intelligent Life Near the Galactic Center (Gajjar et al., AJ 162, 33, 2021)&lt;br /&gt;
- An Objective Bayesian Analysis of Life's Early Start and Our Late Arrival (Kipping, 2020, https://arxiv.org/abs/2005.09008)&lt;br /&gt;
- The paper that Jakob found about Kepler was super cool, but we've already spoken about it.&lt;br /&gt;
&lt;br /&gt;
=== Non-astrophysics ===&lt;br /&gt;
&lt;br /&gt;
- creating unique random numbers using the quantum noise of the vacuum: Gabriel et al., Nature Photonics 4, 711, 2010&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=PhD_hat_Simon&amp;diff=2134</id>
		<title>PhD hat Simon</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=PhD_hat_Simon&amp;diff=2134"/>
		<updated>2021-01-26T11:37:42Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Simon's defense is planned to take place in a few weeks.&lt;br /&gt;
&lt;br /&gt;
Biggest problem: How to gather all things to put on the hat.&lt;br /&gt;
Idea: People that commute between Bamberg &amp;amp; Erlangen (or pass Erlangen) can deliver things. (Max, Jakob, Ralf, ...)&lt;br /&gt;
&lt;br /&gt;
'''Work things''':&lt;br /&gt;
* The &amp;quot;Slurm Snail&amp;quot;&lt;br /&gt;
&lt;br /&gt;
'''Private things''':&lt;br /&gt;
* Spruz label&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Internal]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=2123</id>
		<title>Timing tools</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=2123"/>
		<updated>2020-12-21T10:26:54Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Foucalc ==&lt;br /&gt;
&lt;br /&gt;
[Work in progress! You may use this as a reference in the meantime, but some things are not yet covered. Also, let me know if anything is unclear! - VG]&lt;br /&gt;
&lt;br /&gt;
The basic tool for X-ray timing analysis with the Remeis scripts is ''foucalc'' &amp;lt;ref&amp;gt;The original version of the Remeis timing scripts were written by Katja Pottschmidt. When using the tools, please cite [http://adsabs.harvard.edu/abs/2003A%26A...407.1039P Pottschmidt et al. 2003, A&amp;amp;A 407, 1039] and [http://adsabs.harvard.edu/abs/2000A%26A...357L..17P Pottschmidt et al. 2000, A&amp;amp;A  357, L17]; see also [http://adsabs.harvard.edu/abs/1999ApJ...510..874N Nowak et al., 1999, ApJ 510, 874]&amp;lt;/ref&amp;gt;, which calculates power-, cross power-, coherence- and timelag-spectra &amp;lt;ref&amp;gt;What these are is wonderfully explained in Katja Pottschmidt's [http://astro.uni-tuebingen.de/publications/pottschmidt-diss.shtml PhD Thesis]&amp;lt;/ref&amp;gt;. It requires '''equally spaced, gapless''' lightcurves as well as the length of the lightcurve segment on which the Fourier transformation is to be performed, given in number of bins. Note that because the underlying algorithm is FFT, the segment length should be a power of two to increase the performance.&lt;br /&gt;
&lt;br /&gt;
When only one lightcurve is given, &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; calculates the power density and related quantities only. When more than one lightcurve is given, the cross power-, coherence- and timelag-spectra are calculated for all pairs of lightcurves. The lightcurves have to be in the format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lc = struct { time=[t_1, t_2, ...], rate1=[r1_1, r1_2, ...], rate2=..., rate3= };&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The time has to be given in seconds (so be cautious when using &amp;lt;tt&amp;gt;fits_read_lc&amp;lt;/tt&amp;gt;, which usually returns the time in MJD).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;If you use RXTE/PCA lightcurves, don't forget to set the &amp;lt;tt&amp;gt;numinst&amp;lt;/tt&amp;gt; keyword, which defines the number of instruments (=PCUs) that are switched on; otherwise your noise correction will be wrong. If you are not using RXTE/PCA, set the &amp;lt;tt&amp;gt;deadtime&amp;lt;/tt&amp;gt; keyword to whatever is appropriate for the instrument you are using (e.g., 0).&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A single PSD value, before any averaging, has an uncertainty of the same order of magnitude as the PSD value itself. Averaging is therefore crucial. This is built into &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt;: given a lightcurve of, say, 1288192 bins and a segment length of 16384 bins, foucalc will calculate a PSD on every segment and average each value over all 1288192/16384 = 78 (complete) segments.&lt;br /&gt;
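&lt;br /&gt;
The segmentation arithmetic can be sketched in a few lines (illustrative Python, not the actual &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; implementation; the numbers are the ones from this example):&lt;br /&gt;
&lt;br /&gt;
```python
# Sketch of foucalc's segment bookkeeping (illustration only).
total_bins = 1288192    # bins in the light curve
dimseg = 16384          # segment length in bins (a power of two)

n_seg = total_bins // dimseg             # number of complete segments -> 78
leftover = total_bins - n_seg * dimseg   # trailing bins not filling a segment

# Averaging over n_seg segments reduces the relative scatter of each
# PSD value by roughly a factor of sqrt(n_seg).
error_reduction = n_seg ** 0.5
print(n_seg, leftover)
```
&lt;br /&gt;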
&lt;br /&gt;
== PSDs ==&lt;br /&gt;
&lt;br /&gt;
=== Calculating the PSD ===&lt;br /&gt;
&lt;br /&gt;
Given two PCA lightcurves, &amp;lt;pre&amp;gt;50110-01-45-00_437_AND_good_14off_excl_0-10.lc&amp;lt;/pre&amp;gt; and &amp;lt;pre&amp;gt;50110-01-45-00_437_AND_good_14off_excl_11-13.lc&amp;lt;/pre&amp;gt; (which have been checked for gaps and found to be gapless &amp;lt;ref&amp;gt;A useful function for this is &amp;lt;tt&amp;gt;split_lc_at_gaps&amp;lt;/tt&amp;gt;&amp;lt;/ref&amp;gt; ) we want to calculate and bin the PSD, lags and coherence.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% read in lightcurves&lt;br /&gt;
variable lc1 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_0-10.lc&amp;quot;);&lt;br /&gt;
variable lc2 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_11-13.lc&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%%define variables to store average rate of lightcurve &amp;amp; rms&lt;br /&gt;
variable rms,avg;&lt;br /&gt;
&lt;br /&gt;
%%calculate timing quantities&lt;br /&gt;
variable result = foucalc(struct{time=lc1.time,rate1=lc1.rate,rate2=lc2.rate},16384;&lt;br /&gt;
                          numinst=3,RMS=&amp;amp;rms,avgrate=&amp;amp;avg,verbose);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This produces the following output - thanks to the &amp;lt;tt&amp;gt;verbose&amp;lt;/tt&amp;gt; keyword:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
light curves: 2516 s with 1288192 bins of 0.00195312 s  (0 s gaps)&lt;br /&gt;
  segmentation: 78 segments of 16384 bins&lt;br /&gt;
=&amp;gt; frequencies: 0.03125-256 Hz.&lt;br /&gt;
Power spectra will be calculated in Miyamoto-normalization.&lt;br /&gt;
Calculating power spectrum for rate1.  RMS(0.03125-256 Hz) = 27.1%&lt;br /&gt;
Calculating power spectrum for rate2.  RMS(0.03125-256 Hz) = 25.7%&lt;br /&gt;
Calculating cross power spectrum (=&amp;gt; coherence and timelags) for rate1 and rate2.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is currently the only place at which &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; will complain if the lightcurve contains gaps. It will calculate the quantities even if the lightcurve contains gaps - you just can't trust the results. Therefore: use verbose and check that you've done everything right, at least while building up your scripts.&lt;br /&gt;
&lt;br /&gt;
Plot the PSD for the first lightcurve with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
xlog;ylog;plot_with_err(result.freq,result.signormpsd1,result.errnormpsd1);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Rebinning the PSD ===&lt;br /&gt;
&lt;br /&gt;
The uncertainties on the individual PSD values are still large, especially at high frequencies. To further improve the SNR, the PSD is rebinned onto a new frequency grid: the final PSD value in each new bin is the average over the individual PSD values at the original frequencies that fall into that bin. Usually, a logarithmic rebinning with df/f = const. is used; a typical value is df/f = 0.15.&lt;br /&gt;
&lt;br /&gt;
Quantities calculated with &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; are rebinned with the &amp;lt;tt&amp;gt;rebin_fouquan&amp;lt;/tt&amp;gt; tool - the quantity and its error have to be rebinned in two individual steps:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable reb_psd = rebin_fouquan(result.freq,result.signormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
variable reb_err = rebin_fouquan(result.freq,result.errnormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;reb_err&amp;lt;/tt&amp;gt; does not yet contain the real errors of the PSD: its values still have to be divided by the square root of the number of values over which the averaging was performed. The final PSD is therefore:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable psd1 = struct{freq_lo = reb_psd.freq_lo,freq_hi=reb_psd.freq_hi,&lt;br /&gt;
  value = reb_psd.value, error= reb_err.value/sqrt(reb_err.n)};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
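&lt;br /&gt;
For illustration, the frequency grid implied by df/f = 0.15 can be built as below (a Python sketch assuming each bin edge grows by a constant factor 1+df/f; the actual grid construction inside &amp;lt;tt&amp;gt;rebin_fouquan&amp;lt;/tt&amp;gt; may differ in detail):&lt;br /&gt;
&lt;br /&gt;
```python
# Hypothetical sketch: logarithmic frequency grid with df/f = 0.15,
# i.e., each bin is wider than the previous one by a constant factor.
f_min, f_max = 0.03125, 256.0   # frequency range from the foucalc example (Hz)
dff = 0.15                      # logarithmic resolution Delta f / f

edges = [f_min]
while edges[-1] < f_max:
    edges.append(edges[-1] * (1.0 + dff))

# Each original PSD value is then averaged into the bin its frequency falls in.
print(len(edges) - 1)   # number of logarithmic bins covering the range
```
&lt;br /&gt;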
&lt;br /&gt;
Once again, plot your PSD - either as PSD vs. frequency or PSD x frequency vs. frequency:&lt;br /&gt;
                                 &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%%PSD vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2,psd1.value,psd1.error);&lt;br /&gt;
&lt;br /&gt;
%%PSD x frequency vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2, psd1.value*(psd1.freq_lo+psd1.freq_hi)/2,&lt;br /&gt;
psd1.error*(psd1.freq_lo+psd1.freq_hi)/2);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Lag-energy spectrum for archival XMM-Newton data ==&lt;br /&gt;
&lt;br /&gt;
[--- under construction --- written by OK, please ask if something is unclear!; last edit by AG (10 Aug 2020) regarding the correction of telemetry drop-outs]&lt;br /&gt;
&lt;br /&gt;
The following is an example of how to compute PSDs, a lag-frequency spectrum, and a lag-energy spectrum from archival XMM-Newton data of GX 339-4 &amp;lt;ref&amp;gt;The corresponding publication is [https://ui.adsabs.harvard.edu/abs/2011MNRAS.414L..60U/abstract Uttley, P., et al., MNRAS 414, L60-L64 (2011)]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We start by extracting the XMM-Newton PN lightcurve in two energy bands (500-900 eV and 2000-3000 eV):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/tcsh&lt;br /&gt;
source $SOFTDIR/sas_init.csh # initialize the SAS software&lt;br /&gt;
set xmmscripts=${XMMTOOLS} # make sure the proper xmmscripts are loaded&lt;br /&gt;
set obsid='0204730201'&lt;br /&gt;
set datadir=/userdata/data/koenig/GX339-4/XMM-2004-03-16/odf&lt;br /&gt;
set dt=0.1&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmprepare --datadir=$datadir --prepdir=$obsid --pn --noapplyflaregti --timing --notimelog # prepare the data&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;500,900&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;2000,3000&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we can start the analysis. Let's load the data and the Good Time Intervals (GTI) information to be able to correct the light curve for the telemetry drop-outs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require(&amp;quot;isisscripts&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Load the lightcurve and GTI file&lt;br /&gt;
a=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc&amp;quot;);&lt;br /&gt;
b=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc&amp;quot;);&lt;br /&gt;
gtia=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc[GTI00006]&amp;quot;);&lt;br /&gt;
gtib=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc[GTI00006]&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Get the start time such that LC starts at 0s&lt;br /&gt;
tstart=fits_read_key(&amp;quot;full_sd_0.1s_0500-0900.lc[1]&amp;quot;, &amp;quot;TSTART&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The time information in the light curve is not given as a grid; however, a time grid connecting high to low bins is necessary for the filter_gti function to filter out all time bins with an insufficient fractional exposure. Compared to the default binning width of 100 s, telemetry drop-outs can last only a few seconds, so only data points whose time values fall directly into the gaps between the GTIs would be sorted out. This can be avoided by inserting a grid, which we have to create from the information in the light curve FITS file. Something that might be important for some analyses is at which position within each bin the given time stamps are placed (beginning, middle, or end). This is instrument dependent and should usually be found either in the header or the manual. For XMM data, the data points mark the beginning of each bin (trel = 0).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% The binning width is encoded in the header with the keyword TIMEDEL&lt;br /&gt;
variable tdiff = fits_read_key(&amp;quot;full_sd_0.1s_0500-0900.lc[1]&amp;quot;, &amp;quot;TIMEDEL&amp;quot;);&lt;br /&gt;
variable trel = 0.;&lt;br /&gt;
&lt;br /&gt;
variable tlo_a = a.time - trel*tdiff;&lt;br /&gt;
variable thi_a = a.time + (1.-trel)*tdiff;&lt;br /&gt;
variable tlo_b = b.time - trel*tdiff;&lt;br /&gt;
variable thi_b = b.time + (1.-trel)*tdiff;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Now we can go ahead and correct the light curve. As the default value of the fractional exposure (fracexp) is set to a very low limit, one needs to set this to a reasonably high level (e.g., 90-95%). We can pass this information via the corresponding qualifier.&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Subtract TSTART and do GTI correction&lt;br /&gt;
variable minimum_frac = 0.95;&lt;br /&gt;
(filta,filtb)     = filter_gti(tlo_a, thi_a, gtia; minfracexp=minimum_frac), filter_gti(tlo_b, thi_b ,gtib; minfracexp=minimum_frac);&lt;br /&gt;
(timea,timeb)     = a.time[filta]-tstart,b.time[filtb]-tstart;&lt;br /&gt;
(countsa,countsb) = a.counts[filta],b.counts[filtb];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we plot the (re-binned) lightcurve, there is not much to see.&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.jpg]]&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.pdf]]&lt;br /&gt;
&lt;br /&gt;
Fortunately, we can do a Fourier analysis in order to reveal the variability and correlations of these lightcurves. As described above, this can be done with the &amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; routine. I choose the Miyamoto normalization (see &amp;lt;ref&amp;gt;[https://ui.adsabs.harvard.edu/abs/1992ApJ...391L..21M/abstract Miyamoto et al., ApJ 391, L21-L24 (1992)] or [https://publikationen.uni-tuebingen.de/xmlui/handle/10900/48443 Katja's dissertation], p. 68f&amp;lt;/ref&amp;gt;) as in the publication. This normalization yields the PSD in units of &amp;lt;i&amp;gt;fractional&amp;lt;/i&amp;gt; root mean square (rms/R&amp;lt;sub&amp;gt;signal&amp;lt;/sub&amp;gt;)&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose segment length&lt;br /&gt;
dimseg=8192; % (bins), should be a power of 2 for FFT&lt;br /&gt;
dt=0.1; % (s), timing resolution of lightcurve extraction&lt;br /&gt;
&lt;br /&gt;
%% Compute PSDs, CPD, and time lags with foucalc&lt;br /&gt;
r=foucalc( struct{time=timea, rate1=countsa/dt, rate2=countsb/dt}, dimseg ; normtype=&amp;quot;Miyamoto&amp;quot;);&lt;br /&gt;
xlog;plot_with_err(r.freq,r.signormpsd1*r.freq,r.errnormpsd1*r.freq);&lt;br /&gt;
&lt;br /&gt;
%% There is a warning that there are gaps in the LC (due to the telemetry dropouts) if verbose is set in foucalc. We ignore that for this simple analysis.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd unbinned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx399m4 xmm psd unbinned.pdf]]&lt;br /&gt;
&lt;br /&gt;
This really needs some re-binning to increase the S/N-ratio. All &amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; quantities can be re-binned with the &amp;lt;code&amp;gt;rebin_fouquan&amp;lt;/code&amp;gt; routine (note that the PSD has already been averaged over segments when we chose the segment length).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose Delta f/f, the logarithmic frequency resolution&lt;br /&gt;
dff=0.15;&lt;br /&gt;
reb_psd1 = rebin_fouquan(r.freq, r.signormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err1 = rebin_fouquan(r.freq, r.errnormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_psd2 = rebin_fouquan(r.freq, r.signormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err2 = rebin_fouquan(r.freq, r.errnormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
&lt;br /&gt;
%% Need to weight the PSD error with the square root of the number of values the averaging was performed over&lt;br /&gt;
ylog;hplot_with_err( reb_psd1.freq_lo, reb_psd1.freq_hi, reb_psd1.value*reb_psd1.freq_lo, reb_err1.value/sqrt(reb_err1.n)*reb_psd1.freq_lo );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.pdf]]&lt;br /&gt;
&lt;br /&gt;
What can we deduce from this PSD? The minimum sampled frequency is 1/(dimseg*dt) - if you choose a long segment length, lower frequencies can be sampled, but the S/N-ratio is smaller. On the other hand, choosing a small segment length can lead to an effect called red-noise leakage, which is due to the finite extent of the data (also called &amp;lt;i&amp;gt;windowing&amp;lt;/i&amp;gt;, see &amp;lt;ref&amp;gt;[https://hdl.handle.net/11245/1.426951 M. van der Klis 1988]&amp;lt;/ref&amp;gt; p.8ff or &amp;lt;ref&amp;gt;[https://ui.adsabs.harvard.edu/abs/2014A%26ARv..22...72U/abstract Uttley, P. et al., A&amp;amp;A Review, Volume 22, article id. 72, 66 pp., 2014]&amp;lt;/ref&amp;gt; p.14f). The maximum sampled frequency is 1/(2*dt), where dt is the timing resolution of the lightcurve; this is called the Nyquist frequency.&lt;br /&gt;
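&lt;br /&gt;
For the numbers used in this example, the two frequency limits work out as follows (a trivial Python check, for illustration only):&lt;br /&gt;
&lt;br /&gt;
```python
# Frequency limits of the Fourier analysis for this example (illustration).
dt = 0.1         # timing resolution of the lightcurve (s)
dimseg = 8192    # segment length (bins)

f_min = 1.0 / (dimseg * dt)   # lowest sampled frequency, ~0.00122 Hz
f_nyq = 1.0 / (2.0 * dt)      # Nyquist frequency, 5 Hz
print(f_min, f_nyq)
```
&lt;br /&gt;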
&lt;br /&gt;
&amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; also computes the time lag between the two lightcurves. Let's first rebin and then plot them together with the coherence:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;For the curious:&amp;lt;/i&amp;gt;&lt;br /&gt;
The time-lag is computed from the phase of the cross-power density. The CPD is a complex quantity and thus has an imaginary and real part (see, e.g., Uttley+14 eq.8-10): C~e&amp;lt;sup&amp;gt;i&amp;amp;phi;&amp;lt;/sup&amp;gt;=cos(&amp;amp;phi;)+isin(&amp;amp;phi;) =&amp;gt; &amp;amp;phi;=arctan(Im/Re) with the time lag &amp;amp;tau;=&amp;amp;phi;/(2&amp;amp;pi;f). In code &amp;lt;code&amp;gt;phase=atan2(r.imagcpd12,r.realcpd12); tlag = phase/(2*PI*r.freq);&amp;lt;/code&amp;gt;.&lt;br /&gt;
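&lt;br /&gt;
This relation can be checked numerically. Below is a self-contained Python sketch (an illustration, not the foucalc code) that recovers a known delay between two sinusoids from the phase of their cross spectrum; the sign convention (which signal lags which) is an assumption here and should be verified against foucalc's output:&lt;br /&gt;
&lt;br /&gt;
```python
import cmath
import math

def dft_coeff(x, k):
    """k-th DFT coefficient of the real sequence x."""
    n = len(x)
    return sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))

n, dt = 1024, 0.01    # number of bins and bin width (s)
k = 50                # Fourier bin index
f = k / (n * dt)      # ~4.88 Hz, exactly on a Fourier bin
tau0 = 0.02           # true delay of signal 2 behind signal 1 (s)

x1 = [math.sin(2 * math.pi * f * t * dt) for t in range(n)]
x2 = [math.sin(2 * math.pi * f * (t * dt - tau0)) for t in range(n)]

cpd = dft_coeff(x1, k) * dft_coeff(x2, k).conjugate()  # cross power at f
phase = math.atan2(cpd.imag, cpd.real)                 # phi = arctan(Im/Re)
tau = phase / (2 * math.pi * f)                        # time lag tau = phi/(2 pi f)
print(round(tau, 6))   # recovers tau0 = 0.02
```
&lt;br /&gt;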
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Rebin time lag results&lt;br /&gt;
reb_lag12 = rebin_fouquan(r.freq, r.lag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_errlag12 = rebin_fouquan(r.freq, r.errlag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err(reb_lag12.freq_lo, reb_lag12.freq_hi, reb_lag12.value, reb_errlag12.value/sqrt(reb_errlag12.n) );&lt;br /&gt;
&lt;br /&gt;
%% Rebin coherence function&lt;br /&gt;
reb_cof12 = rebin_fouquan(r.freq, r.cof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_errcof12 = rebin_fouquan(r.freq, r.errcof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err( reb_cof12.freq_lo, reb_cof12.freq_hi, reb_cof12.value, reb_errcof12.value/sqrt(reb_errcof12.n) );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.png]] [[File:Gx339m4_xmm_coherence.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.pdf]] [[File:Gx339m4_xmm_coherence.pdf]]&lt;br /&gt;
&lt;br /&gt;
The coherence function is 1 at low frequencies, showing a perfect (linear) correlation between the two lightcurves (values above 1 are possible due to counting noise, see Sect. 3.4.1 and 3.5.3 in Katja's dissertation). As a final step, we calculate the lag-energy spectrum by extracting lightcurves in logarithmically binned energy intervals and calculating their time lags.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:gx339m4_xmm_lagenergy.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:gx339m4_xmm_lagenergy.pdf]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Isis / Slang]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=X2Go_Remote_Desktop&amp;diff=2008</id>
		<title>X2Go Remote Desktop</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=X2Go_Remote_Desktop&amp;diff=2008"/>
		<updated>2020-04-24T08:48:30Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
A remote desktop connection allows you to work with the full desktop, but remotely from home. Currently all desktop computers at Remeis support a remote desktop connection with '''X2Go'''.&lt;br /&gt;
&lt;br /&gt;
=== Installation ===&lt;br /&gt;
&lt;br /&gt;
All you need to do is to install the X2Go Client on your laptop, available for Linux, Mac, and Windows. This page shows how to do this: [https://wiki.x2go.org/doku.php/doc:installation:x2goclient X2Go Client Download and Installation]&lt;br /&gt;
&lt;br /&gt;
=== Usage ===&lt;br /&gt;
&lt;br /&gt;
Start the X2Go Client and click on &amp;quot;New Session&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
In the client software you have to specify the following things:&lt;br /&gt;
* username: your remeis username&lt;br /&gt;
* host:  &amp;lt;desktop-computer&amp;gt;.sternwarte.uni-erlangen.de   (fill in &amp;lt;desktop-computer&amp;gt; with the machine you typically use when working at the observatory)&lt;br /&gt;
* session type: XFCE   &lt;br /&gt;
* if you have ssh-keys installed, you can also check the box with &amp;quot;try auto login&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This could then look like the following (in case of user dauser on host corona):&lt;br /&gt;
&lt;br /&gt;
[[File:x2goclient_session_example.png|800px]]&lt;br /&gt;
&lt;br /&gt;
Save the session, type the session name into &amp;quot;Session&amp;quot;, press Enter, and type in your password. This will launch the XFCE environment on the Remeis machine.&lt;br /&gt;
&lt;br /&gt;
[[Category:Working Remotely]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1993</id>
		<title>Timing tools</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1993"/>
		<updated>2020-02-21T11:50:17Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Foucalc ==&lt;br /&gt;
&lt;br /&gt;
[Work in progress! You might use it as a reference in the meantime, but some things are not yet covered. Also let me know if things are not clear! - VG]&lt;br /&gt;
&lt;br /&gt;
The basic tool for X-ray timing analysis with the Remeis scripts is ''foucalc'' &amp;lt;ref&amp;gt;The original version of the Remeis timing scripts was written by Katja Pottschmidt. When using the tools, please cite [http://adsabs.harvard.edu/abs/2003A%26A...407.1039P Pottschmidt et al. 2003, A&amp;amp;A 407, 1039] and [http://adsabs.harvard.edu/abs/2000A%26A...357L..17P Pottschmidt et al. 2000, A&amp;amp;A 357, L17]; see also [http://adsabs.harvard.edu/abs/1999ApJ...510..874N Nowak et al., 1999, ApJ 510, 874]&amp;lt;/ref&amp;gt;, which calculates power, cross-power, coherence, and time-lag spectra &amp;lt;ref&amp;gt;What these are is wonderfully explained in Katja Pottschmidt's [http://astro.uni-tuebingen.de/publications/pottschmidt-diss.shtml PhD Thesis]&amp;lt;/ref&amp;gt;. It requires '''equally spaced, gapless''' lightcurves as well as the length of the lightcurve segment on which the Fourier transformation is to be performed, given in number of bins. Note that because the underlying algorithm is an FFT, the segment length should be a power of two for best performance.&lt;br /&gt;
&lt;br /&gt;
When only one lightcurve is given, &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; calculates only the power density and related quantities. When more than one lightcurve is given, the cross-power, coherence, and time-lag spectra are calculated for all pairs of lightcurves. The lightcurves have to be in the format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lc = struct { time=[t_1, t_2, ...], rate1=[r1_1, r1_2, ...], rate2=..., rate3= };&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The time has to be given in seconds (so be cautious when using &amp;lt;tt&amp;gt;fits_read_lc&amp;lt;/tt&amp;gt;, which usually returns the time in MJD).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;If you use RXTE/PCA lightcurves, don't forget to set the &amp;lt;tt&amp;gt;numinst&amp;lt;/tt&amp;gt; keyword, which defines the number of instruments (=PCUs) on, since otherwise your noise correction will be wrong. If you are not using RXTE/PCA set the &amp;lt;tt&amp;gt;deadtime&amp;lt;/tt&amp;gt; keyword to whatever is appropriate for the instrument you are using (e.g. 0).&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A single PSD value, before any averaging, has an uncertainty of the same order of magnitude as the PSD value itself. Averaging is therefore crucial. This is built into &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt;: given a lightcurve of, say, 1288192 bins and a segment length of 16384 bins, foucalc will calculate a PSD for every segment and average every value over all 1288192/16384 = 78 segments.&lt;br /&gt;
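For intuition, the averaging step can be sketched in a few lines of Python/NumPy (an illustrative stand-in, not the ISIS implementation; the function name averaged_psd is made up for this sketch):

```python
import numpy as np

def averaged_psd(rate, dt, dimseg):
    """Unnormalized periodogram, averaged over consecutive segments
    of dimseg bins, as foucalc does internally."""
    nseg = len(rate) // dimseg                # number of complete segments
    freq = np.fft.rfftfreq(dimseg, d=dt)[1:]  # drop the zero-frequency bin
    psds = []
    for i in range(nseg):
        seg = rate[i * dimseg:(i + 1) * dimseg]
        psds.append(np.abs(np.fft.rfft(seg)[1:]) ** 2)
    avg = np.mean(psds, axis=0)
    err = np.std(psds, axis=0, ddof=1) / np.sqrt(nseg)  # error of the mean
    return freq, avg, err
```

The error of the averaged value shrinks with the square root of the number of segments, which is why a longer lightcurve (or shorter segments) gives a cleaner PSD.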
&lt;br /&gt;
== PSDs ==&lt;br /&gt;
&lt;br /&gt;
=== Calculating the PSD ===&lt;br /&gt;
&lt;br /&gt;
Given two PCA lightcurves, &amp;lt;pre&amp;gt;50110-01-45-00_437_AND_good_14off_excl_0-10.lc&amp;lt;/pre&amp;gt; and &amp;lt;pre&amp;gt;50110-01-45-00_437_AND_good_14off_excl_11-13.lc&amp;lt;/pre&amp;gt; (which have been checked for gaps and found to be gapless &amp;lt;ref&amp;gt;A useful function for this is &amp;lt;tt&amp;gt;split_lc_at_gaps&amp;lt;/tt&amp;gt;&amp;lt;/ref&amp;gt; ) we want to calculate and bin the PSD, lags and coherence.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% read in lightcurves&lt;br /&gt;
variable lc1 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_0-10.lc&amp;quot;);&lt;br /&gt;
variable lc2 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_11-13.lc&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%%define variables to store average rate of lightcurve &amp;amp; rms&lt;br /&gt;
variable rms,avg;&lt;br /&gt;
&lt;br /&gt;
%%calculate timing quantities&lt;br /&gt;
variable result = foucalc(struct{time=lc1.time,rate1=lc1.rate,rate2=lc2.rate},16384;&lt;br /&gt;
                          numinst=3,RMS=&amp;amp;rms,avgrate=&amp;amp;avg,verbose);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This produces the following output, thanks to the &amp;lt;tt&amp;gt;verbose&amp;lt;/tt&amp;gt; keyword:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
light curves: 2516 s with 1288192 bins of 0.00195312 s  (0 s gaps)&lt;br /&gt;
  segmentation: 78 segments of 16384 bins&lt;br /&gt;
=&amp;gt; frequencies: 0.03125-256 Hz.&lt;br /&gt;
Power spectra will be calculated in Miyamoto-normalization.&lt;br /&gt;
Calculating power spectrum for rate1.  RMS(0.03125-256 Hz) = 27.1%&lt;br /&gt;
Calculating power spectrum for rate2.  RMS(0.03125-256 Hz) = 25.7%&lt;br /&gt;
Calculating cross power spectrum (=&amp;gt; coherence and timelags) for rate1 and rate2.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This verbose output is currently the only place where &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; will complain if the lightcurve contains gaps. It will calculate the quantities even if the lightcurve contains gaps - you just cannot trust the results. Therefore: use verbose and check that you have done everything right, at least while building up your scripts.&lt;br /&gt;
&lt;br /&gt;
Plot the PSD for the first lightcurve with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
xlog;ylog;plot_with_err(result.freq,result.signormpsd1,result.errnormpsd1);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Rebinning the PSD ===&lt;br /&gt;
&lt;br /&gt;
The uncertainties on the individual PSD values are still large, especially at high frequencies. To further improve the S/N ratio, the PSD is rebinned onto a new frequency grid: the final PSD value in each new bin is the average over all individual PSD values whose frequencies fall into that bin. Usually a logarithmic rebinning with df/f = const. is used; a typical value is df/f = 0.15.&lt;br /&gt;
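The idea behind the logarithmic grid can be sketched in Python/NumPy (illustrative only; log_rebin is a made-up stand-in for rebin_fouquan):

```python
import numpy as np

def log_rebin(freq, value, dff=0.15):
    """Average value onto a frequency grid with constant df/f = dff."""
    f_lo, f_hi, avg, n = [], [], [], []
    lo = freq[0]
    while freq[-1] >= lo:
        hi = lo * (1.0 + dff)                       # bin width grows with frequency
        mask = np.logical_and(freq >= lo, hi > freq)
        if mask.any():                              # skip empty bins
            f_lo.append(lo)
            f_hi.append(hi)
            avg.append(value[mask].mean())
            n.append(mask.sum())                    # kept for the error weighting
        lo = hi
    return np.array(f_lo), np.array(f_hi), np.array(avg), np.array(n)
```

Because the bin edges grow geometrically, each high-frequency bin averages over many raw values, which is exactly where the unbinned PSD is noisiest.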
&lt;br /&gt;
Quantities calculated with &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; are rebinned with the &amp;lt;tt&amp;gt;rebin_fouquan&amp;lt;/tt&amp;gt; tool - the quantity and its error have to be rebinned in two separate steps:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable reb_psd = rebin_fouquan(result.freq,result.signormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
variable reb_err = rebin_fouquan(result.freq,result.errnormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;reb_err&amp;lt;/tt&amp;gt; does not yet contain the real errors of the PSD: it still has to be divided by the square root of the number of values the averaging was performed over. The final PSD is therefore:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable psd1 = struct{freq_lo = reb_psd.freq_lo,freq_hi=reb_psd.freq_hi,&lt;br /&gt;
  value = reb_psd.value, error= reb_err.value/sqrt(reb_err.n)};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once again, plot your PSD - either as PSD vs. frequency or as PSD x frequency vs. frequency:&lt;br /&gt;
                                 &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%%PSD vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2,psd1.value,psd1.error);&lt;br /&gt;
&lt;br /&gt;
%%PSD x frequency vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2, psd1.value*(psd1.freq_lo+psd1.freq_hi)/2,&lt;br /&gt;
psd1.error*(psd1.freq_lo+psd1.freq_hi)/2);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Lag-energy spectrum for archival XMM-Newton data ==&lt;br /&gt;
&lt;br /&gt;
[--- under construction --- written by OK, please ask if something is unclear!]&lt;br /&gt;
&lt;br /&gt;
The following is an example of how to compute PSDs, a lag-frequency spectrum, and a lag-energy spectrum for archival XMM-Newton data of GX 339-4 &amp;lt;ref&amp;gt;The corresponding publication is [https://ui.adsabs.harvard.edu/abs/2011MNRAS.414L..60U/abstract Uttley, P. et al., MNRAS, Vol. 414, I. 1, pp. L60-L64 (2011)]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We start by extracting the XMM-Newton PN lightcurve in two energy bands (500-900 eV and 2000-3000 eV):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/tcsh&lt;br /&gt;
source $SOFTDIR/sas_init.csh # initialize the SAS software&lt;br /&gt;
set xmmscripts=${XMMTOOLS} # make sure the proper xmmscripts are loaded&lt;br /&gt;
set obsid='0204730201'&lt;br /&gt;
set datadir=/userdata/data/koenig/GX339-4/XMM-2004-03-16/odf&lt;br /&gt;
set dt=0.1&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmprepare --datadir=$datadir --prepdir=$obsid --pn --noapplyflaregti --timing --notimelog # prepare the data&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;500,900&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;2000,3000&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we can start the analysis. Let's load the data and correct them for the telemetry drop-outs (encoded in the Good Time Intervals):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require(&amp;quot;isisscripts&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Load the lightcurve and GTI file&lt;br /&gt;
a=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc&amp;quot;);&lt;br /&gt;
b=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc&amp;quot;);&lt;br /&gt;
gtia=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc[GTI00006]&amp;quot;);&lt;br /&gt;
gtib=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc[GTI00006]&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Get the start time such that LC starts at 0s&lt;br /&gt;
tstart=fits_read_key(&amp;quot;full_sd_0.1s_0500-0900.lc[1]&amp;quot;, &amp;quot;TSTART&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Subtract TSTART and do GTI correction&lt;br /&gt;
(filta,filtb)     = filter_gti(a.time, gtia), filter_gti(b.time,gtib);&lt;br /&gt;
(timea,timeb)     = a.time[filta]-tstart,b.time[filtb]-tstart;&lt;br /&gt;
(countsa,countsb) = a.counts[filta],b.counts[filtb];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we plot the (re-binned) lightcurve, there is not much to see.&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.jpg]]&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.pdf]]&lt;br /&gt;
&lt;br /&gt;
Fortunately, we can do a Fourier analysis in order to reveal the variability and correlations of these lightcurves. As described above, this can be done with the &amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; routine. I choose the Miyamoto normalization (see &amp;lt;ref&amp;gt;[https://ui.adsabs.harvard.edu/abs/1992ApJ...391L..21M/abstract Miyamoto et al., ApJ, 391:L21-L24 (1992)] or [https://publikationen.uni-tuebingen.de/xmlui/handle/10900/48443 Katja's dissertation], p.68f&amp;lt;/ref&amp;gt;) as in the publication. This normalization yields the PSD in units of &amp;lt;i&amp;gt;fractional&amp;lt;/i&amp;gt; root mean square, (rms/R&amp;lt;sub&amp;gt;signal&amp;lt;/sub&amp;gt;)&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose segment length&lt;br /&gt;
dimseg=8192; % (bins), should be a power of 2 for FFT&lt;br /&gt;
dt=0.1; % (s), timing resolution of lightcurve extraction&lt;br /&gt;
&lt;br /&gt;
%% Compute PSDs, CPD, and time lags with foucalc&lt;br /&gt;
r=foucalc( struct{time=timea, rate1=countsa/dt, rate2=countsb/dt}, dimseg ; normtype=&amp;quot;Miyamoto&amp;quot;);&lt;br /&gt;
xlog;plot_with_err(r.freq,r.signormpsd1*r.freq,r.errnormpsd1*r.freq);&lt;br /&gt;
&lt;br /&gt;
%% There is a warning that there are gaps in the LC (due to the telemetry dropouts) if verbose is set in foucalc. We ignore that for this simple analysis.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd unbinned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx399m4 xmm psd unbinned.pdf]]&lt;br /&gt;
&lt;br /&gt;
This really needs some rebinning to increase the S/N ratio. All &amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; quantities can be rebinned with the &amp;lt;code&amp;gt;rebin_fouquan&amp;lt;/code&amp;gt; routine (note that the PSD has already been averaged over segments when we chose the segment length).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose Delta f/f, the logarithmic frequency resolution&lt;br /&gt;
dff=0.15;&lt;br /&gt;
reb_psd1 = rebin_fouquan(r.freq, r.signormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err1 = rebin_fouquan(r.freq, r.errnormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_psd2 = rebin_fouquan(r.freq, r.signormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err2 = rebin_fouquan(r.freq, r.errnormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
&lt;br /&gt;
%% Need to weight the PSD error with the square root of the number of values the averaging was performed over&lt;br /&gt;
ylog;hplot_with_err( reb_psd1.freq_lo, reb_psd1.freq_hi, reb_psd1.value*reb_psd1.freq_lo, reb_err1.value/sqrt(reb_err1.n)*reb_psd1.freq_lo );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.pdf]]&lt;br /&gt;
&lt;br /&gt;
What can we deduce from this PSD? The minimum sampled frequency is 1/(dimseg*dt), the inverse of the segment duration: if you choose a long segment length, lower frequencies can be sampled, but the S/N ratio is smaller. On the other hand, choosing a short segment length can lead to an effect called red-noise leakage, which is due to the finite extent of the data (also called &amp;lt;i&amp;gt;windowing&amp;lt;/i&amp;gt;, see &amp;lt;ref&amp;gt;[https://hdl.handle.net/11245/1.426951 M. van der Klis 1988]&amp;lt;/ref&amp;gt; p.8ff or &amp;lt;ref&amp;gt;[https://ui.adsabs.harvard.edu/abs/2014A%26ARv..22...72U/abstract Uttley, P. et al., A&amp;amp;A Review, Volume 22, article id.72, 66 pp., 2014]&amp;lt;/ref&amp;gt; p.14f.). The maximum sampled frequency is 1/(2*dt), where dt is the timing resolution of the lightcurve; this is called the Nyquist frequency.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; also computes the time lag between the two lightcurves. Let's first rebin and then plot them together with the coherence:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;For the curious:&amp;lt;/i&amp;gt;&lt;br /&gt;
The time-lag is computed from the phase of the cross-power density. The CPD is a complex quantity and thus has an imaginary and real part (see, e.g., Uttley+14 eq.8-10): C~e&amp;lt;sup&amp;gt;i&amp;amp;phi;&amp;lt;/sup&amp;gt;=cos(&amp;amp;phi;)+isin(&amp;amp;phi;) =&amp;gt; &amp;amp;phi;=arctan(Im/Re) with the time lag &amp;amp;tau;=&amp;amp;phi;/(2&amp;amp;pi;f). In code &amp;lt;code&amp;gt;phase=atan2(r.imagcpd12,r.realcpd12); tlag = phase/(2*PI*r.freq);&amp;lt;/code&amp;gt;.&lt;br /&gt;
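In Python/NumPy the same phase-to-lag conversion reads (illustrative sketch; foucalc already returns lag12, and the sign convention depends on the ordering of the two lightcurves):

```python
import numpy as np

def time_lag(freq, re_cpd, im_cpd):
    """Time lag from the real and imaginary parts of the cross-power density."""
    phase = np.arctan2(im_cpd, re_cpd)    # phi = arctan(Im/Re), in (-pi, pi]
    return phase / (2.0 * np.pi * freq)   # tau = phi / (2 pi f)
```

Using arctan2 rather than arctan keeps the correct quadrant of the phase, so lags of either sign are recovered.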
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Rebin time lag results&lt;br /&gt;
reb_lag12 = rebin_fouquan(r.freq, r.lag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_errlag12 = rebin_fouquan(r.freq, r.errlag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err(reb_lag12.freq_lo, reb_lag12.freq_hi, reb_lag12.value, reb_errlag12.value/sqrt(reb_errlag12.n) );&lt;br /&gt;
&lt;br /&gt;
%% Rebin coherence function&lt;br /&gt;
reb_cof12 = rebin_fouquan(r.freq, r.cof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_errcof12 = rebin_fouquan(r.freq, r.errcof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err( reb_cof12.freq_lo, reb_cof12.freq_hi, reb_cof12.value, reb_errcof12.value/sqrt(reb_errcof12.n) );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.png]] [[File:Gx339m4_xmm_coherence.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.pdf]] [[File:Gx339m4_xmm_coherence.pdf]]&lt;br /&gt;
&lt;br /&gt;
The coherence function is 1 at low frequencies, showing a perfect (linear) correlation between the two lightcurves (values above 1 are possible due to counting noise, see Sect. 3.4.1 and 3.5.3 in Katja's dissertation). As a final step, we calculate the lag-energy spectrum by extracting lightcurves in logarithmically binned energy intervals and calculating their time lags.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
refeband=[540,10080]; % (eV), lower and upper bound of energy reference band&lt;br /&gt;
reffreqband=[0.034,0.12]; %% (Hz), lower and upper bound of frequency reference band, as in Fig. 2(a) of Uttley+11&lt;br /&gt;
&lt;br /&gt;
fnameref=sprintf(&amp;quot;%sfull_sd_0.1s_%04d-%d.lc&amp;quot;,datadir,refeband[0],refeband[1]);&lt;br /&gt;
lcref       = fits_read_table(fnameref);&lt;br /&gt;
lcrefgti    = fits_read_table(fnameref+&amp;quot;[GTI00006]&amp;quot;); % GTI correction&lt;br /&gt;
filtref     = filter_gti(lcref.time, lcrefgti);&lt;br /&gt;
timelcref   = lcref.time[filtref]-tstart;&lt;br /&gt;
cratelcref = lcref.counts[filtref]/dt;&lt;br /&gt;
&lt;br /&gt;
(elo,ehi)=log_grid(500,10000,14); % define logarithmic energy bins&lt;br /&gt;
&lt;br /&gt;
tlags=Double_Type[length(elo)];&lt;br /&gt;
tlagerrs=Double_Type[length(elo)];&lt;br /&gt;
&lt;br /&gt;
_for ii (0,length(elo)-1,1) {&lt;br /&gt;
  lcname=sprintf(&amp;quot;full_sd_0.1s_%04.0f-%04.0f.lc&amp;quot;,elo[ii],ehi[ii]);&lt;br /&gt;
  if (ehi[ii]&amp;gt;9999) { lcname=sprintf(&amp;quot;full_sd_0.1s_%04.0f-%05.0f.lc&amp;quot;,elo[ii],ehi[ii]); };&lt;br /&gt;
&lt;br /&gt;
  lc = fits_read_table(lcname);&lt;br /&gt;
  lcgti = fits_read_table(lcname+&amp;quot;[GTI00006]&amp;quot;);&lt;br /&gt;
  filtgti = filter_gti(lc.time, lcgti);&lt;br /&gt;
&lt;br /&gt;
  timelc=lc.time[filtgti]-tstart; %% Do GTI correction&lt;br /&gt;
  cratelc=lc.counts[filtgti]/dt;&lt;br /&gt;
&lt;br /&gt;
  %% Calculate PSD, cross-spectrum (CPD) and lags with foucalc&lt;br /&gt;
  %% Note that we subtract off the small band LC from the reference&lt;br /&gt;
  %% LC. Uttley+11 call this a Channel-of-interest (CI) correction.&lt;br /&gt;
  %% This is needed to account for the Poisson noise which is&lt;br /&gt;
  %% correlated for both LCs. See also Uttley+11 Sect 3.2.&lt;br /&gt;
  r=foucalc(struct{time=timelcref, rate1=cratelcref-cratelc, rate2=cratelc}, dimseg ; normtype=&amp;quot;Miyamoto&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
  %% Filter the CPD / timelag array for the given frequency range&lt;br /&gt;
  filt=where(reffreqband[0]&amp;lt;=r.freq&amp;lt;=reffreqband[1]);&lt;br /&gt;
&lt;br /&gt;
  %% Compute average time lag as arithmetic weighted mean&lt;br /&gt;
  tlags[ii]=weighted_mean(r.lag12[filt] ; err=r.errlag12[filt]);&lt;br /&gt;
  tlagerrs[ii]=1./sqrt(sum(1/r.errlag12[filt]^2));&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
ylin;hplot_with_err(elo/1000., ehi/1000., tlags, tlagerrs);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:gx339m4_xmm_lagenergy.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:gx339m4_xmm_lagenergy.pdf]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Isis / Slang]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1992</id>
		<title>Timing tools</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1992"/>
		<updated>2020-02-21T11:32:19Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Foucalc ==&lt;br /&gt;
&lt;br /&gt;
[Work in progress! You might use it as a reference in the meantime, but some things are not yet covered. Also let me know if things are not clear! - VG]&lt;br /&gt;
&lt;br /&gt;
The basic tool for X-ray timing analysis with the Remeis scripts is ''foucalc'' &amp;lt;ref&amp;gt;The original version of the Remeis timing scripts was written by Katja Pottschmidt. When using the tools, please cite [http://adsabs.harvard.edu/abs/2003A%26A...407.1039P Pottschmidt et al. 2003, A&amp;amp;A 407, 1039] and [http://adsabs.harvard.edu/abs/2000A%26A...357L..17P Pottschmidt et al. 2000, A&amp;amp;A 357, L17]; see also [http://adsabs.harvard.edu/abs/1999ApJ...510..874N Nowak et al., 1999, ApJ 510, 874]&amp;lt;/ref&amp;gt;, which calculates power, cross-power, coherence, and time-lag spectra &amp;lt;ref&amp;gt;What these are is wonderfully explained in Katja Pottschmidt's [http://astro.uni-tuebingen.de/publications/pottschmidt-diss.shtml PhD Thesis]&amp;lt;/ref&amp;gt;. It requires '''equally spaced, gapless''' lightcurves as well as the length of the lightcurve segment on which the Fourier transformation is to be performed, given in number of bins. Note that because the underlying algorithm is an FFT, the segment length should be a power of two for best performance.&lt;br /&gt;
&lt;br /&gt;
When only one lightcurve is given, &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; calculates only the power density and related quantities. When more than one lightcurve is given, the cross-power, coherence, and time-lag spectra are calculated for all pairs of lightcurves. The lightcurves have to be in the format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lc = struct { time=[t_1, t_2, ...], rate1=[r1_1, r1_2, ...], rate2=..., rate3= };&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The time has to be given in seconds (so be cautious when using &amp;lt;tt&amp;gt;fits_read_lc&amp;lt;/tt&amp;gt;, which usually returns the time in MJD).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;If you use RXTE/PCA lightcurves, don't forget to set the &amp;lt;tt&amp;gt;numinst&amp;lt;/tt&amp;gt; keyword, which defines the number of instruments (=PCUs) on, since otherwise your noise correction will be wrong. If you are not using RXTE/PCA set the &amp;lt;tt&amp;gt;deadtime&amp;lt;/tt&amp;gt; keyword to whatever is appropriate for the instrument you are using (e.g. 0).&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A single PSD value, before any averaging, has an uncertainty of the same order of magnitude as the PSD value itself. Averaging is therefore crucial. This is built into &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt;: given a lightcurve of, say, 1288192 bins and a segment length of 16384 bins, foucalc will calculate a PSD for every segment and average every value over all 1288192/16384 = 78 segments.&lt;br /&gt;
&lt;br /&gt;
== PSDs ==&lt;br /&gt;
&lt;br /&gt;
=== Calculating the PSD ===&lt;br /&gt;
&lt;br /&gt;
Given two PCA lightcurves, &amp;lt;pre&amp;gt;50110-01-45-00_437_AND_good_14off_excl_0-10.lc&amp;lt;/pre&amp;gt; and &amp;lt;pre&amp;gt;50110-01-45-00_437_AND_good_14off_excl_11-13.lc&amp;lt;/pre&amp;gt; (which have been checked for gaps and found to be gapless &amp;lt;ref&amp;gt;A useful function for this is &amp;lt;tt&amp;gt;split_lc_at_gaps&amp;lt;/tt&amp;gt;&amp;lt;/ref&amp;gt; ) we want to calculate and bin the PSD, lags and coherence.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% read in lightcurves&lt;br /&gt;
variable lc1 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_0-10.lc&amp;quot;);&lt;br /&gt;
variable lc2 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_11-13.lc&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%%define variables to store average rate of lightcurve &amp;amp; rms&lt;br /&gt;
variable rms,avg;&lt;br /&gt;
&lt;br /&gt;
%%calculate timing quantities&lt;br /&gt;
variable result = foucalc(struct{time=lc1.time,rate1=lc1.rate,rate2=lc2.rate},16384;&lt;br /&gt;
                          numinst=3,RMS=&amp;amp;rms,avgrate=&amp;amp;avg,verbose);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This produces the following output, thanks to the &amp;lt;tt&amp;gt;verbose&amp;lt;/tt&amp;gt; keyword:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
light curves: 2516 s with 1288192 bins of 0.00195312 s  (0 s gaps)&lt;br /&gt;
  segmentation: 78 segments of 16384 bins&lt;br /&gt;
=&amp;gt; frequencies: 0.03125-256 Hz.&lt;br /&gt;
Power spectra will be calculated in Miyamoto-normalization.&lt;br /&gt;
Calculating power spectrum for rate1.  RMS(0.03125-256 Hz) = 27.1%&lt;br /&gt;
Calculating power spectrum for rate2.  RMS(0.03125-256 Hz) = 25.7%&lt;br /&gt;
Calculating cross power spectrum (=&amp;gt; coherence and timelags) for rate1 and rate2.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This verbose output is currently the only place where &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; will complain if the lightcurve contains gaps. It will calculate the quantities even if the lightcurve contains gaps - you just cannot trust the results. Therefore: use verbose and check that you have done everything right, at least while building up your scripts.&lt;br /&gt;
&lt;br /&gt;
Plot the PSD for the first lightcurve with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
xlog;ylog;plot_with_err(result.freq,result.signormpsd1,result.errnormpsd1);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Rebinning the PSD ===&lt;br /&gt;
&lt;br /&gt;
The uncertainties on the individual PSD values are still large, especially at high frequencies. To further improve the S/N ratio, the PSD is rebinned onto a new frequency grid: the final PSD value in each new bin is the average over all individual PSD values whose frequencies fall into that bin. Usually a logarithmic rebinning with df/f = const. is used; a typical value is df/f = 0.15.&lt;br /&gt;
&lt;br /&gt;
Quantities calculated with &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; are rebinned with the &amp;lt;tt&amp;gt;rebin_fouquan&amp;lt;/tt&amp;gt; tool - the quantity and its error have to be rebinned in two separate steps:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable reb_psd = rebin_fouquan(result.freq,result.signormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
variable reb_err = rebin_fouquan(result.freq,result.errnormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;reb_err&amp;lt;/tt&amp;gt; does not yet contain the real errors of the PSD: it still has to be divided by the square root of the number of values the averaging was performed over. The final PSD is therefore:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable psd1 = struct{freq_lo = reb_psd.freq_lo,freq_hi=reb_psd.freq_hi,&lt;br /&gt;
  value = reb_psd.value, error= reb_err.value/sqrt(reb_err.n)};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once again, plot your PSD - either as PSD vs. frequency or as PSD x frequency vs. frequency:&lt;br /&gt;
                                 &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%%PSD vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2,psd1.value,psd1.error);&lt;br /&gt;
&lt;br /&gt;
%%PSD x frequency vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2, psd1.value*(psd1.freq_lo+psd1.freq_hi)/2,&lt;br /&gt;
psd1.error*(psd1.freq_lo+psd1.freq_hi)/2);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Lag-energy spectrum for archival XMM-Newton data ==&lt;br /&gt;
&lt;br /&gt;
[--- under construction --- written by OK, please ask if something is unclear!]&lt;br /&gt;
&lt;br /&gt;
The following is an example of how to compute PSDs, a lag-frequency spectrum, and a lag-energy spectrum for archival XMM-Newton data of GX 339-4 &amp;lt;ref&amp;gt;The corresponding publication is [https://ui.adsabs.harvard.edu/abs/2011MNRAS.414L..60U/abstract Uttley, P. et al., MNRAS, Vol. 414, I. 1, pp. L60-L64 (2011)]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We start by extracting the XMM-Newton PN lightcurve in two energy bands (500-900 eV and 2000-3000 eV):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/tcsh&lt;br /&gt;
source $SOFTDIR/sas_init.csh # initialize the SAS software&lt;br /&gt;
set xmmscripts=${XMMTOOLS} # make sure the proper xmmscripts are loaded&lt;br /&gt;
set obsid='0204730201'&lt;br /&gt;
set datadir=/userdata/data/koenig/GX339-4/XMM-2004-03-16/odf&lt;br /&gt;
set dt=0.1&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmprepare --datadir=$datadir --prepdir=$obsid --pn --noapplyflaregti --timing --notimelog # prepare the data&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;500,900&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;2000,3000&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we can start the analysis. Let's load the data and correct them for the telemetry drop-outs (encoded in the Good Time Intervals):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require(&amp;quot;isisscripts&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Load the lightcurve and GTI file&lt;br /&gt;
a=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc&amp;quot;);&lt;br /&gt;
b=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc&amp;quot;);&lt;br /&gt;
gtia=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc[GTI00006]&amp;quot;);&lt;br /&gt;
gtib=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc[GTI00006]&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Get the start time such that LC starts at 0s&lt;br /&gt;
tstart=fits_read_key(&amp;quot;full_sd_0.1s_0500-0900.lc[1]&amp;quot;, &amp;quot;TSTART&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Subtract TSTART and do GTI correction&lt;br /&gt;
(filta,filtb)     = filter_gti(a.time, gtia), filter_gti(b.time,gtib);&lt;br /&gt;
(timea,timeb)     = a.time[filta]-tstart,b.time[filtb]-tstart;&lt;br /&gt;
(countsa,countsb) = a.counts[filta],b.counts[filtb];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we plot the (re-binned) lightcurve, there is not much to see.&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.jpg]]&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.pdf]]&lt;br /&gt;
&lt;br /&gt;
Fortunately, we can do a Fourier analysis in order to reveal the variability and correlations of these lightcurves. As described above, this can be done with the &amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; routine. I choose the Miyamoto normalization (see &amp;lt;ref&amp;gt;[https://ui.adsabs.harvard.edu/abs/1992ApJ...391L..21M/abstract Miyamoto et al., ApJ, 391:L21-L24 (1992)] or [https://publikationen.uni-tuebingen.de/xmlui/handle/10900/48443 Katja's dissertation], p.68f&amp;lt;/ref&amp;gt;) as in the publication. This normalization yields the PSD in units of &amp;lt;i&amp;gt;fractional&amp;lt;/i&amp;gt; root mean square (rms/R&amp;lt;sub&amp;gt;signal&amp;lt;/sub&amp;gt;)&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose segment length&lt;br /&gt;
dimseg=8192; % (bins), should be a power of 2 for FFT&lt;br /&gt;
dt=0.1; % (s), timing resolution of lightcurve extraction&lt;br /&gt;
&lt;br /&gt;
%% Compute PSDs, CPD, and time lags with foucalc&lt;br /&gt;
r=foucalc( struct{time=timea, rate1=countsa/dt, rate2=countsb/dt}, dimseg ; normtype=&amp;quot;Miyamoto&amp;quot;);&lt;br /&gt;
xlog;ylog;plot_with_err(r.freq,r.signormpsd1*r.freq,r.errnormpsd1*r.freq);&lt;br /&gt;
&lt;br /&gt;
%% If verbose is set, foucalc warns that there are gaps in the LC (due to the telemetry dropouts). We ignore that for this simple analysis.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd unbinned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx399m4 xmm psd unbinned.pdf]]&lt;br /&gt;
&lt;br /&gt;
This really needs some re-binning to increase the S/N-ratio. All &amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; quantities can be re-binned with the &amp;lt;code&amp;gt;rebin_fouquan&amp;lt;/code&amp;gt; routine (note that the PSD has already been averaged over segments when we set the segment length).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose Delta f/f, the logarithmic frequency resolution&lt;br /&gt;
dff=0.15;&lt;br /&gt;
reb_psd1 = rebin_fouquan(r.freq, r.signormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err1 = rebin_fouquan(r.freq, r.errnormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_psd2 = rebin_fouquan(r.freq, r.signormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err2 = rebin_fouquan(r.freq, r.errnormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
&lt;br /&gt;
%% Need to weight the PSD error with the square root of the number of values the averaging was performed over&lt;br /&gt;
ylog;hplot_with_err( reb_psd1.freq_lo, reb_psd1.freq_hi, reb_psd1.value*reb_psd1.freq_lo, reb_err1.value/sqrt(reb_err1.n)*reb_psd1.freq_lo );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.pdf]]&lt;br /&gt;
&lt;br /&gt;
What can we deduce from this PSD? The minimum sampled frequency is 1/(dimseg*dt), i.e., the inverse of the duration of a segment: a long segment length allows low frequencies to be sampled, but also lowers the S/N-ratio because fewer segments are averaged. On the other hand, choosing a small segment length can lead to an effect called red-noise leakage, which is due to the finite extent of the data (also called &amp;lt;i&amp;gt;windowing&amp;lt;/i&amp;gt;, see &amp;lt;ref&amp;gt;[https://hdl.handle.net/11245/1.426951 M. van der Klis 1988]&amp;lt;/ref&amp;gt; p.8ff or &amp;lt;ref&amp;gt;[https://ui.adsabs.harvard.edu/abs/2014A%26ARv..22...72U/abstract Uttley, P. et al., A&amp;amp;A Review, Volume 22, article id.72, 66 pp., 2014]&amp;lt;/ref&amp;gt; p.14f.). The maximum sampled frequency is 1/(2*dt), where dt is the timing resolution of the lightcurve; this is called the Nyquist frequency.&lt;br /&gt;
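As a quick numerical cross-check (sketched in Python rather than S-Lang, purely illustrative), the sampled frequency range follows directly from dt and dimseg; the numbers below are the ones used in this XMM example:

```python
# Sanity check of the sampled frequency range of a segment-averaged FFT PSD.
# Numbers follow the XMM example in this tutorial.
dt = 0.1        # s, timing resolution of the lightcurve
dimseg = 8192   # bins per FFT segment

f_min = 1.0 / (dimseg * dt)   # lowest sampled frequency = 1/(segment duration)
f_nyq = 1.0 / (2.0 * dt)      # Nyquist frequency, here 5 Hz

print(f_min, f_nyq)           # roughly 0.00122 Hz and 5.0 Hz
```

The same arithmetic reproduces the 0.03125-256 Hz range quoted for the PCA example (16384 bins of 0.00195312 s).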
&lt;br /&gt;
&amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; also computes the time lag between the two lightcurves. Let's first rebin and then plot them together with the coherence:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;For the curious:&amp;lt;/i&amp;gt;&lt;br /&gt;
The time-lag is computed from the phase of the cross-power density. The CPD is a complex quantity and thus has an imaginary and real part (see, e.g., Uttley+14 eq.8-10): C~e&amp;lt;sup&amp;gt;i&amp;amp;phi;&amp;lt;/sup&amp;gt;=cos(&amp;amp;phi;)+isin(&amp;amp;phi;) =&amp;gt; &amp;amp;phi;=arctan(Im/Re) with the time lag &amp;amp;tau;=&amp;amp;phi;/(2&amp;amp;pi;f). In code &amp;lt;code&amp;gt;phase=atan2(r.imagcpd12,r.realcpd12); tlag = phase/(2*PI*r.freq);&amp;lt;/code&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Rebin time lag results&lt;br /&gt;
reb_lag12 = rebin_fouquan(r.freq, r.lag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_errlag12 = rebin_fouquan(r.freq, r.errlag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err(reb_lag12.freq_lo, reb_lag12.freq_hi, reb_lag12.value, reb_errlag12.value/sqrt(reb_errlag12.n) );&lt;br /&gt;
&lt;br /&gt;
%% Rebin coherence function&lt;br /&gt;
reb_cof12 = rebin_fouquan(r.freq, r.cof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_errcof12 = rebin_fouquan(r.freq, r.errcof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err( reb_cof12.freq_lo, reb_cof12.freq_hi, reb_cof12.value, reb_errcof12.value/sqrt(reb_errcof12.n) );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.png]] [[File:Gx339m4_xmm_coherence.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.pdf]] [[File:Gx339m4_xmm_coherence.pdf]]&lt;br /&gt;
&lt;br /&gt;
The coherence function is 1 at low frequencies, showing a perfect (linear) correlation of the two lightcurves (values above 1 are possible due to counting noise, see Sect. 3.4.1 and 3.5.3 in Katja's diss.). As a final step, we calculate the lag-energy spectrum by extracting lightcurves in logarithmically binned energy intervals and calculating their time lags.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
refeband=[540,10080]; % (eV), lower and upper bound of energy reference band&lt;br /&gt;
reffreqband=[0.034,0.12]; %% (Hz), lower and upper bound of frequency reference band, as in Fig. 2(a) of Uttley+11&lt;br /&gt;
&lt;br /&gt;
fnameref=sprintf(&amp;quot;%sfull_sd_0.1s_%04d-%d.lc&amp;quot;,datadir,refeband[0],refeband[1]);&lt;br /&gt;
lcref       = fits_read_table(fnameref);&lt;br /&gt;
lcrefgti    = fits_read_table(fnameref+&amp;quot;[GTI00006]&amp;quot;); % GTI correction&lt;br /&gt;
filtref     = filter_gti(lcref.time, lcrefgti);&lt;br /&gt;
timelcref   = lcref.time[filtref]-tstart;&lt;br /&gt;
countslcref = lcref.counts[filtref]/dt;&lt;br /&gt;
&lt;br /&gt;
(elo,ehi)=log_grid(500,10000,14); % define logarithmic energy bins&lt;br /&gt;
&lt;br /&gt;
tlags=Double_Type[length(elo)];&lt;br /&gt;
tlagerrs=Double_Type[length(elo)];&lt;br /&gt;
&lt;br /&gt;
_for ii (0,length(elo)-1,1) {&lt;br /&gt;
  lcname=datadir+sprintf(&amp;quot;full_sd_0.1s_%04.0f-%04.0f.lc&amp;quot;,elo[ii],ehi[ii]);&lt;br /&gt;
  if (ehi[ii]&amp;gt;9999) { lcname=datadir+sprintf(&amp;quot;full_sd_0.1s_%04.0f-%05.0f.lc&amp;quot;,elo[ii],ehi[ii]); };&lt;br /&gt;
&lt;br /&gt;
  lc = fits_read_table(lcname);&lt;br /&gt;
  lcgti = fits_read_table(lcname+&amp;quot;[GTI00006]&amp;quot;);&lt;br /&gt;
  filtgti = filter_gti(lc.time, lcgti);&lt;br /&gt;
&lt;br /&gt;
  timelc=lc.time[filtgti]-tstart; %% Do GTI correction&lt;br /&gt;
  countslc=lc.counts[filtgti]/dt;&lt;br /&gt;
&lt;br /&gt;
  %% Calculate PSD, cross-spectrum (CPD) and lags with foucalc&lt;br /&gt;
  %% Note that we subtract off the small band LC from the reference&lt;br /&gt;
  %% LC. Uttley+11 call this a Channel-of-interest (CI) correction.&lt;br /&gt;
  %% This is needed to account for the Poisson noise which is&lt;br /&gt;
  %% correlated for both LCs. See also Uttley+11 Sect 3.2.&lt;br /&gt;
  r=foucalc(struct{time=timelcref, rate1=countslcref-countslc, rate2=countslc}, dimseg ; normtype=&amp;quot;Miyamoto&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
  %% Filter the CPD / timelag array for the given frequency range&lt;br /&gt;
  filt=where(reffreqband[0]&amp;lt;=r.freq&amp;lt;=reffreqband[1]);&lt;br /&gt;
&lt;br /&gt;
  %% Compute average time lag as arithmetic weighted mean&lt;br /&gt;
  tlags[ii]=weighted_mean(r.lag12[filt] ; err=r.errlag12[filt]);&lt;br /&gt;
  tlagerrs[ii]=1./sqrt(sum(1/r.errlag12[filt]^2));&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
hplot_with_err(elo/1000., ehi/1000., tlags, tlagerrs);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:gx339m4_xmm_lagenergy.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:gx339m4_xmm_lagenergy.pdf]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Isis / Slang]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1991</id>
		<title>Timing tools</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1991"/>
		<updated>2020-02-21T11:21:40Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Foucalc ==&lt;br /&gt;
&lt;br /&gt;
[In work! You might use it as a reference in the meantime, but some things are not yet covered. Also let me know if things are not clear! - VG]&lt;br /&gt;
&lt;br /&gt;
The basic tool for X-ray timing analysis with the Remeis scripts is ''foucalc'' &amp;lt;ref&amp;gt;The original version of the Remeis timing scripts was written by Katja Pottschmidt. When using the tools, please cite [http://adsabs.harvard.edu/abs/2003A%26A...407.1039P Pottschmidt et al. 2003, A&amp;amp;A 407, 1039] and [http://adsabs.harvard.edu/abs/2000A%26A...357L..17P Pottschmidt et al. 2000, A&amp;amp;A 357, L17]; see also [http://adsabs.harvard.edu/abs/1999ApJ...510..874N Nowak et al., 1999, ApJ 510, 874]&amp;lt;/ref&amp;gt;, which calculates power-, cross power-, coherence-, and timelag-spectra &amp;lt;ref&amp;gt;What these are is wonderfully explained in Katja Pottschmidt's [http://astro.uni-tuebingen.de/publications/pottschmidt-diss.shtml PhD Thesis]&amp;lt;/ref&amp;gt;. It requires '''equally spaced, gapless''' lightcurves as well as the length of the lightcurve segment on which the Fourier transformation is to be performed, given in number of bins. Because the underlying algorithm is the FFT, the segment length should be a power of two for performance.&lt;br /&gt;
&lt;br /&gt;
When only one lightcurve is given, &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; calculates the power density and related quantities only. When more than one lightcurve is given, the cross power-, coherence- and timelag-spectra are calculated for all pairs of lightcurves. The lightcurves have to be in the format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lc = struct { time=[t_1, t_2, ...], rate1=[r1_1, r1_2, ...], rate2=..., rate3= };&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The time has to be given in seconds (so be cautious when using &amp;lt;tt&amp;gt;fits_read_lc&amp;lt;/tt&amp;gt;, which usually returns the time in MJD).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;If you use RXTE/PCA lightcurves, don't forget to set the &amp;lt;tt&amp;gt;numinst&amp;lt;/tt&amp;gt; keyword, which gives the number of instruments (= PCUs) that are switched on; otherwise your noise correction will be wrong. If you are not using RXTE/PCA, set the &amp;lt;tt&amp;gt;deadtime&amp;lt;/tt&amp;gt; keyword to whatever is appropriate for the instrument you are using (e.g. 0).&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Given a single PSD value before any further calculations, its uncertainty will be of the same order of magnitude as the PSD itself. Averaging is therefore crucial. This is built into &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt;: given a lightcurve of, say, 1288192 bins and a segment length of 16384, foucalc will calculate a PSD on every segment and average each value over all 1288192/16384 = 78 segments (the incomplete trailing segment is dropped).&lt;br /&gt;
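The segment count above is just integer division; a one-line sketch (in Python rather than S-Lang, purely illustrative):

```python
# Illustration of foucalc-style segment averaging, numbers from the text.
n_bins = 1288192   # total lightcurve bins
dimseg = 16384     # segment length in bins

n_seg = n_bins // dimseg   # incomplete trailing segment is dropped
print(n_seg)  # 78
```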
&lt;br /&gt;
== PSDs ==&lt;br /&gt;
&lt;br /&gt;
=== Calculating the PSD ===&lt;br /&gt;
&lt;br /&gt;
Given two PCA lightcurves, &amp;lt;tt&amp;gt;50110-01-45-00_437_AND_good_14off_excl_0-10.lc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;50110-01-45-00_437_AND_good_14off_excl_11-13.lc&amp;lt;/tt&amp;gt; (which have been checked for gaps and found to be gapless&amp;lt;ref&amp;gt;A useful function for this is &amp;lt;tt&amp;gt;split_lc_at_gaps&amp;lt;/tt&amp;gt;&amp;lt;/ref&amp;gt;), we want to calculate and bin the PSD, lags, and coherence.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% read in lightcurves&lt;br /&gt;
variable lc1 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_0-10.lc&amp;quot;);&lt;br /&gt;
variable lc2 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_11-13.lc&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%%define variables to store average rate of lightcurve &amp;amp; rms&lt;br /&gt;
variable rms,avg;&lt;br /&gt;
&lt;br /&gt;
%%calculate timing quantities&lt;br /&gt;
variable result = foucalc(struct{time=lc1.time,rate1=lc1.rate,rate2=lc2.rate},16384;&lt;br /&gt;
                          numinst=3,RMS=&amp;amp;rms,avgrate=&amp;amp;avg,verbose);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This produces the following output - thanks to the &amp;lt;tt&amp;gt;verbose&amp;lt;/tt&amp;gt; keyword:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
light curves: 2516 s with 1288192 bins of 0.00195312 s  (0 s gaps)&lt;br /&gt;
  segmentation: 78 segments of 16384 bins&lt;br /&gt;
=&amp;gt; frequencies: 0.03125-256 Hz.&lt;br /&gt;
Power spectra will be calculated in Miyamoto-normalization.&lt;br /&gt;
Calculating power spectrum for rate1.  RMS(0.03125-256 Hz) = 27.1%&lt;br /&gt;
Calculating power spectrum for rate2.  RMS(0.03125-256 Hz) = 25.7%&lt;br /&gt;
Calculating cross power spectrum (=&amp;gt; coherence and timelags) for rate1 and rate2.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is currently the only place where &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; will complain if the lightcurve contains gaps. It will calculate the quantities even if the lightcurve contains gaps - but then you can't trust your results. Therefore: use verbose and check that you've done everything right, at least while building up your scripts.&lt;br /&gt;
&lt;br /&gt;
Plot the PSD for the first lightcurve with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
xlog;ylog;plot_with_err(result.freq,result.signormpsd1,result.errnormpsd1);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Rebinning the PSD ===&lt;br /&gt;
&lt;br /&gt;
The uncertainties on the individual PSD values are still large, especially at high frequencies. To further improve the S/N-ratio, the PSD is rebinned onto a new frequency grid: the final PSD value in each new bin is the average over the individual PSD values at the frequencies falling into that bin. Mostly a logarithmic rebinning with df/f = const. is used; a typical value is df/f = 0.15.&lt;br /&gt;
&lt;br /&gt;
Quantities calculated with &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; are rebinned with the &amp;lt;tt&amp;gt;rebin_fouquan&amp;lt;/tt&amp;gt; tool - the quantity and its error have to be rebinned in two separate steps:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable reb_psd = rebin_fouquan(result.freq,result.signormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
variable reb_err = rebin_fouquan(result.freq,result.errnormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;reb_err&amp;lt;/tt&amp;gt; does not yet contain the real errors of the PSDs: it still has to be divided by the square root of the number of values over which the averaging was performed. The final PSD is therefore:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable psd1 = struct{freq_lo = reb_psd.freq_lo,freq_hi=reb_psd.freq_hi,&lt;br /&gt;
  value = reb_psd.value, error= reb_err.value/sqrt(reb_err.n)};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
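The division by sqrt(n) is just the standard error of a mean. A toy demonstration (in Python rather than S-Lang, purely illustrative): averaging n independent estimates with standard deviation sigma yields a mean with scatter sigma/sqrt(n).

```python
import numpy as np

# Toy illustration of why the rebinned error is divided by sqrt(n):
# averaging n independent values of standard deviation sigma gives a mean
# whose standard error is sigma/sqrt(n).
rng = np.random.default_rng(0)
sigma = 2.0
n = 100

means = [rng.normal(0.0, sigma, n).mean() for _ in range(20000)]
print(np.std(means))   # close to sigma/np.sqrt(n) = 0.2
```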
&lt;br /&gt;
Once again, plot your PSD - either as PSD vs. frequency or PSD x frequency vs. frequency:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%%PSD vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2,psd1.value,psd1.error);&lt;br /&gt;
&lt;br /&gt;
%%PSD x frequency vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2, psd1.value*(psd1.freq_lo+psd1.freq_hi)/2,&lt;br /&gt;
psd1.error*(psd1.freq_lo+psd1.freq_hi)/2);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Lag-energy spectrum for archival XMM-Newton data ==&lt;br /&gt;
&lt;br /&gt;
[--- under construction --- written by OK, please ask if something is unclear!]&lt;br /&gt;
&lt;br /&gt;
The following is an example of how to compute PSDs, a lag-frequency spectrum, and a lag-energy spectrum for archival XMM-Newton data of GX 339-4 &amp;lt;ref&amp;gt;The corresponding publication is [https://ui.adsabs.harvard.edu/abs/2011MNRAS.414L..60U/abstract Uttley, P. et al., MNRAS, Vol. 414, I. 1, pp. L60-L64 (2011)]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We start by extracting the XMM-Newton PN lightcurve in two energy bands (500-900 eV and 2000-3000 eV):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/tcsh&lt;br /&gt;
source $SOFTDIR/sas_init.csh # initialize the SAS software&lt;br /&gt;
set xmmscripts=${XMMTOOLS} # make sure the proper xmmscripts are loaded&lt;br /&gt;
set obsid='0204730201'&lt;br /&gt;
set datadir=/userdata/data/koenig/GX339-4/XMM-2004-03-16/odf&lt;br /&gt;
set dt=0.1&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmprepare --datadir=$datadir --prepdir=$obsid --pn --noapplyflaregti --timing --notimelog # prepare the data&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;500,900&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;2000,3000&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we can start the analysis. Let's load the data and correct them for the telemetry drop-outs (encoded in the Good Time Intervals):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require(&amp;quot;isisscripts&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Load the lightcurve and GTI file&lt;br /&gt;
a=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc&amp;quot;);&lt;br /&gt;
b=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc&amp;quot;);&lt;br /&gt;
gtia=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc[GTI00006]&amp;quot;);&lt;br /&gt;
gtib=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc[GTI00006]&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Get the start time such that LC starts at 0s&lt;br /&gt;
tstart=fits_read_key(&amp;quot;full_sd_0.1s_0500-0900.lc[1]&amp;quot;, &amp;quot;TSTART&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Subtract TSTART and do GTI correction&lt;br /&gt;
(filta,filtb)     = filter_gti(a.time, gtia), filter_gti(b.time,gtib);&lt;br /&gt;
(timea,timeb)     = a.time[filta]-tstart,b.time[filtb]-tstart;&lt;br /&gt;
(countsa,countsb) = a.counts[filta],b.counts[filtb];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we plot the (re-binned) lightcurve, there is not much to see.&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.jpg]]&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.pdf]]&lt;br /&gt;
&lt;br /&gt;
Fortunately, we can do a Fourier analysis in order to reveal the variability and correlations of these lightcurves. As described above, this can be done with the &amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; routine. I choose the Miyamoto normalization (see &amp;lt;ref&amp;gt;[https://ui.adsabs.harvard.edu/abs/1992ApJ...391L..21M/abstract Miyamoto et al., ApJ, 391:L21-L24 (1992)] or [https://publikationen.uni-tuebingen.de/xmlui/handle/10900/48443 Katja's dissertation], p.68f&amp;lt;/ref&amp;gt;) as in the publication. This normalization yields the PSD in units of &amp;lt;i&amp;gt;fractional&amp;lt;/i&amp;gt; root mean square (rms/R&amp;lt;sub&amp;gt;signal&amp;lt;/sub&amp;gt;)&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose segment length&lt;br /&gt;
dimseg=8192; %% (bins), should be a power of 2 for FFT&lt;br /&gt;
dt=0.1; %% (s), timing resolution of the lightcurve extraction&lt;br /&gt;
&lt;br /&gt;
%% Compute PSDs, CPD, and time lags with foucalc&lt;br /&gt;
r=foucalc( struct{time=timea, rate1=countsa/dt, rate2=countsb/dt}, dimseg ; normtype=&amp;quot;Miyamoto&amp;quot;);&lt;br /&gt;
xlog;ylog;plot_with_err(r.freq,r.signormpsd1*r.freq,r.errnormpsd1*r.freq);&lt;br /&gt;
&lt;br /&gt;
%% If verbose is set, foucalc warns that there are gaps in the LC (due to the telemetry dropouts). We ignore that for this simple analysis.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd unbinned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx399m4 xmm psd unbinned.pdf]]&lt;br /&gt;
&lt;br /&gt;
This really needs some re-binning to increase the S/N-ratio. All &amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; quantities can be re-binned with the &amp;lt;code&amp;gt;rebin_fouquan&amp;lt;/code&amp;gt; routine (note that the PSD has already been averaged over segments when we set the segment length).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose Delta f/f, the logarithmic frequency resolution&lt;br /&gt;
dff=0.15;&lt;br /&gt;
reb_psd1 = rebin_fouquan(r.freq, r.signormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err1 = rebin_fouquan(r.freq, r.errnormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_psd2 = rebin_fouquan(r.freq, r.signormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err2 = rebin_fouquan(r.freq, r.errnormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
&lt;br /&gt;
%% Need to weight the PSD error with the square root of the number of values the averaging was performed over&lt;br /&gt;
hplot_with_err( reb_psd1.freq_lo, reb_psd1.freq_hi, reb_psd1.value*reb_psd1.freq_lo, reb_err1.value/sqrt(reb_err1.n)*reb_psd1.freq_lo );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.pdf]]&lt;br /&gt;
&lt;br /&gt;
What can we deduce from this PSD? The minimum sampled frequency is 1/(dimseg*dt), i.e., the inverse of the duration of a segment: a long segment length allows low frequencies to be sampled, but also lowers the S/N-ratio because fewer segments are averaged. On the other hand, choosing a small segment length can lead to an effect called red-noise leakage, which is due to the finite extent of the data (also called &amp;lt;i&amp;gt;windowing&amp;lt;/i&amp;gt;, see &amp;lt;ref&amp;gt;[https://hdl.handle.net/11245/1.426951 M. van der Klis 1988]&amp;lt;/ref&amp;gt; p.8ff or &amp;lt;ref&amp;gt;[https://ui.adsabs.harvard.edu/abs/2014A%26ARv..22...72U/abstract Uttley, P. et al., A&amp;amp;A Review, Volume 22, article id.72, 66 pp., 2014]&amp;lt;/ref&amp;gt; p.14f.). The maximum sampled frequency is 1/(2*dt), where dt is the timing resolution of the lightcurve; this is called the Nyquist frequency.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; also computes the time lag between the two lightcurves. Let's first rebin and then plot them together with the coherence:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;For the curious:&amp;lt;/i&amp;gt;&lt;br /&gt;
The time-lag is computed from the phase of the cross-power density. The CPD is a complex quantity and thus has an imaginary and real part (see, e.g., Uttley+14 eq.8-10): C~e&amp;lt;sup&amp;gt;i&amp;amp;phi;&amp;lt;/sup&amp;gt;=cos(&amp;amp;phi;)+isin(&amp;amp;phi;) =&amp;gt; &amp;amp;phi;=arctan(Im/Re) with the time lag &amp;amp;tau;=&amp;amp;phi;/(2&amp;amp;pi;f). In code &amp;lt;code&amp;gt;phase=atan2(r.imagcpd12,r.realcpd12); tlag = phase/(2*PI*r.freq);&amp;lt;/code&amp;gt;.&lt;br /&gt;
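The phase-to-lag relation can be checked with a toy example (in Python rather than S-Lang, purely illustrative). The sign convention assumed here is cpd = F1 * conj(F2), so a positive lag means lightcurve 2 lags lightcurve 1; this need not match foucalc's internal convention.

```python
import numpy as np

# Recover a known time delay from the phase of the cross-power density.
dt = 0.01
t = np.arange(0, 100, dt)
f0 = 0.5    # Hz, frequency of the test signal (an exact FFT bin here)
tau = 0.3   # s, injected delay of signal 2 behind signal 1

x1 = np.sin(2 * np.pi * f0 * t)
x2 = np.sin(2 * np.pi * f0 * (t - tau))

freq = np.fft.rfftfreq(len(t), dt)
cpd = np.fft.rfft(x1) * np.conj(np.fft.rfft(x2))   # assumed convention

i = np.argmin(np.abs(freq - f0))                   # bin closest to f0
phase = np.arctan2(cpd[i].imag, cpd[i].real)
print(phase / (2 * np.pi * freq[i]))               # close to tau = 0.3 s
```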
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Rebin time lag results&lt;br /&gt;
variable reb_lag12 = rebin_fouquan(r.freq, r.lag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errlag12 = rebin_fouquan(r.freq, r.errlag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err(reb_lag12.freq_lo, reb_lag12.freq_hi, reb_lag12.value, reb_errlag12.value/sqrt(reb_errlag12.n) );&lt;br /&gt;
&lt;br /&gt;
%% Rebin coherence function&lt;br /&gt;
variable reb_cof12 = rebin_fouquan(r.freq, r.cof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errcof12 = rebin_fouquan(r.freq, r.errcof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err( reb_cof12.freq_lo, reb_cof12.freq_hi, reb_cof12.value, reb_errcof12.value/sqrt(reb_errcof12.n) );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.png]] [[File:Gx339m4_xmm_coherence.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.pdf]] [[File:Gx339m4_xmm_coherence.pdf]]&lt;br /&gt;
&lt;br /&gt;
The coherence function is 1 at low frequencies, showing a perfect (linear) correlation of the two lightcurves (values above 1 are possible due to counting noise, see Sect. 3.4.1 and 3.5.3 in Katja's diss.). As a final step, we calculate the lag-energy spectrum by extracting lightcurves in logarithmically binned energy intervals and calculating their time lags.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
refeband=[540,10080]; % (eV), lower and upper bound of energy reference band&lt;br /&gt;
reffreqband=[0.034,0.12]; %% (Hz), lower and upper bound of frequency reference band, as in Fig. 2(a) of Uttley+11&lt;br /&gt;
&lt;br /&gt;
fnameref=sprintf(&amp;quot;%sfull_sd_0.1s_%04d-%d.lc&amp;quot;,datadir,refeband[0],refeband[1]);&lt;br /&gt;
lcref       = fits_read_table(fnameref);&lt;br /&gt;
lcrefgti    = fits_read_table(fnameref+&amp;quot;[GTI00006]&amp;quot;); % GTI correction&lt;br /&gt;
filtref     = filter_gti(lcref.time, lcrefgti);&lt;br /&gt;
timelcref   = lcref.time[filtref]-tstart;&lt;br /&gt;
countslcref = lcref.counts[filtref]/dt;&lt;br /&gt;
&lt;br /&gt;
(elo,ehi)=log_grid(500,10000,14); % define logarithmic energy bins&lt;br /&gt;
&lt;br /&gt;
tlags=Double_Type[length(elo)];&lt;br /&gt;
tlagerrs=Double_Type[length(elo)];&lt;br /&gt;
&lt;br /&gt;
_for ii (0,length(elo)-1,1) {&lt;br /&gt;
  lcname=datadir+sprintf(&amp;quot;full_sd_0.1s_%04.0f-%04.0f.lc&amp;quot;,elo[ii],ehi[ii]);&lt;br /&gt;
  if (ehi[ii]&amp;gt;9999) { lcname=datadir+sprintf(&amp;quot;full_sd_0.1s_%04.0f-%05.0f.lc&amp;quot;,elo[ii],ehi[ii]); };&lt;br /&gt;
&lt;br /&gt;
  lc = fits_read_table(lcname);&lt;br /&gt;
  lcgti = fits_read_table(lcname+&amp;quot;[GTI00006]&amp;quot;);&lt;br /&gt;
  filtgti = filter_gti(lc.time, lcgti);&lt;br /&gt;
&lt;br /&gt;
  timelc=lc.time[filtgti]-tstart; %% Do GTI correction&lt;br /&gt;
  countslc=lc.counts[filtgti]/dt;&lt;br /&gt;
&lt;br /&gt;
  %% Calculate PSD, cross-spectrum (CPD) and lags with foucalc&lt;br /&gt;
  %% Note that we subtract off the small band LC from the reference&lt;br /&gt;
  %% LC. Uttley+11 call this a Channel-of-interest (CI) correction.&lt;br /&gt;
  %% This is needed to account for the Poisson noise which is&lt;br /&gt;
  %% correlated for both LCs. See also Uttley+11 Sect 3.2.&lt;br /&gt;
  r=foucalc(struct{time=timelcref, rate1=countslcref-countslc, rate2=countslc}, dimseg ; normtype=&amp;quot;Miyamoto&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
  %% Filter the CPD / timelag array for the given frequency range&lt;br /&gt;
  filt=where(reffreqband[0]&amp;lt;=r.freq&amp;lt;=reffreqband[1]);&lt;br /&gt;
&lt;br /&gt;
  %% Compute average time lag as arithmetic weighted mean&lt;br /&gt;
  tlags[ii]=weighted_mean(r.lag12[filt] ; err=r.errlag12[filt]);&lt;br /&gt;
  tlagerrs[ii]=1./sqrt(sum(1/r.errlag12[filt]^2));&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
hplot_with_err(elo/1000., ehi/1000., tlags, tlagerrs);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
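The averaging at the end of the loop is an inverse-variance weighted mean; its error is 1/sqrt(sum of weights). A sketch with hypothetical numbers (in Python rather than S-Lang, purely illustrative; the lag and error values below are made up):

```python
import numpy as np

# Inverse-variance weighted mean, mirroring weighted_mean(...; err=...) and
# the 1/sqrt(sum(1/err^2)) error formula used in the loop above.
lags = np.array([0.02, 0.05, 0.03])   # hypothetical lag values (s)
errs = np.array([0.01, 0.03, 0.02])   # their uncertainties (s)

w = 1.0 / errs**2
mean_lag = np.sum(w * lags) / np.sum(w)
mean_err = 1.0 / np.sqrt(np.sum(w))
print(mean_lag, mean_err)
```

Note that the best-measured point (smallest error) dominates the mean, and the combined error is always smaller than the smallest individual error.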
&lt;br /&gt;
&lt;br /&gt;
[[File:gx339m4_xmm_lagenergy.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:gx339m4_xmm_lagenergy.pdf]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Isis / Slang]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1990</id>
		<title>Timing tools</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1990"/>
		<updated>2020-02-21T11:12:47Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Foucalc ==&lt;br /&gt;
&lt;br /&gt;
[In work! You might use it as a reference in the meantime, but some things are not yet covered. Also let me know if things are not clear! - VG]&lt;br /&gt;
&lt;br /&gt;
The basic tool for X-ray timing analysis with the Remeis scripts is ''foucalc'' &amp;lt;ref&amp;gt;The original version of the Remeis timing scripts was written by Katja Pottschmidt. When using the tools, please cite [http://adsabs.harvard.edu/abs/2003A%26A...407.1039P Pottschmidt et al. 2003, A&amp;amp;A 407, 1039] and [http://adsabs.harvard.edu/abs/2000A%26A...357L..17P Pottschmidt et al. 2000, A&amp;amp;A 357, L17]; see also [http://adsabs.harvard.edu/abs/1999ApJ...510..874N Nowak et al., 1999, ApJ 510, 874]&amp;lt;/ref&amp;gt;, which calculates power-, cross power-, coherence-, and timelag-spectra &amp;lt;ref&amp;gt;What these are is wonderfully explained in Katja Pottschmidt's [http://astro.uni-tuebingen.de/publications/pottschmidt-diss.shtml PhD Thesis]&amp;lt;/ref&amp;gt;. It requires '''equally spaced, gapless''' lightcurves as well as the length of the lightcurve segment on which the Fourier transformation is to be performed, given in number of bins. Because the underlying algorithm is the FFT, the segment length should be a power of two for performance.&lt;br /&gt;
&lt;br /&gt;
When only one lightcurve is given, &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; calculates the power density and related quantities only. When more than one lightcurve is given, the cross power-, coherence- and timelag-spectra are calculated for all pairs of lightcurves. The lightcurves have to be in the format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lc = struct { time=[t_1, t_2, ...], rate1=[r1_1, r1_2, ...], rate2=..., rate3= };&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The time has to be given in seconds (so be cautious when using &amp;lt;tt&amp;gt;fits_read_lc&amp;lt;/tt&amp;gt;, which usually returns the time in MJD).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;If you use RXTE/PCA lightcurves, don't forget to set the &amp;lt;tt&amp;gt;numinst&amp;lt;/tt&amp;gt; keyword, which gives the number of instruments (= PCUs) that are switched on; otherwise your noise correction will be wrong. If you are not using RXTE/PCA, set the &amp;lt;tt&amp;gt;deadtime&amp;lt;/tt&amp;gt; keyword to whatever is appropriate for the instrument you are using (e.g. 0).&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Given a single PSD value before any further calculations, its uncertainty will be of the same order of magnitude as the PSD value itself. Averaging is therefore crucial. This is built into &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt;: given a lightcurve of, say, 1288192 bins and a segment length of 16384, foucalc will calculate a PSD for every segment and average each value over the resulting 1288192/16384 = 78 (rounded down) segments.&lt;br /&gt;
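&lt;br /&gt;
To see why this segment averaging matters, here is a small standalone sketch in plain NumPy (a toy illustration with invented numbers, not the isisscripts): averaging the periodograms of the individual segments reduces the relative scatter of each PSD bin roughly as 1/sqrt(number of segments).&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, seg_len = 1288192 // 8, 16384   # smaller than the wiki example, same idea
lc = rng.normal(loc=100.0, scale=10.0, size=n_bins)  # white-noise "lightcurve"

n_seg = n_bins // seg_len
# One periodogram per segment, then average bin-by-bin over all segments
psds = np.array([np.abs(np.fft.rfft(lc[i*seg_len:(i+1)*seg_len]))**2
                 for i in range(n_seg)])
psd_avg = psds.mean(axis=0)

# Compare relative scatter of a single periodogram vs. the averaged PSD
single = psds[0][1:]    # drop the DC bin
avg = psd_avg[1:]
print(single.std()/single.mean(), avg.std()/avg.mean())
```
&lt;br /&gt;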
&lt;br /&gt;
== PSDs ==&lt;br /&gt;
&lt;br /&gt;
=== Calculating the PSD ===&lt;br /&gt;
&lt;br /&gt;
Given two PCA lightcurves, &amp;lt;tt&amp;gt;50110-01-45-00_437_AND_good_14off_excl_0-10.lc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;50110-01-45-00_437_AND_good_14off_excl_11-13.lc&amp;lt;/tt&amp;gt; (which have been checked for gaps and found to be gapless &amp;lt;ref&amp;gt;A useful function for this is &amp;lt;tt&amp;gt;split_lc_at_gaps&amp;lt;/tt&amp;gt;&amp;lt;/ref&amp;gt;), we want to calculate and bin the PSD, lags, and coherence.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% read in lightcurves&lt;br /&gt;
variable lc1 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_0-10.lc&amp;quot;);&lt;br /&gt;
variable lc2 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_11-13.lc&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%%define variables to store average rate of lightcurve &amp;amp; rms&lt;br /&gt;
variable rms,avg;&lt;br /&gt;
&lt;br /&gt;
%%calculate timing quantities&lt;br /&gt;
variable result = foucalc(struct{time=lc1.time,rate1=lc1.rate,rate2=lc2.rate},16384;&lt;br /&gt;
                          numinst=3,RMS=&amp;amp;rms,avgrate=&amp;amp;avg,verbose);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This produces the following output - thanks to the &amp;lt;tt&amp;gt;verbose&amp;lt;/tt&amp;gt; keyword:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
light curves: 2516 s with 1288192 bins of 0.00195312 s  (0 s gaps)&lt;br /&gt;
  segmentation: 78 segments of 16384 bins&lt;br /&gt;
=&amp;gt; frequencies: 0.03125-256 Hz.&lt;br /&gt;
Power spectra will be calculated in Miyamoto-normalization.&lt;br /&gt;
Calculating power spectrum for rate1.  RMS(0.03125-256 Hz) = 27.1%&lt;br /&gt;
Calculating power spectrum for rate2.  RMS(0.03125-256 Hz) = 25.7%&lt;br /&gt;
Calculating cross power spectrum (=&amp;gt; coherence and timelags) for rate1 and rate2.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is currently the only place at which &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; will complain if the lightcurve contains gaps. It will still calculate all quantities for a gappy lightcurve - you just can't trust the results. Therefore: use &amp;lt;tt&amp;gt;verbose&amp;lt;/tt&amp;gt; and check that you've done everything right, at least while building up your scripts.&lt;br /&gt;
&lt;br /&gt;
Plot the PSD for the first lightcurve with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
xlog;ylog;plot_with_err(result.freq,result.signormpsd1,result.errnormpsd1);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Rebinning the PSD ===&lt;br /&gt;
&lt;br /&gt;
The uncertainties on the individual PSD values are still large, especially at high frequencies. To further improve the SNR, the PSD is rebinned onto a new frequency grid: the final PSD value in each new bin is the average of the original PSD values at the frequencies falling into that bin. Usually a logarithmic rebinning with df/f = const. is used; a typical value is df/f = 0.15.&lt;br /&gt;
&lt;br /&gt;
Quantities calculated with &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; are rebinned with the &amp;lt;tt&amp;gt;rebin_fouquan&amp;lt;/tt&amp;gt; tool - the quantity and its error have to be rebinned in two separate steps:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable reb_psd = rebin_fouquan(result.freq,result.signormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
variable reb_err = rebin_fouquan(result.freq,result.errnormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;reb_err&amp;lt;/tt&amp;gt; does not yet contain the real errors of the PSD; it still has to be divided by the square root of the number of values the averaging was performed over. The final PSD is therefore:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable psd1 = struct{freq_lo = reb_psd.freq_lo,freq_hi=reb_psd.freq_hi,&lt;br /&gt;
  value = reb_psd.value, error= reb_err.value/sqrt(reb_err.n)};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
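&lt;br /&gt;
The df/f = const. binning itself is easy to sketch in plain NumPy. The following is an illustrative stand-in (made-up function name and toy 1/f spectrum), not the actual &amp;lt;tt&amp;gt;rebin_fouquan&amp;lt;/tt&amp;gt; implementation:&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def rebin_log(freq, value, dff=0.15):
    """Average `value` onto a grid with constant df/f = dff.
    Returns bin edges, binned values, and counts per bin."""
    f_lo, f_max = freq[0], freq[-1]
    lo, hi, val, n = [], [], [], []
    while f_lo < f_max:
        f_hi = f_lo * (1.0 + dff)
        m = (freq >= f_lo) & (freq < f_hi)
        if m.any():
            lo.append(f_lo); hi.append(f_hi)
            val.append(value[m].mean()); n.append(m.sum())
        f_lo = f_hi
    return np.array(lo), np.array(hi), np.array(val), np.array(n)

freq = np.arange(1, 8193) * 0.03125   # 0.03125-256 Hz, as in the example above
psd = freq**-1.0                      # a toy 1/f spectrum
lo, hi, val, n = rebin_log(freq, psd)
print(len(freq), "->", len(lo), "bins; points per bin:", n[0], "...", n[-1])
```
&lt;br /&gt;
Note how the number of averaged points per bin grows towards high frequencies - which is exactly why the SNR improves most where the unbinned PSD is noisiest.&lt;br /&gt;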
&lt;br /&gt;
Once again, plot your PSD - either as PSD vs. frequency or as PSD x frequency vs. frequency:&lt;br /&gt;
                                 &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%%PSD vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2,psd1.value,psd1.error);&lt;br /&gt;
&lt;br /&gt;
%%PSD x frequency vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2, psd1.value*(psd1.freq_lo+psd1.freq_hi)/2,&lt;br /&gt;
psd1.error*(psd1.freq_lo+psd1.freq_hi)/2);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Lag-energy spectrum for archival XMM-Newton data ==&lt;br /&gt;
&lt;br /&gt;
[--- under construction --- written by OK, please ask if something is unclear!]&lt;br /&gt;
&lt;br /&gt;
The following is an example of how to compute PSDs, a lag-frequency spectrum, and a lag-energy spectrum from archival XMM-Newton data of GX 339-4 &amp;lt;ref&amp;gt;The corresponding publication is [https://ui.adsabs.harvard.edu/abs/2011MNRAS.414L..60U/abstract Uttley, P. et al., MNRAS, Vol. 414, Issue 1, pp. L60-L64 (2011)]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We start by extracting the XMM-Newton PN lightcurve in two energy bands (500-900 eV and 2000-3000 eV):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/tcsh&lt;br /&gt;
source $SOFTDIR/sas_init.csh # initialize the SAS software&lt;br /&gt;
set xmmscripts=${XMMTOOLS} # make sure the proper xmmscripts are loaded&lt;br /&gt;
set obsid='0204730201'&lt;br /&gt;
set datadir=/userdata/data/koenig/GX339-4/XMM-2004-03-16/odf&lt;br /&gt;
set dt=0.1&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmprepare --datadir=$datadir --prepdir=$obsid --pn --noapplyflaregti --timing --notimelog # prepare the data&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;500,900&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;2000,3000&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we can start the analysis. Let's load the data and correct them for the telemetry drop-outs (encoded in the Good Time Intervals):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Load the lightcurve and GTI file&lt;br /&gt;
a=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc&amp;quot;);&lt;br /&gt;
b=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc&amp;quot;);&lt;br /&gt;
gtia=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc[GTI0006]&amp;quot;);&lt;br /&gt;
gtib=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc[GTI0006]&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Get the start time such that LC starts at 0s&lt;br /&gt;
tstart=fits_read_key(&amp;quot;full_sd_0.1s_0500-0900.lc[1]&amp;quot;, &amp;quot;TSTART&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Subtract TSTART and do GTI correction&lt;br /&gt;
(filta,filtb)     = filter_gti(a.time, gtia), filter_gti(b.time,gtib);&lt;br /&gt;
(timea,timeb)     = a.time[filta]-tstart,b.time[filtb]-tstart;&lt;br /&gt;
(countsa,countsb) = a.counts[filta],b.counts[filtb];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
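&lt;br /&gt;
Conceptually, &amp;lt;tt&amp;gt;filter_gti&amp;lt;/tt&amp;gt; just builds a boolean mask that keeps time bins falling inside one of the good time intervals. A minimal NumPy sketch (toy intervals and a simplified signature - the real function takes the GTI table struct read from the FITS extension):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def filter_gti(time, gti_start, gti_stop):
    """Boolean mask: True where `time` falls inside any [start, stop) interval.
    Simplified stand-in for the isisscripts routine; interval values are invented."""
    keep = np.zeros(time.shape, dtype=bool)
    for t0, t1 in zip(gti_start, gti_stop):
        keep |= (time >= t0) & (time < t1)
    return keep

time = np.arange(0.0, 10.0, 0.1)                  # 0.1 s bins, like dt above
keep = filter_gti(time, np.array([0.0, 6.0]), np.array([4.0, 9.0]))
print(keep.sum(), time[keep].min(), time[keep].max())
```
&lt;br /&gt;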
&lt;br /&gt;
If we plot the (re-binned) lightcurve, there is not much to see.&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.jpg]]&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.pdf]]&lt;br /&gt;
&lt;br /&gt;
Fortunately, we can do a Fourier analysis in order to reveal the variability and correlations of these lightcurves. As described above, this can be done with the &amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; routine. I choose the Miyamoto normalization (see &amp;lt;ref&amp;gt;[https://ui.adsabs.harvard.edu/abs/1992ApJ...391L..21M/abstract Miyamoto et al., ApJ, 391:L21-L24 (1992)] or [https://publikationen.uni-tuebingen.de/xmlui/handle/10900/48443 Katja's dissertation],p.68f&amp;lt;/ref&amp;gt;) as in the publication. This normalization yields the PSD in units of &amp;lt;i&amp;gt;fractional&amp;lt;/i&amp;gt; root mean square (rms/R&amp;lt;sub&amp;gt;signal&amp;lt;/sub&amp;gt;)&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose segment length&lt;br /&gt;
dimseg=8192; %% (bins), should be a power of 2 for FFT&lt;br /&gt;
&lt;br /&gt;
%% Compute PSDs, CPD, and time lags with foucalc&lt;br /&gt;
r=foucalc( struct{time=timea, rate1=countsa/dt, rate2=countsb/dt}, dimseg ; normtype=&amp;quot;Miyamoto&amp;quot;);&lt;br /&gt;
xlog;ylog;plot_with_err(r.freq,r.signormpsd1*r.freq,r.errnormpsd1*r.freq);&lt;br /&gt;
&lt;br /&gt;
%% There is a warning that there are gaps in the LC (due to the telemetry dropouts) if verbose is set in foucalc. We ignore that for this simple analysis.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd unbinned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx399m4 xmm psd unbinned.pdf]]&lt;br /&gt;
&lt;br /&gt;
This really needs some re-binning to increase the S/N ratio. All &amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; quantities can be re-binned with the &amp;lt;code&amp;gt;rebin_fouquan&amp;lt;/code&amp;gt; routine (note that the PSD has already been averaged over segments when we chose the segment length).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose Delta f/f, the logarithmic frequency resolution&lt;br /&gt;
dff=0.15;&lt;br /&gt;
reb_psd1 = rebin_fouquan(r.freq, r.signormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err1 = rebin_fouquan(r.freq, r.errnormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_psd2 = rebin_fouquan(r.freq, r.signormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err2 = rebin_fouquan(r.freq, r.errnormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
&lt;br /&gt;
%% Need to weight the PSD error with the square root of the number of values the averaging was performed over&lt;br /&gt;
hplot_with_err( reb_psd1.freq_lo, reb_psd1.freq_hi, reb_psd1.value*reb_psd1.freq_lo, reb_err1.value/sqrt(reb_err1.n)*reb_psd1.freq_lo );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.pdf]]&lt;br /&gt;
&lt;br /&gt;
What can we deduce from this PSD? The minimum sampled frequency is 1/(dimseg*dt) - if you choose a long segment length, lower frequencies can be sampled, but fewer segments are averaged and the S/N ratio is smaller. On the other hand, choosing a small segment length can lead to an effect called red-noise leakage, which is due to the finite extent of the data (also called &amp;lt;i&amp;gt;windowing&amp;lt;/i&amp;gt;, see &amp;lt;ref&amp;gt;[https://hdl.handle.net/11245/1.426951 M. van der Klis 1988]&amp;lt;/ref&amp;gt; p.8ff or &amp;lt;ref&amp;gt;[https://ui.adsabs.harvard.edu/abs/2014A%26ARv..22...72U/abstract Uttley, P. et al., A&amp;amp;A Review, Volume 22, article id.72, 66 pp., 2014]&amp;lt;/ref&amp;gt; p.14f.). The maximum sampled frequency is 1/(2*dt), where dt is the time resolution of the lightcurve; this is called the Nyquist frequency.&lt;br /&gt;
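&lt;br /&gt;
Both limits follow directly from the FFT frequency grid and can be checked in two lines of NumPy with the values of this example (dt = 0.1 s, dimseg = 8192):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

dt, dimseg = 0.1, 8192                       # values from the XMM example
freqs = np.fft.rfftfreq(dimseg, d=dt)[1:]    # drop the zero-frequency term

f_min = 1.0 / (dimseg * dt)                  # lowest sampled frequency
f_nyq = 1.0 / (2 * dt)                       # Nyquist frequency
print(f_min, f_nyq)                          # grid runs from f_min to f_nyq
```
&lt;br /&gt;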
&lt;br /&gt;
&amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; also computes the time lags between the two lightcurves. Let's first rebin them and then plot them together with the coherence:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;For the curious:&amp;lt;/i&amp;gt;&lt;br /&gt;
The time lag is computed from the phase of the cross-power density (CPD). The CPD is a complex quantity and thus has a real and an imaginary part (see, e.g., Uttley+14 eq. 8-10): C~e&amp;lt;sup&amp;gt;i&amp;amp;phi;&amp;lt;/sup&amp;gt;=cos(&amp;amp;phi;)+i sin(&amp;amp;phi;) =&amp;gt; &amp;amp;phi;=arctan(Im/Re), with the time lag &amp;amp;tau;=&amp;amp;phi;/(2&amp;amp;pi;f). In code: &amp;lt;code&amp;gt;phase=atan2(r.imagcpd12,r.realcpd12); tlag = phase/(2*PI*r.freq);&amp;lt;/code&amp;gt;.&lt;br /&gt;
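&lt;br /&gt;
This phase-to-lag relation can be verified in a few lines of plain NumPy with a synthetic delay (a toy check, not foucalc; the sign of the lag is a convention and should be checked against the scripts):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(1)
n, dt, lag_bins = 16384, 0.01, 25          # injected delay: 25 bins = 0.25 s
r1 = rng.normal(size=n)
r2 = np.roll(r1, lag_bins)                 # circular shift keeps the delay exact for the FFT

f = np.fft.rfftfreq(n, d=dt)[1:]
c = np.fft.rfft(r1)[1:] * np.conj(np.fft.rfft(r2)[1:])   # cross spectrum
phase = np.arctan2(c.imag, c.real)
tau = phase / (2*np.pi*f)                  # tau = phi / (2 pi f)

# Use only frequencies where the phase has not wrapped (|phi| < pi, i.e. f < 1/(2*0.25) Hz)
m = f < 1.0
print(tau[m].mean())                       # recovers the injected 0.25 s delay
```
&lt;br /&gt;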
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Rebin time lag results&lt;br /&gt;
variable reb_lag12 = rebin_fouquan(r.freq, r.lag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errlag12 = rebin_fouquan(r.freq, r.errlag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err(reb_lag12.freq_lo, reb_lag12.freq_hi, reb_lag12.value, reb_errlag12.value/sqrt(reb_errlag12.n) );&lt;br /&gt;
&lt;br /&gt;
%% Rebin coherence function&lt;br /&gt;
variable reb_cof12 = rebin_fouquan(r.freq, r.cof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errcof12 = rebin_fouquan(r.freq, r.errcof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err( reb_cof12.freq_lo, reb_cof12.freq_hi, reb_cof12.value, reb_errcof12.value/sqrt(reb_errcof12.n) );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.png]] [[File:Gx339m4_xmm_coherence.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.pdf]] [[File:Gx339m4_xmm_coherence.pdf]]&lt;br /&gt;
&lt;br /&gt;
The coherence function is 1 at low frequencies, showing a perfect (linear) correlation between the two lightcurves (values above 1 are possible due to counting noise, see Sect. 3.4.1 and 3.5.3 in Katja's diss.). As a final step, we calculate the lag-energy spectrum by extracting lightcurves in logarithmically binned energy intervals and calculating their time lags.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
refeband=[540,10080]; % (eV), lower and upper bound of energy reference band&lt;br /&gt;
reffreqband=[0.034,0.12]; %% (Hz), lower and upper bound of frequency reference band, as in Fig. 2(a) of Uttley+11&lt;br /&gt;
&lt;br /&gt;
fnameref=sprintf(&amp;quot;%sfull_sd_0.1s_%04d-%d.lc&amp;quot;,datadir,refeband[0],refeband[1]);&lt;br /&gt;
lcref       = fits_read_table(fnameref);&lt;br /&gt;
lcrefgti    = fits_read_table(fnameref+&amp;quot;[GTI00006]&amp;quot;); % GTI correction&lt;br /&gt;
filtref     = filter_gti(lcref.time, lcrefgti);&lt;br /&gt;
timelcref   = lcref.time[filtref]-tstart;&lt;br /&gt;
countslcref = lcref.counts[filtref]/dt;&lt;br /&gt;
&lt;br /&gt;
(elo,ehi)=log_grid(500,10000,14); % define logarithmic energy bins&lt;br /&gt;
&lt;br /&gt;
tlags=Double_Type[length(elo)];&lt;br /&gt;
tlagerrs=Double_Type[length(elo)];&lt;br /&gt;
&lt;br /&gt;
_for ii (0,length(elo)-1,1) {&lt;br /&gt;
  lcname=datadir+sprintf(&amp;quot;full_sd_0.1s_%04.0f-%04.0f.lc&amp;quot;,elo[ii],ehi[ii]);&lt;br /&gt;
  if (ehi[ii]&amp;gt;9999) { lcname=datadir+sprintf(&amp;quot;full_sd_0.1s_%04.0f-%05.0f.lc&amp;quot;,elo[ii],ehi[ii]); };&lt;br /&gt;
&lt;br /&gt;
  lc = fits_read_table(lcname);&lt;br /&gt;
  lcgti = fits_read_table(lcname+&amp;quot;[GTI00006]&amp;quot;);&lt;br /&gt;
  filtgti = filter_gti(lc.time, lcgti);&lt;br /&gt;
&lt;br /&gt;
  timelc=lc.time[filtgti]-tstart; %% Do GTI correction&lt;br /&gt;
  countslc=lc.counts[filtgti]/dt;&lt;br /&gt;
&lt;br /&gt;
  %% Calculate PSD, cross-spectrum (CPD) and lags with foucalc&lt;br /&gt;
  %% Note that we subtract off the small band LC from the reference&lt;br /&gt;
  %% LC. Uttley+11 call this a Channel-of-interest (CI) correction.&lt;br /&gt;
  %% This is needed to account for the Poisson noise which is&lt;br /&gt;
  %% correlated for both LCs. See also Uttley+11 Sect 3.2.&lt;br /&gt;
  r=foucalc(struct{time=timelcref, rate1=countslcref-countslc, rate2=countslc}, dimseg ; normtype=&amp;quot;Miyamoto&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
  %% Filter the CPD / timelag array for the given frequency range&lt;br /&gt;
  filt=where(reffreqband[0]&amp;lt;=r.freq&amp;lt;=reffreqband[1]);&lt;br /&gt;
&lt;br /&gt;
  %% Compute average time lag as arithmetic weighted mean&lt;br /&gt;
  tlags[ii]=weighted_mean(r.lag12[filt] ; err=r.errlag12[filt]);&lt;br /&gt;
  tlagerrs[ii]=1./sqrt(sum(1/r.errlag12[filt]^2));&lt;br /&gt;
&lt;br /&gt;
}&lt;br /&gt;
&lt;br /&gt;
hplot_with_err(elo/1000., ehi/1000., tlags, tlagerrs);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
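&lt;br /&gt;
The averaging inside the loop uses the standard inverse-variance weighted mean and its error, which can be checked standalone (invented toy numbers, not real lags):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Toy lag values and their uncertainties in the reference frequency band
lag = np.array([0.012, 0.009, 0.015, 0.011])
err = np.array([0.004, 0.002, 0.006, 0.003])

w = 1.0 / err**2                        # inverse-variance weights
mean = np.sum(w * lag) / np.sum(w)      # weighted mean lag
mean_err = 1.0 / np.sqrt(np.sum(w))     # its uncertainty
print(mean, mean_err)
```
&lt;br /&gt;
The error of the weighted mean is always smaller than the smallest individual error, which is the whole point of averaging over the frequency band.&lt;br /&gt;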
&lt;br /&gt;
&lt;br /&gt;
[[File:gx339m4_xmm_lagenergy.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:gx339m4_xmm_lagenergy.pdf]]&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Isis / Slang]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_lagenergy.png&amp;diff=1989</id>
		<title>File:Gx339m4 xmm lagenergy.png</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_lagenergy.png&amp;diff=1989"/>
		<updated>2020-02-21T11:11:51Z</updated>

		<summary type="html">&lt;p&gt;Koenig: Lag-energy spectrum of GX 339-4 as in Uttley+11&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Lag-energy spectrum of GX 339-4 as in Uttley+11&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_lagenergy.pdf&amp;diff=1988</id>
		<title>File:Gx339m4 xmm lagenergy.pdf</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_lagenergy.pdf&amp;diff=1988"/>
		<updated>2020-02-21T11:11:26Z</updated>

		<summary type="html">&lt;p&gt;Koenig: Lag-energy spectrum of GX 339-4 as in Uttley+11&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Lag-energy spectrum of GX 339-4 as in Uttley+11&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1987</id>
		<title>Timing tools</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1987"/>
		<updated>2020-02-21T10:56:28Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Foucalc ==&lt;br /&gt;
&lt;br /&gt;
[Work in progress! You may use this page as a reference in the meantime, but some things are not yet covered. Please let me know if anything is unclear! - VG]&lt;br /&gt;
&lt;br /&gt;
The basic tool for X-ray timing analysis with the Remeis scripts is ''foucalc'' &amp;lt;ref&amp;gt;The original version of the Remeis timing scripts was written by Katja Pottschmidt. When using the tools, please cite [http://adsabs.harvard.edu/abs/2003A%26A...407.1039P Pottschmidt et al. 2003, A&amp;amp;A 407, 1039] and [http://adsabs.harvard.edu/abs/2000A%26A...357L..17P Pottschmidt et al. 2000, A&amp;amp;A 357, L17]; see also [http://adsabs.harvard.edu/abs/1999ApJ...510..874N Nowak et al., 1999, ApJ 510, 874]&amp;lt;/ref&amp;gt;, which calculates power, cross-power, coherence, and time-lag spectra &amp;lt;ref&amp;gt;What these are is wonderfully explained in Katja Pottschmidt's [http://astro.uni-tuebingen.de/publications/pottschmidt-diss.shtml PhD Thesis]&amp;lt;/ref&amp;gt;. It requires '''equally spaced, gapless''' lightcurves as well as the length of the lightcurve segment on which the Fourier transformation is to be performed, given in number of bins. Note that because the underlying algorithm is an FFT, the segment length should be a power of two for performance.&lt;br /&gt;
&lt;br /&gt;
When only one lightcurve is given, &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; calculates the power density and related quantities only. When more than one lightcurve is given, the cross power-, coherence- and timelag-spectra are calculated for all pairs of lightcurves. The lightcurves have to be in the format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lc = struct { time=[t_1, t_2, ...], rate1=[r1_1, r1_2, ...], rate2=..., rate3= };&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The time has to be given in seconds (so be cautious when using &amp;lt;tt&amp;gt;fits_read_lc&amp;lt;/tt&amp;gt;, which usually returns the time in MJD).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;If you use RXTE/PCA lightcurves, don't forget to set the &amp;lt;tt&amp;gt;numinst&amp;lt;/tt&amp;gt; keyword, which gives the number of instruments (= PCUs) that are switched on; otherwise your noise correction will be wrong. If you are not using RXTE/PCA, set the &amp;lt;tt&amp;gt;deadtime&amp;lt;/tt&amp;gt; keyword to whatever is appropriate for the instrument you are using (e.g. 0).&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Given a single PSD value before any further calculations, its uncertainty will be of the same order of magnitude as the PSD value itself. Averaging is therefore crucial. This is built into &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt;: given a lightcurve of, say, 1288192 bins and a segment length of 16384, foucalc will calculate a PSD for every segment and average each value over the resulting 1288192/16384 = 78 (rounded down) segments.&lt;br /&gt;
&lt;br /&gt;
== PSDs ==&lt;br /&gt;
&lt;br /&gt;
=== Calculating the PSD ===&lt;br /&gt;
&lt;br /&gt;
Given two PCA lightcurves, &amp;lt;tt&amp;gt;50110-01-45-00_437_AND_good_14off_excl_0-10.lc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;50110-01-45-00_437_AND_good_14off_excl_11-13.lc&amp;lt;/tt&amp;gt; (which have been checked for gaps and found to be gapless &amp;lt;ref&amp;gt;A useful function for this is &amp;lt;tt&amp;gt;split_lc_at_gaps&amp;lt;/tt&amp;gt;&amp;lt;/ref&amp;gt;), we want to calculate and bin the PSD, lags, and coherence.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% read in lightcurves&lt;br /&gt;
variable lc1 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_0-10.lc&amp;quot;);&lt;br /&gt;
variable lc2 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_11-13.lc&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%%define variables to store average rate of lightcurve &amp;amp; rms&lt;br /&gt;
variable rms,avg;&lt;br /&gt;
&lt;br /&gt;
%%calculate timing quantities&lt;br /&gt;
variable result = foucalc(struct{time=lc1.time,rate1=lc1.rate,rate2=lc2.rate},16384;&lt;br /&gt;
                          numinst=3,RMS=&amp;amp;rms,avgrate=&amp;amp;avg,verbose);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This produces the following output - thanks to the &amp;lt;tt&amp;gt;verbose&amp;lt;/tt&amp;gt; keyword:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
light curves: 2516 s with 1288192 bins of 0.00195312 s  (0 s gaps)&lt;br /&gt;
  segmentation: 78 segments of 16384 bins&lt;br /&gt;
=&amp;gt; frequencies: 0.03125-256 Hz.&lt;br /&gt;
Power spectra will be calculated in Miyamoto-normalization.&lt;br /&gt;
Calculating power spectrum for rate1.  RMS(0.03125-256 Hz) = 27.1%&lt;br /&gt;
Calculating power spectrum for rate2.  RMS(0.03125-256 Hz) = 25.7%&lt;br /&gt;
Calculating cross power spectrum (=&amp;gt; coherence and timelags) for rate1 and rate2.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is currently the only place at which &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; will complain if the lightcurve contains gaps. It will still calculate all quantities for a gappy lightcurve - you just can't trust the results. Therefore: use &amp;lt;tt&amp;gt;verbose&amp;lt;/tt&amp;gt; and check that you've done everything right, at least while building up your scripts.&lt;br /&gt;
&lt;br /&gt;
Plot the PSD for the first lightcurve with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
xlog;ylog;plot_with_err(result.freq,result.signormpsd1,result.errnormpsd1);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Rebinning the PSD ===&lt;br /&gt;
&lt;br /&gt;
The uncertainties on the individual PSD values are still large, especially at high frequencies. To further improve the SNR, the PSD is rebinned onto a new frequency grid: the final PSD value in each new bin is the average of the original PSD values at the frequencies falling into that bin. Usually a logarithmic rebinning with df/f = const. is used; a typical value is df/f = 0.15.&lt;br /&gt;
&lt;br /&gt;
Quantities calculated with &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; are rebinned with the &amp;lt;tt&amp;gt;rebin_fouquan&amp;lt;/tt&amp;gt; tool - the quantity and its error have to be rebinned in two separate steps:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable reb_psd = rebin_fouquan(result.freq,result.signormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
variable reb_err = rebin_fouquan(result.freq,result.errnormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;reb_err&amp;lt;/tt&amp;gt; does not yet contain the real errors of the PSD; it still has to be divided by the square root of the number of values the averaging was performed over. The final PSD is therefore:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable psd1 = struct{freq_lo = reb_psd.freq_lo,freq_hi=reb_psd.freq_hi,&lt;br /&gt;
  value = reb_psd.value, error= reb_err.value/sqrt(reb_err.n)};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once again, plot your PSD - either as PSD vs. frequency or as PSD x frequency vs. frequency:&lt;br /&gt;
                                 &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%%PSD vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2,psd1.value,psd1.error);&lt;br /&gt;
&lt;br /&gt;
%%PSD x frequency vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2, psd1.value*(psd1.freq_lo+psd1.freq_hi)/2,&lt;br /&gt;
psd1.error*(psd1.freq_lo+psd1.freq_hi)/2);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Lag-energy spectrum for archival XMM-Newton data ==&lt;br /&gt;
&lt;br /&gt;
[--- under construction --- written by OK, please ask if something is unclear!]&lt;br /&gt;
&lt;br /&gt;
The following is an example of how to compute PSDs, a lag-frequency spectrum, and a lag-energy spectrum from archival XMM-Newton data of GX 339-4 &amp;lt;ref&amp;gt;The corresponding publication is [https://ui.adsabs.harvard.edu/abs/2011MNRAS.414L..60U/abstract Uttley, P. et al., MNRAS, Vol. 414, Issue 1, pp. L60-L64 (2011)]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We start by extracting the XMM-Newton PN lightcurve in two energy bands (500-900 eV and 2000-3000 eV):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/tcsh&lt;br /&gt;
source $SOFTDIR/sas_init.csh # initialize the SAS software&lt;br /&gt;
set xmmscripts=${XMMTOOLS} # make sure the proper xmmscripts are loaded&lt;br /&gt;
set obsid='0204730201'&lt;br /&gt;
set datadir=/userdata/data/koenig/GX339-4/XMM-2004-03-16/odf&lt;br /&gt;
set dt=0.1&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmprepare --datadir=$datadir --prepdir=$obsid --pn --noapplyflaregti --timing --notimelog # prepare the data&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;500,900&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;2000,3000&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we can start the analysis. Let's load the data and correct them for the telemetry drop-outs (encoded in the Good Time Intervals):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Load the lightcurve and GTI file&lt;br /&gt;
a=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc&amp;quot;);&lt;br /&gt;
b=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc&amp;quot;);&lt;br /&gt;
gtia=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc[GTI0006]&amp;quot;);&lt;br /&gt;
gtib=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc[GTI0006]&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Get the start time such that LC starts at 0s&lt;br /&gt;
tstart=fits_read_key(&amp;quot;full_sd_0.1s_0500-0900.lc[1]&amp;quot;, &amp;quot;TSTART&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Subtract TSTART and do GTI correction&lt;br /&gt;
(filta,filtb)     = filter_gti(a.time, gtia), filter_gti(b.time,gtib);&lt;br /&gt;
(timea,timeb)     = a.time[filta]-tstart,b.time[filtb]-tstart;&lt;br /&gt;
(countsa,countsb) = a.counts[filta],b.counts[filtb];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we plot the (re-binned) lightcurve, there is not much to see.&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.jpg]]&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.pdf]]&lt;br /&gt;
&lt;br /&gt;
Fortunately, we can do a Fourier analysis in order to reveal the variability and correlations of these lightcurves. As described above, this can be done with the &amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; routine. I choose the Miyamoto normalization (see &amp;lt;ref&amp;gt;[https://ui.adsabs.harvard.edu/abs/1992ApJ...391L..21M/abstract Miyamoto et al., ApJ, 391:L21-L24 (1992)] or [https://publikationen.uni-tuebingen.de/xmlui/handle/10900/48443 Katja's dissertation],p.68f&amp;lt;/ref&amp;gt;) as in the publication. This normalization yields the PSD in units of &amp;lt;i&amp;gt;fractional&amp;lt;/i&amp;gt; root mean square (rms/R&amp;lt;sub&amp;gt;signal&amp;lt;/sub&amp;gt;)&amp;lt;sup&amp;gt;2&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose segment length&lt;br /&gt;
dimseg=8192; %% (bins), should be a power of 2 for FFT&lt;br /&gt;
&lt;br /&gt;
%% Compute PSDs, CPD, and time lags with foucalc&lt;br /&gt;
r=foucalc( struct{time=timea, rate1=countsa/dt, rate2=countsb/dt}, dimseg ; normtype=&amp;quot;Miyamoto&amp;quot;);&lt;br /&gt;
xlog;ylog;plot_with_err(r.freq,r.signormpsd1*r.freq,r.errnormpsd1*r.freq);&lt;br /&gt;
&lt;br /&gt;
%% There is a warning that there are gaps in the LC (due to the telemetry dropouts) if verbose is set in foucalc. We ignore that for this simple analysis.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd unbinned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx399m4 xmm psd unbinned.pdf]]&lt;br /&gt;
&lt;br /&gt;
This really needs some re-binning to increase the S/N ratio. All &amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; quantities can be re-binned with the &amp;lt;code&amp;gt;rebin_fouquan&amp;lt;/code&amp;gt; routine (note that the PSD has already been averaged over segments when we chose the segment length).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose Delta f/f, the logarithmic frequency resolution&lt;br /&gt;
dff=0.15;&lt;br /&gt;
reb_psd1 = rebin_fouquan(r.freq, r.signormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err1 = rebin_fouquan(r.freq, r.errnormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_psd2 = rebin_fouquan(r.freq, r.signormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err2 = rebin_fouquan(r.freq, r.errnormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
&lt;br /&gt;
%% Need to weight the PSD error with the square root of the number of values the averaging was performed over&lt;br /&gt;
hplot_with_err( reb_psd1.freq_lo, reb_psd1.freq_hi, reb_psd1.value*reb_psd1.freq_lo, reb_err1.value/sqrt(reb_err1.n)*reb_psd1.freq_lo );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.pdf]]&lt;br /&gt;
&lt;br /&gt;
What can we deduce from this PSD? The minimum sampled frequency is 1/(dimseg*dt), the inverse of the segment duration: if you choose a long segment length, lower frequencies can be sampled, but the S/N ratio is smaller because fewer segments are averaged. On the other hand, choosing a small segment length can lead to an effect called red-noise leakage, which is due to the finite extent of the data (also called &amp;lt;i&amp;gt;windowing&amp;lt;/i&amp;gt;, see &amp;lt;ref&amp;gt;[https://hdl.handle.net/11245/1.426951 M. van der Klis 1988]&amp;lt;/ref&amp;gt; p.8ff or &amp;lt;ref&amp;gt;[https://ui.adsabs.harvard.edu/abs/2014A%26ARv..22...72U/abstract Uttley, P. et al., A&amp;amp;A Review, Volume 22, article id.72, 66 pp., 2014]&amp;lt;/ref&amp;gt; p.14f.). The maximum sampled frequency is 1/(2*dt), where dt is the timing resolution of the lightcurve; this is called the Nyquist frequency.&lt;br /&gt;
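&lt;br /&gt;
With the values used in this example (dt = 0.1 s, dimseg = 8192), the sampled frequency range follows directly (plain Python, for illustration):&lt;br /&gt;
&lt;br /&gt;
```python
dt = 0.1                      # (s) time resolution of the lightcurve
dimseg = 8192                 # (bins) segment length

f_min = 1.0 / (dimseg * dt)   # lowest sampled frequency = inverse segment duration
f_nyq = 1.0 / (2.0 * dt)      # Nyquist frequency, highest sampled frequency
print(f_min, f_nyq)           # about 0.0012 Hz and 5.0 Hz
```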
&lt;br /&gt;
&amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; also computes the time lag between the two lightcurves. Let's first rebin and then plot them together with the coherence:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;i&amp;gt;For the curious:&amp;lt;/i&amp;gt;&lt;br /&gt;
The time lag is computed from the phase of the cross power density. The CPD is a complex quantity and thus has a real and an imaginary part (see, e.g., Uttley+14 eq. 8-10): C~e&amp;lt;sup&amp;gt;i&amp;amp;phi;&amp;lt;/sup&amp;gt;=cos(&amp;amp;phi;)+i sin(&amp;amp;phi;) =&amp;gt; &amp;amp;phi;=arctan(Im/Re), with the time lag &amp;amp;tau;=&amp;amp;phi;/(2&amp;amp;pi;f). In code: &amp;lt;code&amp;gt;phase=atan2(r.imagcpd12,r.realcpd12); tlag = phase/(2*PI*r.freq);&amp;lt;/code&amp;gt;.&lt;br /&gt;
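&lt;br /&gt;
The phase-to-lag arithmetic can be checked on two artificial sinusoids with a known delay (a hedged Python sketch, independent of foucalc; note that the sign of the recovered lag depends on which band is conjugated in the CPD, and foucalc's convention may differ):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

dt, n, f0, tau = 0.01, 1000, 2.0, 0.05   # bin size (s), bins, signal freq (Hz), delay (s)
t = np.arange(n) * dt
a = np.sin(2*np.pi*f0*t)                 # band 1 (reference)
b = np.sin(2*np.pi*f0*(t - tau))         # band 2, lagging band 1 by tau

fa, fb = np.fft.rfft(a), np.fft.rfft(b)
freq = np.fft.rfftfreq(n, d=dt)
cpd = fa * np.conj(fb)                   # one common CPD convention: C = A B*
k = int(np.argmin(np.abs(freq - f0)))    # frequency bin of the signal
phase = np.arctan2(cpd[k].imag, cpd[k].real)
tlag = phase / (2*np.pi*freq[k])
print(tlag)                              # recovers the 0.05 s delay
```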
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Rebin time lag results&lt;br /&gt;
variable reb_lag12 = rebin_fouquan(r.freq, r.lag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errlag12 = rebin_fouquan(r.freq, r.errlag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err(reb_lag12.freq_lo, reb_lag12.freq_hi, reb_lag12.value, reb_errlag12.value/sqrt(reb_errlag12.n) );&lt;br /&gt;
&lt;br /&gt;
%% Rebin coherence function&lt;br /&gt;
variable reb_cof12 = rebin_fouquan(r.freq, r.cof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errcof12 = rebin_fouquan(r.freq, r.errcof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err( reb_cof12.freq_lo, reb_cof12.freq_hi, reb_cof12.value, reb_errcof12.value/sqrt(reb_errcof12.n) );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.png]] [[File:Gx339m4_xmm_coherence.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.pdf]] [[File:Gx339m4_xmm_coherence.pdf]]&lt;br /&gt;
&lt;br /&gt;
The coherence function is 1 at low frequencies, showing a perfect (linear) correlation of the two lightcurves (values above 1 are possible due to counting noise, see Sect. 3.4.1 and 3.5.3 in Katja's diss.). As a final step, we calculate the lag-energy spectrum by extracting lightcurves in logarithmically binned energy intervals and calculating their time lags.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
refeband=[540,10080]; % (eV), lower and upper bound of energy reference band&lt;br /&gt;
reffreqband=[0.034,0.12]; %% (Hz), lower and upper bound of frequency reference band, as in Fig. 2(a) of Uttley+11&lt;br /&gt;
&lt;br /&gt;
fnameref=sprintf(&amp;quot;%sfull_sd_0.1s_%04d-%d.lc&amp;quot;,datadir,refeband[0],refeband[1]);&lt;br /&gt;
lcref       = fits_read_table(fnameref);&lt;br /&gt;
lcrefgti    = fits_read_table(fnameref+&amp;quot;[GTI00006]&amp;quot;); % GTI correction&lt;br /&gt;
filtref     = filter_gti(lcref.time, lcrefgti);&lt;br /&gt;
timelcref   = lcref.time[filtref]-tstart;&lt;br /&gt;
countslcref = lcref.counts[filtref]/dt;&lt;br /&gt;
&lt;br /&gt;
(elo,ehi)=log_grid(500,10000,14); % define logarithmic energy bins&lt;br /&gt;
&lt;br /&gt;
tlags=Double_Type[length(elo)];&lt;br /&gt;
tlagerrs=Double_Type[length(elo)];&lt;br /&gt;
&lt;br /&gt;
_for ii (0,length(elo)-1,1) {&lt;br /&gt;
  lcname=datadir+sprintf(&amp;quot;full_sd_0.1s_%04.0f-%04.0f.lc&amp;quot;,elo[ii],ehi[ii]);&lt;br /&gt;
  if (ehi[ii]&amp;gt;9999) { lcname=datadir+sprintf(&amp;quot;full_sd_0.1s_%04.0f-%05.0f.lc&amp;quot;,elo[ii],ehi[ii]); };&lt;br /&gt;
&lt;br /&gt;
  lc = fits_read_table(lcname);&lt;br /&gt;
  lcgti = fits_read_table(lcname+&amp;quot;[GTI00006]&amp;quot;);&lt;br /&gt;
  filtgti = filter_gti(lc.time, lcgti);&lt;br /&gt;
&lt;br /&gt;
  timelc=lc.time[filtgti]-tstart; %% Do GTI correction&lt;br /&gt;
  countslc=lc.counts[filtgti]/dt;&lt;br /&gt;
&lt;br /&gt;
  %% Calculate PSD, cross-spectrum (CPD) and lags with foucalc&lt;br /&gt;
  %% Note that we subtract off the small band LC from the reference&lt;br /&gt;
  %% LC. Uttley+11 call this a Channel-of-interest (CI) correction.&lt;br /&gt;
  %% This is needed to account for the Poisson noise which is&lt;br /&gt;
  %% correlated for both LCs. See also Uttley+11 Sect 3.2.&lt;br /&gt;
  r=foucalc(struct{time=timelcref, rate1=countslcref-countslc, rate2=countslc}, dimseg ; normtype=&amp;quot;Miyamoto&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
  %% Filter the CPD / timelag array for the given frequency range&lt;br /&gt;
  filt=where(reffreqband[0]&amp;lt;=r.freq&amp;lt;=reffreqband[1]);&lt;br /&gt;
&lt;br /&gt;
  %% Compute average time lag as arithmetic weighted mean&lt;br /&gt;
  tlags[ii]=weighted_mean(r.lag12[filt] ; err=r.errlag12[filt]);&lt;br /&gt;
  tlagerrs[ii]=1./sqrt(sum(1/r.errlag12[filt]^2));&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
hplot_with_err(elo/1000., ehi/1000., tlags, tlagerrs);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
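&lt;br /&gt;
The average lag in the loop above uses the standard inverse-variance weighted mean and its propagated error; on hypothetical numbers this reads (Python sketch, not isisscripts code):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

x = np.array([0.010, 0.014, 0.008])   # hypothetical lags (s) in the frequency band
e = np.array([0.004, 0.002, 0.008])   # their uncertainties
w = 1.0 / e**2                        # inverse-variance weights

mean = np.sum(w * x) / np.sum(w)      # weighted mean lag
err = 1.0 / np.sqrt(np.sum(w))        # its propagated uncertainty
print(mean, err)
```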
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Isis / Slang]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1984</id>
		<title>Timing tools</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1984"/>
		<updated>2020-02-21T10:21:30Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Foucalc ==&lt;br /&gt;
&lt;br /&gt;
[In work! You might use it as a reference in the meantime, but some things are not yet covered. Also let me know if things are not clear! - VG]&lt;br /&gt;
&lt;br /&gt;
The basic tool for X-ray timing analysis with the Remeis scripts is ''foucalc'' &amp;lt;ref&amp;gt;The original version of the Remeis timing scripts was written by Katja Pottschmidt. When using the tools, please cite [http://adsabs.harvard.edu/abs/2003A%26A...407.1039P Pottschmidt et al. 2003, A&amp;amp;A 407, 1039] and [http://adsabs.harvard.edu/abs/2000A%26A...357L..17P Pottschmidt et al. 2000, A&amp;amp;A 357, L17]; see also [http://adsabs.harvard.edu/abs/1999ApJ...510..874N Nowak et al., 1999, ApJ 510, 874]&amp;lt;/ref&amp;gt;, which calculates power-, cross power-, coherence- and timelag-spectra &amp;lt;ref&amp;gt;What these are is wonderfully explained in Katja Pottschmidt's [http://astro.uni-tuebingen.de/publications/pottschmidt-diss.shtml PhD Thesis]&amp;lt;/ref&amp;gt;. It requires '''equally spaced, gapless''' lightcurves as well as the length of the lightcurve segment on which the Fourier transformation is to be performed, given in number of bins. Note that because the underlying algorithm is an FFT, the segment length should be a power of two for performance.&lt;br /&gt;
&lt;br /&gt;
When only one lightcurve is given, &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; calculates the power density and related quantities only. When more than one lightcurve is given, the cross power-, coherence- and timelag-spectra are calculated for all pairs of lightcurves. The lightcurves have to be in the format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lc = struct { time=[t_1, t_2, ...], rate1=[r1_1, r1_2, ...], rate2=[r2_1, r2_2, ...], ... };&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The time has to be given in seconds (so be cautious when using &amp;lt;tt&amp;gt;fits_read_lc&amp;lt;/tt&amp;gt;, which usually returns the time in MJD).&lt;br /&gt;
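&lt;br /&gt;
The conversion is a one-liner when your times come in MJD (Python sketch with hypothetical timestamps, 1 s apart):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

# Hypothetical MJD timestamps; foucalc needs seconds, so convert
# days -> seconds and let the lightcurve start at t = 0.
time_mjd = np.array([55000.0, 55000.0 + 1/86400, 55000.0 + 2/86400])
time_s = (time_mjd - time_mjd[0]) * 86400.0
print(time_s)  # approximately [0, 1, 2] s
```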
&lt;br /&gt;
&amp;lt;b&amp;gt;If you use RXTE/PCA lightcurves, don't forget to set the &amp;lt;tt&amp;gt;numinst&amp;lt;/tt&amp;gt; keyword, which defines the number of instruments (=PCUs) that are switched on; otherwise your noise correction will be wrong. If you are not using RXTE/PCA, set the &amp;lt;tt&amp;gt;deadtime&amp;lt;/tt&amp;gt; keyword to whatever is appropriate for the instrument you are using (e.g. 0).&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Given a single PSD value before any further calculations, its uncertainty will be of the same order of magnitude as the PSD itself. Averaging is therefore crucial. This is built into &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt;: given a lightcurve of, say, 1288192 bins and a segment length of 16384, foucalc calculates a PSD on every segment and averages every value over all 1288192/16384 &amp;asymp; 78 segments.&lt;br /&gt;
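&lt;br /&gt;
The effect of segment averaging can be demonstrated on synthetic white noise with a short Python sketch (illustration only, independent of foucalc): a single periodogram scatters by ~100% around its mean, while the average over nseg segments scatters roughly 1/sqrt(nseg) as much.&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

rng = np.random.default_rng(1)
nseg, dimseg = 78, 1024
lc = rng.poisson(100, nseg * dimseg).astype(float)   # white-noise lightcurve

segs = lc.reshape(nseg, dimseg)                      # cut into segments
psds = np.abs(np.fft.rfft(segs - segs.mean(axis=1, keepdims=True), axis=1))**2

scatter_one = psds[0, 1:].std() / psds[0, 1:].mean() # single periodogram: ~100% scatter
avg = psds.mean(axis=0)                              # average over all segments
scatter_avg = avg[1:].std() / avg[1:].mean()         # shrinks roughly like 1/sqrt(nseg)
print(scatter_one, scatter_avg)
```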
&lt;br /&gt;
== PSDs ==&lt;br /&gt;
&lt;br /&gt;
=== Calculating the PSD ===&lt;br /&gt;
&lt;br /&gt;
Given two PCA lightcurves, &amp;lt;tt&amp;gt;50110-01-45-00_438_AND_good_14off_excl_0-10.lc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;50110-01-45-00_438_AND_good_14off_excl_11-13.lc&amp;lt;/tt&amp;gt; (which have been checked for gaps and found to be gapless &amp;lt;ref&amp;gt;A useful function for this is &amp;lt;tt&amp;gt;split_lc_at_gaps&amp;lt;/tt&amp;gt;&amp;lt;/ref&amp;gt;), we want to calculate and bin the PSD, lags, and coherence.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% read in lightcurves&lt;br /&gt;
variable lc1 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_0-10.lc&amp;quot;);&lt;br /&gt;
variable lc2 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_11-13.lc&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%%define variables to store average rate of lightcurve &amp;amp; rms&lt;br /&gt;
variable rms,avg;&lt;br /&gt;
&lt;br /&gt;
%%calculate timing quantities&lt;br /&gt;
variable result = foucalc(struct{time=lc1.time,rate1=lc1.rate,rate2=lc2.rate},16384;&lt;br /&gt;
                          numinst=3,RMS=&amp;amp;rms,avgrate=&amp;amp;avg,verbose);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This produces the following output - thanks to the &amp;lt;tt&amp;gt;verbose&amp;lt;/tt&amp;gt; keyword:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
light curves: 2516 s with 1288192 bins of 0.00195312 s  (0 s gaps)&lt;br /&gt;
  segmentation: 78 segments of 16384 bins&lt;br /&gt;
=&amp;gt; frequencies: 0.03125-256 Hz.&lt;br /&gt;
Power spectra will be calculated in Miyamoto-normalization.&lt;br /&gt;
Calculating power spectrum for rate1.  RMS(0.03125-256 Hz) = 27.1%&lt;br /&gt;
Calculating power spectrum for rate2.  RMS(0.03125-256 Hz) = 25.7%&lt;br /&gt;
Calculating cross power spectrum (=&amp;gt; coherence and timelags) for rate1 and rate2.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is currently the only place where &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; will complain if the lightcurve contains gaps. It will calculate the quantities even if the lightcurve contains gaps - but then you cannot trust your results. Therefore: use verbose and check that you've done everything right, at least while building up your scripts.&lt;br /&gt;
&lt;br /&gt;
Plot the PSD for the first lightcurve with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
xlog;ylog;plot_with_err(result.freq,result.signormpsd1,result.errnormpsd1);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Rebinning the PSD ===&lt;br /&gt;
&lt;br /&gt;
The uncertainties on the individual PSD values are still large, especially at high frequencies. To further improve the SNR, the PSD is rebinned onto a new frequency grid: the final PSD value in each new bin is the average over all individual PSD values whose frequencies fall into that bin. Mostly a logarithmic rebinning with df/f = const. is used; a typical value is df/f = 0.15.&lt;br /&gt;
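&lt;br /&gt;
For illustration, a simplified Python re-implementation of such a logarithmic rebinning might look as follows (a sketch of the described behaviour, not the actual rebin_fouquan code):&lt;br /&gt;
&lt;br /&gt;
```python
import numpy as np

def rebin_log(freq, quan, dff=0.15):
    """Average `quan` onto a grid with Delta f / f = dff (simplified sketch)."""
    edges = [freq[0]]
    while edges[-1] <= freq[-1]:              # grow the grid geometrically
        edges.append(edges[-1] * (1.0 + dff))
    lo, hi, val, n = [], [], [], []
    for a, b in zip(edges[:-1], edges[1:]):
        m = (freq >= a) & (freq < b)
        if m.any():                           # drop bins containing no frequency
            lo.append(a); hi.append(b)
            val.append(quan[m].mean()); n.append(m.sum())
    return tuple(np.asarray(v) for v in (lo, hi, val, n))

freq = np.arange(1, 8193) * 0.03125           # linear FFT grid, 0.03125-256 Hz
lo, hi, val, n = rebin_log(freq, np.ones_like(freq, dtype=float))
print(len(freq), len(val))                    # thousands of bins become a few dozen
```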
&lt;br /&gt;
Quantities calculated with &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; are rebinned with the &amp;lt;tt&amp;gt;rebin_fouquan&amp;lt;/tt&amp;gt; tool - the quantity and its error have to be rebinned in two individual steps:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable reb_psd = rebin_fouquan(result.freq,result.signormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
variable reb_err = rebin_fouquan(result.freq,result.errnormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;reb_err&amp;lt;/tt&amp;gt; does not yet contain the final errors of the PSD: it still has to be weighted with the square root of the number of values the averaging was performed over. The final PSD is therefore:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable psd1 = struct{freq_lo = reb_psd.freq_lo,freq_hi=reb_psd.freq_hi,&lt;br /&gt;
  value = reb_psd.value, error= reb_err.value/sqrt(reb_err.n)};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once again, plot your PSD - either as PSD vs. frequency or PSD x frequency vs. frequency:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%%PSD vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2,psd1.value,psd1.error);&lt;br /&gt;
&lt;br /&gt;
%% PSD x frequency vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2, psd1.value*(psd1.freq_lo+psd1.freq_hi)/2,&lt;br /&gt;
psd1.error*(psd1.freq_lo+psd1.freq_hi)/2);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Lag-energy spectrum for archival XMM-Newton data ==&lt;br /&gt;
&lt;br /&gt;
[--- under construction --- written by OK, please ask if something is unclear!]&lt;br /&gt;
&lt;br /&gt;
The following is an example of how to compute PSDs, a lag-frequency spectrum, and a lag-energy spectrum for archival XMM-Newton data of GX 339-4 &amp;lt;ref&amp;gt;The corresponding publication is [https://ui.adsabs.harvard.edu/abs/2011MNRAS.414L..60U/abstract Uttley, P. et al., MNRAS, Vol. 414, I. 1, pp. L60-L64 (2011)]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We start by extracting the XMM-Newton PN lightcurve in two energy bands (500-900 eV and 2000-3000 eV):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/tcsh&lt;br /&gt;
source $SOFTDIR/sas_init.csh # initialize the SAS software&lt;br /&gt;
set xmmscripts=${XMMTOOLS} # make sure the proper xmmscripts are loaded&lt;br /&gt;
set obsid='0204730201'&lt;br /&gt;
set datadir=/userdata/data/koenig/GX339-4/XMM-2004-03-16/odf&lt;br /&gt;
set dt=0.1&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmprepare --datadir=$datadir --prepdir=$obsid --pn --noapplyflaregti --timing --notimelog # prepare the data&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;500,900&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;2000,3000&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we can start the analysis. Let's load the data and correct them for the telemetry drop-outs (encoded in the Good Time Intervals):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Load the lightcurve and GTI file&lt;br /&gt;
a=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc&amp;quot;);&lt;br /&gt;
b=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc&amp;quot;);&lt;br /&gt;
gtia=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc[GTI0006]&amp;quot;);&lt;br /&gt;
gtib=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc[GTI0006]&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Get the start time such that LC starts at 0s&lt;br /&gt;
tstart=fits_read_key(&amp;quot;full_sd_0.1s_0500-0900.lc[1]&amp;quot;, &amp;quot;TSTART&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Subtract TSTART and do GTI correction&lt;br /&gt;
(filta,filtb)     = filter_gti(a.time, gtia), filter_gti(b.time,gtib);&lt;br /&gt;
(timea,timeb)     = a.time[filta]-tstart,b.time[filtb]-tstart;&lt;br /&gt;
(countsa,countsb) = a.counts[filta],b.counts[filtb];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we plot the (re-binned) lightcurve, there is not much to see.&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.jpg]]&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.pdf]]&lt;br /&gt;
&lt;br /&gt;
Fortunately, we can do a Fourier analysis in order to reveal the variability and correlations of these lightcurves. As described above, this can be done with the &amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; routine. I choose the Miyamoto normalization (see &amp;lt;ref&amp;gt;[https://ui.adsabs.harvard.edu/abs/1992ApJ...391L..21M/abstract Miyamoto et al., ApJ, 391:L21-L24 (1992)] or [https://publikationen.uni-tuebingen.de/xmlui/handle/10900/48443 Katja's dissertation],p.68f&amp;lt;/ref&amp;gt;) as in the publication. This normalization yields the PSD in units of &amp;lt;i&amp;gt;fractional&amp;lt;/i&amp;gt; root mean square (rms/R_signal)^2.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose segment length&lt;br /&gt;
dimseg=8192; %% (bins), should be a power of 2 for FFT&lt;br /&gt;
&lt;br /&gt;
%% Compute PSDs, CPD, and time lags with foucalc&lt;br /&gt;
r=foucalc( struct{time=timea, rate1=countsa/dt, rate2=countsb/dt}, dimseg ; normtype=&amp;quot;Miyamoto&amp;quot;);&lt;br /&gt;
xlog;ylog;plot_with_err(r.freq,r.signormpsd1*r.freq,r.errnormpsd1*r.freq);&lt;br /&gt;
&lt;br /&gt;
%% There is a warning that there are gaps in the LC if verbose is set in foucalc. We ignore this...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd unbinned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx399m4 xmm psd unbinned.pdf]]&lt;br /&gt;
&lt;br /&gt;
This really needs some re-binning to increase the S/N ratio. All &amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; quantities can be re-binned with the &amp;lt;code&amp;gt;rebin_fouquan&amp;lt;/code&amp;gt; routine (note that the PSD has already been binned once, by averaging over segments, when we set the segment length).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose Delta f/f, the logarithmic frequency resolution&lt;br /&gt;
dff=0.15;&lt;br /&gt;
reb_psd1 = rebin_fouquan(r.freq, r.signormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err1 = rebin_fouquan(r.freq, r.errnormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_psd2 = rebin_fouquan(r.freq, r.signormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err2 = rebin_fouquan(r.freq, r.errnormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
&lt;br /&gt;
%% Need to weight the PSD error with the square root of the number of values the averaging was performed over&lt;br /&gt;
hplot_with_err( reb_psd1.freq_lo, reb_psd1.freq_hi, reb_psd1.value*reb_psd1.freq_lo, reb_err1.value/sqrt(reb_err1.n)*reb_psd1.freq_lo );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.pdf]]&lt;br /&gt;
&lt;br /&gt;
What can we deduce from this PSD? The minimum sampled frequency is 1/(dimseg*dt), the inverse of the segment duration: if you choose a long segment length, lower frequencies can be sampled, but the S/N ratio is smaller because fewer segments are averaged. On the other hand, choosing a small segment length can lead to an effect called red-noise leakage, which is due to the finite extent of the data (also called &amp;lt;i&amp;gt;windowing&amp;lt;/i&amp;gt;, see &amp;lt;ref&amp;gt;[https://hdl.handle.net/11245/1.426951 M. van der Klis 1988]&amp;lt;/ref&amp;gt; p.8ff or &amp;lt;ref&amp;gt;[https://ui.adsabs.harvard.edu/abs/2014A%26ARv..22...72U/abstract Uttley, P. et al., A&amp;amp;A Review, Volume 22, article id.72, 66 pp., 2014]&amp;lt;/ref&amp;gt; p.14f.). The maximum sampled frequency is 1/(2*dt), where dt is the timing resolution of the lightcurve; this is called the Nyquist frequency.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; also computes the time lag between the two lightcurves. Let's first rebin and then plot them together with the coherence:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Rebin time lag results&lt;br /&gt;
variable reb_lag12 = rebin_fouquan(r.freq, r.lag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errlag12 = rebin_fouquan(r.freq, r.errlag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err(reb_lag12.freq_lo, reb_lag12.freq_hi, reb_lag12.value, reb_errlag12.value/sqrt(reb_errlag12.n) );&lt;br /&gt;
&lt;br /&gt;
%% Rebin coherence function&lt;br /&gt;
variable reb_cof12 = rebin_fouquan(r.freq, r.cof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errcof12 = rebin_fouquan(r.freq, r.errcof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err( reb_cof12.freq_lo, reb_cof12.freq_hi, reb_cof12.value, reb_errcof12.value/sqrt(reb_errcof12.n) );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.png]] [[File:Gx339m4_xmm_coherence.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.pdf]] [[File:Gx339m4_xmm_coherence.pdf]]&lt;br /&gt;
&lt;br /&gt;
Explain large error bars, explain decay, explain coherence&amp;gt;1, explain how to build lag-energy spectrum&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Isis / Slang]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1983</id>
		<title>Timing tools</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1983"/>
		<updated>2020-02-21T08:49:08Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Foucalc ==&lt;br /&gt;
&lt;br /&gt;
[In work! You might use it as a reference in the meantime, but some things are not yet covered. Also let me know if things are not clear! - VG]&lt;br /&gt;
&lt;br /&gt;
The basic tool for X-ray timing analysis with the Remeis scripts is ''foucalc'' &amp;lt;ref&amp;gt;The original version of the Remeis timing scripts was written by Katja Pottschmidt. When using the tools, please cite [http://adsabs.harvard.edu/abs/2003A%26A...407.1039P Pottschmidt et al. 2003, A&amp;amp;A 407, 1039] and [http://adsabs.harvard.edu/abs/2000A%26A...357L..17P Pottschmidt et al. 2000, A&amp;amp;A 357, L17]; see also [http://adsabs.harvard.edu/abs/1999ApJ...510..874N Nowak et al., 1999, ApJ 510, 874]&amp;lt;/ref&amp;gt;, which calculates power-, cross power-, coherence- and timelag-spectra &amp;lt;ref&amp;gt;What these are is wonderfully explained in Katja Pottschmidt's [http://astro.uni-tuebingen.de/publications/pottschmidt-diss.shtml PhD Thesis]&amp;lt;/ref&amp;gt;. It requires '''equally spaced, gapless''' lightcurves as well as the length of the lightcurve segment on which the Fourier transformation is to be performed, given in number of bins. Note that because the underlying algorithm is an FFT, the segment length should be a power of two for performance.&lt;br /&gt;
&lt;br /&gt;
When only one lightcurve is given, &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; calculates the power density and related quantities only. When more than one lightcurve is given, the cross power-, coherence- and timelag-spectra are calculated for all pairs of lightcurves. The lightcurves have to be in the format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lc = struct { time=[t_1, t_2, ...], rate1=[r1_1, r1_2, ...], rate2=[r2_1, r2_2, ...], ... };&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The time has to be given in seconds (so be cautious when using &amp;lt;tt&amp;gt;fits_read_lc&amp;lt;/tt&amp;gt;, which usually returns the time in MJD).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;If you use RXTE/PCA lightcurves, don't forget to set the &amp;lt;tt&amp;gt;numinst&amp;lt;/tt&amp;gt; keyword, which defines the number of instruments (=PCUs) that are switched on; otherwise your noise correction will be wrong. If you are not using RXTE/PCA, set the &amp;lt;tt&amp;gt;deadtime&amp;lt;/tt&amp;gt; keyword to whatever is appropriate for the instrument you are using (e.g. 0).&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Given a single PSD value before any further calculations, its uncertainty will be of the same order of magnitude as the PSD itself. Averaging is therefore crucial. This is built into &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt;: given a lightcurve of, say, 1288192 bins and a segment length of 16384, foucalc calculates a PSD on every segment and averages every value over all 1288192/16384 &amp;asymp; 78 segments.&lt;br /&gt;
&lt;br /&gt;
== PSDs ==&lt;br /&gt;
&lt;br /&gt;
=== Calculating the PSD ===&lt;br /&gt;
&lt;br /&gt;
Given two PCA lightcurves, &amp;lt;tt&amp;gt;50110-01-45-00_438_AND_good_14off_excl_0-10.lc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;50110-01-45-00_438_AND_good_14off_excl_11-13.lc&amp;lt;/tt&amp;gt; (which have been checked for gaps and found to be gapless &amp;lt;ref&amp;gt;A useful function for this is &amp;lt;tt&amp;gt;split_lc_at_gaps&amp;lt;/tt&amp;gt;&amp;lt;/ref&amp;gt;), we want to calculate and bin the PSD, lags, and coherence.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% read in lightcurves&lt;br /&gt;
variable lc1 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_0-10.lc&amp;quot;);&lt;br /&gt;
variable lc2 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_11-13.lc&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%%define variables to store average rate of lightcurve &amp;amp; rms&lt;br /&gt;
variable rms,avg;&lt;br /&gt;
&lt;br /&gt;
%%calculate timing quantities&lt;br /&gt;
variable result = foucalc(struct{time=lc1.time,rate1=lc1.rate,rate2=lc2.rate},16384;&lt;br /&gt;
                          numinst=3,RMS=&amp;amp;rms,avgrate=&amp;amp;avg,verbose);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This produces the following output - thanks to the &amp;lt;tt&amp;gt;verbose&amp;lt;/tt&amp;gt; keyword:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
light curves: 2516 s with 1288192 bins of 0.00195312 s  (0 s gaps)&lt;br /&gt;
  segmentation: 78 segments of 16384 bins&lt;br /&gt;
=&amp;gt; frequencies: 0.03125-256 Hz.&lt;br /&gt;
Power spectra will be calculated in Miyamoto-normalization.&lt;br /&gt;
Calculating power spectrum for rate1.  RMS(0.03125-256 Hz) = 27.1%&lt;br /&gt;
Calculating power spectrum for rate2.  RMS(0.03125-256 Hz) = 25.7%&lt;br /&gt;
Calculating cross power spectrum (=&amp;gt; coherence and timelags) for rate1 and rate2.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is currently the only place where &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; will complain if the lightcurve contains gaps. It will calculate the quantities even if the lightcurve contains gaps - but then you cannot trust your results. Therefore: use verbose and check that you've done everything right, at least while building up your scripts.&lt;br /&gt;
&lt;br /&gt;
Plot the PSD for the first lightcurve with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
xlog;ylog;plot_with_err(result.freq,result.signormpsd1,result.errnormpsd1);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Rebinning the PSD ===&lt;br /&gt;
&lt;br /&gt;
The uncertainties on the individual PSD values are still large, especially at high frequencies. To further improve the SNR, the PSD is rebinned onto a new frequency grid: the final PSD value in each new bin is the average over all individual PSD values whose frequencies fall into that bin. Mostly a logarithmic rebinning with df/f = const. is used; a typical value is df/f = 0.15.&lt;br /&gt;
&lt;br /&gt;
Quantities calculated with &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; are rebinned with the &amp;lt;tt&amp;gt;rebin_fouquan&amp;lt;/tt&amp;gt; tool - the quantity and its error have to be rebinned in two individual steps:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable reb_psd = rebin_fouquan(result.freq,result.signormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
variable reb_err = rebin_fouquan(result.freq,result.errnormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;reb_err&amp;lt;/tt&amp;gt; does not yet contain the final errors of the PSD: it still has to be weighted with the square root of the number of values the averaging was performed over. The final PSD is therefore:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable psd1 = struct{freq_lo = reb_psd.freq_lo,freq_hi=reb_psd.freq_hi,&lt;br /&gt;
  value = reb_psd.value, error= reb_err.value/sqrt(reb_err.n)};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once again, plot your PSD - either as PSD vs. frequency or PSD x frequency vs. frequency:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%%PSD vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2,psd1.value,psd1.error);&lt;br /&gt;
&lt;br /&gt;
%% PSD x frequency vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2, psd1.value*(psd1.freq_lo+psd1.freq_hi)/2,&lt;br /&gt;
psd1.error*(psd1.freq_lo+psd1.freq_hi)/2);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Lag-energy spectrum for archival XMM-Newton data ==&lt;br /&gt;
&lt;br /&gt;
[--- under construction --- written by OK, please ask if something is unclear!]&lt;br /&gt;
&lt;br /&gt;
The following is an example of how to compute PSDs, a lag-frequency spectrum, and a lag-energy spectrum for archival XMM-Newton data of GX 339-4 &amp;lt;ref&amp;gt;The corresponding publication is [https://ui.adsabs.harvard.edu/abs/2011MNRAS.414L..60U/abstract Uttley, P. et al., MNRAS, Vol. 414, I. 1, pp. L60-L64 (2011)]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We start by extracting the XMM-Newton PN lightcurve in two energy bands (500-900 eV and 2000-3000 eV):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/tcsh&lt;br /&gt;
source $SOFTDIR/sas_init.csh # initialize the SAS software&lt;br /&gt;
set xmmscripts=${XMMTOOLS} # make sure the proper xmmscripts are loaded&lt;br /&gt;
set obsid='0204730201'&lt;br /&gt;
set datadir=/userdata/data/koenig/GX339-4/XMM-2004-03-16/odf&lt;br /&gt;
set dt=0.1&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmprepare --datadir=$datadir --prepdir=$obsid --pn --noapplyflaregti --timing --notimelog # prepare the data&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;500,900&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;2000,3000&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we can start the analysis. Let's load the data and correct them for the telemetry drop-outs (encoded in the Good Time Intervals):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Load the lightcurve and GTI file&lt;br /&gt;
a=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc&amp;quot;);&lt;br /&gt;
b=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc&amp;quot;);&lt;br /&gt;
gtia=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc[GTI0006]&amp;quot;);&lt;br /&gt;
gtib=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc[GTI0006]&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Get the start time such that LC starts at 0s&lt;br /&gt;
tstart=fits_read_key(&amp;quot;full_sd_0.1s_0500-0900.lc[1]&amp;quot;, &amp;quot;TSTART&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Subtract TSTART and do GTI correction&lt;br /&gt;
(filta,filtb)     = filter_gti(a.time, gtia), filter_gti(b.time,gtib);&lt;br /&gt;
(timea,timeb)     = a.time[filta]-tstart,b.time[filtb]-tstart;&lt;br /&gt;
(countsa,countsb) = a.counts[filta],b.counts[filtb];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
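The GTI correction above simply keeps those time bins that fall inside a good time interval and shifts the time axis to start at zero. As a rough illustration of what this filtering does (a minimal sketch with made-up toy intervals, not the isisscripts implementation of `filter_gti`):

```python
# Minimal sketch of GTI filtering (hypothetical toy data; the analysis above
# uses the isisscripts' filter_gti on the GTI tables read from the FITS files).
import numpy as np

def filter_gti(time, gti_start, gti_stop):
    """Boolean mask selecting times inside any [start, stop) interval."""
    mask = np.zeros(len(time), dtype=bool)
    for t0, t1 in zip(gti_start, gti_stop):
        mask |= (time >= t0) & (time < t1)
    return mask

time = np.arange(0.0, 10.0, 1.0)                  # 10 time bins
mask = filter_gti(time, [0.0, 6.0], [3.0, 9.0])   # two good time intervals
good_time = time[mask] - time[mask][0]            # shift so the LC starts at 0 s
```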
&lt;br /&gt;
If we plot the (re-binned) lightcurve, there is not much to see.&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.jpg]]&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.pdf]]&lt;br /&gt;
&lt;br /&gt;
Fortunately, we can do a Fourier analysis in order to reveal the variability and correlations of these lightcurves:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose segment length&lt;br /&gt;
dimseg=8192; %% (bins), should be a power of 2 for FFT&lt;br /&gt;
&lt;br /&gt;
%% Compute PSDs, CPD, and time lags with foucalc&lt;br /&gt;
r=foucalc( struct{time=timea, rate1=countsa/dt, rate2=countsb/dt}, dimseg ; normtype=&amp;quot;Miyamoto&amp;quot;);&lt;br /&gt;
xlog;ylog;plot_with_err(r.freq,r.signormpsd1*r.freq,r.errnormpsd1*r.freq);&lt;br /&gt;
&lt;br /&gt;
%% There is a warning that there are gaps in the LC if verbose is set in foucalc. We ignore this...&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd unbinned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx399m4 xmm psd unbinned.pdf]]&lt;br /&gt;
&lt;br /&gt;
This really needs some re-binning to increase the S/N ratio. All &amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; quantities can be re-binned with the &amp;lt;code&amp;gt;rebin_fouquan&amp;lt;/code&amp;gt; routine (note that a first averaging has already happened when we set the length of the segments).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose Delta f/f, the logarithmic frequency resolution&lt;br /&gt;
dff=0.15;&lt;br /&gt;
reb_psd1 = rebin_fouquan(r.freq, r.signormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err1 = rebin_fouquan(r.freq, r.errnormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_psd2 = rebin_fouquan(r.freq, r.signormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err2 = rebin_fouquan(r.freq, r.errnormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
&lt;br /&gt;
%% Need to weight the PSD error with the square root of the number of values the averaging was performed over&lt;br /&gt;
hplot_with_err( reb_psd1.freq_lo, reb_psd1.freq_hi, reb_psd1.value*reb_psd1.freq_lo, reb_err1.value/sqrt(reb_err1.n)*reb_psd1.freq_lo );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.pdf]]&lt;br /&gt;
&lt;br /&gt;
What can we deduce from this PSD? The minimum sampled frequency is 1/(dimseg*dt) - if you choose a long segment length, lower frequencies can be sampled, but fewer segments are available for averaging, so the S/N ratio is smaller. On the other hand, choosing a short segment length can lead to an effect called red-noise leakage, which is due to the finite extent of the data (see &amp;lt;ref&amp;gt;[https://hdl.handle.net/11245/1.426951 M. van der Klis 1988]&amp;lt;/ref&amp;gt; p.8ff or &amp;lt;ref&amp;gt;[https://ui.adsabs.harvard.edu/abs/2014A%26ARv..22...72U/abstract Uttley, P. et al., A&amp;amp;A Review, Volume 22, article id.72, 66 pp., 2014]&amp;lt;/ref&amp;gt; p.14f.).&lt;br /&gt;
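For concreteness, the sampled frequency range follows directly from the time resolution and the segment length; a quick back-of-the-envelope check using the dt = 0.1 s and dimseg = 8192 from above:

```python
# Frequency range sampled by one FFT segment: the lowest frequency is
# 1/(dimseg*dt), the highest is the Nyquist frequency 1/(2*dt).
dt = 0.1        # time resolution in seconds (from the extraction above)
dimseg = 8192   # segment length in bins

f_min = 1.0 / (dimseg * dt)   # lowest sampled frequency (Hz), ~1.2e-3 Hz
f_nyq = 1.0 / (2.0 * dt)      # Nyquist frequency (Hz), 5 Hz
```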
&lt;br /&gt;
&amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; also computes the time lag between the two lightcurves. Let's first rebin and then plot them together with the coherence:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Rebin time lag results&lt;br /&gt;
variable reb_lag12 = rebin_fouquan(r.freq, r.lag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errlag12 = rebin_fouquan(r.freq, r.errlag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err(reb_lag12.freq_lo, reb_lag12.freq_hi, reb_lag12.value, reb_errlag12.value/sqrt(reb_errlag12.n) );&lt;br /&gt;
&lt;br /&gt;
%% Rebin coherence function&lt;br /&gt;
variable reb_cof12 = rebin_fouquan(r.freq, r.cof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errcof12 = rebin_fouquan(r.freq, r.errcof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err( reb_cof12.freq_lo, reb_cof12.freq_hi, reb_cof12.value, reb_errcof12.value/sqrt(reb_errcof12.n) );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
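The meaning of the time lag can be sketched with two artificial signals: the phase of the cross spectrum at frequency f, divided by 2*pi*f, is the time shift between the two bands. A toy Python example (the sign convention here is an assumption chosen so that a positive lag means the second band lags the first; foucalc defines its own convention):

```python
# Toy example: recover a known time shift from the cross-spectrum phase.
import numpy as np

dt = 0.01
t = np.arange(0.0, 100.0, dt)
f0 = 1.0                                   # signal frequency (Hz)
lag_true = 0.05                            # band 2 lags band 1 by 50 ms
x = np.sin(2 * np.pi * f0 * t)
y = np.sin(2 * np.pi * f0 * (t - lag_true))

X, Y = np.fft.rfft(x), np.fft.rfft(y)
freq = np.fft.rfftfreq(len(t), dt)
cross = X * np.conj(Y)                     # cross spectrum of the two bands
k = np.argmax(np.abs(X))                   # bin of the signal frequency
lag = np.angle(cross[k]) / (2 * np.pi * freq[k])
```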
&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.png]] [[File:Gx339m4_xmm_coherence.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.pdf]] [[File:Gx339m4_xmm_coherence.pdf]]&lt;br /&gt;
&lt;br /&gt;
[TODO: explain the large error bars, the decay, coherence &amp;gt; 1, and how to build the lag-energy spectrum.]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Isis / Slang]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1982</id>
		<title>Timing tools</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1982"/>
		<updated>2020-02-12T15:47:44Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
[In work! You might use it as a reference in the meantime, but some things are not yet covered. Also let me know if things are not clear! -- VG]&lt;br /&gt;
&lt;br /&gt;
== Foucalc ==&lt;br /&gt;
&lt;br /&gt;
The basic tool for X-ray timing analysis with the Remeis scripts is ''foucalc'' &amp;lt;ref&amp;gt;The original version of the Remeis timing scripts was written by Katja Pottschmidt. When using the tools, please cite [http://adsabs.harvard.edu/abs/2003A%26A...407.1039P Pottschmidt et al. 2003, A&amp;amp;A 407, 1039] and [http://adsabs.harvard.edu/abs/2000A%26A...357L..17P Pottschmidt et al. 2000, A&amp;amp;A 357, L17]; see also [http://adsabs.harvard.edu/abs/1999ApJ...510..874N Nowak et al., 1999, ApJ 510, 874]&amp;lt;/ref&amp;gt;, which calculates power-, cross power-, coherence- and timelag-spectra &amp;lt;ref&amp;gt;What these are is wonderfully explained in Katja Pottschmidt's [http://astro.uni-tuebingen.de/publications/pottschmidt-diss.shtml PhD Thesis]&amp;lt;/ref&amp;gt;. It requires '''equally spaced, gapless''' lightcurves as well as the length of the lightcurve segment on which the Fourier transformation is to be performed, given in number of bins. Note that because the underlying algorithm is an FFT, the segment length should be a power of two for performance.&lt;br /&gt;
&lt;br /&gt;
When only one lightcurve is given, &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; calculates the power density and related quantities only. When more than one lightcurve is given, the cross power-, coherence- and timelag-spectra are calculated for all pairs of lightcurves. The lightcurves have to be in the format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lc = struct { time=[t_1, t_2, ...], rate1=[r1_1, r1_2, ...], rate2=..., rate3= };&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The time has to be given in seconds (so be cautious when using &amp;lt;tt&amp;gt;fits_read_lc&amp;lt;/tt&amp;gt;, which usually returns the time in MJD).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;If you use RXTE/PCA lightcurves, don't forget to set the &amp;lt;tt&amp;gt;numinst&amp;lt;/tt&amp;gt; keyword, which defines the number of instruments (PCUs) that are switched on; otherwise your noise correction will be wrong. If you are not using RXTE/PCA, set the &amp;lt;tt&amp;gt;deadtime&amp;lt;/tt&amp;gt; keyword to whatever is appropriate for the instrument you are using (e.g. 0).&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Given a single PSD value before any further calculations, its uncertainty will be of the same order of magnitude as the PSD value itself. Averaging is therefore crucial. This is built into &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt;: given a lightcurve of, say, 1288192 bins and a segment length of 16384, foucalc will calculate a PSD on every segment and average every value over all 1288192/16384 = 78 segments.&lt;br /&gt;
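The effect of this averaging can be demonstrated on synthetic white noise (a small-scale Python sketch of the idea, not foucalc itself):

```python
# The periodogram of a single segment has ~100% uncertainty per value;
# averaging M segments shrinks the scatter by roughly 1/sqrt(M).
import numpy as np

rng = np.random.default_rng(0)
dimseg = 16384
nseg = 9                                   # number of whole segments
lc = rng.normal(size=nseg * dimseg)        # gapless white-noise lightcurve

segs = lc.reshape(nseg, dimseg)
psds = np.abs(np.fft.rfft(segs, axis=1)[:, 1:]) ** 2   # one periodogram per segment
avg_psd = psds.mean(axis=0)                            # averaged PSD

single_scatter = psds[0].std() / psds[0].mean()   # ~1, i.e. ~100% per value
avg_scatter = avg_psd.std() / avg_psd.mean()      # ~1/sqrt(nseg)
```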
&lt;br /&gt;
== PSDs ==&lt;br /&gt;
&lt;br /&gt;
=== Calculating the PSD ===&lt;br /&gt;
&lt;br /&gt;
Given two PCA lightcurves, &amp;lt;tt&amp;gt;50110-01-45-00_437_AND_good_14off_excl_0-10.lc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;50110-01-45-00_437_AND_good_14off_excl_11-13.lc&amp;lt;/tt&amp;gt; (which have been checked for gaps and found to be gapless &amp;lt;ref&amp;gt;A useful function for this is &amp;lt;tt&amp;gt;split_lc_at_gaps&amp;lt;/tt&amp;gt;&amp;lt;/ref&amp;gt;), we want to calculate and bin the PSD, lags, and coherence.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% read in lightcurves&lt;br /&gt;
variable lc1 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_0-10.lc&amp;quot;);&lt;br /&gt;
variable lc2 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_11-13.lc&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%%define variables to store average rate of lightcurve &amp;amp; rms&lt;br /&gt;
variable rms,avg;&lt;br /&gt;
&lt;br /&gt;
%%calculate timing quantities&lt;br /&gt;
variable result = foucalc(struct{time=lc1.time,rate1=lc1.rate,rate2=lc2.rate},16384;&lt;br /&gt;
                          numinst=3,RMS=&amp;amp;rms,avgrate=&amp;amp;avg,verbose);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This produces the following output - thanks to the &amp;lt;tt&amp;gt;verbose&amp;lt;/tt&amp;gt; keyword:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
light curves: 2516 s with 1288192 bins of 0.00195312 s  (0 s gaps)&lt;br /&gt;
  segmentation: 78 segments of 16384 bins&lt;br /&gt;
=&amp;gt; frequencies: 0.03125-256 Hz.&lt;br /&gt;
Power spectra will be calculated in Miyamoto-normalization.&lt;br /&gt;
Calculating power spectrum for rate1.  RMS(0.03125-256 Hz) = 27.1%&lt;br /&gt;
Calculating power spectrum for rate2.  RMS(0.03125-256 Hz) = 25.7%&lt;br /&gt;
Calculating cross power spectrum (=&amp;gt; coherence and timelags) for rate1 and rate2.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is currently the only place at which &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; will complain if the lightcurve contains gaps. It will calculate the quantities even if the lightcurve contains gaps - but then you cannot trust your results. Therefore: use verbose and check that you've done everything right, at least while building up your scripts.&lt;br /&gt;
&lt;br /&gt;
Plot the PSD for the first lightcurve with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
xlog;ylog;plot_with_err(result.freq,result.signormpsd1,result.errnormpsd1);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
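As an aside, the RMS percentages in the verbose output follow from the Miyamoto normalization: with the PSD in units of fractional rms squared per Hz, integrating it over frequency recovers the squared fractional rms of the lightcurve. A toy Python check on synthetic white noise (an illustration of the normalization, not the foucalc code):

```python
# Integrating a Miyamoto-normalized PSD over frequency recovers the
# squared fractional rms of the lightcurve.
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.00195312, 16384
rate = 1000.0 + 50.0 * rng.standard_normal(n)   # counts/s, mean ~1000

mean = rate.mean()
psd = 2.0 * dt / (n * mean**2) * np.abs(np.fft.rfft(rate - mean)[1:]) ** 2
df = 1.0 / (n * dt)                             # frequency resolution
frac_rms = np.sqrt(psd.sum() * df)              # fractional rms from the PSD
```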
&lt;br /&gt;
=== Rebinning the PSD ===&lt;br /&gt;
&lt;br /&gt;
The uncertainties on the individual PSD values are still large, especially at high frequencies. To further improve the SNR, the PSD is rebinned onto a new frequency grid: the final PSD value in each new bin is the average over the individual PSD values at the original frequencies falling into that bin. Mostly, a logarithmic rebinning with df/f = const. is used; a typical value is df/f = 0.15.&lt;br /&gt;
&lt;br /&gt;
Quantities calculated with &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; are rebinned with the &amp;lt;tt&amp;gt;rebin_fouquan&amp;lt;/tt&amp;gt; tool - the quantity and its error have to be rebinned in two individual steps:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable reb_psd = rebin_fouquan(result.freq,result.signormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
variable reb_err = rebin_fouquan(result.freq,result.errnormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;reb_err&amp;lt;/tt&amp;gt; does not yet contain the real errors of the PSD, but needs to be weighted with the square root of the number of values the averaging was performed over. The final PSD is therefore:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable psd1 = struct{freq_lo = reb_psd.freq_lo,freq_hi=reb_psd.freq_hi,&lt;br /&gt;
  value = reb_psd.value, error= reb_err.value/sqrt(reb_err.n)};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
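Conceptually, the logarithmic rebinning can be sketched like this (a toy Python illustration of the idea behind rebin_fouquan, with a made-up 1/f spectrum, not its actual implementation):

```python
# Logarithmic rebinning with df/f = const: bin widths grow geometrically,
# and each rebinned value is the mean of the raw values in that bin.
import numpy as np

def rebin_log(freq, value, dff=0.15):
    """Average `value` onto a grid whose bin width grows as df/f = dff."""
    edges = [freq[0]]
    while edges[-1] <= freq[-1]:
        edges.append(edges[-1] * (1.0 + dff))
    lo, hi, val, n = [], [], [], []
    for a, b in zip(edges[:-1], edges[1:]):
        sel = (freq >= a) & (freq < b)
        if sel.any():
            lo.append(a); hi.append(b)
            val.append(value[sel].mean())
            n.append(sel.sum())
    return np.array(lo), np.array(hi), np.array(val), np.array(n)

freq = np.linspace(0.03125, 256.0, 8192)   # toy frequency grid
psd = 1.0 / freq                           # toy 1/f power spectrum
lo, hi, val, n = rebin_log(freq, psd)
# the rebinned error would then be weighted by 1/sqrt(n), as in the text
```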
&lt;br /&gt;
Once again plot your PSD - either as PSD vs. frequency or PSD x frequency vs. frequency:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%%PSD vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2,psd1.value,psd1.error);&lt;br /&gt;
&lt;br /&gt;
%% PSD x frequency vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2, psd1.value*(psd1.freq_lo+psd1.freq_hi)/2,&lt;br /&gt;
psd1.error*(psd1.freq_lo+psd1.freq_hi)/2);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Lag-energy spectrum for archival XMM-Newton data ==&lt;br /&gt;
&lt;br /&gt;
--- under construction ---&lt;br /&gt;
&lt;br /&gt;
The following is an example of how to compute PSDs, a lag-frequency spectrum, and a lag-energy spectrum for archival XMM-Newton data of GX 339-4 (written by OK, please ask if something is unclear) &amp;lt;ref&amp;gt;The corresponding publication is [https://ui.adsabs.harvard.edu/abs/2011MNRAS.414L..60U/abstract Uttley, P. et al., MNRAS, Vol. 414, I. 1, pp. L60-L64 (2011)]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We start by extracting the XMM-Newton PN lightcurve in two energy bands (500-900 eV and 2000-3000 eV):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/tcsh&lt;br /&gt;
source $SOFTDIR/sas_init.csh # initialize the SAS software&lt;br /&gt;
set xmmscripts=${XMMTOOLS} # make sure the proper xmmscripts are loaded&lt;br /&gt;
set obsid='0204730201'&lt;br /&gt;
set datadir=/userdata/data/koenig/GX339-4/XMM-2004-03-16/odf&lt;br /&gt;
set dt=0.1&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmprepare --datadir=$datadir --prepdir=$obsid --pn --noapplyflaregti --timing --notimelog # prepare the data&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;500,900&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;2000,3000&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we can start the analysis. Let's load the data and correct them for the telemetry drop-outs (encoded in the Good Time Intervals):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Load the lightcurve and GTI file&lt;br /&gt;
a=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc&amp;quot;);&lt;br /&gt;
b=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc&amp;quot;);&lt;br /&gt;
gtia=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc[GTI0006]&amp;quot;);&lt;br /&gt;
gtib=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc[GTI0006]&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Get the start time such that LC starts at 0s&lt;br /&gt;
tstart=fits_read_key(&amp;quot;full_sd_0.1s_0500-0900.lc[1]&amp;quot;, &amp;quot;TSTART&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Subtract TSTART and do GTI correction&lt;br /&gt;
(filta,filtb)     = filter_gti(a.time, gtia), filter_gti(b.time,gtib);&lt;br /&gt;
(timea,timeb)     = a.time[filta]-tstart,b.time[filtb]-tstart;&lt;br /&gt;
(countsa,countsb) = a.counts[filta],b.counts[filtb];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we plot the (re-binned) lightcurve, there is not much to see.&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.jpg]]&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.pdf]]&lt;br /&gt;
&lt;br /&gt;
Fortunately, we can do a Fourier analysis to reveal the variability and correlations of these lightcurves (let's ignore the warnings that &amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; prints, if verbose is set, about gaps in the lightcurve - XXXX: explain why).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose segment length&lt;br /&gt;
dimseg=8192; %% (bins), should be a power of 2 for FFT&lt;br /&gt;
&lt;br /&gt;
%% Compute PSDs, CPD, and time lags with foucalc&lt;br /&gt;
r=foucalc( struct{time=timea, rate1=countsa/dt, rate2=countsb/dt}, dimseg ; normtype=&amp;quot;Miyamoto&amp;quot;);&lt;br /&gt;
xlog;ylog;plot_with_err(r.freq,r.signormpsd1*r.freq,r.errnormpsd1*r.freq);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd unbinned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx399m4 xmm psd unbinned.pdf]]&lt;br /&gt;
&lt;br /&gt;
This really needs some re-binning to increase the S/N-ratio, which can be done with the &amp;lt;code&amp;gt;rebin_fouquan&amp;lt;/code&amp;gt; routine.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose Delta f/f, the logarithmic frequency resolution&lt;br /&gt;
dff=0.15;&lt;br /&gt;
reb_psd1 = rebin_fouquan(r.freq, r.signormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err1 = rebin_fouquan(r.freq, r.errnormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_psd2 = rebin_fouquan(r.freq, r.signormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err2 = rebin_fouquan(r.freq, r.errnormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
&lt;br /&gt;
%% Need to weight the PSD error with the square root of the number of values the averaging was performed over&lt;br /&gt;
hplot_with_err( reb_psd1.freq_lo, reb_psd1.freq_hi, reb_psd1.value*reb_psd1.freq_lo, reb_err1.value/sqrt(reb_err1.n)*reb_psd1.freq_lo );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.pdf]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; also computes the time lag between the two lightcurves. Let's first rebin and then plot them together with the coherence:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Rebin time lag results&lt;br /&gt;
variable reb_lag12 = rebin_fouquan(r.freq, r.lag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errlag12 = rebin_fouquan(r.freq, r.errlag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err(reb_lag12.freq_lo, reb_lag12.freq_hi, reb_lag12.value, reb_errlag12.value/sqrt(reb_errlag12.n) );&lt;br /&gt;
&lt;br /&gt;
%% Rebin coherence function&lt;br /&gt;
variable reb_cof12 = rebin_fouquan(r.freq, r.cof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errcof12 = rebin_fouquan(r.freq, r.errcof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err( reb_cof12.freq_lo, reb_cof12.freq_hi, reb_cof12.value, reb_errcof12.value/sqrt(reb_errcof12.n) );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.png]] [[File:Gx339m4_xmm_coherence.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.pdf]] [[File:Gx339m4_xmm_coherence.pdf]]&lt;br /&gt;
&lt;br /&gt;
[TODO: explain the large error bars, the decay, coherence &amp;gt; 1, and how to build the lag-energy spectrum.]&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Isis / Slang]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1981</id>
		<title>Timing tools</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1981"/>
		<updated>2020-02-12T15:37:25Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
[In work! You might use it as a reference in the meantime, but some things are not yet covered. Also let me know if things are not clear! -- VG]&lt;br /&gt;
&lt;br /&gt;
== Foucalc ==&lt;br /&gt;
&lt;br /&gt;
The basic tool for X-ray timing analysis with the Remeis scripts is ''foucalc'' &amp;lt;ref&amp;gt;The original version of the Remeis timing scripts was written by Katja Pottschmidt. When using the tools, please cite [http://adsabs.harvard.edu/abs/2003A%26A...407.1039P Pottschmidt et al. 2003, A&amp;amp;A 407, 1039] and [http://adsabs.harvard.edu/abs/2000A%26A...357L..17P Pottschmidt et al. 2000, A&amp;amp;A 357, L17]; see also [http://adsabs.harvard.edu/abs/1999ApJ...510..874N Nowak et al., 1999, ApJ 510, 874]&amp;lt;/ref&amp;gt;, which calculates power-, cross power-, coherence- and timelag-spectra &amp;lt;ref&amp;gt;What these are is wonderfully explained in Katja Pottschmidt's [http://astro.uni-tuebingen.de/publications/pottschmidt-diss.shtml PhD Thesis]&amp;lt;/ref&amp;gt;. It requires '''equally spaced, gapless''' lightcurves as well as the length of the lightcurve segment on which the Fourier transformation is to be performed, given in number of bins. Note that because the underlying algorithm is an FFT, the segment length should be a power of two for performance.&lt;br /&gt;
&lt;br /&gt;
When only one lightcurve is given, &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; calculates the power density and related quantities only. When more than one lightcurve is given, the cross power-, coherence- and timelag-spectra are calculated for all pairs of lightcurves. The lightcurves have to be in the format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lc = struct { time=[t_1, t_2, ...], rate1=[r1_1, r1_2, ...], rate2=..., rate3= };&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The time has to be given in seconds (so be cautious when using &amp;lt;tt&amp;gt;fits_read_lc&amp;lt;/tt&amp;gt;, which usually returns the time in MJD).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;If you use RXTE/PCA lightcurves, don't forget to set the &amp;lt;tt&amp;gt;numinst&amp;lt;/tt&amp;gt; keyword, which defines the number of instruments (PCUs) that are switched on; otherwise your noise correction will be wrong. If you are not using RXTE/PCA, set the &amp;lt;tt&amp;gt;deadtime&amp;lt;/tt&amp;gt; keyword to whatever is appropriate for the instrument you are using (e.g. 0).&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Given a single PSD value before any further calculations, its uncertainty will be of the same order of magnitude as the PSD value itself. Averaging is therefore crucial. This is built into &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt;: given a lightcurve of, say, 1288192 bins and a segment length of 16384, foucalc will calculate a PSD on every segment and average every value over all 1288192/16384 = 78 segments.&lt;br /&gt;
&lt;br /&gt;
== PSDs ==&lt;br /&gt;
&lt;br /&gt;
=== Calculating the PSD ===&lt;br /&gt;
&lt;br /&gt;
Given two PCA lightcurves, &amp;lt;tt&amp;gt;50110-01-45-00_437_AND_good_14off_excl_0-10.lc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;50110-01-45-00_437_AND_good_14off_excl_11-13.lc&amp;lt;/tt&amp;gt; (which have been checked for gaps and found to be gapless &amp;lt;ref&amp;gt;A useful function for this is &amp;lt;tt&amp;gt;split_lc_at_gaps&amp;lt;/tt&amp;gt;&amp;lt;/ref&amp;gt;), we want to calculate and bin the PSD, lags, and coherence.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% read in lightcurves&lt;br /&gt;
variable lc1 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_0-10.lc&amp;quot;);&lt;br /&gt;
variable lc2 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_11-13.lc&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%%define variables to store average rate of lightcurve &amp;amp; rms&lt;br /&gt;
variable rms,avg;&lt;br /&gt;
&lt;br /&gt;
%%calculate timing quantities&lt;br /&gt;
variable result = foucalc(struct{time=lc1.time,rate1=lc1.rate,rate2=lc2.rate},16384;&lt;br /&gt;
                          numinst=3,RMS=&amp;amp;rms,avgrate=&amp;amp;avg,verbose);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This produces the following output - thanks to the &amp;lt;tt&amp;gt;verbose&amp;lt;/tt&amp;gt; keyword:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
light curves: 2516 s with 1288192 bins of 0.00195312 s  (0 s gaps)&lt;br /&gt;
  segmentation: 78 segments of 16384 bins&lt;br /&gt;
=&amp;gt; frequencies: 0.03125-256 Hz.&lt;br /&gt;
Power spectra will be calculated in Miyamoto-normalization.&lt;br /&gt;
Calculating power spectrum for rate1.  RMS(0.03125-256 Hz) = 27.1%&lt;br /&gt;
Calculating power spectrum for rate2.  RMS(0.03125-256 Hz) = 25.7%&lt;br /&gt;
Calculating cross power spectrum (=&amp;gt; coherence and timelags) for rate1 and rate2.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is currently the only place at which &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; will complain if the lightcurve contains gaps. It will calculate the quantities even if the lightcurve contains gaps - but then you cannot trust your results. Therefore: use verbose and check that you've done everything right, at least while building up your scripts.&lt;br /&gt;
&lt;br /&gt;
Plot the PSD for the first lightcurve with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
xlog;ylog;plot_with_err(result.freq,result.signormpsd1,result.errnormpsd1);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Rebinning the PSD ===&lt;br /&gt;
&lt;br /&gt;
The uncertainties on the individual PSD values are still large, especially at high frequencies. To further improve the SNR, the PSD is rebinned onto a new frequency grid: the final PSD value in each new bin is the average over the individual PSD values at the original frequencies falling into that bin. Mostly, a logarithmic rebinning with df/f = const. is used; a typical value is df/f = 0.15.&lt;br /&gt;
&lt;br /&gt;
Quantities calculated with &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; are rebinned with the &amp;lt;tt&amp;gt;rebin_fouquan&amp;lt;/tt&amp;gt; tool - the quantity and its error have to be rebinned in two individual steps:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable reb_psd = rebin_fouquan(result.freq,result.signormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
variable reb_err = rebin_fouquan(result.freq,result.errnormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;reb_err&amp;lt;/tt&amp;gt; does not yet contain the real errors of the PSD, but needs to be weighted with the square root of the number of values the averaging was performed over. The final PSD is therefore:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable psd1 = struct{freq_lo = reb_psd.freq_lo,freq_hi=reb_psd.freq_hi,&lt;br /&gt;
  value = reb_psd.value, error= reb_err.value/sqrt(reb_err.n)};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once again plot your PSD - either as PSD vs. frequency or PSD x frequency vs. frequency:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%%PSD vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2,psd1.value,psd1.error);&lt;br /&gt;
&lt;br /&gt;
%% PSD x frequency vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2, psd1.value*(psd1.freq_lo+psd1.freq_hi)/2,&lt;br /&gt;
psd1.error*(psd1.freq_lo+psd1.freq_hi)/2);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Lag-energy spectrum for archival XMM-Newton data ==&lt;br /&gt;
&lt;br /&gt;
--- under construction ---&lt;br /&gt;
&lt;br /&gt;
The following is an example of how to compute PSDs, a lag-frequency spectrum, and a lag-energy spectrum for archival XMM-Newton data of GX 339-4 (written by OK, please ask if something is unclear) &amp;lt;ref&amp;gt;The corresponding publication is [https://ui.adsabs.harvard.edu/abs/2011MNRAS.414L..60U/abstract Uttley, P. et al., MNRAS, Vol. 414, I. 1, pp. L60-L64 (2011)]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We start by extracting the XMM-Newton PN lightcurve in two energy bands (500-900 eV and 2000-3000 eV):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/tcsh&lt;br /&gt;
source $SOFTDIR/sas_init.csh # initialize the SAS software&lt;br /&gt;
set xmmscripts=${XMMTOOLS} # make sure the proper xmmscripts are loaded&lt;br /&gt;
set obsid='0204730201'&lt;br /&gt;
set datadir=/userdata/data/koenig/GX339-4/XMM-2004-03-16/odf&lt;br /&gt;
set dt=0.1&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmprepare --datadir=$datadir --prepdir=$obsid --pn --noapplyflaregti --timing --notimelog # prepare the data&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;500,900&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;2000,3000&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we can start the analysis. Let's load the data and correct them for the telemetry drop-outs (encoded in the Good Time Intervals):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Load the lightcurve and GTI file&lt;br /&gt;
a=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc&amp;quot;);&lt;br /&gt;
b=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc&amp;quot;);&lt;br /&gt;
gtia=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc[GTI0006]&amp;quot;);&lt;br /&gt;
gtib=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc[GTI0006]&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Get the start time such that LC starts at 0s&lt;br /&gt;
tstart=fits_read_key(&amp;quot;full_sd_0.1s_0500-0900.lc[1]&amp;quot;, &amp;quot;TSTART&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Subtract TSTART and do GTI correction&lt;br /&gt;
(filta,filtb)     = filter_gti(a.time, gtia), filter_gti(b.time,gtib);&lt;br /&gt;
(timea,timeb)     = a.time[filta]-tstart,b.time[filtb]-tstart;&lt;br /&gt;
(countsa,countsb) = a.counts[filta],b.counts[filtb];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we plot the (re-binned) lightcurve, there is not much to see.&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.jpg]]&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.pdf]]&lt;br /&gt;
&lt;br /&gt;
Fortunately, we can do a Fourier analysis to reveal the variability and correlations of these lightcurves (and ignore the warnings that &amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; prints, if verbose is set, about gaps in the lightcurve).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose segment length&lt;br /&gt;
dimseg=8192; %% (bins), should be a power of 2 for FFT&lt;br /&gt;
&lt;br /&gt;
%% Compute PSDs, CPD, and time lags with foucalc&lt;br /&gt;
r=foucalc( struct{time=timea, rate1=countsa/dt, rate2=countsb/dt}, dimseg ; normtype=&amp;quot;Miyamoto&amp;quot;);&lt;br /&gt;
xlog;ylog;plot_with_err(r.freq,r.signormpsd1*r.freq,r.errnormpsd1*r.freq);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd unbinned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx399m4 xmm psd unbinned.pdf]]&lt;br /&gt;
&lt;br /&gt;
This really needs some re-binning to increase the S/N-ratio, which can be done with the &amp;lt;code&amp;gt;rebin_fouquan&amp;lt;/code&amp;gt; routine.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose Delta f/f, the logarithmic frequency resolution&lt;br /&gt;
dff=0.15;&lt;br /&gt;
reb_psd1 = rebin_fouquan(r.freq, r.signormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err1 = rebin_fouquan(r.freq, r.errnormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_psd2 = rebin_fouquan(r.freq, r.signormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err2 = rebin_fouquan(r.freq, r.errnormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
&lt;br /&gt;
%% Need to weight the PSD error with the square root of the number of values the averaging was performed over&lt;br /&gt;
hplot_with_err( reb_psd1.freq_lo, reb_psd1.freq_hi, reb_psd1.value*reb_psd1.freq_lo, reb_err1.value/sqrt(reb_err1.n)*reb_psd1.freq_lo );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.pdf]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; also computes the time lag between the two lightcurves. Let's first rebin and then plot them together with the coherence:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Rebin time lag results&lt;br /&gt;
variable reb_lag12 = rebin_fouquan(r.freq, r.lag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errlag12 = rebin_fouquan(r.freq, r.errlag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err(reb_lag12.freq_lo, reb_lag12.freq_hi, reb_lag12.value, reb_errlag12.value/sqrt(reb_errlag12.n) );&lt;br /&gt;
&lt;br /&gt;
%% Rebin coherence function&lt;br /&gt;
variable reb_cof12 = rebin_fouquan(r.freq, r.cof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errcof12 = rebin_fouquan(r.freq, r.errcof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err( reb_cof12.freq_lo, reb_cof12.freq_hi, reb_cof12.value, reb_errcof12.value/sqrt(reb_errcof12.n) );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.png]] [[File:Gx339m4_xmm_coherence.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.pdf]] [[File:Gx339m4_xmm_coherence.pdf]]&lt;br /&gt;
&lt;br /&gt;
Explain large error bars, explain decay, explain coherence&amp;gt;1&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Isis / Slang]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1980</id>
		<title>Timing tools</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1980"/>
		<updated>2020-02-12T15:36:40Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
[In work! You might use it as a reference in the meantime, but some things are not yet covered. Also let me know if things are not clear! -- VG]&lt;br /&gt;
&lt;br /&gt;
== Foucalc ==&lt;br /&gt;
&lt;br /&gt;
The basic tool for X-ray timing analysis with the Remeis scripts is ''foucalc'' &amp;lt;ref&amp;gt;The original version of the Remeis timing scripts were written by Katja Pottschmidt. When using the tools, please cite [http://adsabs.harvard.edu/abs/2003A%26A...407.1039P Pottschmidt et al. 2003, A&amp;amp;A 407, 1039] and [http://adsabs.harvard.edu/abs/2000A%26A...357L..17P Pottschmidt et al. 2000, A&amp;amp;A  357, L17]; see also [http://adsabs.harvard.edu/abs/1999ApJ...510..874N Nowak et al., 1999, ApJ 510, 874]&amp;lt;/ref&amp;gt;, which calculates power-, cross power-, coherence- and timelag-spectra &amp;lt;ref&amp;gt;What these are is wonderfully explained in Katja Pottschmidt's [http://astro.uni-tuebingen.de/publications/pottschmidt-diss.shtml PhD Thesis]&amp;lt;/ref&amp;gt;. It requires '''equally spaced, gapless''' lightcurves as well as the length of the lightcurve segment on which the Fourier transformation is to be performed, given in number of bins. Note that because the underlying algorithm is FFT, the segment length should be a power of two to increase the performance.&lt;br /&gt;
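To make the lag quantities concrete, here is a minimal, self-contained Python sketch (not part of the isisscripts, and not foucalc's actual implementation) of how a time lag follows from the phase of the cross power spectrum, lag(f) = phase(f)/(2*pi*f). Sign conventions differ between implementations, so treat the sign as illustrative only.

```python
import numpy as np

def cross_lag(rate1, rate2, dt):
    """Time lag vs. frequency from the cross-spectrum phase (illustration).

    lag(f) = arg( conj(FFT(rate1)) * FFT(rate2) ) / (2*pi*f)
    """
    n = len(rate1)
    freq = np.fft.rfftfreq(n, d=dt)[1:]                # drop the zero frequency
    cpd = np.conj(np.fft.rfft(rate1))[1:] * np.fft.rfft(rate2)[1:]
    return freq, np.angle(cpd) / (2.0 * np.pi * freq)
```

For two sinusoids shifted against each other by 0.05 s, the recovered lag at the sinusoid frequency is 0.05 s in magnitude (the sign depends on the convention chosen above).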
&lt;br /&gt;
When only one lightcurve is given, &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; calculates the power density and related quantities only. When more than one lightcurve is given, the cross power-, coherence- and timelag-spectra are calculated for all pairs of lightcurves. The lightcurves have to be in the format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lc = struct { time=[t_1, t_2, ...], rate1=[r1_1, r1_2, ...], rate2=..., rate3= };&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The time has to be given in seconds (so be cautious when using &amp;lt;tt&amp;gt;fits_read_lc&amp;lt;/tt&amp;gt;, which usually returns the time in MJD).&lt;br /&gt;
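As a quick illustration of the unit conversion (plain Python, with made-up MJD values): a time column in days must be scaled by 86400 to obtain seconds, here taken relative to the first bin.

```python
# Hypothetical MJD time stamps, 1 s apart (1 day = 86400 s):
mjd = [58890.0, 58890.0 + 1.0/86400, 58890.0 + 2.0/86400]

# Convert to seconds relative to the first bin:
time_s = [(t - mjd[0]) * 86400.0 for t in mjd]
```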
&lt;br /&gt;
&amp;lt;b&amp;gt;If you use RXTE/PCA lightcurves, don't forget to set the &amp;lt;tt&amp;gt;numinst&amp;lt;/tt&amp;gt; keyword, which defines the number of instruments (=PCUs) on, since otherwise your noise correction will be wrong. If you are not using RXTE/PCA set the &amp;lt;tt&amp;gt;deadtime&amp;lt;/tt&amp;gt; keyword to whatever is appropriate for the instrument you are using (e.g. 0).&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Given a single PSD value before any further calculations, its uncertainty will be of the same order of magnitude as the PSD itself. Averaging is therefore crucial. This is built into &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt;: given a lightcurve of, say, 1288192 bins and a segment length of 16384, foucalc will calculate a PSD on every segment and average every value over all 1288192/16384 = 78 segments.&lt;br /&gt;
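The segmenting-and-averaging scheme can be sketched in a few lines of Python (a simplified stand-in for what foucalc does internally; the real routine additionally handles noise and deadtime corrections and other normalizations). In the Miyamoto ("rms") normalization, P(f) = 2*dt/(N*mean^2) * |FFT|^2, so the PSD integrates to the squared fractional rms.

```python
import numpy as np

def avg_miyamoto_psd(rate, dt, dimseg):
    """Average Miyamoto-normalized PSDs over consecutive segments of
    dimseg bins (simplified sketch; no noise/deadtime correction)."""
    nseg = len(rate) // dimseg                      # number of full segments
    freq = np.fft.rfftfreq(dimseg, d=dt)[1:]        # drop the zero frequency
    psds = []
    for i in range(nseg):
        seg = rate[i*dimseg:(i+1)*dimseg]
        power = np.abs(np.fft.rfft(seg)[1:])**2
        # Miyamoto normalization: integral of the PSD = (rms/mean)^2
        psds.append(2.0 * dt / (dimseg * seg.mean()**2) * power)
    return freq, np.mean(psds, axis=0), nseg
```

Averaging over nseg segments reduces the relative uncertainty of each PSD value by roughly a factor 1/sqrt(nseg).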
&lt;br /&gt;
== PSDs ==&lt;br /&gt;
&lt;br /&gt;
=== Calculating the PSD ===&lt;br /&gt;
&lt;br /&gt;
Given two PCA lightcurves, &amp;lt;tt&amp;gt;50110-01-45-00_438_AND_good_14off_excl_0-10.lc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;50110-01-45-00_438_AND_good_14off_excl_11-13.lc&amp;lt;/tt&amp;gt; (which have been checked for gaps and found to be gapless&amp;lt;ref&amp;gt;A useful function for this is &amp;lt;tt&amp;gt;split_lc_at_gaps&amp;lt;/tt&amp;gt;&amp;lt;/ref&amp;gt;), we want to calculate and bin the PSD, lags, and coherence.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% read in lightcurves&lt;br /&gt;
variable lc1 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_0-10.lc&amp;quot;);&lt;br /&gt;
variable lc2 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_11-13.lc&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%%define variables to store average rate of lightcurve &amp;amp; rms&lt;br /&gt;
variable rms,avg;&lt;br /&gt;
&lt;br /&gt;
%%calculate timing quantities&lt;br /&gt;
variable result = foucalc(struct{time=lc1.time,rate1=lc1.rate,rate2=lc2.rate},16384;&lt;br /&gt;
                          numinst=3,RMS=&amp;amp;rms,avgrate=&amp;amp;avg,verbose);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This produces the following output, thanks to the &amp;lt;tt&amp;gt;verbose&amp;lt;/tt&amp;gt; keyword:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
light curves: 2516 s with 1288192 bins of 0.00195312 s  (0 s gaps)&lt;br /&gt;
  segmentation: 78 segments of 16384 bins&lt;br /&gt;
=&amp;gt; frequencies: 0.03125-256 Hz.&lt;br /&gt;
Power spectra will be calculated in Miyamoto-normalization.&lt;br /&gt;
Calculating power spectrum for rate1.  RMS(0.03125-256 Hz) = 27.1%&lt;br /&gt;
Calculating power spectrum for rate2.  RMS(0.03125-256 Hz) = 25.7%&lt;br /&gt;
Calculating cross power spectrum (=&amp;gt; coherence and timelags) for rate1 and rate2.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This verbose output is currently the only place where &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; will complain if the lightcurve contains gaps. It will calculate the quantities even for a lightcurve with gaps; you just can't trust the results. Therefore: use the &amp;lt;tt&amp;gt;verbose&amp;lt;/tt&amp;gt; keyword and check that you've done everything right, at least while building up your scripts.&lt;br /&gt;
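A minimal hand-rolled gap check (illustrative Python; within ISIS, split_lc_at_gaps does this job) is to test whether consecutive time stamps differ by exactly one bin width:

```python
def has_gaps(time, dt, tol=1e-6):
    """True if any pair of consecutive time stamps is not dt apart
    (within a relative tolerance), i.e. the lightcurve has gaps."""
    return any(abs((t2 - t1) - dt) > tol * dt
               for t1, t2 in zip(time, time[1:]))
```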
&lt;br /&gt;
Plot the PSD for the first lightcurve with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
xlog;ylog;plot_with_err(result.freq,result.signormpsd1,result.errnormpsd1);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Rebinning the PSD ===&lt;br /&gt;
&lt;br /&gt;
The uncertainties on the individual PSD values are still large, especially at high frequencies. To further improve the SNR, the PSD is rebinned onto a new frequency grid: each value of the final PSD is the average over all individual PSD values whose frequencies fall into the new frequency bin. Usually a logarithmic rebinning with df/f = const. is used; a typical value is df/f = 0.15.&lt;br /&gt;
&lt;br /&gt;
Quantities calculated with &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; are rebinned with the &amp;lt;tt&amp;gt;rebin_fouquan&amp;lt;/tt&amp;gt; tool; the quantity and its error have to be rebinned in two separate steps:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable reb_psd = rebin_fouquan(result.freq,result.signormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
variable reb_err = rebin_fouquan(result.freq,result.errnormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
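The df/f = const. grid used by the logfreq qualifier can be illustrated with a short Python snippet (a hypothetical helper, not the actual rebin_fouquan implementation): each bin edge is the previous one multiplied by (1 + df/f).

```python
def log_frequency_edges(fmin, fmax, dff):
    """Bin edges of a logarithmic grid with constant df/f = dff."""
    edges = [fmin]
    while edges[-1] < fmax:
        edges.append(edges[-1] * (1.0 + dff))
    return edges
```

For the 0.03125-256 Hz range of the example above and dff = 0.15, this yields about 65 logarithmic bins instead of 8192 linear ones.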
&lt;br /&gt;
&amp;lt;tt&amp;gt;reb_err&amp;lt;/tt&amp;gt; does not yet contain the real errors of the PSD; it still has to be divided by the square root of the number of values the averaging was performed over. The final PSD is therefore:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable psd1 = struct{freq_lo = reb_psd.freq_lo,freq_hi=reb_psd.freq_hi,&lt;br /&gt;
  value = reb_psd.value, error= reb_err.value/sqrt(reb_err.n)};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
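Why the division by sqrt(n): each rebinned error value is the mean of n independent single-bin error estimates, and the standard error of a mean of n independent estimates is smaller than a single estimate by a factor sqrt(n). A toy illustration (the numbers are made up):

```python
import math

# n raw error estimates of typical size sigma fall into one frequency bin:
sigma, n = 0.4, 16

# The uncertainty of their average is sigma/sqrt(n), not sigma itself:
final_error = sigma / math.sqrt(n)
```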
&lt;br /&gt;
Once again, plot your PSD, either as PSD vs. frequency or as PSD x frequency vs. frequency:&lt;br /&gt;
                                 &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%%PSD vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2,psd1.value,psd1.error);&lt;br /&gt;
&lt;br /&gt;
%%PSD x frequency vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2, psd1.value*(psd1.freq_lo+psd1.freq_hi)/2,&lt;br /&gt;
psd1.error*(psd1.freq_lo+psd1.freq_hi)/2);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Lag-energy spectrum for archival XMM-Newton data ==&lt;br /&gt;
&lt;br /&gt;
--- under construction ---&lt;br /&gt;
&lt;br /&gt;
The following is an example of how to compute PSDs, a lag-frequency spectrum, and a lag-energy spectrum from archival XMM-Newton data of GX 339-4 (written by OK, please ask if something is unclear)&amp;lt;ref&amp;gt;The corresponding publication is [https://ui.adsabs.harvard.edu/abs/2011MNRAS.414L..60U/abstract Uttley et al. 2011, MNRAS 414, L60]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We start by extracting the XMM-Newton PN lightcurve in two energy bands (500-900 eV and 2000-3000 eV):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/tcsh&lt;br /&gt;
source $SOFTDIR/sas_init.csh # initialize the SAS software&lt;br /&gt;
set xmmscripts=${XMMTOOLS} # make sure the proper xmmscripts are loaded&lt;br /&gt;
set obsid='0204730201'&lt;br /&gt;
set datadir=/userdata/data/koenig/GX339-4/XMM-2004-03-16/odf&lt;br /&gt;
set dt=0.1&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmprepare --datadir=$datadir --prepdir=$obsid --pn --noapplyflaregti --timing --notimelog # prepare the data&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;500,900&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we can start the analysis. Let's load the data and correct them for the telemetry drop-outs (encoded in the Good Time Intervals):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Load the lightcurve and GTI file&lt;br /&gt;
a=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc&amp;quot;);&lt;br /&gt;
b=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc&amp;quot;);&lt;br /&gt;
gtia=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc[GTI0006]&amp;quot;);&lt;br /&gt;
gtib=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc[GTI0006]&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Get the start time such that LC starts at 0s&lt;br /&gt;
tstart=fits_read_key(&amp;quot;full_sd_0.1s_0500-0900.lc[1]&amp;quot;, &amp;quot;TSTART&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Subtract TSTART and do GTI correction&lt;br /&gt;
(filta,filtb)     = filter_gti(a.time, gtia), filter_gti(b.time,gtib);&lt;br /&gt;
(timea,timeb)     = a.time[filta]-tstart,b.time[filtb]-tstart;&lt;br /&gt;
(countsa,countsb) = a.counts[filta],b.counts[filtb];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
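The GTI filtering step can be mimicked in plain Python (an illustrative stand-in for the isisscripts' filter_gti; the interval column names are assumptions): keep only the bins whose time falls inside one of the good time intervals.

```python
def filter_gti(time, gti_start, gti_stop):
    """Indices of `time` values lying inside any [start, stop] interval."""
    return [i for i, t in enumerate(time)
            if any(s <= t <= e for s, e in zip(gti_start, gti_stop))]
```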
&lt;br /&gt;
If we plot the (re-binned) lightcurve, there is not much to see.&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.jpg]]&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.pdf]]&lt;br /&gt;
&lt;br /&gt;
But we can do a Fourier analysis to reveal the variability and correlations of these lightcurves (if verbose is set, ignore the warnings of &amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; about gaps in the lightcurve).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose segment length&lt;br /&gt;
dimseg=8192; %% (bins), should be a power of 2 for FFT&lt;br /&gt;
&lt;br /&gt;
%% Compute PSDs, CPD, and time lags with foucalc&lt;br /&gt;
r=foucalc( struct{time=timea, rate1=countsa/dt, rate2=countsb/dt}, dimseg ; normtype=&amp;quot;Miyamoto&amp;quot;);&lt;br /&gt;
xlog;ylog;plot_with_err(r.freq,r.signormpsd1*r.freq,r.errnormpsd1*r.freq);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd unbinned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx399m4 xmm psd unbinned.pdf]]&lt;br /&gt;
&lt;br /&gt;
This really needs some re-binning to increase the S/N-ratio, which can be done with the &amp;lt;code&amp;gt;rebin_fouquan&amp;lt;/code&amp;gt; routine.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose Delta f/f, the logarithmic frequency resolution&lt;br /&gt;
dff=0.15;&lt;br /&gt;
reb_psd1 = rebin_fouquan(r.freq, r.signormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err1 = rebin_fouquan(r.freq, r.errnormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_psd2 = rebin_fouquan(r.freq, r.signormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err2 = rebin_fouquan(r.freq, r.errnormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
&lt;br /&gt;
%% Need to weight the PSD error with the square root of the number of values the averaging was performed over&lt;br /&gt;
hplot_with_err( reb_psd1.freq_lo, reb_psd1.freq_hi, reb_psd1.value*reb_psd1.freq_lo, reb_err1.value/sqrt(reb_err1.n)*reb_psd1.freq_lo );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.pdf]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; also computes the time lag between the two lightcurves. Let's first rebin and then plot them together with the coherence:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Rebin time lag results&lt;br /&gt;
variable reb_lag12 = rebin_fouquan(r.freq, r.lag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errlag12 = rebin_fouquan(r.freq, r.errlag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err(reb_lag12.freq_lo, reb_lag12.freq_hi, reb_lag12.value, reb_errlag12.value/sqrt(reb_errlag12.n) );&lt;br /&gt;
&lt;br /&gt;
%% Rebin coherence function&lt;br /&gt;
variable reb_cof12 = rebin_fouquan(r.freq, r.cof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errcof12 = rebin_fouquan(r.freq, r.errcof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err( reb_cof12.freq_lo, reb_cof12.freq_hi, reb_cof12.value, reb_errcof12.value/sqrt(reb_errcof12.n) );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.png]] [[File:Gx339m4_xmm_coherence.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.pdf]] [[File:Gx339m4_xmm_coherence.pdf]]&lt;br /&gt;
&lt;br /&gt;
Explain large error bars, explain decay, explain coherence&amp;gt;1&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Isis / Slang]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1979</id>
		<title>Timing tools</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1979"/>
		<updated>2020-02-12T15:30:45Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
[In work! You might use it as a reference in the meantime, but some things are not yet covered. Also let me know if things are not clear! -- VG]&lt;br /&gt;
&lt;br /&gt;
== Foucalc ==&lt;br /&gt;
&lt;br /&gt;
The basic tool for X-ray timing analysis with the Remeis scripts is ''foucalc'' &amp;lt;ref&amp;gt;The original version of the Remeis timing scripts were written by Katja Pottschmidt. When using the tools, please cite [http://adsabs.harvard.edu/abs/2003A%26A...407.1039P Pottschmidt et al. 2003, A&amp;amp;A 407, 1039] and [http://adsabs.harvard.edu/abs/2000A%26A...357L..17P Pottschmidt et al. 2000, A&amp;amp;A  357, L17]; see also [http://adsabs.harvard.edu/abs/1999ApJ...510..874N Nowak et al., 1999, ApJ 510, 874]&amp;lt;/ref&amp;gt;, which calculates power-, cross power-, coherence- and timelag-spectra &amp;lt;ref&amp;gt;What these are is wonderfully explained in Katja Pottschmidt's [http://astro.uni-tuebingen.de/publications/pottschmidt-diss.shtml PhD Thesis]&amp;lt;/ref&amp;gt;. It requires '''equally spaced, gapless''' lightcurves as well as the length of the lightcurve segment on which the Fourier transformation is to be performed, given in number of bins. Note that because the underlying algorithm is FFT, the segment length should be a power of two to increase the performance.&lt;br /&gt;
&lt;br /&gt;
When only one lightcurve is given, &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; calculates the power density and related quantities only. When more than one lightcurve is given, the cross power-, coherence- and timelag-spectra are calculated for all pairs of lightcurves. The lightcurves have to be in the format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lc = struct { time=[t_1, t_2, ...], rate1=[r1_1, r1_2, ...], rate2=..., rate3= };&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The time has to be given in seconds (so be cautious when using &amp;lt;tt&amp;gt;fits_read_lc&amp;lt;/tt&amp;gt;, which usually returns the time in MJD).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;If you use RXTE/PCA lightcurves, don't forget to set the &amp;lt;tt&amp;gt;numinst&amp;lt;/tt&amp;gt; keyword, which defines the number of instruments (=PCUs) on, since otherwise your noise correction will be wrong. If you are not using RXTE/PCA set the &amp;lt;tt&amp;gt;deadtime&amp;lt;/tt&amp;gt; keyword to whatever is appropriate for the instrument you are using (e.g. 0).&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Given a single PSD value before any further calculations, its uncertainty will be of the same order of magnitude as the PSD itself. Averaging is therefore crucial. This is built into &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt;: given a lightcurve of, say, 1288192 bins and a segment length of 16384, foucalc will calculate a PSD on every segment and average every value over all 1288192/16384 = 78 segments.&lt;br /&gt;
&lt;br /&gt;
== PSDs ==&lt;br /&gt;
&lt;br /&gt;
=== Calculating the PSD ===&lt;br /&gt;
&lt;br /&gt;
Given two PCA lightcurves, &amp;lt;tt&amp;gt;50110-01-45-00_438_AND_good_14off_excl_0-10.lc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;50110-01-45-00_438_AND_good_14off_excl_11-13.lc&amp;lt;/tt&amp;gt; (which have been checked for gaps and found to be gapless&amp;lt;ref&amp;gt;A useful function for this is &amp;lt;tt&amp;gt;split_lc_at_gaps&amp;lt;/tt&amp;gt;&amp;lt;/ref&amp;gt;), we want to calculate and bin the PSD, lags, and coherence.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% read in lightcurves&lt;br /&gt;
variable lc1 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_0-10.lc&amp;quot;);&lt;br /&gt;
variable lc2 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_11-13.lc&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%%define variables to store average rate of lightcurve &amp;amp; rms&lt;br /&gt;
variable rms,avg;&lt;br /&gt;
&lt;br /&gt;
%%calculate timing quantities&lt;br /&gt;
variable result = foucalc(struct{time=lc1.time,rate1=lc1.rate,rate2=lc2.rate},16384;&lt;br /&gt;
                          numinst=3,RMS=&amp;amp;rms,avgrate=&amp;amp;avg,verbose);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This produces the following output, thanks to the &amp;lt;tt&amp;gt;verbose&amp;lt;/tt&amp;gt; keyword:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
light curves: 2516 s with 1288192 bins of 0.00195312 s  (0 s gaps)&lt;br /&gt;
  segmentation: 78 segments of 16384 bins&lt;br /&gt;
=&amp;gt; frequencies: 0.03125-256 Hz.&lt;br /&gt;
Power spectra will be calculated in Miyamoto-normalization.&lt;br /&gt;
Calculating power spectrum for rate1.  RMS(0.03125-256 Hz) = 27.1%&lt;br /&gt;
Calculating power spectrum for rate2.  RMS(0.03125-256 Hz) = 25.7%&lt;br /&gt;
Calculating cross power spectrum (=&amp;gt; coherence and timelags) for rate1 and rate2.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This verbose output is currently the only place where &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; will complain if the lightcurve contains gaps. It will calculate the quantities even for a lightcurve with gaps; you just can't trust the results. Therefore: use the &amp;lt;tt&amp;gt;verbose&amp;lt;/tt&amp;gt; keyword and check that you've done everything right, at least while building up your scripts.&lt;br /&gt;
&lt;br /&gt;
Plot the PSD for the first lightcurve with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
xlog;ylog;plot_with_err(result.freq,result.signormpsd1,result.errnormpsd1);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Rebinning the PSD ===&lt;br /&gt;
&lt;br /&gt;
The uncertainties on the individual PSD values are still large, especially at high frequencies. To further improve the SNR, the PSD is rebinned onto a new frequency grid: each value of the final PSD is the average over all individual PSD values whose frequencies fall into the new frequency bin. Usually a logarithmic rebinning with df/f = const. is used; a typical value is df/f = 0.15.&lt;br /&gt;
&lt;br /&gt;
Quantities calculated with &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; are rebinned with the &amp;lt;tt&amp;gt;rebin_fouquan&amp;lt;/tt&amp;gt; tool; the quantity and its error have to be rebinned in two separate steps:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable reb_psd = rebin_fouquan(result.freq,result.signormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
variable reb_err = rebin_fouquan(result.freq,result.errnormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;reb_err&amp;lt;/tt&amp;gt; does not yet contain the real errors of the PSD; it still has to be divided by the square root of the number of values the averaging was performed over. The final PSD is therefore:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable psd1 = struct{freq_lo = reb_psd.freq_lo,freq_hi=reb_psd.freq_hi,&lt;br /&gt;
  value = reb_psd.value, error= reb_err.value/sqrt(reb_err.n)};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once again, plot your PSD, either as PSD vs. frequency or as PSD x frequency vs. frequency:&lt;br /&gt;
                                 &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%%PSD vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2,psd1.value,psd1.error);&lt;br /&gt;
&lt;br /&gt;
%%PSD x frequency vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2, psd1.value*(psd1.freq_lo+psd1.freq_hi)/2,&lt;br /&gt;
psd1.error*(psd1.freq_lo+psd1.freq_hi)/2);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Lag-energy spectrum for archival XMM-Newton data ==&lt;br /&gt;
&lt;br /&gt;
--- under construction ---&lt;br /&gt;
&lt;br /&gt;
The following is an example of how to compute PSDs, a lag-frequency spectrum, and a lag-energy spectrum from archival XMM-Newton data of GX 339-4 (written by OK, please ask if something is unclear)&amp;lt;ref&amp;gt;The corresponding publication is [https://ui.adsabs.harvard.edu/abs/2011MNRAS.414L..60U/abstract Uttley et al. 2011, MNRAS 414, L60]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We start by extracting the lightcurve in two energy bands (500-900 eV and 2000-3000 eV) with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/tcsh&lt;br /&gt;
source $SOFTDIR/sas_init.csh # initialize the SAS software&lt;br /&gt;
set xmmscripts=${XMMTOOLS} # make sure the proper xmmscripts are loaded&lt;br /&gt;
set obsid='0204730201'&lt;br /&gt;
set datadir=/userdata/data/koenig/GX339-4/XMM-2004-03-16/odf&lt;br /&gt;
set dt=0.1&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmprepare --datadir=$datadir --prepdir=$obsid --pn --noapplyflaregti --timing --notimelog # prepare the data&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;500,900&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we load the data and correct them for the telemetry drop-outs&lt;br /&gt;
(encoded in the Good Time Intervals):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Load the lightcurve and GTI file&lt;br /&gt;
a=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc&amp;quot;);&lt;br /&gt;
b=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc&amp;quot;);&lt;br /&gt;
gtia=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc[GTI0006]&amp;quot;);&lt;br /&gt;
gtib=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc[GTI0006]&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Get the start time such that LC starts at 0s&lt;br /&gt;
tstart=fits_read_key(&amp;quot;full_sd_0.1s_0500-0900.lc[1]&amp;quot;, &amp;quot;TSTART&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Subtract TSTART and do GTI correction&lt;br /&gt;
(filta,filtb)     = filter_gti(a.time, gtia), filter_gti(b.time,gtib);&lt;br /&gt;
(timea,timeb)     = a.time[filta]-tstart,b.time[filtb]-tstart;&lt;br /&gt;
(countsa,countsb) = a.counts[filta],b.counts[filtb];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we plot the (re-binned) lightcurve, there is not much to see.&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.jpg]]&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.pdf]]&lt;br /&gt;
&lt;br /&gt;
But we can do a Fourier analysis to reveal the variability and correlations of these lightcurves (if verbose is set, ignore the warnings of foucalc about gaps in the lightcurve).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose segment length&lt;br /&gt;
dimseg=8192; %% (bins), should be a power of 2 for FFT&lt;br /&gt;
&lt;br /&gt;
%% Compute PSDs, CPD, and time lags with foucalc&lt;br /&gt;
r=foucalc( struct{time=timea, rate1=countsa/dt, rate2=countsb/dt}, dimseg ; normtype=&amp;quot;Miyamoto&amp;quot;);&lt;br /&gt;
xlog;ylog;plot_with_err(r.freq,r.signormpsd1*r.freq,r.errnormpsd1*r.freq);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd unbinned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx399m4 xmm psd unbinned.pdf]]&lt;br /&gt;
&lt;br /&gt;
This really needs some re-binning to increase the S/N-ratio, which can be done with the &amp;lt;code&amp;gt;rebin_fouquan&amp;lt;/code&amp;gt; routine.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose Delta f/f, the logarithmic frequency resolution&lt;br /&gt;
dff=0.15;&lt;br /&gt;
reb_psd1 = rebin_fouquan(r.freq, r.signormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err1 = rebin_fouquan(r.freq, r.errnormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_psd2 = rebin_fouquan(r.freq, r.signormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err2 = rebin_fouquan(r.freq, r.errnormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
&lt;br /&gt;
%% Need to weight the PSD error with the square root of the number of values the averaging was performed over&lt;br /&gt;
hplot_with_err( reb_psd1.freq_lo, reb_psd1.freq_hi, reb_psd1.value*reb_psd1.freq_lo, reb_err1.value/sqrt(reb_err1.n)*reb_psd1.freq_lo );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.pdf]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; also computes the time lag between the two lightcurves. Let's first rebin and then plot them together with the coherence:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Rebin time lag results&lt;br /&gt;
variable reb_lag12 = rebin_fouquan(r.freq, r.lag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errlag12 = rebin_fouquan(r.freq, r.errlag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err(reb_lag12.freq_lo, reb_lag12.freq_hi, reb_lag12.value, reb_errlag12.value/sqrt(reb_errlag12.n) );&lt;br /&gt;
&lt;br /&gt;
%% Rebin coherence function&lt;br /&gt;
variable reb_cof12 = rebin_fouquan(r.freq, r.cof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errcof12 = rebin_fouquan(r.freq, r.errcof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
hplot_with_err( reb_cof12.freq_lo, reb_cof12.freq_hi, reb_cof12.value, reb_errcof12.value/sqrt(reb_errcof12.n) );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.png]] [[File:Gx339m4_xmm_coherence.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4_xmm_tlag.pdf]] [[File:Gx339m4_xmm_coherence.pdf]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Isis / Slang]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_tlag.png&amp;diff=1978</id>
		<title>File:Gx339m4 xmm tlag.png</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_tlag.png&amp;diff=1978"/>
		<updated>2020-02-12T15:30:34Z</updated>

		<summary type="html">&lt;p&gt;Koenig: Koenig uploaded a new version of File:Gx339m4 xmm tlag.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_tlag.pdf&amp;diff=1977</id>
		<title>File:Gx339m4 xmm tlag.pdf</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_tlag.pdf&amp;diff=1977"/>
		<updated>2020-02-12T15:29:59Z</updated>

		<summary type="html">&lt;p&gt;Koenig: Koenig uploaded a new version of File:Gx339m4 xmm tlag.pdf&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_coherence.pdf&amp;diff=1976</id>
		<title>File:Gx339m4 xmm coherence.pdf</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_coherence.pdf&amp;diff=1976"/>
		<updated>2020-02-12T15:27:29Z</updated>

		<summary type="html">&lt;p&gt;Koenig: Koenig uploaded a new version of File:Gx339m4 xmm coherence.pdf&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_coherence.png&amp;diff=1975</id>
		<title>File:Gx339m4 xmm coherence.png</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_coherence.png&amp;diff=1975"/>
		<updated>2020-02-12T15:27:02Z</updated>

		<summary type="html">&lt;p&gt;Koenig: Koenig uploaded a new version of File:Gx339m4 xmm coherence.png&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1974</id>
		<title>Timing tools</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1974"/>
		<updated>2020-02-12T15:06:59Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
[In work! You might use it as a reference in the meantime, but some things are not yet covered. Also let me know if things are not clear! -- VG]&lt;br /&gt;
&lt;br /&gt;
== Foucalc ==&lt;br /&gt;
&lt;br /&gt;
The basic tool for X-ray timing analysis with the Remeis scripts is ''foucalc'' &amp;lt;ref&amp;gt;The original version of the Remeis timing scripts was written by Katja Pottschmidt. When using the tools, please cite [http://adsabs.harvard.edu/abs/2003A%26A...407.1039P Pottschmidt et al. 2003, A&amp;amp;A 407, 1039] and [http://adsabs.harvard.edu/abs/2000A%26A...357L..17P Pottschmidt et al. 2000, A&amp;amp;A 357, L17]; see also [http://adsabs.harvard.edu/abs/1999ApJ...510..874N Nowak et al., 1999, ApJ 510, 874]&amp;lt;/ref&amp;gt;, which calculates power-, cross power-, coherence- and timelag-spectra &amp;lt;ref&amp;gt;What these are is wonderfully explained in Katja Pottschmidt's [http://astro.uni-tuebingen.de/publications/pottschmidt-diss.shtml PhD Thesis]&amp;lt;/ref&amp;gt;. It requires '''equally spaced, gapless''' lightcurves as well as the length of the lightcurve segment on which the Fourier transformation is to be performed, given in number of bins. Note that because the underlying algorithm is the FFT, the segment length should be a power of two for best performance.&lt;br /&gt;
&lt;br /&gt;
When only one lightcurve is given, &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; calculates the power density and related quantities only. When more than one lightcurve is given, the cross power-, coherence- and timelag-spectra are calculated for all pairs of lightcurves. The lightcurves have to be in the format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lc = struct { time=[t_1, t_2, ...], rate1=[r1_1, r1_2, ...], rate2=..., rate3= };&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The time has to be given in seconds (so be cautious when using &amp;lt;tt&amp;gt;fits_read_lc&amp;lt;/tt&amp;gt;, which usually returns the time in MJD).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;If you use RXTE/PCA lightcurves, don't forget to set the &amp;lt;tt&amp;gt;numinst&amp;lt;/tt&amp;gt; keyword, which defines the number of instruments (PCUs) that are switched on; otherwise your noise correction will be wrong. If you are not using RXTE/PCA, set the &amp;lt;tt&amp;gt;deadtime&amp;lt;/tt&amp;gt; keyword to whatever is appropriate for the instrument you are using (e.g. 0).&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before any averaging, the uncertainty of a single PSD value is of the same order of magnitude as the PSD value itself. Averaging is therefore crucial. This is built into &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt;: given a lightcurve of, say, 1288192 bins and a segment length of 16384, foucalc calculates a PSD on each segment and averages every value over the resulting 78 full segments (1288192/16384, rounded down).&lt;br /&gt;
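This segment averaging can be sketched in NumPy (an illustration only, not the Remeis implementation; the Miyamoto-style normalization factor used here is an assumption of this sketch):

```python
import numpy as np

def averaged_psd(rate, dt, dimseg):
    """Sketch of foucalc-style segment averaging (illustrative only).

    Assumed Miyamoto-like normalization: 2*dt/(dimseg*mean^2) * |FFT|^2,
    averaged over all full segments of length dimseg."""
    nseg = len(rate) // dimseg                 # number of full segments
    freq = np.fft.rfftfreq(dimseg, d=dt)[1:]   # drop the zero-frequency bin
    psd = np.zeros(freq.size)
    for i in range(nseg):
        seg = rate[i*dimseg:(i + 1)*dimseg]
        x = np.fft.rfft(seg)[1:]
        psd += 2.0*dt/(dimseg*np.mean(seg)**2) * np.abs(x)**2
    return freq, psd/nseg, nseg
```

With 1288192 bins of 0.00195312 s and dimseg=16384 this yields 78 segments and frequencies up to 256 Hz, matching the verbose output quoted below.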
&lt;br /&gt;
== PSDs ==&lt;br /&gt;
&lt;br /&gt;
=== Calculating the PSD ===&lt;br /&gt;
&lt;br /&gt;
Given two PCA lightcurves, &amp;lt;tt&amp;gt;50110-01-45-00_437_AND_good_14off_excl_0-10.lc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;50110-01-45-00_437_AND_good_14off_excl_11-13.lc&amp;lt;/tt&amp;gt; (which have been checked for gaps and found to be gapless&amp;lt;ref&amp;gt;A useful function for this is &amp;lt;tt&amp;gt;split_lc_at_gaps&amp;lt;/tt&amp;gt;&amp;lt;/ref&amp;gt;), we want to calculate and bin the PSD, lags, and coherence.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% read in lightcurves&lt;br /&gt;
variable lc1 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_0-10.lc&amp;quot;);&lt;br /&gt;
variable lc2 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_11-13.lc&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%%define variables to store average rate of lightcurve &amp;amp; rms&lt;br /&gt;
variable rms,avg;&lt;br /&gt;
&lt;br /&gt;
%%calculate timing quantities&lt;br /&gt;
variable result = foucalc(struct{time=lc1.time,rate1=lc1.rate,rate2=lc2.rate},16384;&lt;br /&gt;
                          numinst=3,RMS=&amp;amp;rms,avgrate=&amp;amp;avg,verbose);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This produces the following output, thanks to the &amp;lt;tt&amp;gt;verbose&amp;lt;/tt&amp;gt; keyword:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
light curves: 2516 s with 1288192 bins of 0.00195312 s  (0 s gaps)&lt;br /&gt;
  segmentation: 78 segments of 16384 bins&lt;br /&gt;
=&amp;gt; frequencies: 0.03125-256 Hz.&lt;br /&gt;
Power spectra will be calculated in Miyamoto-normalization.&lt;br /&gt;
Calculating power spectrum for rate1.  RMS(0.03125-256 Hz) = 27.1%&lt;br /&gt;
Calculating power spectrum for rate2.  RMS(0.03125-256 Hz) = 25.7%&lt;br /&gt;
Calculating cross power spectrum (=&amp;gt; coherence and timelags) for rate1 and rate2.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is currently the only place at which &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; will complain if the lightcurve contains gaps. It will calculate the quantities even for a lightcurve with gaps; the results just cannot be trusted. Therefore: use the verbose output and check that you have done everything right, at least while building up your scripts.&lt;br /&gt;
&lt;br /&gt;
Plot the PSD for the first lightcurve with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
xlog;ylog;plot_with_err(result.freq,result.signormpsd1,result.errnormpsd1);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Rebinning the PSD ===&lt;br /&gt;
&lt;br /&gt;
The uncertainties of the individual PSD values are still large, especially at high frequencies. To further improve the signal-to-noise ratio, the PSD is rebinned onto a new frequency grid: the final PSD value in each new bin is the average over all original PSD values whose frequencies fall into that bin. Usually a logarithmic rebinning with df/f = const. is used; a typical value is df/f = 0.15.&lt;br /&gt;
&lt;br /&gt;
Quantities calculated with &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; are rebinned with the &amp;lt;tt&amp;gt;rebin_fouquan&amp;lt;/tt&amp;gt; tool; the quantity and its error have to be rebinned in two separate steps:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable reb_psd = rebin_fouquan(result.freq,result.signormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
variable reb_err = rebin_fouquan(result.freq,result.errnormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;reb_err&amp;lt;/tt&amp;gt; does not yet contain the real errors of the PSD: it still has to be divided by the square root of the number of values the averaging was performed over. The final PSD is therefore:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable psd1 = struct{freq_lo = reb_psd.freq_lo,freq_hi=reb_psd.freq_hi,&lt;br /&gt;
  value = reb_psd.value, error= reb_err.value/sqrt(reb_err.n)};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
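The idea behind the logarithmic rebinning can be sketched as follows (a simplified NumPy stand-in for rebin_fouquan, assuming plain averaging within each bin and bins that grow by a factor 1+dff):

```python
import numpy as np

def rebin_log(freq, value, dff=0.15):
    """Simplified stand-in for rebin_fouquan: group frequencies into bins
    whose width grows as df/f = dff, and average the values in each bin."""
    f_lo, f_hi, val, n = [], [], [], []
    lo = freq[0]
    while lo < freq[-1]:
        hi = lo*(1.0 + dff)
        idx = (freq >= lo) & (freq < hi)
        if idx.any():                 # keep only bins that contain data
            f_lo.append(lo)
            f_hi.append(hi)
            val.append(value[idx].mean())
            n.append(idx.sum())
        lo = hi
    return np.array(f_lo), np.array(f_hi), np.array(val), np.array(n)
```

The returned bin populations play the role of the `n` field used above to scale the rebinned errors.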
&lt;br /&gt;
Once again, plot your PSD, either as PSD vs. frequency or as PSD x frequency vs. frequency:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%%PSD vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2,psd1.value,psd1.error);&lt;br /&gt;
&lt;br /&gt;
%%PSD x frequency vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2, psd1.value*(psd1.freq_lo+psd1.freq_hi)/2,&lt;br /&gt;
psd1.error*(psd1.freq_lo+psd1.freq_hi)/2);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Lag-energy spectrum for archival XMM-Newton data ==&lt;br /&gt;
&lt;br /&gt;
--- under construction ---&lt;br /&gt;
&lt;br /&gt;
The following is an example of how to compute PSDs, a lag-frequency spectrum, and a lag-energy spectrum for archival XMM-Newton data of GX 339-4 (written by OK, please ask if something is unclear)&amp;lt;ref&amp;gt;The corresponding publication is [https://ui.adsabs.harvard.edu/abs/2011MNRAS.414L..60U/abstract Uttley, P. et al., MNRAS, Vol. 414, Issue 1, pp. L60-L64 (2011)]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We start with extracting the lightcurve in two energy bands (500-900eV and 2000-3000eV) with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/tcsh&lt;br /&gt;
source $SOFTDIR/sas_init.csh # initialize the SAS software&lt;br /&gt;
set xmmscripts=${XMMTOOLS} # make sure the proper xmmscripts are loaded&lt;br /&gt;
set obsid='0204730201'&lt;br /&gt;
set datadir=/userdata/data/koenig/GX339-4/XMM-2004-03-16/odf&lt;br /&gt;
set dt=0.1&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmprepare --datadir=$datadir --prepdir=$obsid --pn --noapplyflaregti --timing --notimelog # prepare the data&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;500,900&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we load the data and correct them for the telemetry drop-outs&lt;br /&gt;
(encoded in the Good Time Intervals):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Load the lightcurve and GTI file&lt;br /&gt;
a=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc&amp;quot;);&lt;br /&gt;
b=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc&amp;quot;);&lt;br /&gt;
gtia=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc[GTI0006]&amp;quot;);&lt;br /&gt;
gtib=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc[GTI0006]&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Get the start time such that LC starts at 0s&lt;br /&gt;
tstart=fits_read_key(&amp;quot;full_sd_0.1s_0500-0900.lc[1]&amp;quot;, &amp;quot;TSTART&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Subtract TSTART and do GTI correction&lt;br /&gt;
(filta,filtb)     = filter_gti(a.time, gtia), filter_gti(b.time,gtib);&lt;br /&gt;
(timea,timeb)     = a.time[filta]-tstart,b.time[filtb]-tstart;&lt;br /&gt;
(countsa,countsb) = a.counts[filta],b.counts[filtb];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
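The GTI screening itself is conceptually simple; a hypothetical NumPy version of the masking step (the real filter_gti works on the GTI table struct read above) might look like:

```python
import numpy as np

def gti_mask(time, start, stop):
    """Hypothetical GTI screening helper (illustration only): boolean mask
    keeping only time bins inside any [start, stop) good-time interval."""
    mask = np.zeros(len(time), dtype=bool)
    for s, e in zip(start, stop):
        mask |= (time >= s) & (time < e)
    return mask
```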
&lt;br /&gt;
If we plot the (re-binned) lightcurve, there is not much to see.&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.jpg]]&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.pdf]]&lt;br /&gt;
&lt;br /&gt;
However, we can do a Fourier analysis to reveal the variability of, and the correlations between, these lightcurves (if verbose is set, ignore foucalc's warnings about gaps in the lightcurve).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose segment length&lt;br /&gt;
dimseg=8192; %% (bins), should be a power of 2 for FFT&lt;br /&gt;
&lt;br /&gt;
%% Compute PSDs, CPD, and time lags with foucalc&lt;br /&gt;
dt=0.1; %% lightcurve time resolution in seconds, matching the extraction above&lt;br /&gt;
r=foucalc( struct{time=timea, rate1=countsa/dt, rate2=countsb/dt}, dimseg ; normtype=&amp;quot;Miyamoto&amp;quot;);&lt;br /&gt;
xlog;ylog;plot_with_err(r.freq,r.signormpsd1*r.freq,r.errnormpsd1*r.freq);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd unbinned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx399m4 xmm psd unbinned.pdf]]&lt;br /&gt;
&lt;br /&gt;
This really needs some re-binning to increase the S/N-ratio, which can be done with the &amp;lt;code&amp;gt;rebin_fouquan&amp;lt;/code&amp;gt; routine.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose Delta f/f, the logarithmic frequency resolution&lt;br /&gt;
dff=0.15;&lt;br /&gt;
reb_psd1 = rebin_fouquan(r.freq, r.signormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err1 = rebin_fouquan(r.freq, r.errnormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_psd2 = rebin_fouquan(r.freq, r.signormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err2 = rebin_fouquan(r.freq, r.errnormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
&lt;br /&gt;
%% Need to weight the PSD error with the square root of the number of values the averaging was performed over&lt;br /&gt;
psd1 = struct{&lt;br /&gt;
  freq_lo = reb_psd1.freq_lo,&lt;br /&gt;
  freq_hi = reb_psd1.freq_hi,&lt;br /&gt;
  value   = reb_psd1.value,&lt;br /&gt;
  error   = reb_err1.value/sqrt(reb_err1.n)&lt;br /&gt;
};&lt;br /&gt;
psd2 = struct{&lt;br /&gt;
  freq_lo = reb_psd2.freq_lo,&lt;br /&gt;
  freq_hi = reb_psd2.freq_hi,&lt;br /&gt;
  value   = reb_psd2.value,&lt;br /&gt;
  error   = reb_err2.value/sqrt(reb_err2.n)&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
plot_with_err((psd1.freq_lo+psd1.freq_hi)/2, psd1.value*(psd1.freq_lo+psd1.freq_hi)/2, psd1.error*(psd1.freq_lo+psd1.freq_hi)/2);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.pdf]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;code&amp;gt;foucalc&amp;lt;/code&amp;gt; also computes the time lag between the two lightcurves. Let's first rebin and then plot them together with the coherence:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable reb_lag12 = rebin_fouquan(r.freq, r.lag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errlag12 = rebin_fouquan(r.freq, r.errlag12, r.numavgall ; logfreq=dff);&lt;br /&gt;
ylin; plot_with_err((reb_lag12.freq_lo+reb_lag12.freq_hi)/2., reb_lag12.value, reb_errlag12.value);&lt;br /&gt;
&lt;br /&gt;
variable reb_cof12 = rebin_fouquan(r.freq, r.cof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
variable reb_errcof12 = rebin_fouquan(r.freq, r.errcof12, r.numavgall ; logfreq=dff);&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
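The phase-to-lag conversion behind lag12 can be illustrated with a toy cross spectrum (a single-segment NumPy sketch; the sign convention chosen here is an assumption, and foucalc additionally averages the cross spectrum over many segments before taking the phase):

```python
import numpy as np

def cross_lag(x, y, dt):
    """Toy single-segment cross spectrum and time lag (illustration only).

    With C = conj(X)*Y and lag = -phase(C)/(2 pi f), a positive lag means
    y lags behind x under this assumed convention."""
    n = len(x)
    freq = np.fft.rfftfreq(n, d=dt)[1:]        # drop the zero-frequency bin
    X = np.fft.rfft(x)[1:]
    Y = np.fft.rfft(y)[1:]
    C = np.conj(X)*Y                           # cross spectrum
    return freq, -np.angle(C)/(2.0*np.pi*freq)
```

For a sinusoid delayed by a known number of bins, the lag at the sinusoid's frequency recovers exactly that delay.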
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Isis / Slang]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_tlag.pdf&amp;diff=1973</id>
		<title>File:Gx339m4 xmm tlag.pdf</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_tlag.pdf&amp;diff=1973"/>
		<updated>2020-02-12T15:00:41Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_tlag.png&amp;diff=1972</id>
		<title>File:Gx339m4 xmm tlag.png</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_tlag.png&amp;diff=1972"/>
		<updated>2020-02-12T15:00:27Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_coherence.png&amp;diff=1971</id>
		<title>File:Gx339m4 xmm coherence.png</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_coherence.png&amp;diff=1971"/>
		<updated>2020-02-12T15:00:19Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_coherence.pdf&amp;diff=1970</id>
		<title>File:Gx339m4 xmm coherence.pdf</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_coherence.pdf&amp;diff=1970"/>
		<updated>2020-02-12T15:00:11Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1969</id>
		<title>Timing tools</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Timing_tools&amp;diff=1969"/>
		<updated>2020-02-12T14:32:19Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
[In work! You might use it as a reference in the meantime, but some things are not yet covered. Also let me know if things are not clear! -- VG]&lt;br /&gt;
&lt;br /&gt;
== Foucalc ==&lt;br /&gt;
&lt;br /&gt;
The basic tool for X-ray timing analysis with the Remeis scripts is ''foucalc'' &amp;lt;ref&amp;gt;The original version of the Remeis timing scripts was written by Katja Pottschmidt. When using the tools, please cite [http://adsabs.harvard.edu/abs/2003A%26A...407.1039P Pottschmidt et al. 2003, A&amp;amp;A 407, 1039] and [http://adsabs.harvard.edu/abs/2000A%26A...357L..17P Pottschmidt et al. 2000, A&amp;amp;A 357, L17]; see also [http://adsabs.harvard.edu/abs/1999ApJ...510..874N Nowak et al., 1999, ApJ 510, 874]&amp;lt;/ref&amp;gt;, which calculates power-, cross power-, coherence- and timelag-spectra &amp;lt;ref&amp;gt;What these are is wonderfully explained in Katja Pottschmidt's [http://astro.uni-tuebingen.de/publications/pottschmidt-diss.shtml PhD Thesis]&amp;lt;/ref&amp;gt;. It requires '''equally spaced, gapless''' lightcurves as well as the length of the lightcurve segment on which the Fourier transformation is to be performed, given in number of bins. Note that because the underlying algorithm is the FFT, the segment length should be a power of two for best performance.&lt;br /&gt;
&lt;br /&gt;
When only one lightcurve is given, &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; calculates the power density and related quantities only. When more than one lightcurve is given, the cross power-, coherence- and timelag-spectra are calculated for all pairs of lightcurves. The lightcurves have to be in the format:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lc = struct { time=[t_1, t_2, ...], rate1=[r1_1, r1_2, ...], rate2=..., rate3= };&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The time has to be given in seconds (so be cautious when using &amp;lt;tt&amp;gt;fits_read_lc&amp;lt;/tt&amp;gt;, which usually returns the time in MJD).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;b&amp;gt;If you use RXTE/PCA lightcurves, don't forget to set the &amp;lt;tt&amp;gt;numinst&amp;lt;/tt&amp;gt; keyword, which defines the number of instruments (PCUs) that are switched on; otherwise your noise correction will be wrong. If you are not using RXTE/PCA, set the &amp;lt;tt&amp;gt;deadtime&amp;lt;/tt&amp;gt; keyword to whatever is appropriate for the instrument you are using (e.g. 0).&amp;lt;/b&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Before any averaging, the uncertainty of a single PSD value is of the same order of magnitude as the PSD value itself. Averaging is therefore crucial. This is built into &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt;: given a lightcurve of, say, 1288192 bins and a segment length of 16384, foucalc calculates a PSD on each segment and averages every value over the resulting 78 full segments (1288192/16384, rounded down).&lt;br /&gt;
&lt;br /&gt;
== PSDs ==&lt;br /&gt;
&lt;br /&gt;
=== Calculating the PSD ===&lt;br /&gt;
&lt;br /&gt;
Given two PCA lightcurves, &amp;lt;tt&amp;gt;50110-01-45-00_437_AND_good_14off_excl_0-10.lc&amp;lt;/tt&amp;gt; and &amp;lt;tt&amp;gt;50110-01-45-00_437_AND_good_14off_excl_11-13.lc&amp;lt;/tt&amp;gt; (which have been checked for gaps and found to be gapless&amp;lt;ref&amp;gt;A useful function for this is &amp;lt;tt&amp;gt;split_lc_at_gaps&amp;lt;/tt&amp;gt;&amp;lt;/ref&amp;gt;), we want to calculate and bin the PSD, lags, and coherence.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% read in lightcurves&lt;br /&gt;
variable lc1 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_0-10.lc&amp;quot;);&lt;br /&gt;
variable lc2 = fits_read_table(&amp;quot;50110-01-45-00_438_AND_good_14off_excl_11-13.lc&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%%define variables to store average rate of lightcurve &amp;amp; rms&lt;br /&gt;
variable rms,avg;&lt;br /&gt;
&lt;br /&gt;
%%calculate timing quantities&lt;br /&gt;
variable result = foucalc(struct{time=lc1.time,rate1=lc1.rate,rate2=lc2.rate},16384;&lt;br /&gt;
                          numinst=3,RMS=&amp;amp;rms,avgrate=&amp;amp;avg,verbose);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This produces the following output, thanks to the &amp;lt;tt&amp;gt;verbose&amp;lt;/tt&amp;gt; keyword:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
light curves: 2516 s with 1288192 bins of 0.00195312 s  (0 s gaps)&lt;br /&gt;
  segmentation: 78 segments of 16384 bins&lt;br /&gt;
=&amp;gt; frequencies: 0.03125-256 Hz.&lt;br /&gt;
Power spectra will be calculated in Miyamoto-normalization.&lt;br /&gt;
Calculating power spectrum for rate1.  RMS(0.03125-256 Hz) = 27.1%&lt;br /&gt;
Calculating power spectrum for rate2.  RMS(0.03125-256 Hz) = 25.7%&lt;br /&gt;
Calculating cross power spectrum (=&amp;gt; coherence and timelags) for rate1 and rate2.&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is currently the only place at which &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; will complain if the lightcurve contains gaps. It will calculate the quantities even for a lightcurve with gaps; the results just cannot be trusted. Therefore: use the verbose output and check that you have done everything right, at least while building up your scripts.&lt;br /&gt;
&lt;br /&gt;
Plot the PSD for the first lightcurve with:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
xlog;ylog;plot_with_err(result.freq,result.signormpsd1,result.errnormpsd1);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Rebinning the PSD ===&lt;br /&gt;
&lt;br /&gt;
The uncertainties of the individual PSD values are still large, especially at high frequencies. To further improve the signal-to-noise ratio, the PSD is rebinned onto a new frequency grid: the final PSD value in each new bin is the average over all original PSD values whose frequencies fall into that bin. Usually a logarithmic rebinning with df/f = const. is used; a typical value is df/f = 0.15.&lt;br /&gt;
&lt;br /&gt;
Quantities calculated with &amp;lt;tt&amp;gt;foucalc&amp;lt;/tt&amp;gt; are rebinned with the &amp;lt;tt&amp;gt;rebin_fouquan&amp;lt;/tt&amp;gt; tool; the quantity and its error have to be rebinned in two separate steps:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable reb_psd = rebin_fouquan(result.freq,result.signormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
variable reb_err = rebin_fouquan(result.freq,result.errnormpsd1,&lt;br /&gt;
                                 result.numavgall;logfreq=0.15);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;tt&amp;gt;reb_err&amp;lt;/tt&amp;gt; does not yet contain the real errors of the PSD: it still has to be divided by the square root of the number of values the averaging was performed over. The final PSD is therefore:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable psd1 = struct{freq_lo = reb_psd.freq_lo,freq_hi=reb_psd.freq_hi,&lt;br /&gt;
  value = reb_psd.value, error= reb_err.value/sqrt(reb_err.n)};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Once again, plot your PSD, either as PSD vs. frequency or as PSD x frequency vs. frequency:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%%PSD vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2,psd1.value,psd1.error);&lt;br /&gt;
&lt;br /&gt;
%%PSD x frequency vs. frequency&lt;br /&gt;
xlog;ylog;plot_with_err((psd1.freq_lo+psd1.freq_hi)/2, psd1.value*(psd1.freq_lo+psd1.freq_hi)/2,&lt;br /&gt;
psd1.error*(psd1.freq_lo+psd1.freq_hi)/2);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Lag-energy spectrum for archival XMM-Newton data ==&lt;br /&gt;
The following is an example of how to compute PSDs, a lag-frequency spectrum, and a lag-energy spectrum for archival XMM-Newton data of GX 339-4 (written by OK, please ask if something is unclear)&amp;lt;ref&amp;gt;The corresponding publication is [https://ui.adsabs.harvard.edu/abs/2011MNRAS.414L..60U/abstract Uttley, P. et al., MNRAS, Vol. 414, Issue 1, pp. L60-L64 (2011)]&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
We start with extracting the lightcurve in two energy bands (500-900eV and 2000-3000eV) with&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/bin/tcsh&lt;br /&gt;
source $SOFTDIR/sas_init.csh # initialize the SAS software&lt;br /&gt;
set xmmscripts=${XMMTOOLS} # make sure the proper xmmscripts are loaded&lt;br /&gt;
set obsid='0204730201'&lt;br /&gt;
set datadir=/userdata/data/koenig/GX339-4/XMM-2004-03-16/odf&lt;br /&gt;
set dt=0.1&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmprepare --datadir=$datadir --prepdir=$obsid --pn --noapplyflaregti --timing --notimelog # prepare the data&lt;br /&gt;
&lt;br /&gt;
${xmmscripts}/xmmextract --prepdir=$obsid --pn --timing --full --energyband=&amp;quot;500,900&amp;quot; --noflarescreen --defaultpattern --dt=$dt --clobber&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Now we load the data and correct them for the telemetry drop-outs&lt;br /&gt;
(encoded in the Good Time Intervals):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Load the lightcurve and GTI file&lt;br /&gt;
a=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc&amp;quot;);&lt;br /&gt;
b=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc&amp;quot;);&lt;br /&gt;
gtia=fits_read_table(&amp;quot;full_sd_0.1s_0500-0900.lc[GTI0006]&amp;quot;);&lt;br /&gt;
gtib=fits_read_table(&amp;quot;full_sd_0.1s_2000-3000.lc[GTI0006]&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Get the start time such that LC starts at 0s&lt;br /&gt;
tstart=fits_read_key(&amp;quot;full_sd_0.1s_0500-0900.lc[1]&amp;quot;, &amp;quot;TSTART&amp;quot;);&lt;br /&gt;
&lt;br /&gt;
%% Subtract TSTART and do GTI correction&lt;br /&gt;
(filta,filtb)     = filter_gti(a.time, gtia), filter_gti(b.time,gtib);&lt;br /&gt;
(timea,timeb)     = a.time[filta]-tstart,b.time[filtb]-tstart;&lt;br /&gt;
(countsa,countsb) = a.counts[filta],b.counts[filtb];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If we plot the (re-binned) lightcurve, there is not much to see.&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.jpg]]&lt;br /&gt;
&lt;br /&gt;
[[File:gx399m4_xmm_lc.pdf]]&lt;br /&gt;
&lt;br /&gt;
However, we can do a Fourier analysis to reveal the variability of, and the correlations between, these lightcurves (if verbose is set, ignore foucalc's warnings about gaps in the lightcurve).&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose segment length&lt;br /&gt;
dimseg=8192; %% (bins), should be a power of 2 for FFT&lt;br /&gt;
&lt;br /&gt;
%% Compute PSDs, CPD, and time lags with foucalc&lt;br /&gt;
dt=0.1; %% lightcurve time resolution in seconds, matching the extraction above&lt;br /&gt;
r=foucalc( struct{time=timea, rate1=countsa/dt, rate2=countsb/dt}, dimseg ; normtype=&amp;quot;Miyamoto&amp;quot;);&lt;br /&gt;
xlog;ylog;plot_with_err(r.freq,r.signormpsd1*r.freq,r.errnormpsd1*r.freq);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd unbinned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx399m4 xmm psd unbinned.pdf]]&lt;br /&gt;
&lt;br /&gt;
This really needs some re-binning to increase the S/N-ratio, which can be done with the &amp;lt;code&amp;gt;rebin_fouquan&amp;lt;/code&amp;gt; routine.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
%% Choose Delta f/f, the logarithmic frequency resolution&lt;br /&gt;
dff=0.15;&lt;br /&gt;
reb_psd1 = rebin_fouquan(r.freq, r.signormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err1 = rebin_fouquan(r.freq, r.errnormpsd1, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_psd2 = rebin_fouquan(r.freq, r.signormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
reb_err2 = rebin_fouquan(r.freq, r.errnormpsd2, r.numavgall ; logfreq=dff);&lt;br /&gt;
&lt;br /&gt;
%% Need to weight the PSD error with the square root of the number of values the averaging was performed over&lt;br /&gt;
psd1 = struct{&lt;br /&gt;
  freq_lo = reb_psd1.freq_lo,&lt;br /&gt;
  freq_hi = reb_psd1.freq_hi,&lt;br /&gt;
  value   = reb_psd1.value,&lt;br /&gt;
  error   = reb_err1.value/sqrt(reb_err1.n)&lt;br /&gt;
};&lt;br /&gt;
psd2 = struct{&lt;br /&gt;
  freq_lo = reb_psd2.freq_lo,&lt;br /&gt;
  freq_hi = reb_psd2.freq_hi,&lt;br /&gt;
  value   = reb_psd2.value,&lt;br /&gt;
  error   = reb_err2.value/sqrt(reb_err2.n)&lt;br /&gt;
};&lt;br /&gt;
&lt;br /&gt;
plot_with_err((psd1.freq_lo+psd1.freq_hi)/2, psd1.value*(psd1.freq_lo+psd1.freq_hi)/2, psd1.error*(psd1.freq_lo+psd1.freq_hi)/2);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.png]]&lt;br /&gt;
&lt;br /&gt;
[[File:Gx339m4 xmm psd binned.pdf]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;references/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Category:Isis / Slang]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_psd_binned.png&amp;diff=1968</id>
		<title>File:Gx339m4 xmm psd binned.png</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_psd_binned.png&amp;diff=1968"/>
		<updated>2020-02-12T14:31:35Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_psd_binned.pdf&amp;diff=1967</id>
		<title>File:Gx339m4 xmm psd binned.pdf</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_psd_binned.pdf&amp;diff=1967"/>
		<updated>2020-02-12T14:31:07Z</updated>

		<summary type="html">&lt;p&gt;Koenig: Koenig uploaded a new version of File:Gx339m4 xmm psd binned.pdf&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;GX 339-4 binned PSD&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_psd_binned.pdf&amp;diff=1966</id>
		<title>File:Gx339m4 xmm psd binned.pdf</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_psd_binned.pdf&amp;diff=1966"/>
		<updated>2020-02-12T14:30:08Z</updated>

		<summary type="html">&lt;p&gt;Koenig: GX 339-4 binned PSD&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;GX 339-4 binned PSD&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_psd_unbinned.png&amp;diff=1965</id>
		<title>File:Gx339m4 xmm psd unbinned.png</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx339m4_xmm_psd_unbinned.png&amp;diff=1965"/>
		<updated>2020-02-12T14:16:33Z</updated>

		<summary type="html">&lt;p&gt;Koenig: Unbinned power-spectrum of GX 339-4&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Unbinned power-spectrum of GX 339-4&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx399m4_xmm_psd_unbinned.pdf&amp;diff=1964</id>
		<title>File:Gx399m4 xmm psd unbinned.pdf</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx399m4_xmm_psd_unbinned.pdf&amp;diff=1964"/>
		<updated>2020-02-12T14:15:24Z</updated>

		<summary type="html">&lt;p&gt;Koenig: Unbinned power spectrum of GX 399-4&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Unbinned power spectrum of GX 399-4&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx399m4_xmm_lc.jpg&amp;diff=1963</id>
		<title>File:Gx399m4 xmm lc.jpg</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx399m4_xmm_lc.jpg&amp;diff=1963"/>
		<updated>2020-02-12T13:45:32Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx399m4_xmm_lc.pdf&amp;diff=1962</id>
		<title>File:Gx399m4 xmm lc.pdf</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=File:Gx399m4_xmm_lc.pdf&amp;diff=1962"/>
		<updated>2020-02-12T13:41:19Z</updated>

		<summary type="html">&lt;p&gt;Koenig: Lightcurve of archival XMM-Newton data of GX 399-4&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Lightcurve of archival XMM-Newton data of GX 399-4&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Isis:tutorial:slang&amp;diff=1897</id>
		<title>Isis:tutorial:slang</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Isis:tutorial:slang&amp;diff=1897"/>
		<updated>2019-10-17T11:55:30Z</updated>

		<summary type="html">&lt;p&gt;Koenig: /* Regular Expressions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====== Programming in S-Lang ======&lt;br /&gt;
&lt;br /&gt;
''Remark:'' This brief introduction into S-Lang is primarily a translation of the German-language introduction to S-Lang used in the Remeis astronomy lab, which was mainly written by Manfred Hanke.&lt;br /&gt;
&lt;br /&gt;
===== Introduction =====&lt;br /&gt;
&lt;br /&gt;
The underlying engine of Isis is S-Lang, an interpreted language that is similar to other modern scripting languages such as Perl or python. All of these languages are &amp;quot;Algol-like&amp;quot;; therefore, if you know how to program in C or in any of these scripting languages, you should have no problem programming in S-Lang as well. &lt;br /&gt;
&lt;br /&gt;
The big advantage of having a scripting language as part of a data analysis package is that much of the &amp;quot;routine&amp;quot; work can be automated, increasing your efficiency. This includes things like loading the data set that you're working with, e.g., when you are working with many spectra from different instruments and need to do some specific ignoring and rebinning, or the calculation of errors. It also allows you to access all internal structures used in doing your best fit, such that you can prepare very nice figures or output your best-fit parameters in a way that is better suited to publication than the standard isis routines. Historically, many astronomers (yours truly included) did this last step in IDL. While that language is very nice, it is also very expensive, with educational licenses costing around 1000 EUR ''per year''. It thus makes a lot of sense to move away from this and use a cheaper and more integrated approach to data analysis.&lt;br /&gt;
&lt;br /&gt;
''A comment to future data analysts:'' Scripting is very good; however, do not try to script everything. Many points of data analysis have to do with understanding your data set, and here it is often much better to play with it by hand than to automate things. Get a &amp;quot;feel&amp;quot; for your data first before trusting the computer to do everything right...&lt;br /&gt;
&lt;br /&gt;
''A comment to the language-warriors:'' Often people will ask why S-Lang was chosen as the interface and not, e.g., python. The reason is simple: because it was there. The important thing is that a scripting language is there at all. The main difficulty in learning how to program is not the programming syntax - if you think so, then you are not a good programmer - but rather to think in an algorithmic way. And this type of thinking is difficult to learn. Learning a new syntax isn't. The author of these lines (not M. Hanke ;-) ) started his life with a simple form of Amstrad Basic, followed by Omikron Basic, PASCAL, Turbo Pascal, Fortran-77 (yes, it really is spelled &amp;quot;Fortran&amp;quot;, not &amp;quot;FORTRAN&amp;quot;. The only FORTRAN in existence was FORTRAN 66; since the 1977 standard the language has been spelled &amp;quot;Fortran&amp;quot;...), Fortran-90, IDL, C, C++, Perl, javascript, and I am sure some more languages that I have forgotten (plus all of the assembly languages that were useful when one was still programming in assembly, i.e., 80x86, 68xxxx, and so on). Historically, all of these languages have a syntax that goes back to Algol in the 1960s, and thus at the core they are all the same. For this reason, do not worry about having to learn yet another scripting language; it's just a little bit of syntax. And, if you don't know how to program, start now. Because these languages are all the same, it does not matter that S-Lang might be seen as obscure by some people; once you know how to think algorithmically, switching over to another language won't cost you too much time. This also means that if you are applying to jobs and somebody claims that you must know java or any other specific language, stay away from these jobs - knowing how to program is what makes you interesting, not the specific language...&lt;br /&gt;
&lt;br /&gt;
In contrast to compiled languages such as C, C++ or Fortran, scripting languages such as IDL, Perl, python, have the advantage that one can also work with them interactively and thus write small &amp;quot;programs&amp;quot; directly on the command line. We are using this feature all the time when doing data analysis by hand.&lt;br /&gt;
&lt;br /&gt;
In the following we assume that you had at least some previous exposure to programming, and just give a list of the most important language structures. &lt;br /&gt;
&lt;br /&gt;
===== S-Lang Language elements =====&lt;br /&gt;
&lt;br /&gt;
S-Lang consists of the following language elements that allow you to structure your programs. Note that in S-Lang programs ''all'' statements must end with a semicolon.&lt;br /&gt;
&lt;br /&gt;
==== Variable Declarations and Assignments ====&lt;br /&gt;
&lt;br /&gt;
In S-Lang programs, variables must be declared (this is optional on the command line). This is done with the instruction&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable var_1, var_2, ... ;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
you then assign values to a variable with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
var_1=value;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;code&amp;gt;value&amp;lt;/code&amp;gt; is a valid S-Lang statement. It is possible to combine the variable declaration and assignment, e.g.,&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=2;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or more complicated expressions such as&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
var_1 = sin(a)+sqrt(25.);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Variable names may consist of any combination of the standard ASCII characters &amp;lt;code&amp;gt;a-zA-Z0-9&amp;lt;/code&amp;gt; as well as the underscore &amp;lt;code&amp;gt;_&amp;lt;/code&amp;gt;  and the dollar sign &amp;lt;code&amp;gt;$&amp;lt;/code&amp;gt;. A variable is not allowed to start with a number.&lt;br /&gt;
&lt;br /&gt;
==== Data Types ====&lt;br /&gt;
&lt;br /&gt;
=== Simple Data Types ===&lt;br /&gt;
&lt;br /&gt;
S-Lang variables are generally weakly typed; that is, the type of a variable is defined by the type of whatever is assigned to it. For example&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=2;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
means that after the assignment &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; is an integer. While&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=2.;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
means that &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; is a floating point number. Strings are assigned with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=&amp;quot;abcd&amp;quot;;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
However, note that a variable can easily change its type, because the weak typing will mean that after the execution of&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=&amp;quot;abcd&amp;quot;; % String_Type&lt;br /&gt;
a=2.3;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; will have the type &amp;lt;code&amp;gt;Double_Type&amp;lt;/code&amp;gt;.  You can check this by printing the type of the variable:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
typeof(a);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Exercise 1:'''&lt;br /&gt;
&lt;br /&gt;
Assign the result of &amp;lt;code&amp;gt;typeof(a)&amp;lt;/code&amp;gt; to some other variable. What is the datatype of that other variable?&lt;br /&gt;
&lt;br /&gt;
=== An aside on integer and floating point arithmetic ===&lt;br /&gt;
&lt;br /&gt;
Note that while weak typing usually speeds up code development, it does not protect you from the pitfalls that go hand in hand with integer and floating point arithmetic. Consider the following classical example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=5;&lt;br /&gt;
variable b=10;&lt;br /&gt;
variable c=a/b;&lt;br /&gt;
print(c);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that &amp;lt;code&amp;gt;c&amp;lt;/code&amp;gt; is 0 because of the rules of integer arithmetic (everything after the &amp;quot;.&amp;quot; is cut away). The correct result is obtained when doing floating point arithmetic:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=5.;&lt;br /&gt;
variable b=10.;&lt;br /&gt;
variable c=a/b;&lt;br /&gt;
print(c);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Even worse is the following often encountered example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=100000;&lt;br /&gt;
variable b=65000;&lt;br /&gt;
variable c=a*b;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and because of the rules of integer arithmetic you may get an integer overflow: the exact result, 6500000000, no longer fits into a 32-bit integer, so &amp;lt;code&amp;gt;c&amp;lt;/code&amp;gt; might even come out negative.&lt;br /&gt;
&lt;br /&gt;
The rule in arithmetic expressions is that the &amp;quot;strongest&amp;quot; data type wins, i.e., in&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=10000000.0;&lt;br /&gt;
variable b=65000000;&lt;br /&gt;
variable c=a*b;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;c&amp;lt;/code&amp;gt; will have the correct value since the multiplication is performed in double precision.&lt;br /&gt;
&lt;br /&gt;
If you need to be 100 percent sure that a calculation is performed in a certain data type and you have no control over whether the variables entering an expression have that type (this is, e.g., the case in functions that are called by somebody else), you can force S-Lang to convert (&amp;quot;typecast&amp;quot;) a variable to a certain type:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
a=double(a);&lt;br /&gt;
b=int(b);&lt;br /&gt;
c=string(c);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Arrays and Lists ===&lt;br /&gt;
&lt;br /&gt;
You can combine the above simple data types into more complicated ones. The most important of these are&lt;br /&gt;
&lt;br /&gt;
== Arrays==&lt;br /&gt;
&lt;br /&gt;
Arrays are ordered lists of things of the same data type and are declared using brackets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable arr=[1,2,3];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Content of arrays is accessed by giving the index in brackets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable c=arr[1];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that arrays are zero-based, i.e., the above returns &amp;lt;code&amp;gt;2&amp;lt;/code&amp;gt;.&lt;br /&gt;
It is possible to access more than one element at the same time by using an array as the argument of the brackets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable c=arr[[0,1]];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
which produces an array containing two elements. If you want larger parts of an array, there is a very powerful &amp;quot;slicing&amp;quot; syntax that makes use of the fact that &amp;lt;code&amp;gt;[a:b]&amp;lt;/code&amp;gt; defines the array &amp;lt;code&amp;gt;[a,a+1,a+2,..,b]&amp;lt;/code&amp;gt; (for b&amp;gt;a and a,b Integers):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable b=arr[[0:1]];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(which is a somewhat silly example...).&lt;br /&gt;
&lt;br /&gt;
Arrays can be multi-dimensional, but the definition is not as nice as in other scripting languages:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable arr=Integer_Type[2,3];&lt;br /&gt;
arr[0,[0:2]]=[1,2,3];&lt;br /&gt;
arr[1,[0:2]]=[5,4,3];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that arrays with floating-point values can also be created by a very similar syntax: &amp;lt;code&amp;gt;[a:b:c]&amp;lt;/code&amp;gt; creates an array with values &amp;lt;code&amp;gt;[a, a+c, a+2*c,...]&amp;lt;/code&amp;gt;, such that the last value does not exceed b. Even more convenient is the syntax &amp;lt;code&amp;gt;[a:b:#n]&amp;lt;/code&amp;gt;, which creates an array of exactly length n, with equally spaced values ranging from a to b.&lt;br /&gt;
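&lt;br /&gt;
As a short illustration of the two range syntaxes (the values in the comments follow from the definitions above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable x=[0:1:0.25]; % [0, 0.25, 0.5, 0.75, 1.0]&lt;br /&gt;
variable y=[0:1:#5];   % five equally spaced values from 0 to 1&lt;br /&gt;
print(length(y));      % 5&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;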
&lt;br /&gt;
== Lists == &lt;br /&gt;
&lt;br /&gt;
Lists are ordered lists of things that can be of different data type. They are declared using curly brackets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable lis={1,2,3};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Accessing the list elements uses the standard bracket syntax:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=lis[1];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Lists are important whenever you want to store different things in one variable. For example, the following is legal:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable lis2={1,[&amp;quot;a&amp;quot;,&amp;quot;b&amp;quot;,&amp;quot;c&amp;quot;],3.2};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Operators ====&lt;br /&gt;
&lt;br /&gt;
Binary operators combine two expressions, &amp;lt;code&amp;gt;x&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;y&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;x&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;y&amp;lt;/code&amp;gt; are constants, variables, functions and so on. The most important operators are:&lt;br /&gt;
&lt;br /&gt;
* '''arithmetic operators''': &lt;br /&gt;
** &amp;lt;code&amp;gt;+&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;-&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;*&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt;: basic arithmetic operators, the usual priority rules apply,&lt;br /&gt;
** &amp;lt;code&amp;gt;^&amp;lt;/code&amp;gt;: exponentiation (&amp;lt;code&amp;gt;2^3&amp;lt;/code&amp;gt; is two to the power of three),&lt;br /&gt;
** &amp;lt;code&amp;gt;mod&amp;lt;/code&amp;gt;: modulo operation&lt;br /&gt;
* ''' string concatenation''' is done with the &amp;lt;code&amp;gt;+&amp;lt;/code&amp;gt; operator.&lt;br /&gt;
* ''' comparison ''': is done with &amp;lt;code&amp;gt;&amp;lt;&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;&amp;lt;=&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;==&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;!=&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;&amp;gt;=&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;&amp;gt;&amp;lt;/code&amp;gt;. Note that, as in all programming languages, you should ''never'' test two floating point variables for equality; this will most often not work in the way you expect...&lt;br /&gt;
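&lt;br /&gt;
A few quick examples of these operators (the values given in the comments follow from the rules above):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable s=&amp;quot;power&amp;quot;+&amp;quot;law&amp;quot;; % string concatenation: &amp;quot;powerlaw&amp;quot;&lt;br /&gt;
variable r=17 mod 5;              % remainder: 2&lt;br /&gt;
variable p=2^10;                  % exponentiation: 1024&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;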
&lt;br /&gt;
All of these operators can be used not only on scalar values but also on arrays. They are then used on an element basis. The resulting code is very fast. For example, to add two arrays:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=[1,2,3];&lt;br /&gt;
variable b=[6,5,3];&lt;br /&gt;
variable c=a+b;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As an aside, one often wants to add/subtract something from a variable. S-Lang allows the following C-like shortcuts:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
a+=5;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
is equivalent to&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
a=a+5;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and similar &amp;lt;code&amp;gt;-=&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;*=&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;/=&amp;lt;/code&amp;gt; (I don't think I've ever used the last one, though...).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Program flow control ====&lt;br /&gt;
&lt;br /&gt;
=== Conditional execution ===&lt;br /&gt;
&lt;br /&gt;
Conditional execution is done with the &amp;lt;code&amp;gt;if&amp;lt;/code&amp;gt;-statement:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
if ( condition ) {&lt;br /&gt;
   true-code;&lt;br /&gt;
} else {&lt;br /&gt;
   false-code;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
For example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=+1;&lt;br /&gt;
variable b;&lt;br /&gt;
if ( a&amp;lt;0 ) {&lt;br /&gt;
  b=-1;&lt;br /&gt;
} else {&lt;br /&gt;
  b=+1;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that the &amp;lt;code&amp;gt;else&amp;lt;/code&amp;gt;-branch is optional.&lt;br /&gt;
&lt;br /&gt;
=== Loops ===&lt;br /&gt;
&lt;br /&gt;
== for-loop ==&lt;br /&gt;
&lt;br /&gt;
The syntax of the for loop is&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
for( initialize ; condition ; increment ) {&lt;br /&gt;
   code ;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where usually in &amp;lt;code&amp;gt;initialize&amp;lt;/code&amp;gt; a loop control variable is, well, initialized, and then incremented as long as condition is valid. An example would be &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable i;&lt;br /&gt;
variable npt=10;&lt;br /&gt;
for (i=0; i&amp;lt;npt; i++) {&lt;br /&gt;
    print (i);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
which counts from 0 to 9 (a count down is also possible, use &amp;lt;code&amp;gt;i--&amp;lt;/code&amp;gt;). Obviously, more than one line of code is possible...&lt;br /&gt;
&lt;br /&gt;
''Note:'' even though syntactically possible, never ever use anything other than an integer variable as the loop counter, unless explicitly necessary. &lt;br /&gt;
&lt;br /&gt;
== while loop ==&lt;br /&gt;
&lt;br /&gt;
The while loop is done while a condition is met:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
while ( condition ) {&lt;br /&gt;
  code ;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that if &amp;lt;code&amp;gt;condition&amp;lt;/code&amp;gt; is not met when the while loop is first reached, &amp;lt;code&amp;gt;code&amp;lt;/code&amp;gt; is not executed at all. &lt;br /&gt;
&lt;br /&gt;
The above counting example can be implemented as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable i=0;&lt;br /&gt;
variable npt=10;&lt;br /&gt;
while ( i&amp;lt;npt ) {&lt;br /&gt;
  print(i);&lt;br /&gt;
  i++;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== do...while loop ==&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;do...while&amp;lt;/code&amp;gt;-loop is a loop whose body is executed at least once, since the condition is only tested at the end of each passage through the loop:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
do {&lt;br /&gt;
  code ;&lt;br /&gt;
} while (condition);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
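&lt;br /&gt;
The counting example from above in &amp;lt;code&amp;gt;do...while&amp;lt;/code&amp;gt; form; note that, unlike with the while loop, 0 would be printed even if &amp;lt;code&amp;gt;npt&amp;lt;/code&amp;gt; were 0:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable i=0;&lt;br /&gt;
variable npt=10;&lt;br /&gt;
do {&lt;br /&gt;
  print(i);&lt;br /&gt;
  i++;&lt;br /&gt;
} while ( i&amp;lt;npt );&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;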
&lt;br /&gt;
==== Functions ====&lt;br /&gt;
&lt;br /&gt;
Functions are subroutines which execute a sequence of instructions whenever they are called (i.e., whenever their name appears in a program). Functions ''can'', but do not have to, have arguments, i.e., variables that control the behavior of the routine. &lt;br /&gt;
&lt;br /&gt;
=== Intrinsic functions ===&lt;br /&gt;
&lt;br /&gt;
''Note 1:'' More information about individual functions can be obtained with isis' &amp;lt;code&amp;gt;help&amp;lt;/code&amp;gt;-function.&lt;br /&gt;
&lt;br /&gt;
''Note 2:'' Most simple functions also work on arrays.&lt;br /&gt;
&lt;br /&gt;
== Mathematical functions ==&lt;br /&gt;
&lt;br /&gt;
* sign functions: &amp;lt;code&amp;gt;abs&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;sign&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;_diff&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;_max&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;_min&amp;lt;/code&amp;gt;&lt;br /&gt;
* rounding functions: &amp;lt;code&amp;gt;ceil&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;floor&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;nint&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;round&amp;lt;/code&amp;gt;&lt;br /&gt;
* basic algebraic functions: &amp;lt;code&amp;gt;sqr&amp;lt;/code&amp;gt; (square!), &amp;lt;code&amp;gt;sqrt&amp;lt;/code&amp;gt; (square-root), &amp;lt;code&amp;gt;hypot&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;polynom&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mul2&amp;lt;/code&amp;gt;&lt;br /&gt;
* exponential and logarithm: &amp;lt;code&amp;gt;exp&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;expm1&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;log&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;log10&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;log1p&amp;lt;/code&amp;gt;&lt;br /&gt;
* trigonometric functions (argument is in radian!): &amp;lt;code&amp;gt;sin&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cos&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;tan&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;asin&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;acos&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;atan&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;atan2&amp;lt;/code&amp;gt;&lt;br /&gt;
* hyperbolic functions: &amp;lt;code&amp;gt;sinh&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cosh&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;tanh&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;asinh&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;acosh&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;atanh&amp;lt;/code&amp;gt;&lt;br /&gt;
* complex numbers: &amp;lt;code&amp;gt;Real&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Imag&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Conj&amp;lt;/code&amp;gt;&lt;br /&gt;
* tests: &amp;lt;code&amp;gt;isinf&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;isnan&amp;lt;/code&amp;gt; (nan: not a number), &amp;lt;code&amp;gt;_ispos&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;_isneg&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;_isnoneg&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Array functions ==&lt;br /&gt;
&lt;br /&gt;
* number of elements in an array: &amp;lt;code&amp;gt;length&amp;lt;/code&amp;gt;&lt;br /&gt;
* extrema: &amp;lt;code&amp;gt;max&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;min&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;maxabs&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;minabs&amp;lt;/code&amp;gt;&lt;br /&gt;
* summing array elements: &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;sumsq&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cumsum&amp;lt;/code&amp;gt;&lt;br /&gt;
* tests: &amp;lt;code&amp;gt;all&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;any&amp;lt;/code&amp;gt;&lt;br /&gt;
* get the indices for all or some elements for which a condition is met: &amp;lt;code&amp;gt;where&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;wherenot&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;wherefirst&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;wherelast&amp;lt;/code&amp;gt;&lt;br /&gt;
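&lt;br /&gt;
The &amp;lt;code&amp;gt;where&amp;lt;/code&amp;gt; function is typically combined with array indexing; a short sketch:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=[1,-3,5,-7,9];&lt;br /&gt;
variable idx=where(a&amp;gt;0); % indices of the positive elements: [0,2,4]&lt;br /&gt;
variable pos=a[idx];        % [1,5,9]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;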
&lt;br /&gt;
== Regular Expressions ==&lt;br /&gt;
&lt;br /&gt;
Regular expressions are an extremely powerful way to search, edit, and filter strings. The S-Lang function &amp;lt;code&amp;gt;string_match&amp;lt;/code&amp;gt; and its derivatives (such as &amp;lt;code&amp;gt;string_matches&amp;lt;/code&amp;gt;) take care of this. A few peculiarities of S-Lang's regular expressions:&lt;br /&gt;
&lt;br /&gt;
* The whitespace regex &amp;lt;code&amp;gt;\s&amp;lt;/code&amp;gt; is not supported and must be replaced by an actual whitespace character &amp;lt;code&amp;gt; &amp;lt;/code&amp;gt;.&lt;br /&gt;
* The word regex &amp;lt;code&amp;gt;\w&amp;lt;/code&amp;gt; is not supported and must be replaced by &amp;lt;code&amp;gt;[A-Za-z0-9_]&amp;lt;/code&amp;gt;.&lt;br /&gt;
* To extract individual sub-strings one can group the characters by &amp;lt;code&amp;gt;\(\)&amp;lt;/code&amp;gt; (note that the group parentheses are escaped). Such a group can be accessed in the string array returned by &amp;lt;code&amp;gt;string_matches&amp;lt;/code&amp;gt;; note that the zeroth entry contains the full match.&lt;br /&gt;
* Important: the regex string has to be followed by an &amp;lt;code&amp;gt;R&amp;lt;/code&amp;gt;, which marks it as a raw string literal so that backslashes do not have to be doubled.&lt;br /&gt;
&lt;br /&gt;
Here are some examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% Match only the result 17.56&lt;br /&gt;
variable matches = string_matches(&amp;quot;The result is: 17.56&amp;quot;, &amp;quot;[A-Za-z0-9_ ]+: \(\d+\.\d+\)&amp;quot;R);&lt;br /&gt;
% Match the result 17.56 but also 17&lt;br /&gt;
matches = string_matches(&amp;quot;The result is: 17.56&amp;quot;, &amp;quot;[A-Za-z0-9_ ]+: \(\d+\.?\d*\)&amp;quot;R);&lt;br /&gt;
variable result = matches[1];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% Match _exactly_ two digits, also matches if no decimal&lt;br /&gt;
string_matches(&amp;quot;dec=-02 57 75.3&amp;quot;, &amp;quot;dec=\(-?\d\{2\} \d\{2\} \d\{2\}\.?\d*\)&amp;quot;R);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% Something more complex&lt;br /&gt;
variable teststr = &amp;quot;1  |      0.00|4U 1850-03&amp;quot;;&lt;br /&gt;
variable regex  = &amp;quot;\(\d\) +| +\(\d+.\d+\)|\(.+ .+\)&amp;quot;R;&lt;br /&gt;
variable matches = string_matches(teststr, regex);&lt;br /&gt;
variable sourcename = matches[3]; % Note that matches[0] contains teststr&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Printing ==&lt;br /&gt;
&lt;br /&gt;
Output is done with the &amp;lt;code&amp;gt;print&amp;lt;/code&amp;gt; and the &amp;lt;code&amp;gt;vmessage&amp;lt;/code&amp;gt; functions. &amp;lt;code&amp;gt;vmessage&amp;lt;/code&amp;gt; uses a format similar to the C &amp;lt;code&amp;gt;printf&amp;lt;/code&amp;gt;-function to format the output. Examples include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=1.2347;&lt;br /&gt;
vmessage(&amp;quot;%f&amp;quot;,a);   % print as a floating point number (by default with 6 decimals)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=25;&lt;br /&gt;
vmessage(&amp;quot;%05d&amp;quot;,a); % print 5 digits, zero padded&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Note'': &amp;lt;code&amp;gt;vmessage&amp;lt;/code&amp;gt; works very similarly to the function &amp;lt;code&amp;gt;printf&amp;lt;/code&amp;gt;, which exists in many programming languages (and also in S-Lang).&lt;br /&gt;
&lt;br /&gt;
=== user-defined functions ===&lt;br /&gt;
&lt;br /&gt;
Your own functions can be defined with the syntax&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
define functionname (arguments) {&lt;br /&gt;
   code;&lt;br /&gt;
   :&lt;br /&gt;
   code;&lt;br /&gt;
   return value;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where the &amp;lt;code&amp;gt;return&amp;lt;/code&amp;gt; statement is optional.&lt;br /&gt;
&lt;br /&gt;
Functions are very useful to structure your program. Use them liberally! An example would be that for a given data set, you write a function to load the data and do the rebinning. Additionally, giving useful names to your functions improves the readability of your code.&lt;br /&gt;
&lt;br /&gt;
A somewhat silly example is the following, which returns the sum and difference of two numbers:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
define adddiff(a,b) {&lt;br /&gt;
   % return the sum and difference of two numbers.&lt;br /&gt;
   variable sum=a+b;&lt;br /&gt;
   variable diff=a-b;&lt;br /&gt;
   return [sum,diff];&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note the comments. It is good style to comment your code well in order to allow you and others to understand later what the code is doing. Always comment your code while writing it; do not do it only at the end because somebody told you so. Make writing comments part of your coding practice!&lt;br /&gt;
&lt;br /&gt;
'''Exercise 2'''&lt;br /&gt;
&lt;br /&gt;
Write an S-Lang function 'midnight' which returns the roots of the quadratic equation &amp;lt;math&amp;gt;ax^2+bx+c=0&amp;lt;/math&amp;gt;. The routine should work for all possible values of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;b&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;c&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
=== libraries ===&lt;br /&gt;
&lt;br /&gt;
Libraries are collections of S-Lang functions. They get loaded with the statement&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require(&amp;quot;libraryname&amp;quot;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Afterwards, all functions in &amp;quot;libraryname&amp;quot; are available. &lt;br /&gt;
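&lt;br /&gt;
For example, assuming the Remeis isisscripts are installed in your load path, they are loaded with:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require(&amp;quot;isisscripts&amp;quot;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;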
&lt;br /&gt;
==== Isis programs ====&lt;br /&gt;
&lt;br /&gt;
Isis programs consist of a sequence of function declarations and a main program, stored in a file that can be written with any editor of your choice. To execute a program you have several choices. In the following, let's assume the program's filename is &amp;lt;code&amp;gt;test.sl&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
  - you can execute the program from the Linux command line by issuing the command &amp;lt;code&amp;gt;isis test.sl&amp;lt;/code&amp;gt;;&lt;br /&gt;
  - if you want to execute the program from within isis (e.g., because you want to work on its output interactively), use &amp;lt;code&amp;gt;()=evalfile(&amp;quot;test.sl&amp;quot;);&amp;lt;/code&amp;gt;;&lt;br /&gt;
  - to run a program under isis and immediately exit isis, use the &amp;quot;shebang&amp;quot; notation. For example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/usr/bin/env isis&lt;br /&gt;
&lt;br /&gt;
% simple counting example&lt;br /&gt;
variable i;&lt;br /&gt;
variable npt=10;&lt;br /&gt;
for (i=0; i&amp;lt;npt; i++) {&lt;br /&gt;
    print (i);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
then make the code executable under Linux:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
hde226868:~/&amp;gt; chmod ugo+x ./test.sl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After this you can execute the code with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
hde226868:~/&amp;gt; ./test.sl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The name &amp;quot;shebang&amp;quot;-notation comes from the pronunciation of the hash-sign '#' as &amp;quot;she&amp;quot; and the exclamation mark as &amp;quot;bang&amp;quot;. Yes, really.&lt;br /&gt;
&lt;br /&gt;
''' Exercise 3'''&lt;br /&gt;
&lt;br /&gt;
Write an S-Lang program that loads the gratings data from Exercise 3 of [[isis:tutorial:fitting1|Advanced Fitting Techniques, 1]]. The program should have functions that&lt;br /&gt;
# load the data, ignore the appropriate energy channels, and rebin them. The function should return the indices of the PCA, HEXTE A, and HEXTE B data.&lt;br /&gt;
# set up the fit function and set the parameters to reasonable starting values&lt;br /&gt;
# call the above functions from a main program and perform the fit&lt;br /&gt;
# call a third function that makes a plot of the best fit with residuals. Use the same color for the HEXTE A and B data points. ''Hint:'' note that the argument of the plot functions is a list. The colors assigned to the data points through the &amp;lt;code&amp;gt;dcol&amp;lt;/code&amp;gt; qualifiers apply to the individual list elements. For example, if &amp;lt;code&amp;gt;dcol=[1,2]&amp;lt;/code&amp;gt; then the spectra corresponding to the 2nd list element are plotted in color number 2. The list describing the spectra is a ''list'', i.e., it can contain arrays as list elements... In other words: call the plot functions such that all color qualifiers have only two elements. &lt;br /&gt;
# call a fourth function which calculates a 2D-error contour for &amp;lt;math&amp;gt;N_H&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\Gamma&amp;lt;/math&amp;gt; (NOTE: the relevant information for this last point is not yet there and, because of carpal tunnel syndrome, will only be available on Tuesday).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Isis / Slang]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Isis:tutorial:slang&amp;diff=1896</id>
		<title>Isis:tutorial:slang</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Isis:tutorial:slang&amp;diff=1896"/>
		<updated>2019-10-17T11:53:11Z</updated>

		<summary type="html">&lt;p&gt;Koenig: /* Regular Expressions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====== Programming in S-Lang ======&lt;br /&gt;
&lt;br /&gt;
''Remark:'' This brief introduction into S-Lang is primarily a translation of the German-language introduction to S-Lang used in the Remeis astronomy lab, which was mainly written by Manfred Hanke.&lt;br /&gt;
&lt;br /&gt;
===== Introduction =====&lt;br /&gt;
&lt;br /&gt;
The underlying engine of Isis is S-Lang, an interpreted language that is similar to other modern scripting languages such as Perl or Python. All of these languages are &amp;quot;Algol-like&amp;quot;; therefore, if you know how to program in C or any other of these scripting languages, you should have no problem programming in S-Lang as well. &lt;br /&gt;
&lt;br /&gt;
The big advantage of having a scripting language as part of a data analysis package is that much &amp;quot;routine&amp;quot; work can be automated, increasing your efficiency. This includes things like loading the data set that you're working with, e.g., when you are working with many spectra from different instruments and need to do some specific ignoring and rebinning, or the calculation of errors. It also allows you to access all internal structures used in obtaining your best fit, so that you can prepare very nice figures or output your best-fit parameters in a way that is better suited to publication than the standard isis routines. Historically, many astronomers (yours truly included) did this last step in IDL. While that language is very nice, it is also very expensive, with educational licenses costing around 1000 EUR ''per year''. It thus makes a lot of sense to move away from it and use a cheaper and more integrated approach to data analysis.&lt;br /&gt;
&lt;br /&gt;
''A comment to future data analysts:'' Scripting is very good; however, do not try to script everything. Many parts of data analysis have to do with understanding your data set, and here it is often much better to play with the data by hand than to automate things. Get a &amp;quot;feel&amp;quot; for your data first before trusting the computer to do everything right...&lt;br /&gt;
&lt;br /&gt;
''A comment to the language-warriors:'' Often people will ask why S-Lang was chosen as the interface and not, e.g., Python. The reason is simple: because it was there. The important thing is that a scripting language is there at all. The main difficulty in learning how to program is not the programming syntax - if you think so, then you are not a good programmer - but rather learning to think in an algorithmic way. And this type of thinking is difficult to learn. Learning a new syntax isn't. The author of these lines (not M. Hanke ;-) ) started his life with a simple form of Amstrad Basic, followed by Omikron Basic, PASCAL, Turbo Pascal, Fortran-77 (yes, it really is spelled &amp;quot;Fortran&amp;quot;, not &amp;quot;FORTRAN&amp;quot;; the only FORTRAN in existence was FORTRAN 66 - since the 1977 standard, the language has been spelled &amp;quot;Fortran&amp;quot;...), Fortran-90, IDL, C, C++, Perl, JavaScript, and surely some more languages that I have forgotten (plus all of the assembly languages that were useful when one was still programming in assembly, i.e., 80x86, 68xxx, and so on). Historically, all of these languages have a syntax that goes back to Algol in the 1960s, and thus at their core they are all the same. For this reason, do not worry about having to learn yet another scripting language - it's just a little bit of syntax. And if you don't know how to program, start now. Because the languages are all the same, it does not matter that S-Lang might be seen as obscure by some people; once you know how to think algorithmically, switching over to another language won't cost you too much time. This also means that if you are applying for jobs and somebody claims that you must know Java or any other specific language, stay away from these jobs - knowing how to program is what makes you interesting, not the specific language...&lt;br /&gt;
&lt;br /&gt;
In contrast to compiled languages such as C, C++, or Fortran, scripting languages such as IDL, Perl, or Python have the advantage that one can also work with them interactively and thus write small &amp;quot;programs&amp;quot; directly on the command line. We use this feature all the time when doing data analysis by hand.&lt;br /&gt;
&lt;br /&gt;
In the following we assume that you have had at least some previous exposure to programming, and just give a list of the most important language structures. &lt;br /&gt;
&lt;br /&gt;
===== S-Lang Language elements =====&lt;br /&gt;
&lt;br /&gt;
S-Lang consists of the following language elements that allow you to structure your programs. Note that in S-Lang programs ''all'' statements must end with a semicolon.&lt;br /&gt;
&lt;br /&gt;
==== Variable Declarations and Assignments ====&lt;br /&gt;
&lt;br /&gt;
In S-Lang programs, variables must be declared (this is optional on the command line). This is done with the instruction&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable var_1, var_2, ... ;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You then assign values to a variable with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
var_1=value;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;code&amp;gt;value&amp;lt;/code&amp;gt; is a valid S-Lang statement. It is possible to combine the variable declaration and assignment, e.g.,&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=2;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or more complicated expressions such as&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
var_1 = sin(a)+sqrt(25.);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Variable names may consist of any combination of the standard ASCII characters &amp;lt;code&amp;gt;a-zA-Z0-9&amp;lt;/code&amp;gt; as well as the underscore &amp;lt;code&amp;gt;_&amp;lt;/code&amp;gt;  and the dollar sign &amp;lt;code&amp;gt;$&amp;lt;/code&amp;gt;. A variable is not allowed to start with a number.&lt;br /&gt;
&lt;br /&gt;
==== Data Types ====&lt;br /&gt;
&lt;br /&gt;
=== Simple Data Types ===&lt;br /&gt;
&lt;br /&gt;
S-Lang variables are generally weakly typed; that is, the type of a variable is defined by the type of whatever is assigned to it. For example&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=2;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
means that after the assignment &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; is an integer, while&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=2.;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
means that &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; is a floating point number. Strings are assigned with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=&amp;quot;abcd&amp;quot;;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
However, note that a variable can easily change its type, because the weak typing will mean that after the execution of&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=&amp;quot;abcd&amp;quot;; % String_Type&lt;br /&gt;
a=2.3;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; will have the type &amp;lt;code&amp;gt;Double_Type&amp;lt;/code&amp;gt;.  You can check this by printing the type of the variable:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
typeof(a);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Exercise 1:'''&lt;br /&gt;
&lt;br /&gt;
Assign the result of &amp;lt;code&amp;gt;typeof(a)&amp;lt;/code&amp;gt; to some other variable. What is the datatype of that other variable?&lt;br /&gt;
&lt;br /&gt;
=== An aside on integer and floating point arithmetic ===&lt;br /&gt;
&lt;br /&gt;
Note that while weak typing usually speeds up code development, it does not protect you from the pitfalls that go hand in hand with integer and floating point arithmetic. Consider the following classical example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=5;&lt;br /&gt;
variable b=10;&lt;br /&gt;
variable c=a/b;&lt;br /&gt;
print(c);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that &amp;lt;code&amp;gt;c&amp;lt;/code&amp;gt; is 0 because of the rules of integer arithmetic (everything after the &amp;quot;.&amp;quot; is cut away). The correct result is obtained when doing floating point arithmetic:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=5.;&lt;br /&gt;
variable b=10.;&lt;br /&gt;
variable c=a/b;&lt;br /&gt;
print(c);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Even worse is the following often encountered example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=100000;&lt;br /&gt;
variable b=65000;&lt;br /&gt;
variable c=a*b;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Because of the rules of integer arithmetic, the product no longer fits into a (typically 32-bit) integer; you will get an integer overflow and &amp;lt;code&amp;gt;c&amp;lt;/code&amp;gt; might even be negative.&lt;br /&gt;
&lt;br /&gt;
The rule in arithmetic expressions is that the &amp;quot;strongest&amp;quot; data type wins, i.e., in&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=10000000.0;&lt;br /&gt;
variable b=65000000;&lt;br /&gt;
variable c=a*b;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;c&amp;lt;/code&amp;gt; will have the correct value, since the multiplication is performed in double precision.&lt;br /&gt;
&lt;br /&gt;
If you need to be 100 percent sure that a calculation is done in a certain data type and you have no control over the types of the variables entering an expression (as is the case, e.g., in functions that are called by somebody else), you can force S-Lang to convert (&amp;quot;typecast&amp;quot;) a variable to a certain type:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
a=double(a);&lt;br /&gt;
b=int(b);&lt;br /&gt;
c=string(c);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Arrays and Lists ===&lt;br /&gt;
&lt;br /&gt;
You can combine the above simple data types into more complicated ones. The most important of these are&lt;br /&gt;
&lt;br /&gt;
== Arrays==&lt;br /&gt;
&lt;br /&gt;
Arrays are ordered lists of things of the same data type and are declared using brackets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable arr=[1,2,3];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The content of an array is accessed by giving the index in brackets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable c=arr[1];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that arrays are zero-based, i.e., the above returns &amp;lt;code&amp;gt;2&amp;lt;/code&amp;gt;.&lt;br /&gt;
It is possible to access more than one element at the same time by using an array as the argument of the brackets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable c=arr[[0,1]];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
which produces an array containing two elements. If you want larger parts of an array, there is a very powerful &amp;quot;slicing&amp;quot; syntax that makes use of the fact that &amp;lt;code&amp;gt;[a:b]&amp;lt;/code&amp;gt; defines the array &amp;lt;code&amp;gt;[a,a+1,a+2,..,b]&amp;lt;/code&amp;gt; (for b&amp;gt;a and a,b Integers):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable b=arr[[0:1]];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(which is a somewhat silly example...).&lt;br /&gt;
&lt;br /&gt;
Arrays can be multi-dimensional, but the definition is not as nice as in other scripting languages:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable arr=Integer_Type[2,3];&lt;br /&gt;
arr[0,[0:2]]=[1,2,3];&lt;br /&gt;
arr[1,[0:2]]=[5,4,3];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that arrays with floating point values can also be created with the very similar syntax &amp;lt;code&amp;gt;[a:b:c]&amp;lt;/code&amp;gt;, which creates an array with values &amp;lt;code&amp;gt;[a, a+c, a+2*c, ...]&amp;lt;/code&amp;gt; such that the last value does not exceed b. Even more convenient is the syntax &amp;lt;code&amp;gt;[a:b:#n]&amp;lt;/code&amp;gt;, which creates an array of exactly n equally spaced values ranging from a to b.&lt;br /&gt;
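For example (the values in the comments are standard S-Lang range semantics):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable x=[0.0:1.0:0.25]; % yields [0.0, 0.25, 0.5, 0.75, 1.0]&lt;br /&gt;
variable y=[0:10:#5];      % 5 equally spaced values: [0.0, 2.5, 5.0, 7.5, 10.0]&lt;br /&gt;
print(length(y));          % 5&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;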
&lt;br /&gt;
== Lists == &lt;br /&gt;
&lt;br /&gt;
Lists are ordered collections of things that can be of different data types. They are declared using curly brackets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable lis={1,2,3};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Accessing the list elements uses the standard bracket syntax:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=lis[1];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Lists are important whenever you want to store different things in one variable. For example, the following is legal:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable lis2={1,[&amp;quot;a&amp;quot;,&amp;quot;b&amp;quot;,&amp;quot;c&amp;quot;],3.2};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Operators ====&lt;br /&gt;
&lt;br /&gt;
Binary operators combine two expressions, &amp;lt;code&amp;gt;x&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;y&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;x&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;y&amp;lt;/code&amp;gt; are constants, variables, functions and so on. The most important operators are:&lt;br /&gt;
&lt;br /&gt;
* '''arithmetic operators''': &lt;br /&gt;
** &amp;lt;code&amp;gt;+&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;-&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;*&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt;: basic arithmetic operators, the usual priority rules apply,&lt;br /&gt;
** &amp;lt;code&amp;gt;^&amp;lt;/code&amp;gt;: exponentiation (&amp;lt;code&amp;gt;2^3&amp;lt;/code&amp;gt; is two to the power of three),&lt;br /&gt;
** &amp;lt;code&amp;gt;mod&amp;lt;/code&amp;gt;: modulo operation&lt;br /&gt;
* ''' string concatenation''' is done with the &amp;lt;code&amp;gt;+&amp;lt;/code&amp;gt; operator.&lt;br /&gt;
* ''' comparison ''': is done with &amp;lt;code&amp;gt;&amp;lt;&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;&amp;lt;=&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;==&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;&amp;gt;=&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;&amp;gt;&amp;lt;/code&amp;gt;. Note that, as in all programming languages, you should ''never'' test two floating point variables for equality; this will most often not work the way you expect...&lt;br /&gt;
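For example, instead of testing two floating point numbers for equality, compare their difference against a small tolerance (the tolerance value here is just an illustration and should be chosen to match your problem):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=0.1+0.2;&lt;br /&gt;
variable b=0.3;&lt;br /&gt;
% a==b may well be 0 here, due to floating point roundoff&lt;br /&gt;
if ( abs(a-b) &amp;lt; 1e-12 ) {&lt;br /&gt;
   message(&amp;quot;a and b are equal to within the tolerance&amp;quot;);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;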
&lt;br /&gt;
All of these operators can be used not only on scalar values but also on arrays, in which case they operate element by element. The resulting code is very fast. For example, to add two arrays:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=[1,2,3];&lt;br /&gt;
variable b=[6,5,3];&lt;br /&gt;
variable c=a+b;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As an aside, one often wants to add/subtract something from a variable. S-Lang allows the following C-like shortcuts:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
a+=5;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
is equivalent to&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
a=a+5;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and similarly &amp;lt;code&amp;gt;-=&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;*=&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;/=&amp;lt;/code&amp;gt; (I don't think I've ever used the last one, though...).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Program flow control ====&lt;br /&gt;
&lt;br /&gt;
=== Conditional execution ===&lt;br /&gt;
&lt;br /&gt;
Conditional execution is done with the &amp;lt;code&amp;gt;if&amp;lt;/code&amp;gt;-statement:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
if ( condition ) {&lt;br /&gt;
   true-code;&lt;br /&gt;
} else {&lt;br /&gt;
   false-code;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
For example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=+1;&lt;br /&gt;
variable b;&lt;br /&gt;
if ( a&amp;lt;0 ) {&lt;br /&gt;
  b=-1;&lt;br /&gt;
} else {&lt;br /&gt;
  b=+1;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that the &amp;lt;code&amp;gt;else&amp;lt;/code&amp;gt;-branch is optional.&lt;br /&gt;
&lt;br /&gt;
=== Loops ===&lt;br /&gt;
&lt;br /&gt;
== for-loop ==&lt;br /&gt;
&lt;br /&gt;
The syntax of the for loop is&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
for( initialize ; condition ; increment ) {&lt;br /&gt;
   code ;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where usually in &amp;lt;code&amp;gt;initialize&amp;lt;/code&amp;gt; a loop control variable is, well, initialized, and then incremented as long as condition is valid. An example would be &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable i;&lt;br /&gt;
variable npt=10;&lt;br /&gt;
for (i=0; i&amp;lt;npt; i++) {&lt;br /&gt;
    print (i);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
which counts from 0 to 9 (a count down is also possible, use &amp;lt;code&amp;gt;i-&amp;lt;/code&amp;gt;&amp;lt;code&amp;gt;-&amp;lt;/code&amp;gt;). Obviously, more than one line of code is possible...&lt;br /&gt;
&lt;br /&gt;
''Note:'' even though syntactically possible, never use anything other than an integer variable as the loop counter, unless explicitly necessary. &lt;br /&gt;
&lt;br /&gt;
== while loop ==&lt;br /&gt;
&lt;br /&gt;
The body of a while loop is executed as long as a condition is met:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
while ( condition ) {&lt;br /&gt;
  code ;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that if &amp;lt;code&amp;gt;condition&amp;lt;/code&amp;gt; is not met when the while loop is first reached, &amp;lt;code&amp;gt;code&amp;lt;/code&amp;gt; is not executed at all. &lt;br /&gt;
&lt;br /&gt;
The above counting example can be implemented as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable i=0;&lt;br /&gt;
variable npt=10;&lt;br /&gt;
while ( i&amp;lt;npt ) {&lt;br /&gt;
  print(i);&lt;br /&gt;
  i++;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== do...while loop ==&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;do...while&amp;lt;/code&amp;gt;-loop is a loop whose body is executed at least once, since the condition is only tested at the end of each pass through the loop:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
do {&lt;br /&gt;
  code ;&lt;br /&gt;
} while (condition);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Functions ====&lt;br /&gt;
&lt;br /&gt;
Functions are subroutines which execute a sequence of instructions whenever they are called (i.e., whenever their name appears in a program). Functions ''can'', but do not have to, have arguments, i.e., variables that control the behavior of the routine. &lt;br /&gt;
&lt;br /&gt;
=== Intrinsic functions ===&lt;br /&gt;
&lt;br /&gt;
''Note 1:'' More information about individual functions can be obtained with isis' &amp;lt;code&amp;gt;help&amp;lt;/code&amp;gt;-function.&lt;br /&gt;
&lt;br /&gt;
''Note 2:'' Most simple functions also work on arrays.&lt;br /&gt;
&lt;br /&gt;
== Mathematical functions ==&lt;br /&gt;
&lt;br /&gt;
* sign functions: &amp;lt;code&amp;gt;abs&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;sign&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;_diff&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;_max&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;_min&amp;lt;/code&amp;gt;&lt;br /&gt;
* rounding functions: &amp;lt;code&amp;gt;ceil&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;floor&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;nint&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;round&amp;lt;/code&amp;gt;&lt;br /&gt;
* basic algebraic functions: &amp;lt;code&amp;gt;sqr&amp;lt;/code&amp;gt; (square!), &amp;lt;code&amp;gt;sqrt&amp;lt;/code&amp;gt; (square-root), &amp;lt;code&amp;gt;hypot&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;polynom&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mul2&amp;lt;/code&amp;gt;&lt;br /&gt;
* exponential and logarithm: &amp;lt;code&amp;gt;exp&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;expm1&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;log&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;log10&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;log1p&amp;lt;/code&amp;gt;&lt;br /&gt;
* trigonometric functions (arguments are in radians!): &amp;lt;code&amp;gt;sin&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cos&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;tan&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;asin&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;acos&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;atan&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;atan2&amp;lt;/code&amp;gt;&lt;br /&gt;
* hyperbolic functions: &amp;lt;code&amp;gt;sinh&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cosh&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;tanh&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;asinh&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;acosh&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;atanh&amp;lt;/code&amp;gt;&lt;br /&gt;
* complex numbers: &amp;lt;code&amp;gt;Real&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Imag&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Conj&amp;lt;/code&amp;gt;&lt;br /&gt;
* tests: &amp;lt;code&amp;gt;isinf&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;isnan&amp;lt;/code&amp;gt; (nan: not a number), &amp;lt;code&amp;gt;_ispos&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;_isneg&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;_isnoneg&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Array functions ==&lt;br /&gt;
&lt;br /&gt;
* number of elements in an array: &amp;lt;code&amp;gt;length&amp;lt;/code&amp;gt;&lt;br /&gt;
* extrema: &amp;lt;code&amp;gt;max&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;min&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;maxabs&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;minabs&amp;lt;/code&amp;gt;&lt;br /&gt;
* summing array elements: &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;sumsq&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cumsum&amp;lt;/code&amp;gt;&lt;br /&gt;
* tests: &amp;lt;code&amp;gt;all&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;any&amp;lt;/code&amp;gt;&lt;br /&gt;
* get the indices for all or some elements for which a condition is met: &amp;lt;code&amp;gt;where&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;wherenot&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;wherefirst&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;wherelast&amp;lt;/code&amp;gt;&lt;br /&gt;
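For example, &amp;lt;code&amp;gt;where&amp;lt;/code&amp;gt; returns the indices of all elements that fulfill a condition; these indices can then be used to index the array:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable arr=[5,-3,8,-1,4];&lt;br /&gt;
variable idx=where(arr&amp;gt;0); % indices of the positive elements: [0, 2, 4]&lt;br /&gt;
print(arr[idx]);             % prints 5, 8, and 4&lt;br /&gt;
print(length(idx));          % 3&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;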
&lt;br /&gt;
== Regular Expressions ==&lt;br /&gt;
&lt;br /&gt;
Regular expressions are extremely powerful for formatting, editing, and filtering strings. The S-Lang function &amp;lt;code&amp;gt;string_match&amp;lt;/code&amp;gt; and its derivatives take care of this.&lt;br /&gt;
&lt;br /&gt;
* The whitespace regex &amp;lt;code&amp;gt;\s&amp;lt;/code&amp;gt; must be replaced by an actual whitespace &amp;lt;code&amp;gt; &amp;lt;/code&amp;gt;&lt;br /&gt;
* The word regex &amp;lt;code&amp;gt;\w&amp;lt;/code&amp;gt; must be replaced by &amp;lt;code&amp;gt;[A-Za-z0-9_]&amp;lt;/code&amp;gt;&lt;br /&gt;
* To extract individual sub-strings one can group the characters by &amp;lt;code&amp;gt;\(\)&amp;lt;/code&amp;gt; (Note that the group parenthesis is escaped). This group can be accessed in the returned String array of &amp;lt;code&amp;gt;string_matches&amp;lt;/code&amp;gt;. Note that the zeroth entry contains the full string.&lt;br /&gt;
* Important: the regex string has to be followed by an &amp;lt;code&amp;gt;R&amp;lt;/code&amp;gt;, which marks it as a raw string so that backslashes do not have to be escaped&lt;br /&gt;
&lt;br /&gt;
Here are some examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% Match only the result 17.56&lt;br /&gt;
variable matches = string_matches(&amp;quot;The result is: 17.56&amp;quot;, &amp;quot;[A-Za-z0-9_ ]+: \(\d+\.\d+\)&amp;quot;R);&lt;br /&gt;
% Match the result 17.56 but also 17&lt;br /&gt;
matches = string_matches(&amp;quot;The result is: 17.56&amp;quot;, &amp;quot;[A-Za-z0-9_ ]+: \(\d+\.?\d*\)&amp;quot;R);&lt;br /&gt;
variable result = matches[1];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% Match _exactly_ two digits, also matches if no decimal&lt;br /&gt;
string_matches(&amp;quot;dec=-02 57 75.3&amp;quot;, &amp;quot;dec=\(-?\d\{2\} \d\{2\} \d\{2\}\.?\d*\)&amp;quot;R)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
% Something more complex&lt;br /&gt;
variable teststr = &amp;quot;1  |      0.00|4U 1850-03&amp;quot;;&lt;br /&gt;
variable regex  = &amp;quot;\(\d\)[ ]+|[ ]+\(\d+.\d+\)|\(.+ .+\)&amp;quot;R;&lt;br /&gt;
variable matches = string_matches(teststr, regex);&lt;br /&gt;
variable sourcename = matches[3]; % Note that matches[0] contains teststr&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Printing ==&lt;br /&gt;
&lt;br /&gt;
Output is done with the &amp;lt;code&amp;gt;print&amp;lt;/code&amp;gt; and the &amp;lt;code&amp;gt;vmessage&amp;lt;/code&amp;gt; functions. &amp;lt;code&amp;gt;vmessage&amp;lt;/code&amp;gt; uses a format similar to the C &amp;lt;code&amp;gt;printf&amp;lt;/code&amp;gt;-function to format the output. Examples include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=1.2347;&lt;br /&gt;
vmessage(&amp;quot;%f&amp;quot;,a);   % print as a floating point number (default precision: 6 digits)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=25;&lt;br /&gt;
vmessage(&amp;quot;%05d&amp;quot;,a); % print 5 digits, zero padded&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Note'': &amp;lt;code&amp;gt;vmessage&amp;lt;/code&amp;gt; works very similarly to the function &amp;lt;code&amp;gt;printf&amp;lt;/code&amp;gt;, which exists in many programming languages (and also in S-Lang).&lt;br /&gt;
&lt;br /&gt;
=== user-defined functions ===&lt;br /&gt;
&lt;br /&gt;
Your own functions can be defined with the syntax&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
define functionname (arguments) {&lt;br /&gt;
   code;&lt;br /&gt;
   :&lt;br /&gt;
   code;&lt;br /&gt;
   return value;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where the &amp;lt;code&amp;gt;return&amp;lt;/code&amp;gt; statement is optional.&lt;br /&gt;
&lt;br /&gt;
Functions are very useful to structure your program. Use them liberally! An example would be that for a given data set, you write a function to load the data and do the rebinning. Additionally, giving useful names to your functions improves the readability of your code.&lt;br /&gt;
&lt;br /&gt;
A sillier example is the following, which returns the sum and difference of two numbers:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
define adddiff(a,b) {&lt;br /&gt;
   % return the sum and difference of two numbers.&lt;br /&gt;
   variable sum=a+b;&lt;br /&gt;
   variable diff=a-b;&lt;br /&gt;
   return [sum,diff];&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note the comments. It is good style to comment your code well, so that you and others can later understand what the code is doing. Always comment your code while writing it; do not add comments only at the end because somebody told you so - make writing comments part of your coding practice!&lt;br /&gt;
&lt;br /&gt;
'''Exercise 2'''&lt;br /&gt;
&lt;br /&gt;
Write an S-Lang function 'midnight' which returns the roots of the quadratic equation &amp;lt;math&amp;gt;ax^2+bx+c=0&amp;lt;/math&amp;gt;. The routine should work for all possible values of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;b&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;c&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
=== libraries ===&lt;br /&gt;
&lt;br /&gt;
Libraries are collections of S-Lang functions. They get loaded with the statement&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require(&amp;quot;libraryname&amp;quot;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Afterwards, all functions in &amp;quot;libraryname&amp;quot; are available. &lt;br /&gt;
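For example, to make the functions of the Remeis isisscripts available (assuming the package is installed and on your load path):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require(&amp;quot;isisscripts&amp;quot;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;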
&lt;br /&gt;
==== Isis programs ====&lt;br /&gt;
&lt;br /&gt;
Isis programs consist of a sequence of function declarations and a main program, stored in a file that can be written with any editor of your choice. To execute a program you have several choices. In the following, let's assume the program's filename is &amp;lt;code&amp;gt;test.sl&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
  - you can execute the program from the Linux command line, by issuing the command &amp;lt;code&amp;gt;isis test.sl&amp;lt;/code&amp;gt;;&lt;br /&gt;
  - if you want to execute the program from within isis (e.g., because you want to work on its output interactively), use &amp;lt;code&amp;gt;()=evalfile(&amp;quot;test.sl&amp;quot;);&amp;lt;/code&amp;gt;.&lt;br /&gt;
  - to run a program under isis and immediately exit isis, use the &amp;quot;shebang&amp;quot; notation. For example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/usr/bin/env isis&lt;br /&gt;
&lt;br /&gt;
% stupid count down example&lt;br /&gt;
variable i;&lt;br /&gt;
variable npt=10;&lt;br /&gt;
for (i=0; i&amp;lt;npt; i++) {&lt;br /&gt;
    print (i);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
then make the code executable under Linux:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
hde226868:~/&amp;gt; chmod ugo+x ./test.sl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After this you can execute the code with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
hde226868:~/&amp;gt; ./test.sl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The name &amp;quot;shebang&amp;quot;-notation comes from the pronunciation of the hash sign '#' as &amp;quot;she&amp;quot; and the exclamation mark as &amp;quot;bang&amp;quot;. Yes, really.&lt;br /&gt;
&lt;br /&gt;
''' Exercise 3'''&lt;br /&gt;
&lt;br /&gt;
Write an S-Lang program that loads the gratings data from Exercise 3 of [[isis:tutorial:fitting1|Advanced Fitting Techniques, 1]]. The program should have functions that&lt;br /&gt;
# load the data, ignore the appropriate energy channels, and rebin them. The function should return the indices of the PCA, HEXTE A, and HEXTE B data.&lt;br /&gt;
# set up the fit function and set the parameters to reasonable starting values&lt;br /&gt;
# call the above functions from a main program and perform the fit&lt;br /&gt;
# call a third function that makes a plot of the best fit with residuals. Use the same color for the HEXTE A and B data points. ''Hint:'' note that the argument of the plot functions is a list. The colors assigned to the data points through the &amp;lt;code&amp;gt;dcol&amp;lt;/code&amp;gt; qualifiers apply to the individual list elements. For example, if &amp;lt;code&amp;gt;dcol=[1,2]&amp;lt;/code&amp;gt; then the spectra corresponding to the 2nd list element are plotted in color number 2. The list describing the spectra is a ''list'', i.e., it can contain arrays as list elements... In other words: call the plot functions such that all color qualifiers have only two elements. &lt;br /&gt;
# call a fourth function which calculates a 2D-error contour for &amp;lt;math&amp;gt;N_H&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\Gamma&amp;lt;/math&amp;gt; (NOTE: the relevant information for this last point is not yet there and, because of carpal tunnel syndrome, will only be available on Tuesday).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Isis / Slang]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Isis:tutorial:slang&amp;diff=1895</id>
		<title>Isis:tutorial:slang</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=Isis:tutorial:slang&amp;diff=1895"/>
		<updated>2019-10-17T11:35:29Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;====== Programming in S-Lang ======&lt;br /&gt;
&lt;br /&gt;
''Remark:'' This brief introduction into S-Lang is primarily a translation of the German-language introduction to S-Lang used in the Remeis astronomy lab, which was mainly written by Manfred Hanke.&lt;br /&gt;
&lt;br /&gt;
===== Introduction =====&lt;br /&gt;
&lt;br /&gt;
The underlying engine of Isis is S-Lang, an interpreted language that is similar to other modern scripting languages such as Perl or Python. All of these languages are &amp;quot;Algol-like&amp;quot;; therefore, if you know how to program in C or in any of these scripting languages, you should have no problem programming in S-Lang as well. &lt;br /&gt;
&lt;br /&gt;
The big advantage of having a scripting language as part of a data analysis package is that much &amp;quot;routine&amp;quot; work can be automated, increasing your efficiency. This includes things like loading the data set that you are working with, e.g., in case you are working with many spectra from different instruments and need to do some specific ignoring and rebinning, or the calculation of errors. It also allows you to access all internal structures used in obtaining your best fit, so that you can prepare very nice figures or output your best-fit parameters in a way that is better suited to publication than the standard isis routines. Historically, many astronomers (yours truly included) did this last step in IDL. While that language is very nice, it is also very expensive, with educational licenses costing around 1000 EUR ''per year''. It thus makes a lot of sense to move away from it and use a cheaper and more integrated approach to data analysis.&lt;br /&gt;
&lt;br /&gt;
''A comment to future data analysts:'' Scripting is very good; however, do not try to script everything. Many parts of data analysis have to do with understanding your data set, and here it is often much better to play with it by hand than to automate things. Get a &amp;quot;feel&amp;quot; for your data first before trusting the computer to do everything right...&lt;br /&gt;
&lt;br /&gt;
''A comment to the language warriors:'' Often people will ask why S-Lang was chosen as the interface and not, e.g., python. The reason is simple: because it was there. The important thing is that a scripting language is there at all. The main difficulty in learning how to program is not the programming syntax - if you think so, then you are not a good programmer - but rather learning to think in an algorithmic way. And this type of thinking is difficult to learn. Learning a new syntax isn't. The author of these lines (not M. Hanke ;-) ) started his life with a simple form of Amstrad Basic, followed by Omikron Basic, PASCAL, Turbo Pascal, Fortran-77 (yes, it really is spelled &amp;quot;Fortran&amp;quot;, not &amp;quot;FORTRAN&amp;quot;. The only FORTRAN in existence was FORTRAN 66; since the 1977 standard, the language has been spelled &amp;quot;Fortran&amp;quot;...), Fortran-90, IDL, C, C++, Perl, javascript, and I am sure some more languages that I have forgotten (plus all of the assembly languages that were useful when one was still programming in assembly, i.e., 80x86, 68xxx, and so on). Historically, all of these languages have a syntax that goes back to Algol in the 1960s, and thus at the core they are all the same. For this reason, do not worry about having to learn yet another scripting language; it's just a little bit of syntax. And if you don't know how to program, start now. Because the languages are all the same, it does not matter that S-Lang might be seen as obscure by some people; once you know how to think algorithmically, switching over to another language won't cost you too much time. This also means that if you are applying for jobs and somebody claims that you must know java or any other specific language, stay away from these jobs - knowing how to program is what makes you interesting, not the specific language...&lt;br /&gt;
&lt;br /&gt;
In contrast to compiled languages such as C, C++, or Fortran, scripting languages such as IDL, Perl, or Python have the advantage that one can also work with them interactively and thus write small &amp;quot;programs&amp;quot; directly on the command line. We use this feature all the time when doing data analysis by hand.&lt;br /&gt;
&lt;br /&gt;
In the following we assume that you have had at least some previous exposure to programming, and just give a list of the most important language structures. &lt;br /&gt;
&lt;br /&gt;
===== S-Lang Language elements =====&lt;br /&gt;
&lt;br /&gt;
S-Lang consists of the following language elements that allow you to structure your programs. Note that in S-Lang programs ''all'' statements must end with a semicolon.&lt;br /&gt;
&lt;br /&gt;
==== Variable Declarations and Assignments ====&lt;br /&gt;
&lt;br /&gt;
In S-Lang programs, variables must be declared (this is optional on the command line). This is done with the instruction&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable var_1, var_2, ... ;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
You then assign a value to a variable with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
var_1=value;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where &amp;lt;code&amp;gt;value&amp;lt;/code&amp;gt; is a valid S-Lang expression. It is possible to combine the variable declaration and assignment, e.g.,&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=2;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or more complicated expressions such as&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
var_1 = sin(a)+sqrt(25.);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Variable names may consist of any combination of the standard ASCII characters &amp;lt;code&amp;gt;a-zA-Z0-9&amp;lt;/code&amp;gt; as well as the underscore &amp;lt;code&amp;gt;_&amp;lt;/code&amp;gt; and the dollar sign &amp;lt;code&amp;gt;$&amp;lt;/code&amp;gt;. A variable name is not allowed to start with a number.&lt;br /&gt;
&lt;br /&gt;
==== Data Types ====&lt;br /&gt;
&lt;br /&gt;
=== Simple Data Types ===&lt;br /&gt;
&lt;br /&gt;
S-Lang variables are generally weakly typed; that is, the type of a variable is defined by the type of whatever is assigned to it. For example,&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=2;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
means that after the assignment &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; is an integer. While&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=2.;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
means that &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; is a floating point number. Strings are assigned with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=&amp;quot;abcd&amp;quot;;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
However, note that a variable can easily change its type: because of the weak typing, after the execution of&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=&amp;quot;abcd&amp;quot;; % String_Type&lt;br /&gt;
a=2.3;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt; will have the type &amp;lt;code&amp;gt;Double_Type&amp;lt;/code&amp;gt;.  You can check this by printing the type of the variable:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
typeof(a);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
'''Exercise 1:'''&lt;br /&gt;
&lt;br /&gt;
Assign the result of &amp;lt;code&amp;gt;typeof(a)&amp;lt;/code&amp;gt; to some other variable. What is the datatype of that other variable?&lt;br /&gt;
&lt;br /&gt;
=== An aside on integer and floating point arithmetic ===&lt;br /&gt;
&lt;br /&gt;
Note that while weak typing usually speeds up code development, it does not protect you from the pitfalls that go hand in hand with integer and floating point arithmetic. Consider the following classical example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=5;&lt;br /&gt;
variable b=10;&lt;br /&gt;
variable c=a/b;&lt;br /&gt;
print(c);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that &amp;lt;code&amp;gt;c&amp;lt;/code&amp;gt; is 0 because of the rules of integer arithmetic (everything after the decimal point is cut away). The correct result is obtained when doing floating point arithmetic:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=5.;&lt;br /&gt;
variable b=10.;&lt;br /&gt;
variable c=a/b;&lt;br /&gt;
print(c);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Even worse is the following often-encountered example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=100000;&lt;br /&gt;
variable b=65000;&lt;br /&gt;
variable c=a*b;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Because the multiplication is again performed in integer arithmetic, on systems with 32-bit integers this results in an integer overflow, and &amp;lt;code&amp;gt;c&amp;lt;/code&amp;gt; might even be negative.&lt;br /&gt;
&lt;br /&gt;
The rule in arithmetic expressions is that the &amp;quot;strongest&amp;quot; data type wins, i.e., in&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=10000000.0;&lt;br /&gt;
variable b=65000000;&lt;br /&gt;
variable c=a*b;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;code&amp;gt;c&amp;lt;/code&amp;gt; will have the correct value, since the multiplication is performed in double precision.&lt;br /&gt;
&lt;br /&gt;
If you need to be 100 percent sure that a calculation is done in a certain data type and you have no control over whether the variables entering an expression have that type (as is, e.g., the case in functions that are called by somebody else), you can force S-Lang to convert (&amp;quot;typecast&amp;quot;) a variable to a certain type:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
a=double(a);&lt;br /&gt;
b=int(b);&lt;br /&gt;
c=string(c);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Arrays and Lists ===&lt;br /&gt;
&lt;br /&gt;
You can combine the above simple data types into more complicated ones. The most important of these are&lt;br /&gt;
&lt;br /&gt;
== Arrays==&lt;br /&gt;
&lt;br /&gt;
Arrays are ordered lists of things of the same data type and are declared using brackets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable arr=[1,2,3];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Content of arrays is accessed by giving the index in brackets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable c=arr[1];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that arrays are zero-based, i.e., the above returns &amp;lt;code&amp;gt;2&amp;lt;/code&amp;gt;.&lt;br /&gt;
It is possible to access more than one element at the same time by using an array as the argument of the brackets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable c=arr[[0,1]];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
which produces an array containing two elements. If you want larger parts of an array, there is a very powerful &amp;quot;slicing&amp;quot; syntax that makes use of the fact that &amp;lt;code&amp;gt;[a:b]&amp;lt;/code&amp;gt; defines the array &amp;lt;code&amp;gt;[a,a+1,a+2,...,b]&amp;lt;/code&amp;gt; (for integers a, b with b&amp;gt;a):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable b=arr[[0:1]];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
(which is a somewhat silly example...).&lt;br /&gt;
&lt;br /&gt;
Arrays can be multi-dimensional, but the definition is not as nice as in other scripting languages:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable arr=Integer_Type[2,3];&lt;br /&gt;
arr[0,[0:2]]=[1,2,3];&lt;br /&gt;
arr[1,[0:2]]=[5,4,3];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Note that arrays with floating point values can be created by the very similar syntax &amp;lt;code&amp;gt;[a:b:c]&amp;lt;/code&amp;gt;, which creates an array with values &amp;lt;code&amp;gt;[a, a+c, a+2*c, ...]&amp;lt;/code&amp;gt; such that the last value does not exceed b. Even more convenient is the syntax &amp;lt;code&amp;gt;[a:b:#n]&amp;lt;/code&amp;gt;, which creates an array of exactly n equally spaced values ranging from a to b.&lt;br /&gt;
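&lt;br /&gt;
For example (a small sketch; the commented values follow directly from the rules just described):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable x=[0:1:0.25]; % 0, 0.25, 0.5, 0.75, 1.0&lt;br /&gt;
variable y=[0:1:#5];   % five equally spaced values from 0 to 1&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;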
&lt;br /&gt;
== Lists == &lt;br /&gt;
&lt;br /&gt;
Lists are ordered collections of things that can be of different data types. They are declared using curly brackets:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable lis={1,2,3};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Accessing the list elements uses the standard bracket syntax:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=lis[1];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Lists are important whenever you want to store different things in one variable. For example, the following is legal:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable lis2={1,[&amp;quot;a&amp;quot;,&amp;quot;b&amp;quot;,&amp;quot;c&amp;quot;],3.2};&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Operators ====&lt;br /&gt;
&lt;br /&gt;
Binary operators combine two expressions, &amp;lt;code&amp;gt;x&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;y&amp;lt;/code&amp;gt;, where &amp;lt;code&amp;gt;x&amp;lt;/code&amp;gt; and &amp;lt;code&amp;gt;y&amp;lt;/code&amp;gt; are constants, variables, functions and so on. The most important operators are:&lt;br /&gt;
&lt;br /&gt;
* '''arithmetic operators''': &lt;br /&gt;
** &amp;lt;code&amp;gt;+&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;-&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;*&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;/&amp;lt;/code&amp;gt;: basic arithmetic operators, the usual priority rules apply,&lt;br /&gt;
** &amp;lt;code&amp;gt;^&amp;lt;/code&amp;gt;: exponentiation (&amp;lt;code&amp;gt;2^3&amp;lt;/code&amp;gt; is two to the power of three),&lt;br /&gt;
** &amp;lt;code&amp;gt;mod&amp;lt;/code&amp;gt;: modulo operation&lt;br /&gt;
* ''' string concatenation''' is done with the &amp;lt;code&amp;gt;+&amp;lt;/code&amp;gt; operator.&lt;br /&gt;
* ''' comparison ''': is done with &amp;lt;code&amp;gt;&amp;lt;&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;&amp;lt;=&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;==&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;!=&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;&amp;gt;=&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;&amp;gt;&amp;lt;/code&amp;gt;. Note that as in all programming languages, you should ''never'' test two floating point numbers for equality; this will most often not work in the way you expect...&lt;br /&gt;
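&lt;br /&gt;
For example, instead of testing two floating point numbers for equality, compare their difference to a small tolerance (the tolerance value here is just an illustration; choose it to match the precision of your problem):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable x=0.1+0.2;&lt;br /&gt;
if ( abs(x-0.3) &amp;lt; 1e-10 ) {&lt;br /&gt;
   message(&amp;quot;equal to within the tolerance&amp;quot;);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;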
&lt;br /&gt;
All of these operators can be used not only on scalar values but also on arrays, where they operate element by element. The resulting code is very fast. For example, to add two arrays:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=[1,2,3];&lt;br /&gt;
variable b=[6,5,3];&lt;br /&gt;
variable c=a+b;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As an aside, one often wants to add/subtract something from a variable. S-Lang allows the following C-like shortcuts:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
a+=5;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
is equivalent to&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
a=a+5;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
and similarly &amp;lt;code&amp;gt;-=&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;*=&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;/=&amp;lt;/code&amp;gt; (I don't think I've ever used the last one, though...).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Program flow control ====&lt;br /&gt;
&lt;br /&gt;
=== Conditional execution ===&lt;br /&gt;
&lt;br /&gt;
Conditional execution is done with the &amp;lt;code&amp;gt;if&amp;lt;/code&amp;gt;-statement:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
if ( condition ) {&lt;br /&gt;
   true-code;&lt;br /&gt;
} else {&lt;br /&gt;
   false-code;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
For example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=+1;&lt;br /&gt;
variable b;&lt;br /&gt;
if ( a&amp;lt;0 ) {&lt;br /&gt;
  b=-1;&lt;br /&gt;
} else {&lt;br /&gt;
  b=+1;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that the &amp;lt;code&amp;gt;else&amp;lt;/code&amp;gt;-branch is optional.&lt;br /&gt;
&lt;br /&gt;
=== Loops ===&lt;br /&gt;
&lt;br /&gt;
== for-loop ==&lt;br /&gt;
&lt;br /&gt;
The syntax of the for loop is&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
for( initialize ; condition ; increment ) {&lt;br /&gt;
   code ;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where usually in &amp;lt;code&amp;gt;initialize&amp;lt;/code&amp;gt; a loop control variable is, well, initialized; the loop body is then executed and the variable incremented for as long as &amp;lt;code&amp;gt;condition&amp;lt;/code&amp;gt; holds. An example would be &lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable i;&lt;br /&gt;
variable npt=10;&lt;br /&gt;
for (i=0; i&amp;lt;npt; i++) {&lt;br /&gt;
    print (i);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
which counts from 0 to 9 (a count down is also possible, use &amp;lt;code&amp;gt;i-&amp;lt;/code&amp;gt;&amp;lt;code&amp;gt;-&amp;lt;/code&amp;gt;). Obviously, more than one line of code is possible...&lt;br /&gt;
&lt;br /&gt;
''Note:'' even though it is syntactically possible, never use anything other than an integer variable as the loop counter, unless explicitly necessary. &lt;br /&gt;
&lt;br /&gt;
== while loop ==&lt;br /&gt;
&lt;br /&gt;
The while loop executes its body as long as a condition is met:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
while ( condition ) {&lt;br /&gt;
  code ;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note that if &amp;lt;code&amp;gt;condition&amp;lt;/code&amp;gt; is not met when the while loop is first reached, &amp;lt;code&amp;gt;code&amp;lt;/code&amp;gt; is not executed at all. &lt;br /&gt;
&lt;br /&gt;
The above counting example can be implemented as follows:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable i=0;&lt;br /&gt;
variable npt=10;&lt;br /&gt;
while ( i&amp;lt;npt ) {&lt;br /&gt;
  print(i);&lt;br /&gt;
  i++;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== do...while loop ==&lt;br /&gt;
&lt;br /&gt;
The &amp;lt;code&amp;gt;do...while&amp;lt;/code&amp;gt;-loop is a loop whose body is executed at least once, since the condition is only tested at the end of each pass through the loop:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
do {&lt;br /&gt;
  code ;&lt;br /&gt;
} while (condition);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Functions ====&lt;br /&gt;
&lt;br /&gt;
Functions are subroutines which execute a sequence of instructions whenever they are called (i.e., whenever their name appears in a program). Functions ''can'', but do not have to, have arguments, i.e., variables that control the behavior of the routine. &lt;br /&gt;
&lt;br /&gt;
=== Intrinsic functions ===&lt;br /&gt;
&lt;br /&gt;
''Note 1:'' More information about individual functions can be obtained with isis' &amp;lt;code&amp;gt;help&amp;lt;/code&amp;gt;-function.&lt;br /&gt;
&lt;br /&gt;
''Note 2:'' Most simple functions also work on arrays.&lt;br /&gt;
&lt;br /&gt;
== Mathematical functions ==&lt;br /&gt;
&lt;br /&gt;
* sign functions: &amp;lt;code&amp;gt;abs&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;sign&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;_diff&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;_max&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;_min&amp;lt;/code&amp;gt;&lt;br /&gt;
* rounding functions: &amp;lt;code&amp;gt;ceil&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;floor&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;nint&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;round&amp;lt;/code&amp;gt;&lt;br /&gt;
* basic algebraic functions: &amp;lt;code&amp;gt;sqr&amp;lt;/code&amp;gt; (square!), &amp;lt;code&amp;gt;sqrt&amp;lt;/code&amp;gt; (square-root), &amp;lt;code&amp;gt;hypot&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;polynom&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;mul2&amp;lt;/code&amp;gt;&lt;br /&gt;
* exponential and logarithm: &amp;lt;code&amp;gt;exp&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;expm1&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;log&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;log10&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;log1p&amp;lt;/code&amp;gt;&lt;br /&gt;
* trigonometric functions (argument is in radian!): &amp;lt;code&amp;gt;sin&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cos&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;tan&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;asin&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;acos&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;atan&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;atan2&amp;lt;/code&amp;gt;&lt;br /&gt;
* hyperbolic functions: &amp;lt;code&amp;gt;sinh&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cosh&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;tanh&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;asinh&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;acosh&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;atanh&amp;lt;/code&amp;gt;&lt;br /&gt;
* complex numbers: &amp;lt;code&amp;gt;Real&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Imag&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;Conj&amp;lt;/code&amp;gt;&lt;br /&gt;
* tests: &amp;lt;code&amp;gt;isinf&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;isnan&amp;lt;/code&amp;gt; (nan: not a number), &amp;lt;code&amp;gt;_ispos&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;_isneg&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;_isnoneg&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Array functions ==&lt;br /&gt;
&lt;br /&gt;
* number of elements in an array: &amp;lt;code&amp;gt;length&amp;lt;/code&amp;gt;&lt;br /&gt;
* extrema: &amp;lt;code&amp;gt;max&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;min&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;maxabs&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;minabs&amp;lt;/code&amp;gt;&lt;br /&gt;
* summing array elements: &amp;lt;code&amp;gt;sum&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;sumsq&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;cumsum&amp;lt;/code&amp;gt;&lt;br /&gt;
* tests: &amp;lt;code&amp;gt;all&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;any&amp;lt;/code&amp;gt;&lt;br /&gt;
* get the indices for all or some elements for which a condition is met: &amp;lt;code&amp;gt;where&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;wherenot&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;wherefirst&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;wherelast&amp;lt;/code&amp;gt;&lt;br /&gt;
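&lt;br /&gt;
As an example of the last group, &amp;lt;code&amp;gt;where&amp;lt;/code&amp;gt; returns the indices of all elements fulfilling a condition; these indices can then be used to index the array (a small sketch):&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable x=[1,-2,3,-4];&lt;br /&gt;
variable idx=where(x&amp;gt;0); % idx is [0,2]&lt;br /&gt;
variable pos=x[idx];      % pos is [1,3]&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;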
&lt;br /&gt;
== Regular Expressions ==&lt;br /&gt;
&lt;br /&gt;
Regular expressions are extremely powerful for formatting, editing, and filtering strings. The S-Lang function &amp;lt;code&amp;gt;string_match&amp;lt;/code&amp;gt; and its derivatives take care of this.&lt;br /&gt;
&lt;br /&gt;
* The whitespace regex &amp;lt;code&amp;gt;\s&amp;lt;/code&amp;gt; must be replaced by &amp;lt;code&amp;gt;[ ]&amp;lt;/code&amp;gt;&lt;br /&gt;
* The word regex &amp;lt;code&amp;gt;\w&amp;lt;/code&amp;gt; must be replaced by &amp;lt;code&amp;gt;[A-Za-z0-9_]&amp;lt;/code&amp;gt;&lt;br /&gt;
* To extract individual sub-strings one can group the characters by &amp;lt;code&amp;gt;\(\)&amp;lt;/code&amp;gt;. This group can be accessed in the returned String array of &amp;lt;code&amp;gt;string_matches&amp;lt;/code&amp;gt;. Note that the zeroth entry contains the full string.&lt;br /&gt;
* Important: The regex string has to be followed by an &amp;lt;code&amp;gt;R&amp;lt;/code&amp;gt;, which marks it as a raw string literal in which backslashes do not have to be escaped&lt;br /&gt;
&lt;br /&gt;
Here are some examples:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable teststr = &amp;quot;1  |      0.00|4U 1850-03&amp;quot;;&lt;br /&gt;
variable regex  = &amp;quot;\(\d\)[ ]+|[ ]+\(\d+.\d+\)|\(.+ .+\)&amp;quot;R;&lt;br /&gt;
variable matches = string_matches(teststr, regex);&lt;br /&gt;
variable sourcename = matches[3]; % Note that matches[0] contains teststr&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable matches = string_matches(&amp;quot;The result is: 17.56&amp;quot;, &amp;quot;[A-Za-z0-9_ ]+: \(\d+\.\d+\)&amp;quot;R);&lt;br /&gt;
variable result = matches[1];&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Printing ==&lt;br /&gt;
&lt;br /&gt;
Output is done with the &amp;lt;code&amp;gt;print&amp;lt;/code&amp;gt; and the &amp;lt;code&amp;gt;vmessage&amp;lt;/code&amp;gt; functions. &amp;lt;code&amp;gt;vmessage&amp;lt;/code&amp;gt; uses a format similar to the C &amp;lt;code&amp;gt;printf&amp;lt;/code&amp;gt;-function to format the output. Examples include:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=1.2347;&lt;br /&gt;
vmessage(&amp;quot;%f&amp;quot;,a);   % print with the default %f precision (6 decimal digits)&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
or&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
variable a=25;&lt;br /&gt;
vmessage(&amp;quot;%05d&amp;quot;,a); % print 5 digits, zero padded&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
''Note'': &amp;lt;code&amp;gt;vmessage&amp;lt;/code&amp;gt; works very similarly to the function &amp;lt;code&amp;gt;printf&amp;lt;/code&amp;gt;, which exists in many programming languages (and also in S-Lang).&lt;br /&gt;
&lt;br /&gt;
=== user-defined functions ===&lt;br /&gt;
&lt;br /&gt;
Your own functions can be defined with the syntax&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
define functionname (arguments) {&lt;br /&gt;
   code;&lt;br /&gt;
   :&lt;br /&gt;
   code;&lt;br /&gt;
   return value;&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
where the &amp;lt;code&amp;gt;return&amp;lt;/code&amp;gt; statement is optional.&lt;br /&gt;
&lt;br /&gt;
Functions are very useful to structure your program. Use them liberally! An example would be that for a given data set, you write a function to load the data and do the rebinning. Additionally, giving useful names to your functions improves the readability of your code.&lt;br /&gt;
&lt;br /&gt;
A more silly example is the following, which returns the sum and difference of two numbers:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
define adddiff(a,b) {&lt;br /&gt;
   % return the sum and difference of two numbers.&lt;br /&gt;
   variable sum=a+b;&lt;br /&gt;
   variable diff=a-b;&lt;br /&gt;
   return [sum,diff];&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Note the comments. It is good style to comment your code well in order to allow you and others to understand later what the code is doing. Always comment your code while writing it; do not do so only at the end because somebody told you to. Make writing comments part of your coding practice!&lt;br /&gt;
&lt;br /&gt;
'''Exercise 2'''&lt;br /&gt;
&lt;br /&gt;
Write an S-Lang function 'midnight' which returns the roots of a quadratic equation &amp;lt;math&amp;gt;ax^2+bx+c=0&amp;lt;/math&amp;gt;. The routine should work for all possible values of &amp;lt;code&amp;gt;a&amp;lt;/code&amp;gt;, &amp;lt;code&amp;gt;b&amp;lt;/code&amp;gt;, and &amp;lt;code&amp;gt;c&amp;lt;/code&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
=== libraries ===&lt;br /&gt;
&lt;br /&gt;
Libraries are collections of S-Lang functions. They get loaded with the statement&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
require(&amp;quot;libraryname&amp;quot;);&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
Afterwards, all functions in &amp;quot;libraryname&amp;quot; are available. &lt;br /&gt;
&lt;br /&gt;
==== Isis programs ====&lt;br /&gt;
&lt;br /&gt;
Isis programs consist of a sequence of function declarations and a main program, stored in a file that can be written with any editor of your choice. To execute a program you have several choices. In the following, let's assume the program's filename is &amp;lt;code&amp;gt;test.sl&amp;lt;/code&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
  - you can execute the program from the Linux command line by issuing the command &amp;lt;code&amp;gt;isis test.sl&amp;lt;/code&amp;gt;;&lt;br /&gt;
  - if you want to execute the program from within isis (e.g., because you want to work on its output interactively), use &amp;lt;code&amp;gt;()=evalfile(&amp;quot;test.sl&amp;quot;);&amp;lt;/code&amp;gt;;&lt;br /&gt;
  - to run a program under isis and immediately exit isis, use the &amp;quot;shebang&amp;quot; notation. For example:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
#!/usr/bin/env isis&lt;br /&gt;
&lt;br /&gt;
% stupid counting example&lt;br /&gt;
variable i;&lt;br /&gt;
variable npt=10;&lt;br /&gt;
for (i=0; i&amp;lt;npt; i++) {&lt;br /&gt;
    print (i);&lt;br /&gt;
}&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
then make the code executable under Linux:&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
hde226868:~/&amp;gt; chmod ugo+x ./test.sl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
After this you can execute the code with&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
hde226868:~/&amp;gt; ./test.sl&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The name &amp;quot;shebang&amp;quot; notation comes from the pronunciation of the hash sign '#' as &amp;quot;she&amp;quot; and the exclamation mark as &amp;quot;bang&amp;quot;. Yes, really.&lt;br /&gt;
&lt;br /&gt;
''' Exercise 3'''&lt;br /&gt;
&lt;br /&gt;
Write an S-Lang program that loads the gratings data from Exercise 3 of [[isis:tutorial:fitting1|Advanced Fitting Techniques, 1]]. The program should have functions that&lt;br /&gt;
# load the data, ignore the appropriate energy channels, and rebin them. The function should return the indices of the PCA, HEXTE A, and HEXTE B data.&lt;br /&gt;
# set up the fit function and set the parameters to reasonable starting values&lt;br /&gt;
# call the above functions from a main program and perform the fit&lt;br /&gt;
# call a third function that makes a plot of the best fit with residuals. Use the same color for the HEXTE A and B data points. ''Hint:'' note that the argument of the plot functions is a list. The colors assigned to the data points through the &amp;lt;code&amp;gt;dcol&amp;lt;/code&amp;gt; qualifiers apply to the individual list elements. For example, if &amp;lt;code&amp;gt;dcol=[1,2]&amp;lt;/code&amp;gt;, then the spectra corresponding to the 2nd list element are plotted in color number 2. The list describing the spectra is a ''list'', i.e., it can contain arrays as list elements... In other words: call the plot functions such that all color qualifiers have only two elements. &lt;br /&gt;
# call a fourth function which calculates a 2D-error contour for &amp;lt;math&amp;gt;N_H&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;\Gamma&amp;lt;/math&amp;gt; (NOTE: the relevant information for this last point is not yet there and, because of carpal tunnel syndrome, will only be available on Tuesday).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Category:Isis / Slang]]&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=GRO_J1744-28&amp;diff=1892</id>
		<title>GRO J1744-28</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=GRO_J1744-28&amp;diff=1892"/>
		<updated>2019-09-26T12:54:25Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Accreting X-ray Pulsars]]&lt;br /&gt;
&lt;br /&gt;
'''Other names''' : 2EG J1746-2852 ([http://simbad.u-strasbg.fr/simbad/sim-basic?Ident=GRO+J1744-28&amp;amp;submit=SIMBAD+search])&lt;br /&gt;
&lt;br /&gt;
'''Available data''': NuSTAR ObsID 80202027002&lt;br /&gt;
&lt;br /&gt;
'''Monitoring data''':&lt;br /&gt;
[https://swift.gsfc.nasa.gov/results/transients/weak/GROJ1744-28/ Swift/BAT]&lt;br /&gt;
&lt;br /&gt;
= Type = &lt;br /&gt;
&lt;br /&gt;
* Transient Low-mass X-ray Binary (Neutron Star)&lt;br /&gt;
* Type I and II X-ray bursts and pulsations (next to the Rapid Burster, one of the few sources where Type II bursts are observed)&lt;br /&gt;
* Discovered on 1995 December 2 with the Burst And Transient Source Experiment (BATSE) on-board the Compton Gamma Ray Observatory (&amp;lt;ref name=&amp;quot;Kouveliotou96A&amp;quot;/&amp;gt;) &lt;br /&gt;
&lt;br /&gt;
= Coordinates = &lt;br /&gt;
&lt;br /&gt;
RA 17h 44m 33.09s, Dec −28° 44′ 27.0″&lt;br /&gt;
&lt;br /&gt;
= Binary system =&lt;br /&gt;
&lt;br /&gt;
* Distance: 7.5-8.5 kpc (&amp;lt;ref name=&amp;quot;Augusteijn97&amp;quot;/&amp;gt;, &amp;lt;ref name=&amp;quot;Nishiuchi99&amp;quot;/&amp;gt;)&lt;br /&gt;
* Optical companion: G4 III star (&amp;lt;ref name=&amp;quot;Gosling07A&amp;quot;/&amp;gt;, &amp;lt;ref name=&amp;quot;Masetti14Atel&amp;quot;/&amp;gt;) with M&amp;lt;0.3M&amp;lt;sub&amp;gt;sun&amp;lt;/sub&amp;gt; and inclination i&amp;gt;15° (&amp;lt;ref name=&amp;quot;Gosling07A&amp;quot;/&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
== Orbit ==&lt;br /&gt;
&lt;br /&gt;
Parameters inferred from 2014 outburst (see &amp;lt;ref name=&amp;quot;Pintore14Atel&amp;quot;/&amp;gt;, [https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/groj1744.html NSSTC Gamma Ray Astrophysics: GRO J1744])&lt;br /&gt;
&lt;br /&gt;
* P&amp;lt;sub&amp;gt;orb&amp;lt;/sub&amp;gt; = 11.836 days&lt;br /&gt;
* T&amp;lt;sub&amp;gt;π/2&amp;lt;/sub&amp;gt; = 2456696.19880 (JED)&lt;br /&gt;
* a&amp;lt;sub&amp;gt;x&amp;lt;/sub&amp;gt; sin(i) = 2.637 light-sec&lt;br /&gt;
* no constraints on the longitude of periastron or eccentricity&lt;br /&gt;
&lt;br /&gt;
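For illustration (not part of the cited references): the parameters above directly imply the projected radial-velocity semi-amplitude of the neutron star, K&amp;lt;sub&amp;gt;x&amp;lt;/sub&amp;gt; = 2π a&amp;lt;sub&amp;gt;x&amp;lt;/sub&amp;gt; sin(i) / P&amp;lt;sub&amp;gt;orb&amp;lt;/sub&amp;gt;. A minimal sketch, assuming a circular orbit (the eccentricity is unconstrained):&lt;br /&gt;

```python
# Illustrative sketch: radial-velocity semi-amplitude implied by the
# orbital parameters quoted above (circular orbit assumed).
import math

C_CM_S = 2.99792458e10  # speed of light in cm/s


def velocity_semi_amplitude(asini_lt_s, p_orb_days):
    """Return K_x = 2*pi*a_x*sin(i)/P_orb in km/s.

    asini_lt_s .. projected semi-major axis in light-seconds
    p_orb_days .. orbital period in days
    """
    a_cm = asini_lt_s * C_CM_S       # light-seconds to cm
    p_s = p_orb_days * 86400.0       # days to seconds
    return 2.0 * math.pi * a_cm / p_s / 1.0e5  # cm/s to km/s


k = velocity_semi_amplitude(2.637, 11.836)
print(f"K_x = {k:.2f} km/s")  # about 4.9 km/s
```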
&lt;br /&gt;
&lt;br /&gt;
== Pulsations &amp;amp; Magnetic field ==&lt;br /&gt;
&lt;br /&gt;
GRO J1744-28 is special because it exhibits X-ray bursts and pulsations at the same time. Sources which show X-ray bursts are generally believed to have surface conditions (low B-fields) which do not allow pulsations.&lt;br /&gt;
&lt;br /&gt;
* Pulse period: 0.467 s (2.14 Hz) &amp;lt;ref name=&amp;quot;Finger96A&amp;quot;/&amp;gt;&lt;br /&gt;
* B = 2–6 × 10&amp;lt;sup&amp;gt;10&amp;lt;/sup&amp;gt; G from disk reflection models (&amp;lt;ref name=&amp;quot;Degenaar14A&amp;quot;/&amp;gt;)&lt;br /&gt;
* B = 5.27&amp;amp;#177;0.06 × 10&amp;lt;sup&amp;gt;11&amp;lt;/sup&amp;gt; G from CRSF measurements (&amp;lt;ref name=&amp;quot;DAi15A&amp;quot;/&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
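As an illustrative cross-check (not from the cited papers): the CRSF-based field strength follows from the standard "12-B-12" relation, E&amp;lt;sub&amp;gt;cyc&amp;lt;/sub&amp;gt; ≈ 11.6 keV × B / (10&amp;lt;sup&amp;gt;12&amp;lt;/sup&amp;gt; G) / (1+z), here assuming a gravitational redshift z ≈ 0.3 at the neutron star surface:&lt;br /&gt;

```python
# Illustrative sketch: convert a fundamental cyclotron line energy to a
# surface magnetic field via the 12-B-12 rule,
#   E_cyc = 11.6 keV * B_12 / (1 + z),
# assuming a surface gravitational redshift z = 0.3 (an assumption, not
# a value taken from the article).


def b_field_from_crsf(e_cyc_kev, z=0.3):
    """Return B in Gauss for a fundamental CRSF at e_cyc_kev (keV)."""
    b12 = e_cyc_kev * (1.0 + z) / 11.6  # B in units of 1e12 G
    return b12 * 1.0e12


# With the 4.68 keV fundamental reported for the 2014 outburst:
b = b_field_from_crsf(4.68)
print(f"B = {b:.2e} G")  # about 5.2e11 G, consistent with the CRSF value above
```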
&lt;br /&gt;
== Outbursts ==&lt;br /&gt;
&lt;br /&gt;
* 1995 December: Discovery and first report of Type II X-ray bursts (&amp;lt;ref name=&amp;quot;Finger96A&amp;quot;/&amp;gt;)&lt;br /&gt;
* 1996 December: Similar burst characteristics (&amp;lt;ref name=&amp;quot;Woods99&amp;quot;/&amp;gt;); CRSF reported at 5 keV in BeppoSAX data (unconfirmed) (&amp;lt;ref name=&amp;quot;Doroshenko15A&amp;quot;/&amp;gt;)&lt;br /&gt;
* 2014 February: Outburst after 18 years of quiescence (&amp;lt;ref name=&amp;quot;Younes15&amp;quot;/&amp;gt;, no CRSF); CRSFs reported at 5, 10, and 15 keV in XMM-Newton/INTEGRAL data, still under debate (&amp;lt;ref name=&amp;quot;DAi15A&amp;quot;/&amp;gt;)&lt;br /&gt;
* 2017 February: Fourth outburst with ~two orders of magnitude lower luminosity (Koenig et al. in prep.)&lt;br /&gt;
&lt;br /&gt;
== X-ray Spectrum ==&lt;br /&gt;
&lt;br /&gt;
* Spectrum shows a typical cutoff power law, as expected for an accreting X-ray binary&lt;br /&gt;
* Broad iron line at 6-7 keV (disk reflection or fast disk wind &amp;lt;ref name=&amp;quot;Degenaar14A&amp;quot;/&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
=== Cyclotron Features ===&lt;br /&gt;
&lt;br /&gt;
The cyclotron line in this source is under debate. GRO J1744−28 is one of the few LMXBs where a CRSF has been reported below 10 keV (other candidates are X1822−371, E&amp;lt;sub&amp;gt;CRSF&amp;lt;/sub&amp;gt;=0.7 keV &amp;lt;ref name=&amp;quot;Iaria15A&amp;quot;/&amp;gt;, and SWIFT J0051.8−7320, E&amp;lt;sub&amp;gt;CRSF&amp;lt;/sub&amp;gt;=5 keV &amp;lt;ref name=&amp;quot;Maitra18A&amp;quot;/&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
* Fundamental CRSF at 4.68&amp;amp;#177;0.05 keV (gabs, XMM-Newton/INTEGRAL, 2014 outburst &amp;lt;ref name=&amp;quot;DAi15A&amp;quot;/&amp;gt;) / ∼4.5 keV (gabs, BeppoSAX, 1997 outburst &amp;lt;ref name=&amp;quot;Doroshenko15A&amp;quot;/&amp;gt;)&lt;br /&gt;
* Indication of second and third harmonic at 10.4&amp;amp;#177;0.1 keV and 15.8&amp;lt;sup&amp;gt;+1.3&amp;lt;/sup&amp;gt;&amp;lt;sub&amp;gt;-0.7&amp;lt;/sub&amp;gt; keV in XMM-Newton/INTEGRAL data (using gabs) (&amp;lt;ref name=&amp;quot;DAi15A&amp;quot;/&amp;gt;)&lt;br /&gt;
* No cyclotron line in low-flux 2017 February outburst (gabs strength upper limit of 0.07 keV, 90% CL) (König et al. in prep.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''References'''&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
	<entry>
		<id>https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=GRO_J1744-28&amp;diff=1891</id>
		<title>GRO J1744-28</title>
		<link rel="alternate" type="text/html" href="https://www.sternwarte.uni-erlangen.de/wiki/index.php?title=GRO_J1744-28&amp;diff=1891"/>
		<updated>2019-09-26T12:33:39Z</updated>

		<summary type="html">&lt;p&gt;Koenig: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:Accreting X-ray Pulsars]]&lt;br /&gt;
&lt;br /&gt;
'''Other names''' : 2EG J1746-2852 ([http://simbad.u-strasbg.fr/simbad/sim-basic?Ident=GRO+J1744-28&amp;amp;submit=SIMBAD+search])&lt;br /&gt;
&lt;br /&gt;
'''Available data''': NuSTAR ObsID 80202027002&lt;br /&gt;
&lt;br /&gt;
'''Monitoring data''':&lt;br /&gt;
[https://swift.gsfc.nasa.gov/results/transients/weak/GROJ1744-28/ Swift/BAT]&lt;br /&gt;
&lt;br /&gt;
= Type = &lt;br /&gt;
Transient Low-mass X-ray Binary exhibiting Type I and II X-ray bursts and pulsations. Next to the Rapid Burster, this is one of the few sources where Type II bursts are observed.&lt;br /&gt;
Discovered on 1995 December 2 with the Burst And Transient Source Experiment (BATSE) on-board the Compton Gamma Ray Observatory (&amp;lt;ref name=&amp;quot;Kouveliotou96A&amp;quot;/&amp;gt;) &lt;br /&gt;
&lt;br /&gt;
= Coordinates = &lt;br /&gt;
&lt;br /&gt;
RA 17h 44m 33.09s, Dec −28° 44′ 27.0″&lt;br /&gt;
&lt;br /&gt;
= Binary system =&lt;br /&gt;
&lt;br /&gt;
* Distance: 7.5-8.5 kpc (&amp;lt;ref name=&amp;quot;Augusteijn97&amp;quot;/&amp;gt;, &amp;lt;ref name=&amp;quot;Nishiuchi99&amp;quot;/&amp;gt;)&lt;br /&gt;
* Optical companion: G4 III star (&amp;lt;ref name=&amp;quot;Gosling07A&amp;quot;/&amp;gt;, &amp;lt;ref name=&amp;quot;Masetti14Atel&amp;quot;/&amp;gt;) with M&amp;lt;0.4M_sun and inclination i&amp;gt;15° (&amp;lt;ref name=&amp;quot;Gosling07A&amp;quot;/&amp;gt;)&lt;br /&gt;
&lt;br /&gt;
== Orbit ==&lt;br /&gt;
&lt;br /&gt;
The orbital parameters were determined from the 2014 outburst: P_orb = 11.836 days, T_π/2 = 2456696.19880 (JED), and a_x sin(i) = 2.637 light-sec, with no constraints on the longitude of periastron or eccentricity (&amp;lt;ref name=&amp;quot;Pintore14Atel&amp;quot;/&amp;gt;).&lt;br /&gt;
See [https://gammaray.msfc.nasa.gov/gbm/science/pulsars/lightcurves/groj1744.html NSSTC Gamma Ray Astrophysics].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Pulsations &amp;amp; Magnetic field ==&lt;br /&gt;
&lt;br /&gt;
GRO J1744-28 is special because it exhibits X-ray bursts and pulsations at the same time. Sources which show X-ray bursts are generally believed to have surface conditions (low B-fields) which do not allow pulsations.&lt;br /&gt;
Pulse period: 0.467 s (2.14 Hz) &amp;lt;ref name=&amp;quot;Finger96A&amp;quot;/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The magnetic field strength deduced from disk reflection models lies in the 2–6×10^10 G range (Degenaar et al. 2014), which differs by about an order of magnitude from the value deduced from CRSF measurements (5.27±0.06 × 10^11 G &amp;lt;ref name=&amp;quot;DAi15A&amp;quot;/&amp;gt;).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Outbursts ==&lt;br /&gt;
&lt;br /&gt;
* 1995 December: Discovery and first report of Type II X-ray bursts (&amp;lt;ref name=&amp;quot;Finger96A&amp;quot;/&amp;gt;)&lt;br /&gt;
* 1996 December: Similar burst characteristics (&amp;lt;ref name=&amp;quot;Woods99&amp;quot;/&amp;gt;); CRSF reported at 5 keV in BeppoSAX data (unconfirmed) (&amp;lt;ref name=&amp;quot;Doroshenko15A&amp;quot;/&amp;gt;)&lt;br /&gt;
* 2014 February: Outburst after 18 years of quiescence (&amp;lt;ref name=&amp;quot;Younes15&amp;quot;/&amp;gt;, no CRSF); CRSFs reported at 5, 10, and 15 keV in XMM-Newton/INTEGRAL data, still under debate (&amp;lt;ref name=&amp;quot;DAi15A&amp;quot;/&amp;gt;)&lt;br /&gt;
* 2017 February: Fourth outburst with ~two orders of magnitude lower luminosity (Koenig et al. in prep.)&lt;br /&gt;
&lt;br /&gt;
== X-ray Spectrum ==&lt;br /&gt;
&lt;br /&gt;
* Spectrum shows a typical cutoff power law, as expected for an accreting X-ray binary&lt;br /&gt;
* Broad iron line at 6-7 keV&lt;br /&gt;
&lt;br /&gt;
=== Cyclotron Features ===&lt;br /&gt;
&lt;br /&gt;
The cyclotron line in this source is under debate.&lt;br /&gt;
&lt;br /&gt;
* Fundamental CRSF at 4.68±0.05 keV (gabs, XMM-Newton/INTEGRAL, 2014 outburst &amp;lt;ref name=&amp;quot;DAi15A&amp;quot;/&amp;gt;) / ∼4.5 keV (gabs, BeppoSAX, 1997 outburst &amp;lt;ref name=&amp;quot;Doroshenko15A&amp;quot;/&amp;gt;)&lt;br /&gt;
* Indication of second and third harmonic at 10.4±0.1 keV and 15.8+1.3−0.7 keV in XMM-Newton/INTEGRAL data (using gabs) (&amp;lt;ref name=&amp;quot;DAi15A&amp;quot;/&amp;gt;)&lt;br /&gt;
* GRO J1744−28 is one of the few LMXBs where a CRSF has been reported below 10 keV (other candidates are X1822−371 with a claimed cyclotron line energy of 0.7 keV (&amp;lt;ref name=&amp;quot;Iaria15A&amp;quot;/&amp;gt;) and SWIFT J0051.8−7320 at 5 keV (&amp;lt;ref name=&amp;quot;Maitra18A&amp;quot;/&amp;gt;))&lt;br /&gt;
* No cyclotron line in low-flux 2017 February outburst (König et al. in prep.)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''References'''&lt;/div&gt;</summary>
		<author><name>Koenig</name></author>
	</entry>
</feed>