This is a page for various notes and quick tests …
Many times I have had someone on the other end of the line and wanted to talk about mathematics, so you need some space where you can quickly type some ideas in LaTeX and the other person, with an internet connection, can read them.
$$ \frac{dx'}{dt}(\rho,t)\Big|_{t=0} = 0 \hspace{3cm} (3.4) $$
A general introductory talk on data assimilation for numerical weather prediction: http://scienceatlas.com/potthast/_media/talk_potthast_reading_2014_11.pdf and http://scienceatlas.com/potthast/_media/potthast_icon_bacy_tour.pdf
Aamir August 30. We try to approximate given function values $$ f(x_1), f(x_2), f(x_3), \ldots, f(x_m) $$ by a function of the form \begin{equation} g(x) = \sum_{\ell=-L}^{L} g_{\ell} e^{i \ell x}, \;\; x \in [0, 2\pi]. \end{equation} We do it
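A minimal sketch of one way to fit the coefficients $g_{\ell}$ by linear least squares (my own illustration, not necessarily the approach intended here; the sample function and the choice $L=5$ are placeholders):

```python
import numpy as np

def fit_trig_poly(x, f, L):
    """Least-squares fit of g(x) = sum_{l=-L}^{L} g_l exp(i l x) to samples f(x_k)."""
    ells = np.arange(-L, L + 1)
    # Design matrix: E[k, j] = exp(i * ell_j * x_k)
    E = np.exp(1j * np.outer(x, ells))
    coeffs, *_ = np.linalg.lstsq(E, f.astype(complex), rcond=None)
    return ells, coeffs

def eval_trig_poly(ells, coeffs, x):
    return np.exp(1j * np.outer(x, ells)) @ coeffs

# Example: approximate f(x) = |sin(x)| from m = 50 samples with L = 5
x = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
f = np.abs(np.sin(x))
ells, g = fit_trig_poly(x, f, L=5)
print(np.max(np.abs(eval_trig_poly(ells, g, x).real - f)))  # maximum fit error
```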
The following graphic shows some testing with RTTOV 10 and SEVIRI radiances. The blue profile is the demo temperature profile provided by RTTOV. The other lines show the brightness temperature (BT) of the SEVIRI channels (in degrees Celsius) when an opaque cloud is placed at the different heights (in hPa) shown on the left.
Embedding PHP works, writing “test” via php: test
I) Theory for the Stability of Classification in General
… we investigated the stability of image classes under the influence of the above compact operator $A$. Our main results show that
We note that the second point has consequences for non-linear classes as well, since in the smooth case they are made up of sequences of linear classes. If the nonlinear class has a sequence of normal vectors which span an infinite-dimensional subspace, then the nonlinear image classes cannot be stably separated.
II) Investigate the Stability of Key Algorithms
We have investigated two well-known algorithms for classification: the FLD (Fisher linear discriminant) as a key method for supervised classification, and the K-Means algorithm combined with a PCA as a well-established approach to unsupervised classification.
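As a hedged sketch of the unsupervised pipeline mentioned above (PCA followed by K-Means), here is a small scikit-learn example on synthetic placeholder data; the dimensions, class means, and number of components are my own choices for illustration only:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic "images": two classes of 100-dimensional vectors (placeholder data)
class_a = rng.normal(loc=0.0, scale=1.0, size=(50, 100))
class_b = rng.normal(loc=3.0, scale=1.0, size=(50, 100))
X = np.vstack([class_a, class_b])

# Dimension reduction with PCA, then unsupervised classification with K-Means
Z = PCA(n_components=5).fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)
print(labels)
```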
III) Investigate the Stability of Classifications in Magnetic Tomography (in particular for Fuel Cells)
In Chapter 3, we presented a concise investigation of the ill-posedness of classifications for magnetic tomography, with a particular emphasis on classifications for fuel cells as an important application area.
Hello
Statement: Let $A: X \rightarrow Y$ be a compact operator and $X$ be infinite dimensional. Then there is a sequence $\varphi_n \in X$ with $||\varphi_n||=1$ such that \begin{equation} A\varphi_n \rightarrow 0, \;\; n\rightarrow \infty. \end{equation}
$\newcommand{\tvarphi}{\tilde{\varphi}} \newcommand{\tpsi}{\tilde{\psi}}$ Proof. If $A$ is not injective, we may take $\varphi_n$ to be a fixed normalized element of the null space of $A$. If $A$ is injective, then $A^{-1}: A(X) \rightarrow X$ is unbounded, since otherwise $I = A^{-1}A$ would be compact on the infinite dimensional space $X$. Thus there exists a sequence $\tpsi_n \in A(X)$ with norm $||\tpsi_n || = 1$ and \begin{equation} || A^{-1}\tpsi_n || \rightarrow \infty, \;\; n \rightarrow \infty. \end{equation} We define \begin{equation} \varphi_n := \frac{A^{-1} \tpsi_n}{||A^{-1}\tpsi_n||}, \;\; n \in \mathbb{N}. \end{equation} Then we have $$ || A \varphi_n || = \frac{|| \tpsi_n ||}{||A^{-1}\tpsi_n||} \rightarrow 0, \;\; n \rightarrow \infty, $$ and the proof is complete. $\Box$
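A small numerical illustration of the statement (my own example, using a discretized integral operator with a smooth kernel as a stand-in for a compact $A$): the right singular vectors have unit norm, while the norms of their images decay towards zero.

```python
import numpy as np

# Discretize the integral operator (A phi)(x) = int_0^1 exp(-(x - y)^2) phi(y) dy
n = 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
A = np.exp(-(x[:, None] - x[None, :]) ** 2) * h

# SVD: the rows of Vt are unit vectors, their images under A have norm sigma_k
U, s, Vt = np.linalg.svd(A)
for k in (0, 5, 10, 20):
    v = Vt[k]                        # ||v|| = 1
    print(k, np.linalg.norm(A @ v))  # equals s[k], decaying rapidly towards 0
```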
Use $x$ for points in $\mathbb{R}^3$, then \begin{equation} \Omega = \Big\{ x \in \bigcup_{i=1}^{K} C_i \Big\} \end{equation}
probably better to write \begin{equation} \Omega = \bigcup_{C \in \mathcal{C}} C \end{equation} where \begin{equation} \mathcal{C} = \Big\{ L_{j} \ldots\Big\} \end{equation}
Consider a simple situation like
| $x_1$ | $x_2$ |
| $x_3$ | $x_4$ |
and now two slants, given by \begin{eqnarray} s_1 & = & x_1 + x_4 \\ s_2 & = & x_2 + x_4 \end{eqnarray} leading to a matrix equation \begin{equation} \left( \begin{array}{cccc} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & 1 \end{array} \right) \left( \begin{array}{c} x_1 \\ x_2 \\ x_3 \\ x_4 \end{array} \right) = \left( \begin{array}{c} s_1 \\ s_2 \end{array} \right) \end{equation} We might include some weights to take care of the length of the path of the ray through the area denoted by $x_j$, $j=1,\ldots,4$; then we obtain a weighted system of the form \begin{eqnarray} s_1 & = & w_{1,1} x_1 + w_{1,4} x_4 \\ s_2 & = & w_{2,2} x_2 + w_{2,4} x_4 \end{eqnarray}
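A hedged numerical version of this toy setup (the weights and slant values below are made-up placeholders), solving the underdetermined system in the minimum-norm least-squares sense:

```python
import numpy as np

# Unweighted observation operator for the two slants s1 = x1 + x4, s2 = x2 + x4
H = np.array([[1.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0, 1.0]])

# Placeholder path-length weights w_{1,1}, w_{1,4}, w_{2,2}, w_{2,4}
W = np.array([[1.4, 0.0, 0.0, 1.4],
              [0.0, 1.4, 0.0, 1.4]])

s = np.array([2.0, 3.0])                       # measured slants (example values)
x_ls, *_ = np.linalg.lstsq(W, s, rcond=None)   # minimum-norm least-squares solution
print(x_ls)
```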
A ray starting at $b \in \mathbb{R}^3$ with direction $\nu \in \mathbb{R}^3$ is given by \begin{equation} L = \{ x = b + \eta \cdot \nu \;|\; \eta > 0 \}. \end{equation} An integral along $L$ is written as \begin{equation} f(b,\nu) := \int_{L} \varphi(y) \, ds(y), \end{equation} where $ds(y)$ denotes the standard Euclidean measure along $L$. Alternatively, when $||\nu|| = 1$, you might write \begin{equation} f(b,\nu) = \int_{0}^{\infty} \varphi(b + \eta \nu) \, d\eta. \end{equation}

We use the notation \begin{equation} (R\varphi)(b,\nu) := f(b,\nu), \;\; b \in \mathcal{B}, \; \nu \in \mathcal{N}, \end{equation} where $\mathcal{B}$ and $\mathcal{N}$ denote the set of our base stations and the set of slant directions measured by the stations over some time interval.

In discretized form, the operator ${\bf H}$ maps $\vec{\varphi} \in \mathbb{R}^n$ into $\vec{f} \in \mathbb{R}^m$. It is a linear operator and, thus, we obtain a matrix operator from $\mathbb{R}^n$ into $\mathbb{R}^m$. Thus, GPS tomography boils down to solving a linear finite dimensional system \begin{equation} {\bf H} \vec{\varphi} = \vec{f}. \end{equation}

We solve it by 3dVar or Tikhonov regularization, respectively, which is \begin{equation} \varphi^{(a)} = \varphi^{(b)} + B H^{T}( R + H B H^{T})^{-1}( f - H \varphi^{(b)} ), \end{equation} minimizing \begin{equation} J(\varphi) = ||\varphi - \varphi^{(b)}||^2_{B^{-1}} + ||f - H \varphi||^{2}_{R^{-1}}. \end{equation} Adaptive regularization would change the regularization term \begin{equation} ||\varphi - \varphi^{(b)}||^2_{B^{-1}}. \end{equation}
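A minimal NumPy sketch of the analysis formula above; the covariances $B$ and $R$, the background, and the toy $H$ (reusing the 2x4 slant operator) are placeholders chosen only for illustration:

```python
import numpy as np

def tikhonov_3dvar(phi_b, f, H, B, R):
    """phi_a = phi_b + B H^T (R + H B H^T)^{-1} (f - H phi_b)."""
    innovation = f - H @ phi_b
    K = B @ H.T @ np.linalg.inv(R + H @ B @ H.T)   # gain matrix
    return phi_b + K @ innovation

# Toy example with the 2x4 slant operator from above
H = np.array([[1.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0, 1.0]])
B = np.eye(4)          # background error covariance (placeholder)
R = 0.1 * np.eye(2)    # observation error covariance (placeholder)
phi_b = np.zeros(4)    # background state
f = np.array([2.0, 3.0])
print(tikhonov_3dvar(phi_b, f, H, B, R))
```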
We first select a subsequence of $\mathbb{N}$ such that the points $\chi_i$ in the support of $v_{k_{i}}$ have a limit point $x_{\ast} \in \Omega$. This is possible by the Bolzano-Weierstrass theorem, since any bounded sequence in Euclidean space has a convergent subsequence. We relabel the subsequence $k_{i}$ again.
Given $\epsilon>0$ there is $i>0$ such that $$ || v_{k_{i}} - v_{\ast} || < \epsilon. $$ So we know that on the exterior of the support $\Omega_{k_{i}}$ of $v_{k_{i}}$ we have $$ || v_{\ast} ||_{L^2(\Omega \setminus \Omega_{k_{i}})} < \epsilon. $$ For a function $v_{\ast} \in L^2(\Omega)$ we have that \begin{eqnarray} \Big|\int_{\Omega_{k_{i}}} |v_{\ast}(y)|^2 \, dy \Big| & \rightarrow & 0 \;\; \mbox{for} \;\; i \rightarrow \infty. \end{eqnarray} Now, we obtain \begin{eqnarray} || v_{\ast} ||_{L^2(\Omega)}^{2} & = & ||v_{\ast}||_{L^2(\Omega \setminus \Omega_{k_{i}})}^2 + ||v_{\ast}||_{L^2(\Omega_{k_{i}})}^2 \nonumber \\ & \rightarrow & 0, \;\; i \rightarrow \infty. \end{eqnarray}
Simple question: is $\frac{1}{x-y}$ analytic in $x$ for fixed $y$?
Answer 1) Yes, since sums and products of analytic functions are analytic, and the quotient of analytic functions is analytic wherever the denominator is non-zero.
Answer 2) We use the series expansion $$ \frac{1}{x-y} = (-1) \sum_{n=0}^{\infty} \frac{x^n}{y^{n+1}} $$ for $|x|<|y|$, which I took from Wolfram. For $|x|<\rho \cdot |y|$ with $\rho<1$ we have $$ \Big|\frac{x^n}{y^{n+1}}\Big| < \frac{1}{|y|} \rho^n, $$ such that $$ \Big|\sum_{n=0}^{\infty} \frac{x^n}{y^{n+1}}\Big| < \frac{1}{|y|} \sum_{n=0}^{\infty} \rho^n $$ is absolutely convergent by the geometric series. Thus, $\frac{1}{x-y}$ is analytic on $|x|<|y|$. For $|x|>|y|$ we use $$ \frac{1}{x-y} = \sum_{n=0}^{\infty} \frac{y^n}{x^{n+1}} $$ analogously.
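A quick numerical sanity check of the expansion (my own illustration; the values of $x$ and $y$ are arbitrary with $|x|<|y|$):

```python
import numpy as np

x, y = 0.3, 1.0                # |x| < |y|
exact = 1.0 / (x - y)

n = np.arange(0, 50)
partial = -np.cumsum(x ** n / y ** (n + 1))          # partial sums of -sum x^n / y^{n+1}
print(exact, partial[-1], abs(partial[-1] - exact))  # error decays like rho^N
```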
Let $g$ be a function in $L^2(\Omega)$ and let $\Omega_i$ be subsets of $\Omega$ with $|\Omega_i| \rightarrow 0$ for $i \rightarrow \infty$. Then we have \begin{equation} \label{eq1} \int_{\Omega_i} |g|^2 dy \rightarrow 0, \;\; i \rightarrow \infty. \end{equation}
Proof. Given $\epsilon$ there is a bounded function $f \in C^{\infty}(\Omega)$ such that \begin{equation} \label{eq2} || g -f ||_{L^2(\Omega)}^2 \leq \frac{\epsilon}{4}. \end{equation} Further, for a function $f \in C^{\infty}(\Omega)$ given $\epsilon$ we find $i_0>0$ such that \begin{equation} \label{eq3} \int_{\Omega_{i}} |f(y)|^2 dy \leq \frac{\epsilon}{4} \end{equation} for all $i\geq i_0$. Now, given $\epsilon$ we first choose $f \in C^{\infty}$ such that (\ref{eq2}) is satisfied. Then, we choose $i_0$ such that (\ref{eq3}) is true. This yields \begin{eqnarray} \int_{\Omega_i} |g|^2 dy & = & \int_{\Omega_i} |g-f + f|^2 dy \\ & \leq & 2 (\int_{\Omega_i} |g-f|^2 dy + \int_{\Omega_i} |f|^2 dy )\\ & \leq & 2(\frac{\epsilon}{4} + \frac{\epsilon}{4}) = \epsilon \end{eqnarray} for $i \geq i_0$. $\Box$
$A: X \rightarrow Y$ compact, $\varphi_n$ a bounded sequence in $X$; then $\psi_n := A\varphi_n$ has a convergent subsequence, which we relabel to use the index $n$ again.
$\psi_n \rightarrow \psi_{\ast} \in Y$. If $\psi_{\ast} \in R(A)$, then there exists $\varphi_{\ast} \in X$ such that $A \varphi_{\ast} = \psi_{\ast}$. Defining $\tilde{\varphi}_{n}:= \varphi_n - \varphi_{\ast}$ we obtain a sequence such that $$ A \tilde{\varphi}_{n} \rightarrow 0, \;\; n \rightarrow \infty. $$
\begin{eqnarray} \tilde{Y}^{T} \tilde{R}^{-1} \tilde{Y} & = & Y^{T} A^{T} (A R A^{T})^{-1} A Y \nonumber \\ & = & Y^{T} A^{T} (A^{T})^{-1} R^{-1} A^{-1} A Y \nonumber \\ & = & Y^{T} R^{-1} Y \end{eqnarray}
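A quick numerical check of this identity for a random invertible $A$, assuming (as the first line of the derivation suggests) $\tilde{Y} = A Y$ and $\tilde{R} = A R A^{T}$; all matrices are placeholder test data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 4, 3
A = rng.normal(size=(n, n))          # assumed invertible
Y = rng.normal(size=(n, k))
M = rng.normal(size=(n, n))
R = M @ M.T + n * np.eye(n)          # symmetric positive definite

Y_t = A @ Y                          # Y-tilde
R_t = A @ R @ A.T                    # R-tilde

lhs = Y_t.T @ np.linalg.inv(R_t) @ Y_t
rhs = Y.T @ np.linalg.inv(R) @ Y
print(np.allclose(lhs, rhs))         # True (up to rounding)
```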
Which metric to use?
$$ D{y} := \frac{y_{2}-y_{1}}{h} $$
in other words
$$ {\bf y} := \left( \begin{array}{c} y_{1} \\ D{y} \end{array} \right) $$ and $$ \| {\bf y} - {\bf y^{(o)}} \|^2 := \| y_{1} - y_{1}^{(o)} \|^2 + \| D{y} - D{y}^{(o)} \|^2 $$ or the original one \begin{eqnarray} \| {\bf y} - {\bf y^{(o)}} \|^2 & := & \| {\bf y} - {\bf y^{(o)}} \|_{\tilde{R}}^2 \nonumber \\ & = & ({\bf y} - {\bf y^{(o)}})^{T} (A^{T})^{-1} R^{-1} A^{-1} ({\bf y} - {\bf y^{(o)}}) \end{eqnarray}
$$ A = \left( \begin{array}{cc} 1 & 0 \\ -\frac{1}{h} & \frac{1}{h} \end{array}\right) $$
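A short numerical check (my own sketch; $h$, $R$, and the observation values are placeholders) that with this $A$ the transformed vector $(y_1, Dy)^T$ is exactly $A {\bf y}$, and that the $\tilde{R}$-metric above reproduces the original $R^{-1}$-weighted norm:

```python
import numpy as np

h = 0.5
A = np.array([[1.0, 0.0],
              [-1.0 / h, 1.0 / h]])

y = np.array([1.0, 2.0])              # original values (y1, y2)
y_o = np.array([0.8, 1.7])            # observed values
R = np.diag([0.2, 0.3])               # placeholder observation error covariance

Y, Y_o = A @ y, A @ y_o               # transformed vectors (y1, Dy)
print(Y)                              # [y1, (y2 - y1) / h]

A_inv = np.linalg.inv(A)
Rt_inv = A_inv.T @ np.linalg.inv(R) @ A_inv   # R-tilde^{-1} = (A^T)^{-1} R^{-1} A^{-1}

d, D = y - y_o, Y - Y_o
print(np.isclose(D @ Rt_inv @ D, d @ np.linalg.inv(R) @ d))   # True
```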