\chapter{Introduction}\label{ch: introduction}
(P) is the linear least squares problem
\[
\min_{w}\ \left\lVert \hat{X}w - \hat{y} \right\rVert
\]
where
\[
\hat{X} =
\begin{bmatrix}
X^T \\
\lambda I_m
\end{bmatrix},
\qquad
\hat{y} =
\begin{bmatrix}
y \\
0
\end{bmatrix},
\]
with $X$ the (tall, thin) matrix from the ML-cup dataset by prof.\ Micheli, $\lambda > 0$ a regularization parameter, and $y$ a random vector.
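A quick numerical sanity check may help fix ideas: since $\lVert \hat{X}w - \hat{y} \rVert^2 = \lVert X^T w - y \rVert^2 + \lambda^2 \lVert w \rVert^2$, problem (P) is the Tikhonov-regularized problem in disguise, so a generic least-squares routine applied to $(\hat{X}, \hat{y})$ must agree with the regularized normal equations. A minimal sketch (illustrative shapes, not the actual ML-cup data; NumPy's `lstsq` is used here only as a reference oracle, since the project itself forbids off-the-shelf solvers):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 8                       # illustrative sizes, not the real dataset
X = rng.standard_normal((m, n))    # X tall and thin, as in the assignment
y = rng.standard_normal(n)
lam = 0.1

# Augmented problem (P): min_w || X_hat w - y_hat ||
X_hat = np.vstack([X.T, lam * np.eye(m)])      # (n + m) x m
y_hat = np.concatenate([y, np.zeros(m)])
w_aug = np.linalg.lstsq(X_hat, y_hat, rcond=None)[0]

# Equivalent regularized normal equations: (X X^T + lam^2 I) w = X y
w_reg = np.linalg.solve(X @ X.T + lam**2 * np.eye(m), X @ y)

assert np.allclose(w_aug, w_reg)
```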
\begin{itemize}
\item[--] (A1) is an algorithm of the class of limited-memory quasi-Newton methods.
\item[--] (A2) is a thin QR factorization with Householder reflectors, in the variant where one does not form the matrix $Q$, but stores the Householder vectors $u_k$ and uses them to perform (implicitly) products with $Q$ and $Q^T$.
\end{itemize}
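For (A1), the core of a limited-memory quasi-Newton method such as L-BFGS is the two-loop recursion, which multiplies the gradient by an implicit inverse-Hessian approximation built from the last few curvature pairs. A minimal sketch, assuming the standard recursion (the function name and the memory handling around it are ours, not the project's final implementation):

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Two-loop recursion: return the search direction -H @ grad,
    where H is built from the stored pairs (s_k, y_k) with
    s_k = x_{k+1} - x_k and y_k = g_{k+1} - g_k (limited memory)."""
    q = grad.copy()
    alphas = []
    # First loop: newest pair to oldest
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    # Initial Hessian scaling gamma = s^T y / y^T y (identity if no pairs)
    if s_list:
        gamma = (s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    # Second loop: oldest pair to newest
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / (y @ s)
        b = rho * (y @ r)
        r += (a - b) * s
    return -r
```

On a strictly convex quadratic such as (P), $y_k^T s_k > 0$ holds automatically, so the recursion is well defined without a Wolfe line search; the general case needs one.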
No off-the-shelf solvers are allowed. In particular, the thin QR factorization must be implemented from scratch, and the computational cost of the implementation should be at most quadratic in $m$.
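A minimal sketch of the (A2) ingredients, assuming NumPy and hypothetical names `householder_qr` / `apply_Qt`: the factorization stores only $R$ and the Householder vectors $u_k$, and products with $Q^T$ are then applied reflector by reflector, without ever forming $Q$:

```python
import numpy as np

def householder_qr(A):
    """Thin QR of a tall matrix A (n x m, n >= m) via Householder
    reflectors. Instead of forming Q, return R and the list of
    normalized Householder vectors u_k."""
    R = A.astype(float).copy()
    n, m = R.shape
    us = []
    for k in range(m):
        x = R[k:, k]
        u = x.copy()
        u[0] += np.copysign(np.linalg.norm(x), x[0])  # avoid cancellation
        norm_u = np.linalg.norm(u)
        if norm_u > 0:
            u /= norm_u
        us.append(u)
        # Apply H_k = I - 2 u u^T to the trailing block of R
        R[k:, k:] -= 2.0 * np.outer(u, u @ R[k:, k:])
    return us, np.triu(R[:m, :])

def apply_Qt(us, b):
    """Compute Q^T b implicitly, using the stored Householder vectors."""
    b = b.astype(float).copy()
    for k, u in enumerate(us):
        b[k:] -= 2.0 * u * (u @ b[k:])
    return b
```

With these, the least squares solution is recovered by back substitution on $Rw = (Q^T \hat{y})_{1:m}$; each implicit product costs $O(nm)$, and the full factorization $O(nm^2)$.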
\subsection*{Outline}
This report is organized as follows:
\begin{description}
\item[\autoref{ch: problem definition},] in which the problem is reformulated from the mathematical point of view;
\item[\autoref{ch: algorithms},] where we present the implemented algorithms, together with the analysis of their convergence and complexity;
\item[\autoref{ch: experiments},] in which we evaluate and compare (A1) and (A2) on this task, and provide further tests to examine the algorithms in more depth;
\item[\autoref{ch: conclusion},] in which conclusions are drawn, offering a critical analysis of the results obtained.
\end{description}