24-26 September 2024
Bogolyubov Institute for Theoretical Physics (Section 1-4), Institute of Mathematics (Section 5)

Solving Classical Systems of Linear Equations with the Help of Non-Classical Gradient and Classical Algebraic Methods

Not scheduled
20m
Institute of Mathematics

3, Tereschenkivska Str., Kyiv, 01024, Ukraine
Poster MATHEMATICS

Speaker

Alisa Tsukanova (National Technical University of Ukraine “Igor Sikorsky Kiev Polytechnic Institute”)

Description

Long before implementing a plan, people have always had to make a more or less optimal decision. Initially such decisions were made without any special analysis and relied only on human experience. Over time, however, it became impossible to make them without special mathematical methods that carry out a global search for the required optimum. Nowadays, in the era of rapid computerization, these methods allow us to take a genuinely fresh look at various complicated problems of classical mathematics.

This paper presents the basic results of an analysis of the classical Gauss method and a new optimization method of gradient descent for solving general systems of linear algebraic equations, obtained by testing a special program written in ''Visual Basic for Applications''.

Gradient methods are numerical methods for solving unconstrained optimization problems with the help of the gradient; they reduce to finding a maximum or a minimum of some function. The essence of the proposed numerical optimization method of gradient descent is that solving any system of linear algebraic equations of the matrix form $ A \vec{x} = \vec{b} $ is reduced to finding a minimizer $ \vec{x}^{*} $, i.e. a vector such that $ \min \limits_{\vec{x} \in \mathbb{R}^{n}} f( \vec{x}) = f( \vec{x}^{*}) $, for the following residual function $ f(\vec{x}) = \bigl(A \vec{x} - \vec{b}, A \vec{x} - \vec{b} \bigr) = \| A \vec{x} - \vec{b} \|^2 $, where $ A = \begin{pmatrix} a_{11} & a_{12} & \dots & a_{1n}\\ a_{21} & a_{22} & \dots & a_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ a_{m1} & a_{m2} & \dots & a_{mn} \end{pmatrix} \in \mathbb{R}^{m \times n} $, $ \vec{b} = \begin{pmatrix} b_{1}\\ b_{2}\\ \vdots \\ b_{m} \end{pmatrix} \in \mathbb{R}^{m} $, $ m \leq n $.
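
For this residual function the gradient has the explicit form $ \operatorname{grad} f(\vec{x}) = 2 A^{T} \bigl( A \vec{x} - \vec{b} \bigr) $, so each iteration of the scheme below requires only matrix-vector multiplications.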

Formally, the method consists in the iterative generation of a sequence of points $ \{ \vec{x}_{k}\}_{k \geq 0} $ (a descent trajectory, sometimes called a relaxation trajectory, that converges to the true solution $ \vec{x}^{*} $ as $ k \to \infty $) such that $ f(\vec{x}_{k + 1}) \leq f(\vec{x}_{k}) $, $ k \geq 0 $, according to the following iterative scheme.
1. An arbitrary point is selected as an initial approximation $ \vec{x}_{0} $.
2. The point $ \vec{x}_{k + 1} $, $ k \geq 0 $, is determined by the formula $ \vec{x}_{k + 1} = \vec{x}_{k} - \lambda_{k} \cdot \operatorname{grad} \bigl(f( \vec{x}_{k}) \bigr)$, $\lambda_{k} > 0$, $k \geq 0$.

At each step we move along the anti-gradient vector $-\operatorname{grad}\bigl(f( \vec{x}_{k})\bigr)$, $k \geq 0$, i.e. in the direction of the fastest decrease of $ f $, and as a result we obtain the required solution. Namely, if the modulus of the anti-gradient turns out to be zero (more precisely, less than a predetermined accuracy), then we are at the minimum point we are looking for. If this termination criterion is not satisfied (the modulus of the anti-gradient exceeds the predetermined accuracy), we repeat step 2; otherwise we return the current value of $ \vec{x}_{k + 1} $, $ k \geq 0 $. In this formula $ \lambda_{k} $, $ k \geq 0 $, determines the distance between $ \vec{x}_{k} $ and $ \vec{x}_{k + 1} $.
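
Purely as an illustration of this scheme (the authors' program itself is written in Visual Basic for Applications), a minimal sketch in Python/NumPy with a fixed step $ \lambda $ might look as follows; the function name and parameters are illustrative only:

    import numpy as np

    def gradient_descent(A, b, lam=1e-3, tol=1e-8, max_iter=100000):
        # Minimize f(x) = ||A x - b||^2 by the scheme above, with a fixed step lam.
        x = np.zeros(A.shape[1])             # step 1: initial approximation x_0 (here the zero vector)
        for _ in range(max_iter):
            grad = 2.0 * A.T @ (A @ x - b)   # grad f(x_k) = 2 A^T (A x_k - b)
            if np.linalg.norm(grad) < tol:   # termination: modulus of the anti-gradient below the accuracy
                break
            x = x - lam * grad               # step 2: x_{k+1} = x_k - lambda_k * grad f(x_k)
        return x

For a well-conditioned system this converges to $ \vec{x}^{*} $; the step-size strategies discussed below replace the fixed step by an adaptively chosen $ \lambda_{k} $.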

The main problem in choosing the step $ \lambda_{k} $, $ k \geq 0 $, is to ensure that the rule $ f(\vec{x}_{k + 1}) \leq f(\vec{x}_{k}) $, $ k \geq 0 $, holds. There are different ways to choose this step multiplier $ \lambda_{k} $, $ k \geq 0 $, and, depending on the choice, different variants of the numerical optimization method of gradient descent are obtained. We have considered the following four variants: the method with adaptive step selection, the method with adaptive step correction, the modified descent method with adaptive step selection, and the fastest (steepest) descent method. All of them accelerate the convergence of the sequence $ \{ \vec{x}_{k}\}_{k \geq 0} $ to the required solution $ \vec{x}^{*} $.
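
As one standard example (assuming the fastest descent variant corresponds to exact line search along the anti-gradient, which is the usual convention), the quadratic residual $ f $ above admits a closed-form step: $ \lambda_{k} = \arg\min\limits_{\lambda > 0} f\bigl(\vec{x}_{k} - \lambda \operatorname{grad} f(\vec{x}_{k})\bigr) = \dfrac{\| \operatorname{grad} f(\vec{x}_{k}) \|^{2}}{2 \, \| A \operatorname{grad} f(\vec{x}_{k}) \|^{2}} $, $ k \geq 0 $. The adaptive variants typically adjust $ \lambda_{k} $ on the fly whenever the monotonicity rule above is violated.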

The result of the classical Gauss method is a conclusion about the solvability or unsolvability of the system under consideration, based on applying the Kronecker-Capelli theorem from classical algebra to the reduced system. The result of the optimization part of the program is the required solution found by the proposed gradient descent method, together with its running time and the accuracy of the calculations.
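
For reference, the Kronecker-Capelli check amounts to comparing $ \operatorname{rank}(A) $ with the rank of the augmented matrix $ (A \mid \vec{b}) $; a minimal illustrative sketch (again in Python/NumPy rather than the authors' VBA code) might be:

    import numpy as np

    def kronecker_capelli(A, b):
        # Classify the solvability of A x = b by the Kronecker-Capelli theorem.
        rank_A = np.linalg.matrix_rank(A)
        rank_Ab = np.linalg.matrix_rank(np.column_stack((A, b)))
        if rank_A < rank_Ab:
            return "inconsistent (no solutions)"
        return "unique solution" if rank_A == A.shape[1] else "infinitely many solutions"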

A comparative analysis of these methods has been carried out by testing the program on different systems of linear algebraic equations. The Gaussian algorithm gives results in a fraction of a second, while the optimization method does not produce an accurate result in a short time for every system: for some systems the program simply starts to loop. This is experimental evidence that the effectiveness of the descent method depends on the exact value of its step: near the optimum significant ''yawing'' may occur due to the need to correct the step value of the descent method.

Primary author

Alisa Tsukanova (National Technical University of Ukraine “Igor Sikorsky Kiev Polytechnic Institute”)
