
RATIONAL AND GENERALISED RATIONAL CHEBYSHEV APPROXIMATION PROBLEMS AND THEIR APPLICATIONS

Published online by Cambridge University Press:  13 January 2023

VINESHA PEIRIS*
Affiliation:
Department of Mathematics, School of Science, Computing and Engineering Technologies, Swinburne University of Technology, Hawthorn, Victoria 3122, Australia

Abstract

Type: PhD Abstract
Copyright: © The Author(s), 2023. Published by Cambridge University Press on behalf of Australian Mathematical Publishing Association Inc.

In Chebyshev (uniform) approximation, the goal is to minimise the maximum deviation of the approximation from the original function. A classical rational Chebyshev approximation is a ratio of two polynomials (monomial basis). It is a flexible alternative to the extensively studied uniform polynomial and piecewise polynomial approximations. In particular, rational functions provide accurate approximations to nonsmooth and non-Lipschitz functions, where polynomial approximations are inefficient.

Optimisation problems appearing in univariate rational Chebyshev approximation are quasiconvex. Moreover, this property remains valid when the basis functions are not restricted to monomials (generalised rational Chebyshev approximation), and also in multivariate settings.
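The quasiconvexity can be checked numerically. The key fact is that, wherever the denominator stays positive, the sublevel sets of the uniform deviation are convex (the constraint |f(x)q(x) − p(x)| ≤ t·q(x) is linear in the coefficients for fixed t), so the deviation along any segment between two parameter vectors never exceeds the larger endpoint value. A minimal sketch for a (2,2) rational approximation of |x| with a monomial basis (the grid, degrees and sampling scheme here are illustrative assumptions, not those of the thesis):

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(-1.0, 1.0, 101)
f = np.abs(xs)                          # a nonsmooth target function

def deviation(theta):
    """Uniform deviation max_x |f(x) - p(x)/q(x)| for a (2,2) rational
    function; returns inf if the denominator is not positive on the grid."""
    a, b = theta[:3], theta[3:]
    p = a[0] + a[1] * xs + a[2] * xs**2
    q = b[0] + b[1] * xs + b[2] * xs**2
    if np.min(q) <= 0:
        return np.inf
    return np.max(np.abs(f - p / q))

def random_theta():
    """Sample coefficients whose denominator is positive on the grid."""
    while True:
        theta = rng.normal(size=6)
        theta[3] = abs(theta[3]) + 2.5  # large constant term pushes q > 0
        if np.isfinite(deviation(theta)):
            return theta

# Quasiconvexity: along any segment between two admissible parameter
# vectors, the deviation never exceeds the larger endpoint value.  The
# denominator at any convex combination stays positive because q is
# linear in its coefficients.
for _ in range(100):
    t1, t2 = random_theta(), random_theta()
    d_end = max(deviation(t1), deviation(t2))
    for lam in np.linspace(0.0, 1.0, 11):
        assert deviation(lam * t1 + (1 - lam) * t2) <= d_end + 1e-9
```

Note that the objective is generally not convex: the same argument gives convex sublevel sets, which is exactly quasiconvexity, and is what the bisection method below exploits.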

In this research, we provide an extensive study of the optimisation problems, their extensions and results of numerical experiments, and a comparison with existing methods. We mainly use two methods to find the optimal solution: the bisection method for quasiconvex optimisation and the differential correction method. In both methods, the auxiliary subproblems can be reduced to solving linear programming problems when the domain is discrete. The differential correction method has better computational properties in the case of univariate rational approximations with monomial basis functions. At the same time, the bisection method is more attractive when one needs to extend it to a broader class of approximations, including approximations with nonmonomial basis functions and multivariate settings. Moreover, it can be extended to approximations that are quasiaffine with respect to their parameters.
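The bisection scheme on a discrete grid can be sketched as follows: a deviation level t is achievable if and only if coefficients (a, b) exist with |f(x)q(x) − p(x)| ≤ t·q(x) and, after normalisation, q(x) ≥ 1 at every grid point — a linear feasibility problem in (a, b). A minimal sketch with SciPy, again for a (2,2) approximation of |x| (grid, degrees and normalisation are illustrative choices, not necessarily those of the thesis):

```python
import numpy as np
from scipy.optimize import linprog

xs = np.linspace(-1.0, 1.0, 81)
f = np.abs(xs)                                      # nonsmooth target
Phi = np.vstack([np.ones_like(xs), xs, xs**2]).T    # monomial basis, degree 2

def feasible(t):
    """LP feasibility: do coefficients (a, b) exist with
    |f(x) q(x) - p(x)| <= t q(x) and q(x) >= 1 on the grid?"""
    # variables stacked as [a (numerator), b (denominator)]
    A_ub = np.vstack([
        np.hstack([-Phi, (f - t)[:, None] * Phi]),   # (f - t) q - p <= 0
        np.hstack([Phi, -(f + t)[:, None] * Phi]),   # p - (f + t) q <= 0
        np.hstack([np.zeros_like(Phi), -Phi]),       # q >= 1 (normalisation)
    ])
    b_ub = np.concatenate([np.zeros(2 * len(xs)), -np.ones(len(xs))])
    res = linprog(np.zeros(6), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * 6, method="highs")
    return res.x if res.status == 0 else None

# Bisection on the deviation level t; t = max|f| is always feasible
# (take p = 0, q = 1), t = 0 generally is not.
lo, hi = 0.0, float(np.max(np.abs(f)))
sol = feasible(hi)
for _ in range(30):
    mid = 0.5 * (lo + hi)
    cand = feasible(mid)
    if cand is not None:
        hi, sol = mid, cand
    else:
        lo = mid

a, b = sol[:3], sol[3:]
err = float(np.max(np.abs(f - (Phi @ a) / (Phi @ b))))
```

The achieved uniform error `err` cannot exceed 1/8, since the best quadratic polynomial x² + 1/8 (with q ≡ 1) already attains deviation 1/8 for |x| on [−1, 1], and the rational class contains all such polynomials.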

This research has many potential applications, in particular, in the area of data analysis, deep learning and some engineering applications. The flexibility of rational approximation makes it attractive for matrix function tasks. Matrix functions prove to be an efficient tool in applications such as solving ordinary differential equations (ODEs), engineering models, image denoising and graph neural networks.
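The link to matrix functions can already be seen with the classical (1,1) Padé approximant of the exponential, r(x) = (1 + x/2)/(1 − x/2): substituting a matrix for x turns the denominator into a single linear solve. This is only a textbook illustration of the mechanism, not the Chebyshev-type construction developed in the thesis:

```python
import numpy as np
from scipy.linalg import expm

def expm_pade11(A):
    """Apply the (1,1) Pade approximant of exp to a matrix:
    exp(A) ~= (I - A/2)^{-1} (I + A/2); accurate for small ||A||."""
    I = np.eye(A.shape[0])
    # the rational denominator becomes a linear system, never an explicit inverse
    return np.linalg.solve(I - A / 2.0, I + A / 2.0)

rng = np.random.default_rng(1)
A = 0.05 * rng.normal(size=(4, 4))   # small norm keeps the approximant accurate

err = np.linalg.norm(expm_pade11(A) - expm(A))
```

The same pattern — evaluate the numerator polynomial, then solve with the denominator polynomial — is what makes rational approximants attractive for matrix-function tasks such as those listed above.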

We also use rational and generalised rational approximation as a preprocessing step for deep learning classifiers and demonstrate that classification accuracy improves significantly compared with classifying the raw signals.

We investigate the potential for using a uniform norm-based loss function in the training of an artificial neural network. This leads to superior classification results in some special cases where the training data are reliable but limited in size, or where the dataset contains under-represented classes. We also study the uniform norm-based loss function from the quasidifferential standpoint. Furthermore, we incorporate rational functions as activation functions in a neural network.
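The uniform-norm loss max_i |y_i − ŷ_i| is convex but nonsmooth in the model outputs; a subgradient is obtained from a sample attaining the maximum. A hypothetical minimal sketch for a linear model trained by subgradient descent (the thesis treats genuine neural networks and the quasidifferential machinery; the data, step sizes and iteration count here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                        # noiseless targets, so the optimum is 0

def linf_loss(w):
    """Uniform-norm (Chebyshev) loss over the training set."""
    return float(np.max(np.abs(y - X @ w)))

w = np.zeros(3)
best = linf_loss(w)
for k in range(1, 2001):
    r = y - X @ w
    i = np.argmax(np.abs(r))          # sample attaining the max deviation
    g = -np.sign(r[i]) * X[i]         # subgradient of max_i |y_i - x_i . w|
    w = w - (0.5 / np.sqrt(k)) * g    # diminishing step size
    best = min(best, linf_loss(w))    # track the best iterate
```

For a convex objective, the subgradient method with diminishing steps drives the best iterate towards the optimum; here the loss drops well below its starting value.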

Some of this research has been published in [1–6].

Footnotes

Thesis submitted to Swinburne University of Technology in February 2022; degree approved on 19 May 2022; principal supervisor Nadezda Sukhorukova, associate supervisors Julien Ugon and Vera Roshchina.

References

[1] Díaz Millán, R., Peiris, V., Sukhorukova, N. and Ugon, J., ‘Multivariate approximation by polynomial and generalized rational functions’, Optimization 71(4) (2022), 1171–1187.
[2] Kalfon, E., Peiris, V., Sharon, N., Sukhorukova, N. and Ugon, J., ‘Flexible rational approximation for matrix functions’, Preprint, 2021, arXiv:2108.09357.
[3] Peiris, V., ‘Rational activation functions in neural network with uniform norm based loss function and its application in classification’, Comm. Optim. Theory 2022 (2022), Article no. 3, 25 pages.
[4] Peiris, V., Sharon, N., Sukhorukova, N. and Ugon, J., ‘Generalised rational approximation and its application to improve deep learning classifiers’, Appl. Math. Comput. 389 (2021), Article no. 125560.
[5] Peiris, V. and Sukhorukova, N., ‘The extension of the linear inequality method for generalized rational Chebyshev approximation to approximation by general quasilinear functions’, Optimization 71(4) (2022), 999–1019.
[6] Peiris, V., Sukhorukova, N. and Roshchina, V., ‘Deep learning with nonsmooth objectives’, Preprint, 2021, arXiv:2107.08800.