A Two-Stage Training Method for Modeling Constrained Systems With Neural Networks

Real-world systems are often formulated as constrained optimization problems. Techniques for incorporating constraints into Neural Networks (NNs), such as Neural Ordinary Differential Equations (Neural ODEs), have been used. However, these introduce hyperparameters that require manual tuning through trial and error, casting doubt on whether the constraints are successfully incorporated into the generated model. This paper describes in detail the two-stage training method for Neural ODEs, a simple, effective, and penalty-parameter-free approach to modeling constrained systems. In this approach, the constrained optimization problem is rewritten as two unconstrained sub-problems that are solved in two stages. The first stage aims to find feasible NN parameters by minimizing a measure of constraint violation. The second stage aims to find the optimal NN parameters by minimizing the loss function while remaining inside the feasible region. We experimentally demonstrate that our method produces models that satisfy the constraints and also improves their predictive performance, thereby ensuring compliance with critical system properties and contributing to reduced data requirements. Furthermore, we show that the proposed method improves convergence to an optimal solution and the explainability of Neural ODE models. Our proposed two-stage training method can be used with any NN architecture.
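The two stages described in the abstract can be illustrated on a toy constrained problem. This is a minimal sketch, not the paper's method or experiments: the problem, the squared-residual violation measure, and the stage-2 strategy of taking corrective steps on the violation whenever an update leaves the feasible region are all illustrative assumptions, since the abstract does not specify these details.

```python
import numpy as np

# Illustrative toy problem (not from the paper):
#   minimize   L(w) = (w0 - 3)^2 + (w1 - 2)^2
#   subject to g(w) = w0 + w1 - 4 = 0
def loss(w):
    return (w[0] - 3.0) ** 2 + (w[1] - 2.0) ** 2

def grad_loss(w):
    return np.array([2.0 * (w[0] - 3.0), 2.0 * (w[1] - 2.0)])

def violation(w):
    # Squared constraint residual, used as the feasibility measure.
    return (w[0] + w[1] - 4.0) ** 2

def grad_violation(w):
    g = 2.0 * (w[0] + w[1] - 4.0)
    return np.array([g, g])

def two_stage_train(w, lr=0.05, tol=1e-8, iters=2000):
    # Stage 1: ignore the loss and drive the violation measure toward
    # zero, producing a feasible starting point.
    for _ in range(iters):
        if violation(w) < tol:
            break
        w = w - lr * grad_violation(w)
    # Stage 2: minimize the loss; whenever an update leaves the feasible
    # region, take corrective steps on the violation measure instead.
    for _ in range(iters):
        if violation(w) > tol:
            w = w - lr * grad_violation(w)
        else:
            w = w - lr * grad_loss(w)
    # Final feasibility restoration so the returned parameters satisfy
    # the constraint up to the tolerance.
    while violation(w) > tol:
        w = w - lr * grad_violation(w)
    return w

w = two_stage_train(np.array([0.0, 0.0]))
# The constrained optimum of this toy problem is w = (2.5, 1.5).
```

In the actual method, `w` would be the parameters of a Neural ODE and the gradients would come from backpropagation, but the two-stage structure (feasibility first, then loss minimization within the feasible region, with no penalty parameter to tune) is the same.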

Media type:

Preprint

Year of publication:

2024

Published:

2024

Contained in:

arXiv.org - (2024), dated: 05 March - year:2024

Language:

English

Contributors:

Coelho, C. [Author]
Costa, M. Fernanda P. [Author]
Ferrás, L. L. [Author]

Links:

Full text [free of charge]

Subjects:

000
510
Computer Science - Computational Engineering, Finance, and Science
Computer Science - Machine Learning
Mathematics - Optimization and Control

Funding institution / project title:

PPN (catalogue ID):

XCH042798701