Near-optimal control of motor drives via approximate dynamic programming

Document Type

Conference Proceeding

Publication Date

10-1-2019

Abstract

Data-driven methods for learning near-optimal control policies through approximate dynamic programming (ADP) have garnered widespread attention. In this paper, we investigate how data-driven control methods can be leveraged to achieve near-optimal performance in a core component of modern factory systems: the electric motor drive. We apply policy iteration-based ADP to an induction motor model to construct a state-feedback control policy for a given cost functional. The approximate error convergence properties of policy iteration imply that the learned control policy is near-optimal. We demonstrate that carefully selecting the cost functional and the initial control policy yields a near-optimal control policy that outperforms both the initial control policy and a baseline nonlinear control policy based on backstepping.
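As a generic illustration of the policy iteration scheme the abstract refers to, the sketch below runs Kleinman-style policy iteration on a toy discrete-time linear-quadratic problem: policy evaluation solves a Lyapunov equation for the cost of the current feedback gain, and policy improvement takes the one-step greedy gain. The plant matrices, cost weights, and initial stabilizing gain are illustrative assumptions, not the induction motor model or cost functional from the paper.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, solve_discrete_are

def policy_iteration_lqr(A, B, Q, R, K0, n_iter=20):
    """Policy iteration for discrete-time LQR (Kleinman-style).

    Requires an initial gain K0 such that (A - B K0) is stable; each
    iteration evaluates the current policy, then improves it greedily.
    """
    K = K0
    for _ in range(n_iter):
        Acl = A - B @ K
        # Policy evaluation: P solves P = Acl' P Acl + Q + K' R K
        P = solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)
        # Policy improvement: greedy gain w.r.t. the evaluated cost P
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    return K, P

# Toy double-integrator-like plant (hypothetical values for illustration)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
K0 = np.array([[10.0, 5.0]])  # hand-picked stabilizing initial gain

K, P = policy_iteration_lqr(A, B, Q, R, K0)
P_opt = solve_discrete_are(A, B, Q, R)  # Riccati solution as ground truth
print(np.allclose(P, P_opt, atol=1e-6))
```

The iterates converge to the optimal LQR cost matrix, mirroring the paper's point that the quality of the learned policy hinges on a well-chosen cost functional and a stabilizing initial policy.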

Identifier

85076776250 (Scopus)

ISBN

9781728145693

Publication Title

Conference Proceedings: IEEE International Conference on Systems, Man and Cybernetics

External Full Text Location

https://doi.org/10.1109/SMC.2019.8914595

ISSN

1062-922X

First Page

3679

Last Page

3686

Volume

2019-October

