Deep advantage learning for optimal dynamic treatment regime

Recently, deep learning has achieved state-of-the-art performance on many difficult tasks. Deep neural networks outperform many existing popular methods in the field of reinforcement learning and can identify important covariates automatically. Parameter sharing in convolutional neural networks (CNNs) greatly reduces the number of parameters in the network, which allows for high scalability. However, little research has been done on deep advantage learning (A-learning). In this paper, we present a deep A-learning approach to estimating the optimal dynamic treatment regime. A-learning models the advantage function, which is of direct relevance to the goal. We use an inverse probability weighting (IPW) method to estimate the difference between potential outcomes, which requires no model assumption on the baseline mean function. We implemented different architectures of deep CNNs and convexified convolutional neural networks (CCNNs). The proposed deep A-learning methods are applied to data from the STAR*D trial and are shown to outperform the penalized least squares estimator with a linear decision rule.
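
As a concrete illustration of the IPW idea in the abstract, consider a minimal single-stage sketch with a binary treatment; the notation here is ours and is not taken from the paper. With treatment $A \in \{0, 1\}$, outcome $Y$, covariates $X$, and propensity score $\pi(x) = P(A = 1 \mid X = x)$, the advantage (contrast) function satisfies

$$C(x) = E\{Y^*(1) - Y^*(0) \mid X = x\} = E\!\left[\frac{\{A - \pi(X)\}\,Y}{\pi(X)\{1 - \pi(X)\}} \,\Big|\, X = x\right],$$

and the optimal rule is $d^{\mathrm{opt}}(x) = I\{C(x) > 0\}$. The right-hand side involves only the propensity score, not the baseline mean $E(Y \mid X = x, A = 0)$, which is why an IPW-based A-learning estimator can leave the baseline mean function unmodelled.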

Media type:

E-article

Year of publication:

2018

Published:

2018

Contained in:

Statistical theory and related fields - 2(2018), 1, pages 80-88

Language:

English

Contributors:

Liang, Shuhan [author]
Lu, Wenbin [author]
Song, Rui [author]

Links:

Full text

Subjects:

Advantage Learning
Convexified Convolutional Neural Networks
Convolutional Neural Networks
Dynamic Treatment Regime
Inverse Probability Weighting
Journal Article

Notes:

Date Revised 03.04.2024

published: Print-Electronic

Citation Status PubMed-not-MEDLINE

DOI:

10.1080/24754269.2018.1466096

Funding institution / project title:

PPN (catalogue ID):

NLM290535425