Speech-driven Personalized Gesture Synthetics: Harnessing Automatic Fuzzy Feature Inference

Speech-driven gesture generation is an emerging field within virtual human creation. However, a significant challenge lies in accurately determining and processing the multitude of input features (such as acoustic, semantic, emotional, personality, and even subtle unknown features). Traditional approaches, which rely on various explicit feature inputs and complex multimodal processing, constrain the expressiveness of the resulting gestures and limit their applicability. To address these challenges, we present Persona-Gestor, a novel end-to-end generative model that produces highly personalized 3D full-body gestures from raw speech audio alone. The model combines a fuzzy feature extractor with a non-autoregressive Adaptive Layer Normalization (AdaLN) transformer diffusion architecture (DiT-based). The fuzzy feature extractor uses a fuzzy inference strategy to automatically infer implicit, continuous fuzzy features. These fuzzy features, represented as a unified latent feature, are fed into the AdaLN transformer. The AdaLN transformer introduces a conditioning mechanism that applies a uniform function across all tokens, effectively modeling the correlation between the fuzzy features and the gesture sequence. This module ensures a high level of gesture-speech synchronization while preserving naturalness. Finally, a diffusion model is used for training and for sampling diverse gestures at inference time. Extensive subjective and objective evaluations on the Trinity, ZEGGS, and BEAT datasets confirm that our model outperforms current state-of-the-art approaches. Persona-Gestor improves the system's usability and generalization capabilities, setting a new benchmark in speech-driven gesture synthesis and broadening the horizon for virtual human technology. Supplementary videos and code can be accessed at https://zf223669.github.io/Diffmotion-v2-website/.
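The AdaLN conditioning the abstract describes ("a uniform function across all tokens") matches the modulation scheme popularized by DiT-style transformer blocks: the conditioning vector is regressed to per-block shift, scale, and gate parameters that are broadcast identically to every token. The sketch below is a minimal, hypothetical PyTorch illustration of that idea only; the class name AdaLNBlock, the 6-way modulation split, and all layer sizes are assumptions for exposition, not taken from the paper or its released code.

```python
import torch
import torch.nn as nn

class AdaLNBlock(nn.Module):
    """One transformer block with Adaptive Layer Normalization (AdaLN)
    conditioning, in the style of DiT. The conditioning vector (e.g. a
    fuzzy speech feature) is mapped to per-block shift/scale/gate
    parameters that modulate every token uniformly."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # LayerNorms without learned affine params; AdaLN supplies them instead.
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))
        # Regress 6 modulation vectors (shift/scale/gate for attention
        # and for the MLP) from the conditioning feature.
        self.ada_ln = nn.Sequential(nn.SiLU(), nn.Linear(dim, 6 * dim))

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) gesture tokens; cond: (batch, dim) condition.
        shift1, scale1, gate1, shift2, scale2, gate2 = \
            self.ada_ln(cond).chunk(6, dim=-1)
        # The same modulation is broadcast across all tokens
        # (the "uniform function" the abstract refers to).
        h = self.norm1(x) * (1 + scale1.unsqueeze(1)) + shift1.unsqueeze(1)
        x = x + gate1.unsqueeze(1) * self.attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x) * (1 + scale2.unsqueeze(1)) + shift2.unsqueeze(1)
        x = x + gate2.unsqueeze(1) * self.mlp(h)
        return x
```

In a diffusion setup such blocks would be stacked, with the condition typically combining the noise-timestep embedding and the latent speech feature; how Persona-Gestor composes these is detailed in the paper itself.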

Media type:

E-article

Year of publication:

2024

Published:

2024

Contained in:

IEEE Transactions on Visualization and Computer Graphics - PP (2024), 24 Apr.

Language:

English

Contributors:

Zhang, Fan [Author]
Wang, Zhaohan [Author]
Lyu, Xin [Author]
Zhao, Siyuan [Author]
Li, Mengjian [Author]
Geng, Weidong [Author]
Ji, Naye [Author]
Du, Hui [Author]
Gao, Fuxing [Author]
Wu, Hao [Author]
Li, Shunman [Author]

Links:

Full text

Topics:

Journal Article

Notes:

Date Revised: 30 Apr 2024

published: Print-Electronic

Citation Status Publisher

DOI:

10.1109/TVCG.2024.3393236

Funding institution / project title:

PPN (catalogue ID):

NLM37145753X