This blog is a database for 3rd semester students' writing.
Sunday, 7 January 2018
Article "Synface-Speech Driver Facial Animation for Virtual Speech Reading Support" (1st)
Clarisa Livia
16611022
This article describes SynFace, a supportive technology that aims at enhancing audio-based spoken communication in adverse acoustic conditions by providing the missing visual information in the form of an animated talking head. It describes the system architecture, consisting of a 3D animated face model controlled from the speech input by a specifically optimised phonetic recogniser, and it also reports on speech intelligibility experiments with a focus on multilinguality and robustness to audio quality.
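To make the architecture described above more concrete, here is a minimal sketch of such a pipeline: audio is decoded into phoneme hypotheses by a recogniser, each phoneme is mapped to articulation parameters, and those parameters drive the animated face. This is my own illustration, not the authors' implementation; all names and parameter values are hypothetical.

```python
# Hypothetical sketch of a SynFace-style pipeline:
# audio frames -> phonetic recogniser -> articulation parameters -> talking head.
# Class names, phoneme labels, and values are illustrative, not from the paper.

from dataclasses import dataclass

@dataclass
class FaceParams:
    jaw_open: float   # degree of jaw opening, 0..1
    lip_round: float  # lip rounding, 0..1
    lip_close: float  # bilabial closure, 0..1

# Minimal phoneme -> articulation lookup, standing in for the stage where
# recogniser output drives the 3D face model (real systems also smooth over time).
PHONEME_TO_PARAMS = {
    "aa":  FaceParams(jaw_open=0.9, lip_round=0.1, lip_close=0.0),  # open vowel
    "uw":  FaceParams(jaw_open=0.3, lip_round=0.9, lip_close=0.0),  # rounded vowel
    "m":   FaceParams(jaw_open=0.0, lip_round=0.2, lip_close=1.0),  # bilabial
    "sil": FaceParams(jaw_open=0.0, lip_round=0.0, lip_close=0.3),  # silence/rest
}

def animate(phoneme_stream):
    """Map each recognised phoneme to face parameters, frame by frame."""
    for phoneme in phoneme_stream:
        yield PHONEME_TO_PARAMS.get(phoneme, PHONEME_TO_PARAMS["sil"])

# Example: phoneme hypotheses as they might arrive from a low-latency recogniser.
for params in animate(["sil", "m", "aa", "m", "uw"]):
    print(params)
```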
The purpose of SynFace is to enhance spoken communication for the hearing impaired, rather than to solve the general acoustic-to-visual speech mapping. The methods employed are therefore tailored to achieving this goal in the most effective way. Beskow showed that, whereas data-driven visual synthesis resulted in more realistic lip movements, the rule-based system enhanced intelligibility. Similarly, mapping directly from acoustic speech to visual parameters is an appealing research problem. However, when the ambition is to develop a tool that can be applied in real-life conditions, it is necessary to constrain the problem.
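As a rough illustration of what a rule-based approach means in practice (my own sketch, not the article's method), one simple rule is to interpolate between per-phoneme parameter targets so the face moves smoothly instead of jumping, a crude stand-in for coarticulation rules:

```python
# Illustrative only: linear interpolation between per-phoneme targets,
# a minimal stand-in for the coarticulation rules of rule-based synthesis.

def interpolate(a: float, b: float, t: float) -> float:
    """Blend two parameter targets; t runs from 0 (at a) to 1 (at b)."""
    return a + (b - a) * t

# Jaw-opening targets for the phoneme sequence /m aa m/ (closed-open-closed).
targets = [0.0, 0.9, 0.0]
frames_per_phoneme = 4

trajectory = []
for current, nxt in zip(targets, targets[1:]):
    for frame in range(frames_per_phoneme):
        trajectory.append(interpolate(current, nxt, frame / frames_per_phoneme))
trajectory.append(targets[-1])

print([round(v, 2) for v in trajectory])
# Jaw opening rises and falls smoothly rather than switching abruptly.
```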
Labels: clarisa