DSpace Repository

A Deep Learning Approach to Learn Lip Sync from Audio


dc.contributor.author Bhuiyan, Mazharul Islam
dc.contributor.author Mahato, Milon
dc.contributor.author Hassan, Md. Nazmul
dc.contributor.author Rahman, Md. Habibur
dc.date.accessioned 2021-09-16T10:28:35Z
dc.date.available 2021-09-16T10:28:35Z
dc.date.issued 2021-01-14
dc.identifier.uri http://dspace.daffodilvarsity.edu.bd:8080/handle/123456789/6158
dc.description.abstract With accurate speaker-independent lip-sync, we synthesize several high-quality videos and compose them to generate the expected target video clip. In this work we survey related work on lip-syncing, out-of-sync correction, talking-face generation, and speaker-independent target video creation from an input audio stream, noting their limitations and failures, and we implement improved lip-sync by training models that do not rely on a specific speaker. We also identify the key causes of the problems mentioned, address the difficult factors with new evaluation strategies, and achieve output quality comparable to the Wav2Lip model. Mapping audio to mouth texture with an RNN produces realistic, well-matched lip-sync for a video of any individual speaker from any input voice or audio clip. en_US
dc.language.iso en_US en_US
dc.publisher Daffodil International University en_US
dc.subject Lip-synching en_US
dc.subject Learning centers en_US
dc.title A Deep Learning Approach to Learn Lip Sync from Audio en_US
dc.type Other en_US
