A Hybrid Machine Learning Model for Hand Gesture Recognition

dc.contributor.author Talukder, Jayed
dc.date.accessioned 2020-10-03T09:45:07Z
dc.date.available 2020-10-03T09:45:07Z
dc.date.issued 2019-08-26
dc.identifier.uri http://dspace.daffodilvarsity.edu.bd:8080/handle/123456789/4324
dc.description.abstract Sign language, a gesture-based nonverbal method of communication, is used by a significant number of people throughout the world. Sign language recognition becomes one of the most challenging tasks when the person interpreting it lacks prior knowledge of the signs. This paper proposes a method to recognize sign language using computer vision. Five features are extracted from a binary image of the hand shape after preprocessing: the area of the closed contour, the area of the convex hull, the number of convexity defects, the maximum depth of the defects, and the sum of the depths of the defects. In this study, the recognition task is performed by two classification algorithms applied individually: k-nearest neighbor (k-NN) and Support Vector Machine (SVM). The system then decides the class of the gesture from the result of a logical AND operation on the outputs of the two classifiers. en_US
dc.language.iso en en_US
dc.publisher Daffodil International University en_US
dc.subject Communication en_US
dc.subject Machine learning en_US
dc.title A Hybrid Machine Learning Model for Hand Gesture Recognition en_US
dc.type Other en_US
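The abstract above names the five shape features (contour area, convex hull area, and three convexity-defect statistics) and the k-NN/SVM combination, but the record contains no implementation. The following is a minimal sketch of how those features and the logical-AND fusion could be computed, assuming OpenCV for the shape features and scikit-learn for the classifiers; the names extract_features and hybrid_predict, the classifier hyperparameters, and the treatment of classifier disagreement as a rejected gesture are illustrative assumptions, not details taken from the paper.

import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def extract_features(binary_img):
    """Compute the five shape features listed in the abstract from a binarized hand image."""
    # OpenCV 4.x return signature: (contours, hierarchy)
    contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    hand = max(contours, key=cv2.contourArea)            # assume the largest contour is the hand
    contour_area = cv2.contourArea(hand)                 # 1. area of the closed contour
    hull_points = cv2.convexHull(hand)
    hull_area = cv2.contourArea(hull_points)             # 2. area of the convex hull
    hull_idx = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull_idx)
    if defects is None:
        num_defects, depths = 0, np.array([0.0])
    else:
        num_defects = len(defects)                       # 3. number of convexity defects
        depths = defects[:, 0, 3] / 256.0                # OpenCV stores defect depth as fixed-point
    return [contour_area, hull_area, num_defects,
            depths.max(),                                # 4. maximum depth of the defects
            depths.sum()]                                # 5. sum of the depths of the defects

def hybrid_predict(knn, svm, feature_vec):
    """Logical-AND fusion: accept a class only when both classifiers agree (assumed interpretation)."""
    a = knn.predict([feature_vec])[0]
    b = svm.predict([feature_vec])[0]
    return a if a == b else None                         # None = rejected / unrecognized gesture

# Example training on precomputed feature vectors X and labels y (hyperparameters are placeholders):
# knn = KNeighborsClassifier(n_neighbors=3).fit(X, y)
# svm = SVC(kernel='rbf').fit(X, y)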

