Abstract
People who are deaf or hard of hearing communicate through sign language, a system of hand gestures used in place of speech. However, interacting through signs is difficult for those who do not know sign language, so an application that can recognize sign-language gestures is needed to ease communication with the deaf community. Given this importance, existing approaches to recognizing American Sign Language (ASL) achieve varying degrees of accuracy. This study aims to improve on current ASL recognition approaches by proposing a deep-learning model: a convolutional neural network (CNN) developed and trained to recognize the hand gestures representing the ASL letters (A-Z). The proposed model performs exceptionally well, reaching a test accuracy of 99.97% on the dataset. These results show that the model can reliably distinguish between distinct ASL hand signs, making it a promising tool for practical assistive-technology applications for the hearing impaired.
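As an illustration of the kind of model described above, the following is a minimal sketch of a CNN for 26-class ASL letter classification, written in PyTorch. The layer sizes, input resolution (28x28 grayscale), and class names are illustrative assumptions, not the paper's actual architecture or dataset.

```python
import torch
import torch.nn as nn

class ASLNet(nn.Module):
    """Hypothetical CNN sketch for classifying ASL letters A-Z (26 classes).

    Architecture details (filter counts, input size) are assumptions for
    illustration only and do not reproduce the paper's model.
    """

    def __init__(self, num_classes: int = 26):
        super().__init__()
        # Two conv/pool stages shrink a 28x28 input to 7x7 feature maps.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Fully connected head mapping features to one logit per letter.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = ASLNet()
# One dummy grayscale image of shape (batch, channels, height, width).
logits = model(torch.zeros(1, 1, 28, 28))
print(tuple(logits.shape))  # (1, 26): one score per ASL letter
```

In practice such a network would be trained with a cross-entropy loss over labeled hand-gesture images; the predicted letter is the argmax over the 26 output logits.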
Keywords
American Sign Language; ASL letters; CNN; Hand Gesture; Sign Recognition