GISiL

Gesture Interpreter for Sign Language

The aim of the project is to bridge the gap between users of conventional spoken language and sign language, enabling effortless communication. Cost-effective miniature potentiometers, mounted in gloves built with 3D-printed parts, detect finger movements. The gloves are paired with a smartphone, which performs sign translation using machine learning and natural language processing to produce audio and visual output.
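The glove's potentiometer readings must first be turned into features a model can consume. Below is a minimal, hypothetical sketch of that step: raw ADC counts are mapped to normalized per-finger flexion values. All pin assignments, ADC resolution, and calibration numbers are assumptions for illustration, not values from the actual hardware.

```python
# Hypothetical sketch: convert raw ADC readings from the glove's finger
# potentiometers into normalized flexion values (0 = straight, 1 = fully
# bent). Calibration values and ADC resolution are assumed, not measured.

ADC_MAX = 1023  # assumes a 10-bit ADC

# Per-finger calibration: (ADC count at full extension, at full flexion).
CALIBRATION = {
    "thumb":  (180, 840),
    "index":  (150, 900),
    "middle": (160, 880),
    "ring":   (170, 860),
    "pinky":  (190, 820),
}

def normalize(finger: str, raw: int) -> float:
    """Map a raw ADC count to a 0..1 flexion value, clamped to range."""
    lo, hi = CALIBRATION[finger]
    value = (raw - lo) / (hi - lo)
    return max(0.0, min(1.0, value))

def read_frame(raw_readings: dict) -> list:
    """Convert one sample of all five fingers into a feature vector."""
    return [normalize(f, raw_readings[f]) for f in CALIBRATION]
```

A stream of such five-element frames, sampled over time, would form the sensor-domain input to the translation model.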

We use publicly available images of American Sign Language (ASL) as the source domain for training, and bridge the domain gap to noisy sensor data at inference time through latent-space transfer.
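The latent-space transfer idea can be sketched as follows: paired image features and sensor features are projected into a shared latent space, and the sensor-side projection is fit so its latents align with the image-side latents, so a classifier trained on image latents can be reused on sensor latents. This is a toy linear illustration with random data, not the actual SIHeDA-Net implementation; all dimensions and encoders are assumptions.

```python
import numpy as np

# Toy illustration of latent-space alignment between two domains.
# Dimensions, data, and the linear "encoders" are illustrative only.
rng = np.random.default_rng(0)

n, img_dim, sen_dim, lat_dim = 200, 64, 10, 8

# Fixed image encoder: a random linear projection standing in for a
# trained image-feature extractor.
W_img = rng.standard_normal((img_dim, lat_dim))
X_img = rng.standard_normal((n, img_dim))
Z_img = X_img @ W_img            # target latents from the image domain

# Paired (noisy) sensor readings for the same signs.
X_sen = rng.standard_normal((n, sen_dim))

# Fit the sensor encoder by least squares so its latents align with
# the image latents on the paired training data.
W_sen, *_ = np.linalg.lstsq(X_sen, Z_img, rcond=None)
Z_sen = X_sen @ W_sen

# A classifier trained on Z_img can now be applied to Z_sen; the
# residual alignment error measures the remaining domain gap.
gap = np.mean((Z_sen - Z_img) ** 2)
```

In practice both encoders would be nonlinear networks trained jointly with the alignment objective, but the shared-latent principle is the same.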

More information can be found in our paper presentation.

References

2022

  1. SIHeDA-Net: Sensor to Image Heterogeneous Domain Adaptation Network
    Ishikaa Lunawat, Vignesh S, and S P Sharan
    In Medical Imaging with Deep Learning, 2022