Indian Sign Language Recognition (ISLAR)

Sample this: Mumbai and Pune, two Indian cities only about 80 km apart, have distinctly different spoken dialects. Stranger still, their sign languages are also distinct, with very different signs for the same objects, expressions, and phrases. While regional diversification in spoken languages and scripts is well known and widely documented, it has percolated into sign language as well, essentially resulting in multiple sign languages across the country. To help overcome these inconsistencies and to standardize sign language in India, I am collaborating with the Centre for Research and Development of Deaf & Mute (an NGO in Pune) and Google, adopting a two-pronged approach: a) I have developed an Indian Sign Language Recognition system (ISLAR) that uses artificial intelligence to accurately identify signs and translate them into text/speech in real time, and b) I have proposed the standardization of sign languages across India to the Government of India and the Indian Sign Language Research and Training Centre.

As previously mentioned, the initiative aims to develop a lightweight machine-learning model for the 14 million speech- and hearing-impaired Indians, one suited to Indian conditions and flexible enough to accommodate multiple signs for the same gesture. More importantly, unlike other implementations that rely on additional external hardware, this approach uses only a common surgical glove and a ubiquitous smartphone camera, offering the potential for hardware-related savings at an all-India scale. ISLAR received great attention from the open-source community, with Google inviting me to its India and global headquarters in Bangalore and California, respectively, to interact with and share my work with the TensorFlow team.
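
To make the model light enough for real-time use on a smartphone, one common route is post-training quantization with TensorFlow Lite. The sketch below is purely illustrative; the model path and file names are hypothetical and not taken from ISLAR.

```python
# Illustrative sketch: shrinking a trained gesture classifier for on-device inference.
# "islar_classifier" is a hypothetical SavedModel directory, not the actual ISLAR artifact.
import tensorflow as tf

# Load a trained sign-classification model.
model = tf.keras.models.load_model("islar_classifier")

# Convert to TensorFlow Lite with default post-training quantization,
# trading a little accuracy for a much smaller, faster model on the edge.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The resulting flatbuffer can be bundled with a smartphone app.
with open("islar_classifier.tflite", "wb") as f:
    f.write(tflite_model)
```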


Outline/Structure of the Demonstration

Outline

  • Background of the problem - understanding the problems faced by the deaf and mute community. [2 mins]
    • 14 million people in India have speech and hearing impairment.
    • Current solutions are neither scalable nor ubiquitous.
  • Defining a strong problem statement. [2 mins]
  • Key aspects of designing the application. [8 mins]
    • Building a low-resource machine-learning model that can be deployed on the edge. [1 min]
    • Eliminating the need for external hardware. [1 min]
    • Phase 0: Localizing just hand gestures. [2 mins]
    • Phase 1: Adding facial key points along with hand localization. [2 mins]
    • Phase 2: Adding sequential information across frames to carry context, enabling the model to pick up the entire context of the conversation (see the sketch after this outline). [2 mins]
  • Getting resources from Google and TensorFlow.
  • Results and conclusion [1 min]
  • Future aspects [1 min]
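
As a rough illustration of how the three phases could fit together, the sketch below extracts hand and facial key points per frame and feeds a window of frames to a small recurrent classifier. MediaPipe Holistic, the landmark counts, the sequence length, and the vocabulary size are all assumptions made for illustration; the proposal does not specify the libraries or architecture used.

```python
# Illustrative sketch of the phased pipeline; not the actual ISLAR implementation.
# Assumes MediaPipe Holistic for hand/face landmarks and a small LSTM for sequence context.
import cv2
import mediapipe as mp
import numpy as np
import tensorflow as tf

mp_holistic = mp.solutions.holistic

def frame_keypoints(frame_bgr, holistic):
    """Phase 0/1: localize hand (and facial) key points in a single video frame."""
    results = holistic.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))

    def flatten(landmarks, count):
        if landmarks is None:
            return np.zeros(count * 3)
        return np.array([[p.x, p.y, p.z] for p in landmarks.landmark]).flatten()

    return np.concatenate([
        flatten(results.left_hand_landmarks, 21),   # 21 landmarks per hand
        flatten(results.right_hand_landmarks, 21),
        flatten(results.face_landmarks, 468),       # facial key points (Phase 1)
    ])

# Usage (hypothetical): open mp_holistic.Holistic() once, then call
# frame_keypoints(frame, holistic) on each camera frame and stack the results.

# Phase 2: a small recurrent model over a window of frames so the
# classifier carries context across the whole signed utterance.
NUM_SIGNS, SEQ_LEN = 50, 30                     # hypothetical vocabulary and window size
FEATURES = (21 + 21 + 468) * 3
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, FEATURES)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(NUM_SIGNS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```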

Demonstrations

  • Preparation [1 min]
  • ISLAR Phase 0 [1 min]
  • ISLAR Phase 1 [1 min]
  • Presentation at Google, Bangalore [1 min]
  • Presentation at Google, California [2 mins]

Learning Outcome

By the end of the session, the audience will have a clearer understanding of the problems faced by an underrepresented community in India, catalyzing attendees to think about addressing social issues in India as well as in other developing countries.

Target Audience

Machine Learning enthusiasts as well as seasoned practitioners.

Prerequisites for Attendees

None


  • Kuldeep Singh

    Kuldeep Singh - Simplify Experimentation, Deployment and Collaboration for ML and AI Models

    20 Mins
    Demonstration
    Intermediate

    Machine Learning and AI have changed the way businesses behave. However, the Data Science community still lacks good practices for organizing projects, collaborating effectively, and experimenting quickly to reduce "time to market".

    During this session, we will learn about one such open-source tool, "DVC", which can help you make ML models shareable and reproducible. It is designed to handle large files, data sets, machine learning models, and metrics, as well as code.

  • Kriti Doneria

    Kriti Doneria - Trust Building in AI systems: A critical thinking perspective

    20 Mins
    Talk
    Beginner

    How do I know when to trust AI, and when not to?

    Who goes to jail if a self-driving car kills someone tomorrow?

    Do you know scientists say people will believe anything, if repeated often enough?

    Designing AI systems is also an exercise in critical thinking, because an AI is only as good as its creator. This talk is for discussions like these, and more.

    With the exponential increase in computing power available, several AI algorithms that were mere papers written decades ago have become implementable. For a data scientist, it is very tempting to use the most sophisticated algorithm available. But given that its applicability has moved beyond academia and out into the business world, are numbers alone sufficient? Putting context to AI, or XAI (explainable AI), takes the black box out of AI to enhance human-computer interaction. This talk shall revolve around the interpretability-complexity trade-off, the challenges, drivers, and caveats of the XAI paradigm, and an intuitive demo of translating the inner workings of an ML algorithm into human-understandable formats to achieve more business buy-in.

    Prepare to be amused and enthralled at the same time.

  • Kuldeep Singh

    Kuldeep Singh - Leverage Docker, Kubernetes and Kubeflow for DS, ML and AI Workflow and Workload

    20 Mins
    Demonstration
    Intermediate

    DS, ML, and AI have moved far beyond running models only on your local machine. Nowadays models run in production and help businesses make decisions, which in turn has raised expectations for running models continuously and making changes online; remember that running all of this at a large scale is no easy task.
    During this session, we will learn about one such approach with Docker, Kubernetes, and Kubeflow, which can help us not only develop but also deploy models at scale, and allows us to use distributed setups and hyperparameter tuning.
