Deep learning has significantly improved state-of-the-art performance for natural language processing (NLP) tasks, but each task is typically studied in isolation. The Natural Language Decathlon (decaNLP) is a new benchmark for studying general NLP models that can perform a variety of complex natural language tasks. By requiring a single system to perform ten disparate natural language tasks, decaNLP offers a unique setting for multitask, transfer, and continual learning. decaNLP is maintained by Salesforce and is publicly available on GitHub, covering tasks such as Question Answering, Machine Translation, Summarization, and Sentiment Analysis.

 
Outline/Structure of the Talk

  • Introduction to DecaNLP
  • Objectives
  • Motivation
  • Innovativeness
  • Targeted NLP Tasks
  • Impact
  • Open Source Collaboration on GitHub
  • Patents / Publications in NLP, Computer Vision, AI.

Learning Outcome

People will be able to understand different NLP problems such as:

1. Question Answering

2. Machine Translation

3. Summarization

4. Natural Language Inference

5. Sentiment Analysis

6. Semantic Role Labeling

7. Relation Extraction

8. Goal-Oriented Dialogue

9. Semantic Parsing

10. Commonsense Reasoning

People will also learn about the unified framework decaNLP provides to solve the different NLP tasks mentioned above; a brief sketch of this framing follows.
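
As a rough illustration of this unified framing, here is a minimal sketch in Python. The task phrasings are paraphrased from the decaNLP paper, while the `Example` data structure and the sample values are hypothetical, chosen only to show the idea:

```python
# A minimal sketch of decaNLP's unified question-answering format:
# every task is reduced to a (question, context, answer) triple.
# The Example class and the sample values are illustrative only.
from typing import NamedTuple

class Example(NamedTuple):
    question: str  # a natural-language description of the task
    context: str   # the input text the model must read
    answer: str    # the target output

examples = [
    # Sentiment analysis becomes a question about the context.
    Example(question="Is this review negative or positive?",
            context="The movie was a delightful surprise from start to finish.",
            answer="positive"),
    # Machine translation: the question names the language pair.
    Example(question="What is the translation from English to German?",
            context="Hello, how are you?",
            answer="Hallo, wie geht es dir?"),
    # Summarization: the question asks for a summary of the context.
    Example(question="What is the summary?",
            context="<a long news article would go here>",
            answer="<its short summary>"),
]

# A single model can be trained on all triples jointly, because input
# and output formats no longer differ across tasks.
for ex in examples:
    print(f"Q: {ex.question}\nA: {ex.answer}\n")
```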

Target Audience

People with basic knowledge of NLP, Machine Learning, and Deep Learning.

Prerequisites for Attendees

Read introductory material on NLP, Machine Learning, and Deep Learning.


    Ishita Mathur - How GO-FOOD built a Query Semantics Engine to help you find the food you want to order

    Ishita Mathur, Data Scientist, GO-JEK Tech
    45 Mins, Case Study, Beginner

    Context: The Search Problem

    GOJEK is a SuperApp: 19+ apps within an umbrella app. One of these is GO-FOOD, the first food delivery service in Indonesia and the largest food delivery service in Southeast Asia. There are over 300 thousand restaurants on the platform with a total of over 16 million dishes between them.

    Over two-thirds of those who order food online using GO-FOOD do so via text search. Search engines are so essential to our everyday digital experience that we don’t think twice when using them anymore. Search involves two primary tasks: retrieving documents and ranking them in order of relevance. While improving that ranking is an extremely important part of improving the search experience, actually understanding the query itself helps give searchers exactly what they’re looking for. This talk will show you what we are doing to make it easy for users to find what they want.

    GO-FOOD uses the Elasticsearch stack with restaurant and dish indexes to search for what the user types. However, this yields only exact text matches and, at most, fuzzy matches. We wanted to create a holistic search experience that not only personalised search results but also retrieved restaurants and dishes more relevant to what the user was looking for. This is being done not only by taking advantage of Elasticsearch features, but also by developing a Query Semantics Engine.
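
    As a rough sketch of that lexical baseline, the query below shows what an exact/fuzzy Elasticsearch lookup looks like. The index name, field name, and host are hypothetical placeholders (not GO-FOOD's actual schema), and the call assumes the 8.x Python client:

```python
# A minimal sketch of the exact/fuzzy baseline described above.
# Index name, field name, and host are hypothetical placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def search_dishes(query_text: str):
    # "fuzziness": "AUTO" tolerates small typos ("chiken" -> "chicken"),
    # but matching is still purely lexical: "healthy breakfast" will not
    # retrieve "oatmeal" unless the words literally overlap.
    return es.search(
        index="dishes",
        query={
            "match": {
                "dish_name": {
                    "query": query_text,
                    "fuzziness": "AUTO",
                }
            }
        },
    )

hits = search_dishes("chiken satay")["hits"]["hits"]
```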

    Query Understanding: What & Why

    This is where Query Understanding comes into the picture: it’s about using NLP to correctly identify the search intent behind the query and return more relevant search results; it’s about the interpretation process that happens before the results are even retrieved and ranked. The semantic neighbours of the query itself become the focus of the search process: after all, if I don’t understand what you’re trying to ask for, how will I give you what you want?

    Over the course of this talk, you will learn how we are taking advantage of word embeddings to build a Query Understanding Engine that is holistically designed to make the customer’s experience as smooth as possible. I will go over the techniques we used to build each component of the engine, the data and algorithmic challenges we faced, and how we solved each problem we came across.
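
    To make the word-embedding idea concrete, here is a toy sketch of finding the semantic neighbours of a query term with gensim. The corpus and the `expand_query` helper are illustrative placeholders; the actual engine is more involved:

```python
# A toy sketch of using word embeddings to expand a food-search query
# with its semantic neighbours. The corpus is a tiny placeholder; a
# real system would train on millions of queries and menu texts.
from gensim.models import Word2Vec

corpus = [
    ["fried", "rice", "nasi", "goreng"],
    ["nasi", "goreng", "chicken", "satay"],
    ["noodles", "mie", "goreng", "fried"],
    ["satay", "peanut", "sauce", "chicken"],
]

model = Word2Vec(sentences=corpus, vector_size=32, window=2,
                 min_count=1, seed=42)

def expand_query(term: str, topn: int = 3) -> list:
    # Nearest neighbours in embedding space act as semantic synonyms,
    # letting retrieval go beyond exact or fuzzy string matches.
    return [word for word, _ in model.wv.most_similar(term, topn=topn)]

print(expand_query("goreng"))
```

    Neighbours found this way can then feed back into retrieval, for example by expanding the Elasticsearch query above with semantically related terms.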

    Joy Mustafi - Human-Machine Interaction through Multi-Modal Interface with Combination of Speech, Text, Image and Sensor Data

    45 Mins, Talk, Intermediate

    Introduction

    In the context of human–computer interaction, a modality is the classification of a single independent channel of sensory input / output between a computer and a human. A system is designated uni-modal if it has only one modality implemented, and multi-modal if it has more than one. When multiple modalities are available for some tasks or aspects of a task, the system is said to have overlapping modalities. If multiple modalities are available for a task, the system is said to have redundant modalities. Multiple modalities can be used in combination to provide complementary methods that may be redundant but convey information more effectively. Modalities can be generally defined in two forms: human-computer and computer-human modalities.
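
    As a small illustration of these definitions (the systems and the task below are hypothetical examples, not drawn from a specific product), the distinction can be expressed directly in code:

```python
# A small illustration of the definitions above: a system is uni-modal
# with one input/output channel and multi-modal with several, and a
# task has redundant modalities when more than one channel can do it.

SYSTEM_A = {"speech"}                             # uni-modal
SYSTEM_B = {"speech", "keyboard", "touchscreen"}  # multi-modal

def is_multimodal(modalities: set) -> bool:
    return len(modalities) > 1

# e.g. a message can be dictated or typed, so both channels are redundant
TASK_INPUTS = {"compose_message": {"speech", "keyboard"}}

def redundant_modalities(task: str, available: set) -> set:
    return TASK_INPUTS.get(task, set()) & available

print(is_multimodal(SYSTEM_B))                            # True
print(redundant_modalities("compose_message", SYSTEM_B))  # {'speech', 'keyboard'}
```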

    With the increasing popularity of smartphones, the general public are becoming more comfortable with the more complex modalities. Speech recognition was a major selling point of the iPhone and following Apple products, with the introduction of Siri. This technology gives users an alternative way to communicate with computers when typing is less desirable. However, in a loud environment, the audition modality is not quite effective. This exemplifies how certain modalities have varying strengths depending on the situation. Other complex modalities such as computer vision in the form of Microsoft's Kinect or other similar technologies can make sophisticated tasks easier to communicate to a computer especially in the form of three dimensional movement.

    This talk is based on a physical robot (a personalized humanoid built at MUST Research) equipped with various types of input devices and sensors that allow it to receive information from humans. These devices are interchangeable, share a standardized method of communication with the computer, afford practical adjustments to the user, and provide richer interaction depending on the context, yielding a robust system with features such as a keyboard, pointing device, touchscreen, computer vision, speech recognition, and motion and orientation sensing.
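
    A minimal sketch of how such interchangeable modalities might be orchestrated is shown below, assuming hypothetical handler names and a stubbed noise sensor (this is not the robot's actual code). It captures the earlier point that speech degrades in a loud environment:

```python
# A hedged sketch of modality fallback on a multi-modal robot: when one
# input channel is unreliable in context (speech in a loud room), the
# system falls back to a redundant modality. All handlers are stubs.

def read_ambient_noise_db() -> float:
    """Stub sensor reading; a real robot would query its microphone."""
    return 78.0

def listen_speech() -> str:
    return "<transcribed speech>"    # placeholder for a speech recognizer

def read_touchscreen() -> str:
    return "<typed/touched input>"   # placeholder for touch input

def capture_user_input() -> str:
    # Prefer speech, but fall back to the touchscreen when the room is
    # too loud for reliable recognition (cf. the Siri example above).
    if read_ambient_noise_db() < 70.0:
        return listen_speech()
    return read_touchscreen()

print(capture_user_input())  # -> "<typed/touched input>" in a loud room
```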

    Cognitive computing makes a new class of problems computable. To respond to the fluid nature of users’ understanding of their problems, a cognitive computing system offers a synthesis not just of information sources but of influences, contexts, and insights. These systems differ from current computing applications in that they move beyond tabulating and calculating based on pre-configured rules and programs: they can infer and even reason based on broad objectives. In this sense, cognitive computing is a new type of computing with the goal of more accurate models of how the human brain or mind senses, reasons, and responds to stimulus.

    It is an interdisciplinary field concerned with creating computers and computer software capable of intelligent behavior, one in which a number of sciences and professions converge, including computer science, electronics, mathematics, statistics, psychology, linguistics, philosophy, neuroscience, and biology.

    Computer–Human Modalities

    Computers utilize a wide range of technologies to communicate and send information to humans:

    • Vision – computer graphics typically through a screen
    • Audition – various audio outputs
    • Tactition – vibrations or other movement
    • Gustation (taste)
    • Olfaction (smell)
    • Thermoception (heat)
    • Nociception (pain)
    • Equilibrioception (balance)

    Human–Computer Modalities

    Computers can be equipped with various types of input devices and sensors to allow them to receive information from humans. Common input devices are often interchangeable if they have a standardized method of communication with the computer and afford practical adjustments to the user. Certain modalities can provide a richer interaction depending on the context, and having options for implementation allows for more robust systems.

    • Keyboard
    • Pointing device
    • Touchscreen
    • Computer vision
    • Speech recognition
    • Motion
    • Orientation

    Project Features

    Adaptive: They MUST learn as information changes, and as goals and requirements evolve. They MUST resolve ambiguity and tolerate unpredictability. They MUST be engineered to feed on dynamic data in real time.

    Interactive: They MUST interact easily with users so that those users can define their needs comfortably. They MUST interact with other processors, devices, services, as well as with people.

    Iterative and Stateful: They MUST aid in defining a problem by asking questions or finding additional source input if a problem statement is ambiguous or incomplete. They MUST remember previous interactions in a process and return information that is suitable for the specific application at that point in time.

    Contextual: They MUST understand, identify, and extract contextual elements such as meaning, syntax, time, location, appropriate domain, regulation, user profile, process, task and goal. They may draw on multiple sources of information, including both structured and unstructured digital information, as well as sensory inputs (visual, gestural, auditory, or sensor-provided).
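
    As a rough illustration of the "Iterative and Stateful" requirement, here is a minimal sketch of an interaction loop that remembers previous turns and asks a clarifying question when the problem statement is incomplete. All names are hypothetical, not the project's actual code:

```python
# A minimal sketch of an iterative, stateful interaction loop: the
# system remembers prior turns and asks a clarifying question when the
# user's request is ambiguous or incomplete. Names are hypothetical.
from typing import Optional

class Session:
    def __init__(self) -> None:
        self.history: list = []                   # remembered previous turns
        self.pending_slot: Optional[str] = None   # missing detail we asked about

    def respond(self, utterance: str) -> str:
        self.history.append(utterance)
        if self.pending_slot is not None:
            slot, self.pending_slot = self.pending_slot, None
            return f"Got it: {slot} = {utterance!r}. Setting the reminder."
        if "remind me" in utterance and " at " not in utterance:
            # The problem statement is incomplete: ask instead of guessing.
            self.pending_slot = "time"
            return "At what time should I set the reminder?"
        return "Done."

s = Session()
print(s.respond("remind me to call mom"))  # asks for the missing time
print(s.respond("7 pm"))                   # uses the remembered context
```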