Recommendation Engines: Theory and Mathematical Implementation

From our Tinder matches to the movies we watch on Netflix, we encounter recommendation engines on a daily basis, and as the data explosion continues, the number of recommendation engines at play will only increase. In this talk, we look into the underlying principles of recommendation engines. You will learn about the main types of recommendation approaches, and by the end of the session you will have ideas on how each of these approaches can be implemented, along with an understanding of their pros and cons.


Outline/Structure of the Talk

  • Significance of recommendation engines
  • Types of recommendation engines
  • Implementation of content-based filtering (a minimal sketch follows this outline)
  • Implementation of collaborative filtering (a minimal sketch follows the learning outcomes below)
  • Pros and cons of both approaches
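
To make the content-based item concrete, here is a minimal sketch of content-based filtering using TF-IDF vectors and cosine similarity. The item catalogue, descriptions, and scikit-learn usage are illustrative assumptions, not material from the talk itself.

```python
# Minimal content-based filtering sketch (illustrative; item texts are made up).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical item catalogue: each item is described by free text.
items = {
    "Inception": "sci-fi thriller dreams heist mind-bending",
    "Interstellar": "sci-fi space exploration time relativity",
    "The Notebook": "romance drama love story",
}
titles = list(items.keys())

# Represent every item as a TF-IDF vector of its description.
item_vectors = TfidfVectorizer().fit_transform(items.values())

# Item-to-item cosine similarity: recommend the items closest to what the user liked.
similarity = cosine_similarity(item_vectors)
liked = titles.index("Inception")
ranked = sorted(
    (title for i, title in enumerate(titles) if i != liked),
    key=lambda title: similarity[liked, titles.index(title)],
    reverse=True,
)
print(ranked)  # e.g. ['Interstellar', 'The Notebook']
```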

Learning Outcome

  • Appreciate the significance of recommendation engines
  • Understand the different approaches to recommendation
  • Learn the principles and concepts of content-based recommendation
  • Learn the principles and concepts of collaborative recommendation (see the sketch after this list)
  • Understand the application of both these approaches
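
For a flavour of the collaborative approach, the sketch below implements user-based collaborative filtering over a toy rating matrix with plain NumPy. The ratings and the simple similarity-weighted prediction are illustrative assumptions rather than the talk's own implementation.

```python
# Minimal user-based collaborative filtering sketch (toy ratings, illustrative only).
import numpy as np

# Rows = users, columns = items; 0 means "not rated yet".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 0, 2],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two users' rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def predict(user, item):
    """Predict a rating as a similarity-weighted average of other users' ratings."""
    num = den = 0.0
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, item] == 0:
            continue
        sim = cosine_sim(ratings[user], ratings[other])
        num += sim * ratings[other, item]
        den += abs(sim)
    return num / den if den else 0.0

# How would user 0 rate item 2, which they have not seen yet?
print(round(predict(0, 2), 2))
```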

Target Audience

Machine Learning Enthusiasts

Submitted 1 year ago

Public Feedback

  • Liked: Venkatraman J - Detection and Classification of Fake News using Convolutional Neural Networks

    20 Mins | Talk | Intermediate

    The proliferation of fake news and rumours across traditional news media sites, social media, feeds, and blogs has made it extremely difficult to trust any news in day-to-day life. False information has wide implications for both individuals and society. Even though humans can identify and classify fake news through heuristics, common sense, and analysis, there is a huge demand for an automated computational approach to achieve scalability and reliability. This talk explains how neural probabilistic models built with deep learning techniques can be used to detect and classify fake news.

    The talk will start with an introduction to deep learning, TensorFlow (Google's deep learning framework), dense-vector (word2vec) feature extraction, data preprocessing techniques, feature selection, and PCA, and then move on to explain how a scalable machine learning architecture for fake news detection can be built.
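
    Purely as an illustration of the kind of pipeline described above, here is a toy Keras text classifier: a learned embedding layer stands in for pretrained word2vec vectors, and the tiny dataset is made up. It is a rough sketch, not the speaker's architecture.

    ```python
    # Toy fake-news text classifier sketch (made-up data; not the speaker's architecture).
    import tensorflow as tf

    texts = ["scientists publish peer reviewed study", "shocking secret they hide from you"]
    labels = [0, 1]  # 0 = credible, 1 = fake (toy labels)

    # Map raw text to fixed-length integer token sequences.
    vectorize = tf.keras.layers.TextVectorization(output_sequence_length=8)
    vectorize.adapt(texts)
    x = vectorize(tf.constant(texts))

    # Small embedding + pooling + sigmoid classifier.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=len(vectorize.get_vocabulary()), output_dim=16),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(x, tf.constant(labels), epochs=10, verbose=0)

    print(model.predict(vectorize(tf.constant(["unbelievable miracle cure revealed"]))))
    ```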

  • Liked: Hariraj K - Big Data and Open Data: As Tools for Empowering People

    Hariraj K, Co-Founder, FOSSMEC
    20 Mins | Talk | Beginner

    With limited transparency, governments tend to become less accessible to the public. While data science already dominates most industries that shape day-to-day life, its possibilities in administration and governance are yet to be exploited. In this presentation, I address how emerging concepts such as open data and big data can be used to strengthen democracies and help governments serve the public better. We will explore the various ways big data and open data can help bridge income inequality and support proper allocation of resources and services. We will also look at initiatives taken by individuals and communities and the impact those initiatives have had on governance, with an emphasis on the concepts of open governance and government open data.

  • Liked: Hariraj K - Importing and Cleaning Data with R

    Hariraj K, Co-Founder, FOSSMEC
    45 Mins | Workshop | Intermediate

    We are experiencing a tremendous explosion in big data, and a significant share of this data is unfit for direct analysis or machine learning. This presentation emphasizes web scraping with powerful R packages such as httr and tools like XPath, and also introduces the principles of data cleaning. By the end of the session, you will be able to import raw data from most websites and transform it into proper, robust datasets. Over the course of the session, we will apply these concepts to build a robust dataset that is ready for analysis.
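
    The workshop itself covers R's httr package and XPath; purely to illustrate the same scrape-then-clean workflow in another language, here is a rough Python analogue using requests, lxml, and pandas. The URL and XPath expression are placeholders, not examples from the workshop.

    ```python
    # Rough Python analogue of the scrape-and-clean workflow (the workshop itself uses R/httr).
    import lxml.html
    import pandas as pd
    import requests

    url = "https://example.com/some-table-page"  # placeholder URL
    response = requests.get(url, timeout=10)
    response.raise_for_status()

    # Parse the HTML and extract table rows with an XPath expression (placeholder path).
    tree = lxml.html.fromstring(response.text)
    rows = tree.xpath("//table//tr")
    records = [
        [cell.text_content().strip() for cell in row.xpath("./td")]
        for row in rows
    ]
    records = [r for r in records if r]  # drop header rows that have no <td> cells

    # Basic cleaning: build a DataFrame, drop missing and duplicate rows.
    df = pd.DataFrame(records).dropna().drop_duplicates()
    print(df.head())
    ```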