
Bargava Subramanian
Co-Founder
Binaize Labs
India
Member for 3 years
Bargava Subramanian is a deep learning engineer and co-founder of Binaize Labs, an AI-based cybersecurity startup in Bangalore, India. He has 15 years' experience delivering business analytics and machine learning solutions to B2B companies, and he mentors organizations on their data science journeys.
He holds a Master's degree from the University of Maryland at College Park. He is an ardent NBA fan.
Anomaly Detection for Cyber Security using Federated Learning
20 Mins
Experience Report
Beginner
In a network of connected devices, two aspects are critical to the system's success:
- Security: with many internet-connected devices, securing the network against cyber threats is essential.
- Privacy: the devices capture business-sensitive data that the organisation must safeguard to maintain its differentiation.
I've used federated learning to build anomaly detection models that monitor data quality and cybersecurity while preserving data privacy.
Federated learning enables edge devices to collaboratively learn deep learning models while keeping all of the data on the device itself. Instead of moving data to the cloud, models are trained on each device and only the model updates are shared across the network.
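To make the mechanics concrete, here is a minimal federated-averaging sketch in Python. It is my illustration of the general technique, not the implementation from this talk: the local objective, learning rate, and uniform averaging are all assumptions.

```python
# Minimal federated-averaging sketch (illustrative only, not the
# speaker's implementation). Each device trains locally and returns
# its updated weights; the server never sees the raw data.
import numpy as np

def local_update(weights, local_data, lr=0.01):
    """Hypothetical on-device step: one pass of gradient descent on a
    squared-error objective over the device's private data."""
    X, y = local_data
    grad = X.T @ (X @ weights - y) / len(y)  # gradient of the MSE loss
    return weights - lr * grad               # updated local weights

def federated_round(global_weights, devices):
    """One communication round: every device trains locally, and only
    the resulting weight vectors (not the data) are averaged."""
    updates = [local_update(global_weights, data) for data in devices]
    return np.mean(updates, axis=0)          # FedAvg-style averaging

# Toy usage: three devices, each holding a private (X, y) shard.
rng = np.random.default_rng(0)
devices = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
weights = np.zeros(3)
for _ in range(50):
    weights = federated_round(weights, devices)
```

In a real deployment the average would typically be weighted by each device's data size, and only these weight vectors ever cross the network.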
Using federated learning gave me the following advantages:
- Ability to build more accurate models faster
- Low latency during inference
- Preservation of data privacy
- Improved energy efficiency of the devices
I built deep learning models using TensorFlow and deployed them using uTensor, a lightweight ML inference framework built on Mbed and TensorFlow.
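For a flavour of what such a model can look like before it is shrunk down for the device, here is a small autoencoder-based anomaly detector in TensorFlow/Keras. This is a hedged sketch, not the talk's code: the feature count, layer sizes, and percentile threshold are assumptions, and the uTensor conversion step is omitted.

```python
# Illustrative autoencoder anomaly detector in TensorFlow/Keras.
# Feature count, layer sizes, and threshold are assumptions; the
# trained model would then be converted for on-device inference.
import numpy as np
import tensorflow as tf

n_features = 16  # e.g., packet-level statistics from a device

autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(n_features,)),
    tf.keras.layers.Dense(4, activation="relu"),   # bottleneck
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(n_features),             # reconstruct the input
])
autoencoder.compile(optimizer="adam", loss="mse")

# Train on normal traffic only; anomalies reconstruct poorly.
normal = np.random.default_rng(1).normal(size=(1000, n_features)).astype("float32")
autoencoder.fit(normal, normal, epochs=5, verbose=0)

# Flag samples whose reconstruction error exceeds a percentile threshold.
errors = np.mean((autoencoder.predict(normal, verbose=0) - normal) ** 2, axis=1)
threshold = np.percentile(errors, 99)

def is_anomaly(x):
    err = np.mean((autoencoder.predict(x[None], verbose=0) - x) ** 2)
    return err > threshold
```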
In this talk, I will discuss in detail how I built federated learning models on edge devices.
Deep Learning in the Browser: Explorable Explanations, Model Inference, and Rapid Prototyping
Bargava Subramanian (Co-Founder, Binaize Labs) and Amit Kapoor (Founder & CEO, NarrativeViz Consulting) · 2 years ago
Sold Out!
45 Mins
Demonstration
Beginner
The browser is the most common endpoint for consuming deep learning models and the most ubiquitous programming platform available. The maturing client-side JavaScript ecosystem across the deep learning process, including data frame support (Arrow), WebGL-accelerated learning frameworks (deeplearn.js), and declarative interactive visualization (Vega-Lite), has made it easy to start playing with deep learning in the browser.
Amit Kapoor and Bargava Subramanian lead three live demos of deep learning (DL) for explanations, inference, and training done in the browser, using the emerging client-side JavaScript libraries for DL with three different types of data: tabular, text, and image. They also explain how the ecosystem of tools for DL in the browser might emerge and evolve.
Demonstrations include:
- Explorable explanations: Explaining the DL model and allowing the users to build intuition on the model helps generate insight. The explorable explanation for a loan default DL model allows the user to explore the feature space and threshold boundaries using interactive visualizations to drive decision making.
- Model inference: Inference is the most common use case. The browser allows you to bring your DL model to the data and also lets you test how the model behaves when executed on the edge. The demonstrated comment sentiment application identifies and warns users about the toxicity of their comments as they type in a text box.
- Rapid prototyping: Training DL models is now possible in the browser itself, if done smartly. The rapid prototyping image classification example allows the user to play with transfer learning to build a model specific for a user-generated image input.
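For readers who want the shape of the rapid-prototyping demo without a browser, the same transfer-learning pattern can be sketched in Python/Keras. The actual demo uses deeplearn.js in the browser; the base network, input size, and two-class head below are assumptions for illustration.

```python
# Transfer-learning sketch in Python/Keras (the talk's demo runs in
# the browser with deeplearn.js; this shows the same pattern server-side).
import tensorflow as tf

# Frozen pretrained feature extractor; only the small head is trained.
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # user-defined classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(user_images, user_labels, epochs=3)  # a handful of examples suffices
```

Freezing the base network is what makes this fast enough for interactive, user-supplied data: only the final layer's weights are learned.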
The demos leverage the following libraries in JavaScript:
- Arrow for data loading and type inference
- Facets for exploratory data analysis
- ml.js for traditional machine learning model training and inference
- deeplearn.js for deep learning model training and inference
- Vega and Vega-Lite for interactive dashboards
The working demos will be available on the web and as open source code on GitHub.
Architectural Decisions for Interactive Viz
Amit Kapoor (Founder & CEO, NarrativeViz Consulting) and Bargava Subramanian (Co-Founder, Binaize Labs) · 2 years ago
Sold Out!
45 Mins
Talk
Beginner
Visualization is an integral part of the data science process and includes exploratory data analysis to understand the shape of the data, model visualization to unbox the model algorithm, and dashboard visualization to communicate the insight. This task of visualization is increasingly shifting from a static and narrative setup to an interactive and reactive setup, which presents a new set of challenges for those designing interactive visualization applications.
Creating visualizations for data science requires an interactive setup that works at scale. Bargava Subramanian and Amit Kapoor explore the key architectural design considerations for such a system and discuss the four key trade-offs in this design space: rendering for data scale, computation for interaction speed, adapting to data complexity, and being responsive to data velocity.
- Rendering for data scale: Envisioning how the visualization can be displayed when the data size is small is not hard. But how do you render an interactive visualization when you have millions or billions of data points? Technologies and techniques include bin-summarise-smooth (e.g., Datashader and bigvis; a sketch of this idea follows the list) and WebGL-based rendering (e.g., deck.gl).
- Computation for interaction speed: Making the visualization reactive requires the user to have the ability to interact, drill down, brush, and link multiple visual views to gain insight. But how do you reduce the latency of the query at the interaction layer so that the user can interact with the visualization? Technologies and techniques include aggregation and in-memory cubes (e.g., hashcubes, imMens, and nanocubes), approximate query processing and sampling (e.g., VerdictDB), and GPU-based databases (e.g., MapD).
- Adapting to data complexity: Choosing a good visualization design for a single dataset is possible after a few experiments and iterations, but how do you ensure that the visualization will adapt to the variety, volume, and edge cases in the real data? Technologies and techniques include visualization responsive to space and data, handling high cardinality (e.g., Facets Dive), and dimensionality reduction (e.g., Embedding Projector).
- Being responsive to data velocity: Designing for periodic query-based visualization refreshes is one thing, but streaming data adds a whole new level of challenge to interactive visualization. So how do you decide between the trade-offs of real-time and near real-time data and their impact on refreshing the visualization? Technologies and techniques include optimizing for near real-time visual refreshes and handling event- and time-based streams.
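As a toy illustration of the bin-summarise idea from the first trade-off, the sketch below reduces a million points to a fixed-size grid before rendering. It is my example, not the speakers' code; the grid dimensions and count aggregation are assumptions.

```python
# Bin-summarise sketch: aggregate millions of points into a fixed grid
# so the renderer only ever draws width x height cells.
import numpy as np

def bin_summarise(x, y, width=400, height=300):
    """Count points per pixel-sized bin; the result can be colour-mapped
    and drawn as an image regardless of how many input points there are."""
    counts, _, _ = np.histogram2d(
        x, y, bins=[width, height],
        range=[[x.min(), x.max()], [y.min(), y.max()]],
    )
    return counts  # (width, height) array of per-bin counts

rng = np.random.default_rng(42)
x, y = rng.normal(size=1_000_000), rng.normal(size=1_000_000)
raster = bin_summarise(x, y)  # 1e6 points -> 400 * 300 aggregates
```

Because the output size is fixed, interaction cost stays constant as the data grows; this is the core trick behind tools like Datashader.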
Hypotheses-Driven Problem Solving Approach for Data Science
Bargava Subramanian (Co-Founder, Binaize Labs) and Amit Kapoor (Founder & CEO, NarrativeViz Consulting) · 3 years ago
Sold Out!
90 Mins
Tutorial
Beginner
Ever-increasing computational capacity has enabled us to acquire, process, and analyze larger datasets. We increasingly want to take a data-driven lens to solving business problems. But business problems are inherently 'wicked' in nature, with multiple stakeholders, differing problem definitions, interdependent solutions, constraints, amplifying loops, etc.
There is no one trick to solve them. What is required is a structured approach to problem-solving that can be applied to a large set of these problems. One possible way is to use a hypothesis-driven approach (problem definition, scoping, issue identification, and hypothesis generation) as a starting point. In this workshop, you will learn how to apply a hypothesis-driven approach to any business problem through seven pragmatic steps:
- Frame
- Acquire
- Refine
- Transform
- Explore
- Model
- Insight
The focus will be on learning the principles through an applied case study, using an iterative and agile methodology.
Building and Scaling Data Science Capabilities
Amit Kapoor (Founder & CEO, NarrativeViz Consulting) and Bargava Subramanian (Co-Founder, Binaize Labs) · 3 years ago
Sold Out!
45 Mins
Case Study
Beginner
Building and scaling data science capability is an imperative for enterprises and startups aiming to adopt a data-driven lens for their business. However, crafting a successful data-science strategy is not straightforward and requires answering the following questions:
- Strategy & Tactics: What part of the business should I target first for adoption? Should I take a jump-start approach or a bootstrap approach?
- Process & Systems: How should I set up an initial process for data science? How do I integrate data-driven processes with existing business systems?
- Structure & Roles: Should I adopt a functional or a business-focused data science structure? What specialized roles should I hire for: data engineer, ML expert, visualisation expert, and/or data analyst?
- Tools & Stack: Should I build a vertical or horizontal data science stack? How do I integrate data science models with existing applications?
- Engineering & Technical: What are the pitfalls to watch out for? How do I avoid premature over-engineering of data science? How do I manage the ongoing technical debt for data science?
- Skills & Competencies: How do I up-skill and build differentiated data-science competency across the organization?
The speakers draw upon their experiences in setting up and advising data science teams at enterprises and startups to share best practices on how to craft a successful data strategy and then execute it. They will use case studies to discuss what worked and the failure points to watch out for.