
Amit Kapoor
Founder & CEO
NarrativeViz Consulting
India
Member for 3 years
I am passionate about structuring & synthesis, and you will often find me practicing my skills of listening and storytelling - in that order. I believe in the “Less is More” philosophy and consider simplicity a driving principle in any work I do. I am currently experimenting with data visualization, teaching, storytelling, djembe, freelance consulting, coding, and cooking in a tandoor. When not playing with my computer or reading a book, you will find me working with my son - Siddharth - to learn new skills.
Deep Learning in the Browser: Explorable Explanations, Model Inference, and Rapid Prototyping
Bargava Subramanian (Co-Founder, Binaize Labs) and Amit Kapoor (Founder & CEO, NarrativeViz Consulting) · 2 years ago
Sold Out! · 45 mins
Demonstration
Beginner
The browser is the most common endpoint for consuming deep learning models, and it is also the most ubiquitous programming platform available. The maturing client-side JavaScript ecosystem across the deep learning process, with data frame support (Arrow), WebGL-accelerated learning frameworks (deeplearn.js), and declarative interactive visualization (Vega-Lite), has made it easy to start playing with deep learning in the browser.
Amit Kapoor and Bargava Subramanian lead three live demos of deep learning (DL) for explanations, inference, and training done in the browser, using the emerging client-side JavaScript libraries for DL with three different types of data: tabular, text, and image. They also explain how the ecosystem of tools for DL in the browser might emerge and evolve.
Demonstrations include:
- Explorable explanations: Explaining the DL model and letting users build intuition about it helps generate insight. The explorable explanation for a loan-default DL model lets the user explore the feature space and threshold boundaries through interactive visualizations to drive decision making.
- Model inference: Inference is the most common use case. The browser lets you bring the DL model to the data and test how the model behaves when executed on the edge. The demonstrated comment-sentiment application identifies and warns users about the toxicity of their comments as they type in a text box. (A minimal sketch of this pattern follows the library list below.)
- Rapid prototyping: Training DL models is now possible in the browser itself, if done smartly. The rapid prototyping image classification example lets the user play with transfer learning to build a model specific to user-generated image input.
The demos leverage the following libraries in JavaScript:
- Arrow for data loading and type inference
- Facets for exploratory data analysis
- ml.js for traditional machine learning model training and inference
- deeplearn.js for deep learning model training and inference
- Vega and Vega-Lite for interactive dashboards
The working demos will be available on the web and as open source code on GitHub.
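To make the model-inference pattern concrete, here is a minimal, library-free TypeScript sketch of scoring comment toxicity as the user types. The vocabulary, weights, and element id below are hypothetical placeholders; the actual demo runs a deep learning model through deeplearn.js rather than this toy linear scorer.

```typescript
// Toy bag-of-words toxicity scorer: sum per-token weights, squash with a
// sigmoid, and warn in the UI as the user types. All weights are made up.
const VOCAB: Record<string, number> = { stupid: 1.9, idiot: 2.3, thanks: -1.2, great: -0.8 };
const BIAS = -1.5;

function toxicityScore(text: string): number {
  const z = text
    .toLowerCase()
    .split(/\W+/)
    .reduce((acc, tok) => acc + (VOCAB[tok] ?? 0), BIAS);
  return 1 / (1 + Math.exp(-z)); // probability-like score in (0, 1)
}

// Score on every keystroke: the model runs next to the data, in the browser.
const input = document.querySelector<HTMLTextAreaElement>('#comment');
if (input) {
  input.addEventListener('input', () => {
    if (toxicityScore(input.value) > 0.5) {
      console.warn('This comment may be perceived as toxic.');
    }
  });
}
```

Because inference happens entirely client-side, no keystroke ever leaves the user's machine, which is exactly the edge-execution property the demo highlights.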
Architectural Decisions for Interactive Viz
Amit Kapoor (Founder & CEO, NarrativeViz Consulting) and Bargava Subramanian (Co-Founder, Binaize Labs) · 2 years ago
Sold Out! · 45 mins
Talk
Beginner
Visualization is an integral part of the data science process and includes exploratory data analysis to understand the shape of the data, model visualization to unbox the model algorithm, and dashboard visualization to communicate the insight. This task of visualization is increasingly shifting from a static and narrative setup to an interactive and reactive setup, which presents a new set of challenges for those designing interactive visualization applications.
Creating visualizations for data science requires an interactive setup that works at scale. Bargava Subramanian and Amit Kapoor explore the key architectural design considerations for such a system and discuss the four key trade-offs in this design space: rendering for data scale, computation for interaction speed, adapting to data complexity, and being responsive to data velocity.
- Rendering for data scale: Envisioning how a visualization can be displayed when the data is small is not hard, but how do you render an interactive visualization when you have millions or billions of data points? Technologies and techniques include bin-summarise-smooth (e.g., Datashader and bigvis) and WebGL-based rendering (e.g., deck.gl). (A minimal sketch of the binning step follows this list.)
- Computation for interaction speed: Making the visualization reactive requires that the user can interact, drill down, brush, and link multiple visual views to gain insight. But how do you reduce query latency at the interaction layer so that the user can interact with the visualization? Technologies and techniques include aggregation and in-memory cubes (e.g., Hashedcubes, imMens, and Nanocubes), approximate query processing and sampling (e.g., VerdictDB), and GPU-based databases (e.g., MapD).
- Adapting to data complexity: Choosing a good visualization design for a single dataset is possible after a few experiments and iterations, but how do you ensure that the visualization adapts to the variety, volume, and edge cases in real data? Technologies and techniques include visualization responsive to space and data, handling high cardinality (e.g., Facets Dive), and dimensionality reduction (e.g., Embedding Projector).
- Being responsive to data velocity: Designing for periodic query-based visualization refreshes is one thing, but streaming data adds a whole new level of challenge to interactive visualization. How do you decide between the trade-offs of real-time and near-real-time data and their impact on refreshing the visualization? Technologies and techniques include optimizing for near-real-time visual refreshes and handling event- and time-based streams.
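As an illustration of the first trade-off, here is a minimal TypeScript sketch of the bin-summarise step: millions of raw points are collapsed into a small grid of counts that a renderer can draw in one pass. The names are illustrative; systems like Datashader and bigvis add the summarise and smooth stages plus colour mapping on top of this idea.

```typescript
// Bin-summarise sketch: the renderer touches xBins * yBins cells
// instead of millions of raw rows.
interface Point { x: number; y: number; }

function binCounts(
  points: Point[],
  xBins: number, yBins: number,
  xMin: number, xMax: number,
  yMin: number, yMax: number,
): Float64Array {
  const grid = new Float64Array(xBins * yBins);
  for (const p of points) {
    // Map each point to its bin index, clamping to the grid edges.
    const i = Math.min(xBins - 1, Math.max(0, Math.floor(((p.x - xMin) / (xMax - xMin)) * xBins)));
    const j = Math.min(yBins - 1, Math.max(0, Math.floor(((p.y - yMin) / (yMax - yMin)) * yBins)));
    grid[j * xBins + i] += 1;
  }
  return grid; // draw this small grid (e.g., as a heatmap), not the raw points
}
```

The same pre-aggregation instinct drives the second trade-off: in-memory cube systems such as Hashedcubes and Nanocubes precompute group-by counts so that each brush or drill-down becomes a lookup rather than a scan.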
Hypotheses-Driven Problem Solving Approach for Data Science
Bargava Subramanian (Co-Founder, Binaize Labs) and Amit Kapoor (Founder & CEO, NarrativeViz Consulting) · 3 years ago
Sold Out! · 90 mins
Tutorial
Beginner
Ever-increasing computational capacity has enabled us to acquire, process, and analyze larger datasets. We increasingly want to take a data-driven lens to solving business problems. But business problems are inherently 'wicked' in nature, with multiple stakeholders, differing problem definitions, interdependent solutions, constraints, amplifying loops, and so on.
There is no single trick for solving them. What is required is a structured approach to problem-solving that can be applied to a large class of these problems. One possible starting point is a hypothesis-driven approach: problem definition, scoping, issue identification, and hypothesis generation. In this workshop, you will learn how to apply a hypothesis-driven approach to any business problem through seven pragmatic steps (sketched as a toy pipeline after this outline):
- Frame
- Acquire
- Refine
- Transform
- Explore
- Model
- Insight
The focus will be on learning the principles through an applied case study, using an iterative and agile methodology.
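Purely as an illustration of how the steps chain and iterate (this is not part of the tutorial material, and every name below is hypothetical), the seven steps can be written as a pipeline that is re-run until a hypothesis survives:

```typescript
// Hypothetical sketch: the seven steps as stages over shared state,
// iterated in the agile spirit of the workshop. Stage bodies are stubs.
interface State { hypotheses: string[]; supported: string[]; iteration: number; }

const steps: Array<[string, (s: State) => State]> = [
  ['frame',     s => ({ ...s, hypotheses: ['feature X drives churn'] })], // define and scope
  ['acquire',   s => s],  // pull raw data for the framed problem
  ['refine',    s => s],  // clean, deduplicate, fix types
  ['transform', s => s],  // derive features and aggregates
  ['explore',   s => s],  // visualize the shape of the data
  ['model',     s => ({ ...s, supported: s.hypotheses.slice(0, 1) })], // test hypotheses
  ['insight',   s => s],  // communicate what held up
];

function runIteration(start: State): State {
  return steps.reduce((s, [, step]) => step(s), { ...start, iteration: start.iteration + 1 });
}

let state: State = { hypotheses: [], supported: [], iteration: 0 };
while (state.supported.length === 0 && state.iteration < 3) {
  state = runIteration(state); // each pass may reframe or add hypotheses
}
```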
Building and Scaling Data Science Capabilities
Amit Kapoor (Founder & CEO, NarrativeViz Consulting) and Bargava Subramanian (Co-Founder, Binaize Labs) · 3 years ago
Sold Out! · 45 mins
Case Study
Beginner
Building and scaling data science capability is an imperative for enterprises and startups aiming to adopt a data-driven lens for their business. However, crafting a successful data-science strategy is not straightforward and requires answering the following questions:
- Strategy & Tactics: What part of the business should I target first for adoption? Should I take a jump-start approach or a bootstrap approach?
- Process & Systems: How should I set up an initial process for data science? How do I integrate data-driven processes with existing business systems?
- Structure & Roles: Should I adopt a functional or a business-focused data science structure? Which specialised roles should I hire for: data engineer, ML expert, visualisation expert, and/or data analyst?
- Tools & Stack: Should I build a vertical or horizontal data science stack? How do I integrate data science models with existing applications?
- Engineering & Technical: What are the pitfalls to watch out for? How do I avoid premature over-engineering of data science? How do I manage the ongoing technical debt of data science?
- Skills & Competencies: How do I up-skill and build differentiated data-science competency across the organization?
The speakers draw upon their experience setting up and advising data science teams at enterprises and startups to share best practices on how to craft a successful data strategy and then execute it. They will use case studies to discuss what worked and which failure points to watch out for.