Edward will be presenting the following sessions
  • Edward Kmett

    Edward Kmett - Logic Programming à la Carte

    45 Mins

    I've been working on a logic programming framework in Haskell, called guanxi (關係) with an eye towards scalability. To build it I leaned heavily on my previous work on propagators and a bunch of other little bits and pieces of algebra and category theory in the design process. A number of patterns have arisen repeatedly throughout the process of building this library. I'll give a tour through the current state of guanxi and try to extract some of the more reusable bits of its design for your inspection.

  • Naresh Jain

    Naresh Jain / Edward Kmett - Q & A Session With Functional Conf Speakers

    45 Mins

    During the conference you might have had questions that did not get answered. This is your opportunity to get them answered by our expert panel.

  • Edward Kmett

    Edward Kmett - Propagators

    45 Mins

    There are a lot of algorithms that revolve around iterating a form of information propagation until it attains a deterministic fixed point. CRDTs, Datalog, SAT solving, functional reactive programming, and constraint programming all fit into this mold.

    One framework for these sorts of algorithms is the notion of a “propagator” due to Sussman and Radul, but until now little rigor has been applied to how such algorithms terminate with consistent results. Another framework is Lindsey Kuper’s work on the notion of “lattice variables” (LVars), which addresses termination, parallelism and eventual consistency well, but not iteration.

    By blending these frameworks, I’ll build up a series of sufficient conditions for propagators to terminate with consistent results and proceed to show how we can use this common framework to steal insights and quirks from each individual domain to try to optimize the rest.

  • Edward Kmett

    Edward Kmett / Tony Morris - Let's Lens

    480 Mins

    Let's Lens presents a series of exercises, in a similar format to the Data61 functional programming course material. The subject of the exercises is the concept of lenses, initially proposed by Foster et al. to solve the view-update problem of relational databases.

    The theories around lenses have been advanced significantly in recent years, resulting in a library, implemented in Haskell, called lens.

    This workshop will take you through the basic definition of the lens data structure and its related structures such as traversals and prisms. Following this we implement some of the low-level lens library, then go on to discuss and solve a practical problem that uses all of these structures.

1. What got you into Functional Programming (FP)?

The earliest code that I wrote that could be considered "functional" was written in perl, and glued together all the billing and provisioning services at a now-dead ISP and phone company that I helped build in the mid-90s. At the time though, map was just a loop to me that could also be used to return a new hash or array. I still didn't get functional programming.

Later on I wrote a javascript compiler in javascript that compiled javascript to javascript. I wanted a way to run long-running computations in the browser and work around the "this page is taking too long" notifications you get when you run big javascript loops in such a page. Transforming the javascript into continuation passing style and using tricks to manually trampoline it to work around the lack of tail-call optimization in javascript introduced me to a few more functional programming concepts, mostly out of the scheme world, and to Appel's Modern Compiler Implementation in ML. I still didn't get functional programming. For example, my version of CPS used all sorts of evil tricks to try to reduce the number of stack frames it used for the imperative language I was compiling, and I was still generally okay with that.

Meanwhile, Google released the Google Web Toolkit and, despite its flaws, the basic idea was better than what I'd built. I was compiling an untyped language to an untyped language. They could get IDE support. I sold off my project to a company that was building javascript debuggers and looked around for something else to do.

At the time, there was work being done on a new javascript standard, and Dave Herman was trying to add a ton of features to the language. Some of those features were inspired by functional languages: algebraic data types, lambdas, etc. He was building a reference implementation in Standard ML, and this seemed like a big push with Mozilla behind it, so this was a very serious thing by serious people.

So I decided to transfer all my javascript expertise to try to write a new javascript engine that could run server side -- node.js didn't yet exist. The new javascript had (optional) types, so that would fix some of my infrastructure and tooling problems. But it also had a lot of fancy type system features. The only real language I could think of that felt like it had the mixture of mixins, traits, and other class features that they were bandying about at the time was... perl. The working implementation of perl 6 at the time was called pugs and was written in this other programming language, Haskell. So I decided to write my new compiler in Haskell, whatever that was, and whenever I didn't know how to implement something I'd just copy whatever pugs was doing, and I'd learn a new language in the process. It's all just syntax, right?

The proposed javascript standard in question was scrapped because Microsoft said no and Brendan Eich and company folded. ECMAScript "Harmony" was an attempt to pick up the pieces and make minor incremental progress instead.

Fourteen years later I'm still learning that new language.

I'll get around to that javascript compiler some day.

2. What has been your best moment or highlight working with FP?

There was a window of time spread over a couple of months several years back where I was writing an interpreter or compiler every day, working on ways to more efficiently implement the semantics of samplers for a ray-tracer.

At that point in time the old version of me that had been obsessed with graphics and low level implementation concerns and the new version of me that was obsessed with high-level functional programming felt like they had finally fused together into a whole being.

I've been chasing that Nirvanic sensation ever since.

3. What are some of the greatest challenges of working with FP?

There are a lot of areas where I feel current approaches in functional programming, logic programming and proof do not scale, in lots of different directions.

  • Haskell is very good at pushing work down to individual cores, but not so good at exploiting the SIMD unit in your CPU, often leaving 6x performance on the table. Mind you, Java is also leaving this on the table, but efforts like the Intel SPMD Program Compiler (ispc) at least hint that it can be done. How does this transfer into FP?
  • We don't have (good) functional programming tools for GPUs and TPUs.
  • Theorem proving seems to be a largely linear effort, which is process constrained to small teams. If we ever have to prove something really important on short timelines, we don't have the infrastructure to efficiently turn money into proof.
  • There are aspects of languages with Haskell-style typeclasses that cause the library ecosystem to scale poorly. There are aspects of languages without Haskell-style typeclasses that cause generated code to be hard to reason about or for you to get stuck in little traps where it is hard to prove your way out.
4. All the mainstream programming languages are adding functional programming features. Most new languages and frameworks are strongly influenced by FP. What is your advice to object-oriented programmers?

A lot of the benefit of functional programming only really shows up when you go all-in, and eschew traditional imperative programming techniques entirely.

Passing around functions is all well and good, but the real workhorse of FP is equational reasoning, which is aided greatly by the lack of side-effects.

However, this is where the real learning curve takes off. Once you accept immutable data structures into your previously imperative programs, everything you learned in your second-year data structures and algorithms class about amortized analysis is wrong! You now have multiple "futures": old versions of a structure are still lying around, so you can no longer earn credit by doing earlier steps cheaply and spending it to pay down the cost of expensive future steps!

Laziness is the missing tool needed to recover that power! Chris Okasaki's "Purely Functional Data Structures" shows how to rethink your entire education in amortized data structures with this in mind.
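As a concrete illustration, here is a toy sketch in the spirit of Okasaki's banker's queue (an illustrative reconstruction, not code from the book): a persistent FIFO queue whose suspended reversal is an ordinary Haskell thunk. Because thunks are memoized, the expensive `reverse` is paid for at most once, no matter how many futures share the same version of the queue.

```haskell
-- A persistent queue with amortized O(1) operations that survive
-- persistence: the suspended 'reverse' is a memoized thunk, forced
-- at most once even when many futures share this queue version.
data Queue a = Queue Int [a] Int [a]  -- front length, front, rear length, rear

empty :: Queue a
empty = Queue 0 [] 0 []

-- Restore the invariant |front| >= |rear| by suspending a rotation.
check :: Queue a -> Queue a
check q@(Queue lf f lr r)
  | lr <= lf  = q
  | otherwise = Queue (lf + lr) (f ++ reverse r) 0 []

snoc :: Queue a -> a -> Queue a
snoc (Queue lf f lr r) x = check (Queue lf f (lr + 1) (x : r))

uncons :: Queue a -> Maybe (a, Queue a)
uncons (Queue _ [] _ _)      = Nothing
uncons (Queue lf (x:f) lr r) = Just (x, check (Queue (lf - 1) f lr r))
```

In a strict language the `f ++ reverse r` in `check` would be paid eagerly (or, if re-suspended without memoization, paid once per future); Haskell's lazy, shared thunks are exactly what lets the amortized accounting go through.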

But then once you have this epiphany, you realize that Haskell is really the only "serious" language that takes that conclusion to heart. Languages that try to straddle the divide (like scala) usually leak space or blow the stack when you go to use functional programming techniques. Lazy vals aren't thunks, and building lazy vals that depend on other lazy vals just means you hold onto an entire forest of objects with very much the wrong space and time behavior, so none of the tools the FP folks teach you work right, because they are handicapped by trying to comply with a domain model that isn't suited to the extra functional bits that have been bolted on.

So, if I had to give one piece of advice, it would be to learn Haskell, and then pick up Okasaki's book and work through it and internalize the lessons.

5. What will be some of the key takeaways from your sessions at the conference?

I'm teaching three sessions.

First is a keynote on what we can do to make logic programming languages like Prolog or Kanren scale by exploiting techniques that make modern SMT solvers fast. This is mostly an overview talk, trying to help people understand the problems in this space. I hope that people walk away with a sense of what can be done here and limitations of current approaches.

Second is a talk on propagators. I'm hoping that people walk away with a sense of what propagators are, how to use them in their own work to reason about distributed, parallel, or incremental systems, and a few tricks for improving their performance, stolen from optimization techniques for past problems that just happen to be propagator problems.
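To make the idea concrete, a minimal propagator cell can be sketched in a few lines of Haskell. This is a toy, not guanxi's actual API: a cell accumulates information with a caller-supplied lattice join, and only re-fires its listeners when its content actually grows, which is why a network over a lattice with no infinite ascending chains settles at a deterministic fixed point.

```haskell
import Data.IORef

-- A cell holds a monotonically growing value in a join-semilattice,
-- plus the propagators listening for new information.
data Cell a = Cell { content :: IORef a, listeners :: IORef [a -> IO ()] }

newCell :: a -> IO (Cell a)
newCell a = Cell <$> newIORef a <*> newIORef []

-- Merge new information in with the supplied lattice join.
-- Listeners re-fire only when the content strictly grew, so a run
-- terminates once every cell is quiescent (a fixed point).
write :: Eq a => (a -> a -> a) -> Cell a -> a -> IO ()
write join c a = do
  old <- readIORef (content c)
  let new = join old a
  if new == old
    then pure ()                 -- no new information: stay quiet
    else do
      writeIORef (content c) new
      ls <- readIORef (listeners c)
      mapM_ ($ new) ls           -- propagate the gain

-- Attach a propagator that reacts to updates of this cell.
watch :: Cell a -> (a -> IO ()) -> IO ()
watch c k = modifyIORef (listeners c) (k :)
```

With `max` as the join on integers, for instance, wiring `watch x (\v -> write max y (v + 1))` makes `y` track one more than the best information seen at `x`, and the whole thing quiesces as soon as no write adds information.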

Third session is a workshop on lenses with Tony Morris. Lenses have really helped shift the way folks think about functional programming, giving us a concise replacement for the "getters and setters" of the imperative world while adding a lot of extra expressive power along the way. They turn out to serve as a common language for accessing into your state and fiddling with tree structures like json or xml, offering basically everything that a typical CRUD app or microservice developer needs in their day-to-day existence. I'm hoping the session will demystify lenses for folks, teach them how lenses are built and how to build their own, and help the audience learn a few new abstractions along the way.
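For the curious, the core data structure fits in a handful of lines. This sketch is the simple, type-preserving special case of the van Laarhoven encoding that the lens library builds on; the real library generalizes it to type-changing updates, traversals, prisms, and more:

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

-- A lens is just a function, polymorphic in a Functor.
type Lens s a = forall f. Functor f => (a -> f a) -> s -> f s

-- Reading instantiates f to Const; writing instantiates f to Identity.
view :: Lens s a -> s -> a
view l = getConst . l Const

set :: Lens s a -> a -> s -> s
set l a = runIdentity . l (const (Identity a))

over :: Lens s a -> (a -> a) -> s -> s
over l g = runIdentity . l (Identity . g)

-- Lenses for the two components of a pair.
_1 :: Lens (a, b) a
_1 f (a, b) = fmap (\a' -> (a', b)) (f a)

_2 :: Lens (a, b) b
_2 f (a, b) = fmap (\b' -> (a, b')) (f b)
```

Because a lens is just a function, ordinary function composition `(.)` composes them: `view (_1 . _2) ((1, 2), 3)` drills into the first component and then its second, which is where the "common language for accessing into your state" feeling comes from.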

6. The conference has more than 50 sessions. Which ones are you most looking forward to attending and why?

My usual conference ritual is to spend my time ambushing people in the hallway and talking with them.

I can usually find talks online, and even watch them sped up. If I have to give up halfway through I can get up and walk out of the talk without offending someone, and if I need to go back over a thing I can rewind. But I can't find all of the people I meet at conferences online.

If you see me in the hallway, don't be shy.

Grab me. Talk. Ask questions. Teach me things.

It is why I'm there!