Author Archives: Jesse Johnson

Properties of Interpretability

In my last two posts, I wrote about model interpretability, with the goal of trying to understand what it means and how to measure it. In the first post, I described the disconnect between our mental models and algorithmic models, … Continue reading

Posted in Interpretability | 1 Comment

Goals of Interpretability

In my last post, I looked at the gap that arises when we delegate parts of our thought processes to algorithmic models, rather than incorporating the rules they identify directly into our mental models, like we do with traditional statistics. I … Continue reading

Posted in Interpretability, Uncategorized | 1 Comment

Interacting with ML Models

The main difference between data analysis today and data analysis a decade or two ago is the way that we interact with it. Previously, the role of statistics was primarily to extend our mental models by discovering new correlations and causal … Continue reading

Posted in Interpretability | 6 Comments

LSTMs

In past posts, I’ve described how Recurrent Neural Networks (RNNs) can be used to learn patterns in sequences of inputs, and how the idea of unrolling can be used to train them. It turns out that there are some significant … Continue reading
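
As a quick illustration of the gated update that distinguishes an LSTM cell from a plain RNN, here is a minimal NumPy sketch of a single step; the stacked weight layout and variable names are assumptions made for illustration, not code from the post.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. x is the current input, (h_prev, c_prev) are the
    previous hidden and cell states, and W, U, b hold the parameters for
    all four gates stacked along the first axis (shapes (4n, d), (4n, n), (4n,))."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b               # stacked pre-activations for all gates
    i = 1 / (1 + np.exp(-z[:n]))             # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))          # forget gate
    o = 1 / (1 + np.exp(-z[2*n:3*n]))        # output gate
    g = np.tanh(z[3*n:])                     # candidate cell update
    c = f * c_prev + i * g                   # new cell state
    h = o * np.tanh(c)                       # new hidden state
    return h, c
```

The forget gate is what lets the cell state carry information across many time steps without it being overwritten at every step.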

Posted in Neural Networks | 3 Comments

Rolling and Unrolling RNNs

A while back, I discussed Recurrent Neural Networks (RNNs), a type of artificial neural network in which some of the connections between neurons point “backwards”. When a sequence of inputs is fed into such a network, the backward arrows feed … Continue reading
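
As a rough picture of what unrolling means, here is a minimal sketch in which the recurrent ("backward") connection is replayed once per element of the input sequence, reusing the same weights at every step, so the loop behaves like a deep feed-forward network with tied weights; the function and argument names are illustrative assumptions.

```python
import numpy as np

def rnn_unrolled(xs, W_x, W_h, b, h0):
    """Run a simple RNN over a sequence xs by unrolling it in time."""
    h = h0
    hs = []
    for x in xs:                              # one copy of the cell per time step
        h = np.tanh(W_x @ x + W_h @ h + b)    # same weights reused at every step
        hs.append(h)
    return hs
```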

Posted in Uncategorized | 3 Comments

Continuous Bayes’ Theorem

Bayes’ Rule is one of the fundamental theorems of statistics, but up until recently, I have to admit, I was never very impressed with it. Bayes’ Rule gives you a way of determining the probability that a given event will occur, or … Continue reading
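
For reference, here is the usual statement of Bayes’ Rule along with the continuous form that the post’s title refers to, in which sums over events become integrals over a density.

```latex
% Bayes' Rule for events A and B:
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}

% Continuous form, for a parameter x with prior density p(x)
% and likelihood p(\mathrm{data} \mid x):
p(x \mid \mathrm{data}) =
  \frac{p(\mathrm{data} \mid x)\, p(x)}
       {\int p(\mathrm{data} \mid x')\, p(x')\, dx'}
```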

Posted in Modeling | 2 Comments

The TensorFlow perspective on neural networks

A few weeks ago, Google announced that it was open sourcing an internal system called TensorFlow that allows one to build neural networks, as well as other types of machine learning models. (Disclaimer: I work for Google.) Because TensorFlow is designed … Continue reading
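
To give a flavor of the tensor-operation style the post discusses, here is a tiny sketch of one neural-network layer written as TensorFlow operations; it uses the TensorFlow 2 eager API, which postdates the original post, and the shapes are placeholder assumptions.

```python
import tensorflow as tf

# A single dense layer expressed as operations on tensors.
x = tf.constant([[1.0, 2.0]])              # one two-dimensional input
W = tf.Variable(tf.random.normal([2, 3]))  # weights for a layer with 3 units
b = tf.Variable(tf.zeros([3]))             # biases
y = tf.nn.relu(tf.matmul(x, W) + b)        # layer output, shape (1, 3)
print(y.numpy())
```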

Posted in Neural Networks | 2 Comments