
Author Archives: Jesse Johnson
Properties of Interpretability
In my last two posts, I wrote about model interpretability, with the goal of trying to understand what it means and how to measure it. In the first post, I described the disconnect between our mental models and algorithmic models, … Continue reading
Posted in Interpretability
1 Comment
Goals of Interpretability
In my last post, I looked at the gap that arises when we delegate parts of our thought processes to algorithmic models, rather than incorporating the rules they identify directly into our mental models, like we do with traditional statistics. I … Continue reading
Posted in Interpretability, Uncategorized
1 Comment
Interacting with ML Models
The main difference between data analysis today, compared with a decade or two ago, is the way that we interact with it. Previously, the role of statistics was primarily to extend our mental models by discovering new correlations and causal … Continue reading
Posted in Interpretability
6 Comments
LSTMs
In past posts, I’ve described how Recurrent Neural Networks (RNNs) can be used to learn patterns in sequences of inputs, and how the idea of unrolling can be used to train them. It turns out that there are some significant … Continue reading
Posted in Neural Networks
3 Comments
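The excerpt above doesn't show the mechanism itself, but the standard LSTM step it discusses can be sketched in plain NumPy. All sizes, weights, and the gate packing below are illustrative assumptions, not details from the post:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. W, U, b pack the four gates
    (input, forget, output, candidate) stacked along the first axis."""
    z = W @ x + U @ h + b
    n = h.shape[0]
    i = sigmoid(z[0*n:1*n])   # input gate: how much new info to write
    f = sigmoid(z[1*n:2*n])   # forget gate: how much old cell state to keep
    o = sigmoid(z[2*n:3*n])   # output gate: how much cell state to expose
    g = np.tanh(z[3*n:4*n])   # candidate values to write into the cell
    c_new = f * c + i * g     # cell state carries long-range memory
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Toy sizes and random weights, purely for illustration.
rng = np.random.default_rng(1)
n_in, n_h = 3, 4
W = rng.normal(size=(4 * n_h, n_in)) * 0.1
U = rng.normal(size=(4 * n_h, n_h)) * 0.1
b = np.zeros(4 * n_h)

h, c = np.zeros(n_h), np.zeros(n_h)
for x in [rng.normal(size=n_in) for _ in range(5)]:
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape, c.shape)
```

The separate cell state `c`, updated additively rather than through a squashing nonlinearity at every step, is what lets LSTMs hold onto information over longer sequences than a plain RNN.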
Rolling and Unrolling RNNs
A while back, I discussed Recurrent Neural Networks (RNNs), a type of artificial neural network in which some of the connections between neurons point “backwards”. When a sequence of inputs is fed into such a network, the backward arrows feed … Continue reading
Posted in Uncategorized
3 Comments
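The backward arrows described above feed each hidden state into the next step, and unrolling replaces that loop with one copy of the cell per time step, all sharing the same weights. A rough plain-NumPy sketch of the unrolled forward pass (sizes and random weights here are invented for illustration):

```python
import numpy as np

# Hypothetical sizes, purely for illustration.
input_size, hidden_size = 3, 4
rng = np.random.default_rng(0)
W_x = rng.normal(size=(hidden_size, input_size)) * 0.1   # input-to-hidden weights
W_h = rng.normal(size=(hidden_size, hidden_size)) * 0.1  # hidden-to-hidden ("backward") weights

def rnn_forward(inputs):
    """Unrolled forward pass: each step reuses the same W_x and W_h."""
    h = np.zeros(hidden_size)
    states = []
    for x in inputs:
        # The recurrent connection: the previous hidden state h
        # enters the current step alongside the new input x.
        h = np.tanh(W_x @ x + W_h @ h)
        states.append(h)
    return states

seq = [rng.normal(size=input_size) for _ in range(5)]
states = rnn_forward(seq)
print(len(states), states[0].shape)
```

Because every unrolled copy shares the same two weight matrices, gradients from all time steps accumulate into `W_x` and `W_h` during training.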
Continuous Bayes’ Theorem
Bayes’ Rule is one of the fundamental theorems of statistics, but up until recently, I have to admit, I was never very impressed with it. Bayes’ Rule gives you a way of determining the probability that a given event will occur, or … Continue reading
Posted in Modeling
2 Comments
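As a minimal illustration of the continuous form of Bayes' Rule the post refers to, where the posterior density is proportional to likelihood times prior, here is a grid approximation for a coin's unknown bias. The uniform prior and the 7-heads-in-10-flips data are invented for the example:

```python
import numpy as np

# Posterior(p) ∝ likelihood(data | p) * prior(p), for bias p in [0, 1].
grid = np.linspace(0.0, 1.0, 1001)
dx = grid[1] - grid[0]

prior = np.ones_like(grid)             # uniform prior density on [0, 1]
likelihood = grid**7 * (1 - grid)**3   # 7 heads, 3 tails

unnorm = likelihood * prior
posterior = unnorm / (unnorm.sum() * dx)  # normalize so the density integrates to 1

# With a flat prior, the posterior peaks at the maximum-likelihood value.
print(grid[np.argmax(posterior)])  # prints 0.7
```

The same recipe works for any one-dimensional parameter: swap in a different prior density or likelihood and renormalize on the grid.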
The TensorFlow perspective on neural networks
A few weeks ago, Google announced that it was open sourcing an internal system called TensorFlow that allows one to build neural networks, as well as other types of machine learning models. (Disclaimer: I work for Google.) Because TensorFlow is designed … Continue reading
Posted in Neural Networks
2 Comments
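TensorFlow's perspective, as the post's title suggests, is to describe a model as a graph of operations first and evaluate it with concrete values later. The toy classes below sketch only that build-then-run idea; this is not the TensorFlow API, and all names here are made up:

```python
# A toy sketch of the graph-then-run idea, not the TensorFlow API.
class Node:
    def __init__(self, op, inputs=()):
        self.op, self.inputs = op, inputs

    def run(self, feed):
        if self in feed:                       # placeholder: value supplied at run time
            return feed[self]
        vals = [n.run(feed) for n in self.inputs]
        return self.op(*vals)

# Build the graph for y = a * x + b before any numbers exist.
x = Node(None)                                 # placeholder input
a = Node(lambda: 2.0)                          # constant
b = Node(lambda: 1.0)                          # constant
mul = Node(lambda u, v: u * v, (a, x))
y = Node(lambda u, v: u + v, (mul, b))

print(y.run({x: 3.0}))  # evaluates the graph: 2*3 + 1 = 7.0
```

Separating graph construction from evaluation is what lets a system like TensorFlow optimize the graph, differentiate through it, and execute it on different hardware.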