Author Archives: Jesse Johnson
Properties of Interpretability
In my last two posts, I wrote about model interpretability, with the goal of understanding what it means and how to measure it. In the first post, I described the disconnect between our mental models and algorithmic models, … Continue reading
Posted in Interpretability
1 Comment
Goals of Interpretability
In my last post, I looked at the gap that arises when we delegate parts of our thought processes to algorithmic models, rather than incorporating the rules they identify directly into our mental models, like we do with traditional statistics. I … Continue reading
Posted in Interpretability, Uncategorized
3 Comments
Interacting with ML Models
The main difference between data analysis today and data analysis a decade or two ago is the way that we interact with it. Previously, the role of statistics was primarily to extend our mental models by discovering new correlations and causal … Continue reading
Posted in Interpretability
7 Comments
LSTMs
In past posts, I’ve described how Recurrent Neural Networks (RNNs) can be used to learn patterns in sequences of inputs, and how the idea of unrolling can be used to train them. It turns out that there are some significant … Continue reading
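Since the excerpt is truncated here, a rough sketch may help: below is a minimal numpy implementation of a single LSTM step, assuming the standard forget/input/output gate formulation. The names `W`, `b`, and the gate ordering are illustrative choices, not taken from the post.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step: gates decide what to forget from the cell state,
    what new information to write, and what to expose as output."""
    d = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b  # one affine map for all four gates
    f = sigmoid(z[:d])         # forget gate
    i = sigmoid(z[d:2*d])      # input gate
    o = sigmoid(z[2*d:3*d])    # output gate
    g = np.tanh(z[3*d:])       # candidate cell update
    c = f * c_prev + i * g     # new cell state
    h = o * np.tanh(c)         # new hidden state
    return h, c

# Example with made-up sizes: 3-dimensional inputs, 4-dimensional state.
d_x, d = 3, 4
W = np.random.randn(4 * d, d_x + d)
b = np.zeros(4 * d)
h, c = lstm_step(np.random.randn(d_x), np.zeros(d), np.zeros(d), W, b)
```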
Posted in Neural Networks
5 Comments
Rolling and Unrolling RNNs
A while back, I discussed Recurrent Neural Networks (RNNs), a type of artificial neural network in which some of the connections between neurons point “backwards”. When a sequence of inputs is fed into such a network, the backward arrows feed … Continue reading
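As a minimal sketch of the unrolling idea (my own toy formulation, not code from the post): the forward pass of a simple RNN can be written as a loop that reuses the same weights at every time step, so each iteration is one "unrolled" copy of the recurrent layer.

```python
import numpy as np

def rnn_forward(xs, W_x, W_h, b, h0):
    """Forward pass of a simple RNN, written as an unrolled loop:
    the same weights (W_x, W_h, b) are reused at every time step,
    and the 'backward arrow' is just h carried over from the
    previous step."""
    h = h0
    hidden_states = []
    for x in xs:                            # one unrolled copy per input
        h = np.tanh(W_x @ x + W_h @ h + b)
        hidden_states.append(h)
    return hidden_states

# Example with made-up sizes: five 3-dimensional inputs, 4-dimensional state.
W_x, W_h, b = np.random.randn(4, 3), np.random.randn(4, 4), np.zeros(4)
states = rnn_forward(np.random.randn(5, 3), W_x, W_h, b, np.zeros(4))
```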
Posted in Uncategorized
3 Comments
Continuous Bayes’ Theorem
Bayes’ Rule is one of the fundamental theorems of statistics, but up until recently, I have to admit, I was never very impressed with it. Bayes’ Rule gives you a way of determining the probability that a given event will occur, or … Continue reading
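For reference, the continuous form of the rule (the standard statement, not a quote from the post) replaces the discrete sum over events in the denominator with an integral over a density:

```latex
% Continuous Bayes' Rule: posterior density of a parameter theta
% given an observation x, normalized by an integral rather than
% a sum over discrete events.
\[
  p(\theta \mid x) \;=\; \frac{p(x \mid \theta)\, p(\theta)}
       {\int p(x \mid \theta')\, p(\theta')\, d\theta'}
\]
```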
Posted in Modeling
2 Comments
The TensorFlow perspective on neural networks
A few weeks ago, Google announced that it was open sourcing an internal system called TensorFlow that allows one to build neural networks, as well as other types of machine learning models. (Disclaimer: I work for Google.) Because TensorFlow is designed … Continue reading
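To give a flavor of what the excerpt means, here is a minimal sketch against the graph-and-session interface TensorFlow shipped with at the time (TensorFlow 1.x-style; later versions changed this API): you first describe a computation graph, then run it in a session.

```python
import tensorflow as tf  # assumes the TensorFlow 1.x graph-style API

# Build the graph first, run it afterwards: this separation of
# specification from execution is the "TensorFlow perspective".
x = tf.placeholder(tf.float32, shape=[None, 4])  # batch of 4-feature inputs
W = tf.Variable(tf.random_normal([4, 2]))        # weights of one linear layer
b = tf.Variable(tf.zeros([2]))
y = tf.nn.softmax(tf.matmul(x, W) + b)           # a tiny one-layer classifier

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0, 4.0]]}))
```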
Posted in Neural Networks
2 Comments
Neural networks, linear transformations and word embeddings
In past posts, I’ve described the geometry of artificial neural networks by thinking of the output from each neuron in the network as defining a probability density function on the space of input vectors. This is useful for understanding how … Continue reading
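One way to make the linear-transformation view concrete (a toy illustration with a made-up vocabulary, not an example from the post): multiplying a one-hot word vector by an embedding matrix just selects one row of the matrix, so an embedding lookup is exactly a linear map applied to one-hot inputs.

```python
import numpy as np

vocab = ["cat", "dog", "fish"]
E = np.random.randn(len(vocab), 2)  # embedding matrix: one 2-d row per word

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

# The linear map applied to a one-hot vector picks out a row of E.
i = vocab.index("dog")
assert np.allclose(one_hot(i, len(vocab)) @ E, E[i])
```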
Posted in Neural Networks
10 Comments
Recurrent Neural Networks
So far on this blog, we’ve mostly looked at data in two forms – vectors in which each data point is defined by a fixed set of features, and graphs in which each data point is defined by its connections … Continue reading
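A tiny illustration of the contrast (the numbers below are made up): fixed-feature data fits into a rectangular array, while sequences of different lengths do not, which is the situation recurrent networks are built to handle.

```python
import numpy as np

# Fixed feature vectors: every data point has the same dimensions,
# so the dataset is one rectangular array.
points = np.array([[5.1, 3.5], [4.9, 3.0]])  # two points, two features each

# Sequences: data points can have different lengths, so they don't
# fit in a single rectangular array.
sequences = [
    np.random.randn(3, 2),  # a sequence of 3 two-dimensional inputs
    np.random.randn(7, 2),  # a sequence of 7 two-dimensional inputs
]
```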
Posted in Neural Networks
6 Comments
GPUs and Neural Networks
Artificial neural networks have been around for a long time – since either the 1940s or the 1950s, depending on how you count. But they’ve only started to be used for practical applications such as image recognition in the last … Continue reading
Posted in Neural Networks
11 Comments