In this post I want to talk about some techniques for dealing with skewed data, especially left-skewed data. Left-skewed data is a bit of a rarity, kind of like a left-handed unicorn, and it can be difficult to work with if you’re not prepared.
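To give a flavor of the techniques, here is a minimal sketch (the data is made up, not from the post) of one common move: check the skewness with scipy, then reflect the data so its long left tail points right and a log transform can pull it in.

```python
import numpy as np
from scipy import stats

# Made-up left-skewed sample (not from the post): values bunched
# near 100 with a long tail to the left.
rng = np.random.default_rng(0)
scores = 100 - rng.exponential(scale=10, size=1000)

print(f"skewness before: {stats.skew(scores):.2f}")  # negative => left-skewed

# Reflect the data so the long tail points right, then log-transform.
# The +1 keeps the argument of log strictly positive.
reflected = scores.max() + 1 - scores
transformed = np.log(reflected)

print(f"skewness after: {stats.skew(transformed):.2f}")
```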
Local Interpretable Model-agnostic Explanations (LIME) is an important technique for explaining the predictions of machine learning models. It is called “model-agnostic” because it can explain any model, regardless of the model’s architecture or how it was trained. The key idea behind LIME is to “zoom in” on a single prediction and fit an interpretable model in that local region of the decision boundary. Then we can see exactly how individual features affect the prediction there. In this post, I’ll show how to use LIME to explain an image classification model.
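As a taste of what that looks like in practice, here is a sketch using the `lime` package’s image explainer. The `model` and `image` names are placeholders (assumptions, not from the post) for any classifier that returns class probabilities and any (H, W, 3) image array.

```python
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Placeholders: `model` is any image classifier with a
# predict(batch) -> class-probabilities method, and `image` is a
# single (H, W, 3) numpy array.
def classifier_fn(images):
    # LIME hands us a batch of perturbed copies of the image and
    # expects one row of class probabilities per copy.
    return model.predict(np.array(images))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,             # the instance to explain
    classifier_fn,     # wraps the black-box model
    top_labels=5,      # explain the 5 most probable classes
    hide_color=0,      # value used to "switch off" superpixels
    num_samples=1000,  # perturbed samples used to fit the local model
)

# Overlay the superpixels that most support the top predicted class.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True, num_features=5, hide_rest=False,
)
overlay = mark_boundaries(img / 255.0, mask)
```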
This post is a tutorial on how to use Grad-CAM to explain a neural network’s outputs. Grad-CAM is a technique for visualizing the regions of an image that are most important to a convolutional neural network (CNN) when it makes a prediction. It can be used with any CNN, but it is most commonly used with image classification models. This tutorial borrows some code from the Keras Grad-CAM tutorial.
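The core of that recipe fits in a single function: build a model that exposes the last convolutional layer’s activations, take the gradient of the top class score with respect to those activations, and use the pooled gradients to weight the feature maps. A sketch along the lines of the Keras tutorial (the model and layer name are placeholders):

```python
import tensorflow as tf
from tensorflow import keras

def make_gradcam_heatmap(img_array, model, last_conv_layer_name):
    # Model mapping the input image to the last conv layer's
    # activations and to the final predictions.
    grad_model = keras.models.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )

    # Gradient of the top predicted class score with respect to the
    # last conv layer's output feature maps.
    with tf.GradientTape() as tape:
        conv_output, preds = grad_model(img_array)
        class_channel = preds[:, tf.argmax(preds[0])]
    grads = tape.gradient(class_channel, conv_output)

    # Weight each feature map by its mean gradient, combine them,
    # and normalize the result to [0, 1].
    pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))
    heatmap = tf.squeeze(conv_output[0] @ pooled_grads[..., tf.newaxis])
    return tf.maximum(heatmap, 0) / tf.math.reduce_max(heatmap)
```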
This post shows how to load and evaluate the model we built in the previous post.
This post is a walkthrough of creating a Siamese network with FastAI. I had planned to simply follow the FastAI tutorial, but I had to change so much to load the model and make everything work with the latest versions that I figured I would turn it into a blog post. It’s very similar to my other post on Siamese Networks with FastAI, except that this one is followed by a post on how to evaluate the model.
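For a flavor of the architecture, here is the heart of the model as the FastAI tutorial structures it: one shared encoder applied to both images, with a head classifying the concatenated features as same/different. Treat it as a sketch; exact calls vary across FastAI versions, which is what the post is about.

```python
from fastai.vision.all import *

class SiameseModel(Module):
    # One encoder shared by both inputs; the head sees the
    # concatenation of the two feature vectors.
    def __init__(self, encoder, head):
        self.encoder, self.head = encoder, head

    def forward(self, x1, x2):
        ftrs = torch.cat([self.encoder(x1), self.encoder(x2)], dim=1)
        return self.head(ftrs)

# A resnet34 body as the shared encoder. It outputs 512 channels per
# image, and the two images' features are concatenated, hence 512 * 2
# going into the head; 2 outputs for same/different.
encoder = create_body(resnet34, cut=-2)
head = create_head(512 * 2, 2, ps=0.5)
model = SiameseModel(encoder, head)
```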
Distributions are super important: they underpin nearly everything in statistics and machine learning. In this post I’ll talk about some common distributions, how to plot them, and what they can be used for.
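As a preview of the plotting side, here is a minimal sketch using scipy.stats and matplotlib. The specific distributions and parameters are just examples, not necessarily the ones in the post.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

x = np.linspace(-5, 15, 500)

# A few common distributions, plotted via their probability density
# functions from scipy.stats.
distributions = {
    "Normal(0, 1)": stats.norm(loc=0, scale=1),
    "Exponential(scale=2)": stats.expon(scale=2),
    "Gamma(k=3, scale=1)": stats.gamma(a=3, scale=1),
}

for label, dist in distributions.items():
    plt.plot(x, dist.pdf(x), label=label)

plt.legend()
plt.xlabel("x")
plt.ylabel("density")
plt.show()
```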
This post contains instructions for working with personal access tokens on GitHub.