In this post, I’ll demonstrate how to connect to Google Cloud Platform (GCP) instances using VSCode’s Remote SSH extension. We’ll assume you already have working GCP instances that you can SSH into. To connect to a remote host, VSCode needs to know the HostName, User, and IdentityFile. In this guide, we’ll go over how to find these. For simplicity, we’ll assume you’re trying to connect to an instance named my_instance and that your zone is europe-west4-b. You’ll need to find your own values and substitute them in the instructions below.
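For reference, the final SSH config entry (typically in ~/.ssh/config) looks roughly like the sketch below. The placeholders are mine: replace EXTERNAL_IP_OF_INSTANCE and YOUR_USERNAME with your own values (the external IP can be found with `gcloud compute instances describe`), and note that ~/.ssh/google_compute_engine is the key path gcloud usually generates, which may differ on your machine.

```
Host my_instance
    HostName EXTERNAL_IP_OF_INSTANCE
    User YOUR_USERNAME
    IdentityFile ~/.ssh/google_compute_engine
```

With this in place, my_instance shows up as a host in VSCode’s Remote SSH host picker.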
It’s important to know whether a model has been trained well. One way to check is to look at the model weights. But it’s hard to know exactly what they’re telling you: you need something to compare the weights against. In this post, I’m going to look at the weight statistics for a couple of well-trained networks, which can serve as comparison points.
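As a minimal sketch of what “weight statistics” means here, the helper below computes a per-layer mean and standard deviation from a dict of weight arrays. The layer names and initialization values are hypothetical stand-ins for a real network’s parameters:

```python
import numpy as np

def weight_stats(weights):
    """Return (mean, std) for each named weight array."""
    return {name: (float(w.mean()), float(w.std())) for name, w in weights.items()}

# Toy stand-ins for a trained network's weight tensors (values are illustrative).
rng = np.random.default_rng(0)
layers = {
    "conv1": rng.normal(0.0, 0.05, size=(64, 3, 7, 7)),
    "fc": rng.normal(0.0, 0.01, size=(1000, 512)),
}
for name, (mean, std) in weight_stats(layers).items():
    print(f"{name}: mean={mean:.4f}, std={std:.4f}")
```

With a real model you would build the dict from its parameters (e.g. PyTorch’s `named_parameters()`) and compare the numbers against a known-good network.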
It’s often a good idea to structure deep learning code so that experiments can be run directly from the command line, allowing you to quickly iterate and record results. argparse makes this easy by letting configurations be passed as command-line arguments. But sometimes you want to do the opposite: you find a repo with a train.py file that’s ready to go, but you want to walk through it line by line in a Jupyter notebook.
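One way to bridge the two worlds: `parse_args` accepts an explicit list of strings, so in a notebook you can supply the flags yourself instead of relying on `sys.argv`. A sketch, with hypothetical flags standing in for whatever train.py defines:

```python
import argparse

# A parser like one you might find at the top of a repo's train.py.
parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.1)
parser.add_argument("--epochs", type=int, default=10)

# On the command line, argparse reads sys.argv. In a notebook, pass the
# argument list explicitly (an empty list [] would give you all defaults).
args = parser.parse_args(["--lr", "0.01", "--epochs", "3"])
print(args.lr, args.epochs)  # 0.01 3
```

From there, the rest of train.py can be pasted into notebook cells unchanged, since it only sees the `args` namespace.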
This post contains some of my notes on the argparse package.
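To fix ideas, here is a minimal example of the kind of usage the notes cover: a parser with one positional argument and one boolean flag (the names are illustrative):

```python
import argparse

parser = argparse.ArgumentParser(description="demo parser")
parser.add_argument("path")                            # required positional argument
parser.add_argument("--verbose", action="store_true")  # optional boolean flag
args = parser.parse_args(["data.csv", "--verbose"])
print(args.path, args.verbose)  # data.csv True
```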
In the research paper Group Normalization, Yuxin Wu and Kaiming He introduce the idea of group normalization. They show that it can be implemented very easily, and include the necessary code directly in the paper. This post walks through the code behind group normalization.
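To make the idea concrete, here is a NumPy translation of the operation (the paper’s own snippet is written in TensorFlow; this is my sketch of the same computation, not the authors’ code). For NCHW input, the channels are split into G groups and each group is normalized over its channels and spatial positions:

```python
import numpy as np

def group_norm(x, gamma, beta, G, eps=1e-5):
    """Group normalization for an NCHW array (NumPy sketch)."""
    N, C, H, W = x.shape
    x = x.reshape(N, G, C // G, H, W)
    # Normalize over each group's channels and spatial dims.
    mean = x.mean(axis=(2, 3, 4), keepdims=True)
    var = x.var(axis=(2, 3, 4), keepdims=True)
    x = (x - mean) / np.sqrt(var + eps)
    x = x.reshape(N, C, H, W)
    # Per-channel learned scale and shift.
    return x * gamma + beta

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 8, 4, 4))
gamma = np.ones((1, 8, 1, 1))
beta = np.zeros((1, 8, 1, 1))
y = group_norm(x, gamma, beta, G=4)
```

Note that, unlike batch normalization, nothing here reduces over the batch dimension N, which is why the method behaves the same at any batch size.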
There are a lot of great resources out there, so I thought I would compile some of my favorites. This list contains books on deep learning, machine learning, and computer vision. These textbooks are all made freely available by their publishers.
This post demonstrates how to do data augmentation for computer vision using the albumentations library. The exact augmentations you use will be specific to your use case. For example, if you’re training on overhead imagery, the augmentations you choose will probably differ somewhat from those for an ImageNet-like dataset (although there will also be considerable overlap).