Kaggle Learn review: there is a deep learning track and it is worth your time
From my undergrad days, when I was starting out with machine learning, to this day, my admiration for Kaggle has only grown. In addition to being synonymous with, and popularizing, data science competitions, the platform has served as a launching pad and breeding ground for countless data science and machine learning practitioners around the world, including yours truly. In fact, the skills I picked up on the platform are part of the reason I recently got to join SocialCops, a company I'd admired for years. However, I wasn't on the platform in 2017 as much as I would have liked. So when I saw Ben Hamner's tweet launching Kaggle Learn, a set of interactive data science tutorials, I made up my mind to give it a shot.
Zeroing in on deep learning
Learn currently hosts tutorials on four topics: introductory machine learning, R programming, data visualisation, and deep learning. I'd stumbled across machine learning for the first time in the form of neural networks (NN) more than three years ago. Since then, I'd studied the theoretical details of NNs at various points, but, somewhat ironically, I'd never gotten into practical deep learning beyond a few tutorials. Hence, I decided to start with the deep learning track.
I mention my past experience with ML and NNs to point out that I was not a complete beginner when I started this track. If you are, start with the machine learning track instead.
Getting started
If you are unfamiliar with neural networks or haven't come across them recently, it would be a good idea to get some theoretical foundation before starting with hands-on tutorials. There are a number of introductory resources out there, both text and video. I used an excellent video by 3Blue1Brown, a YouTube channel, as a refresher.
Choice of framework
The track uses the high-level Keras API with a TensorFlow backend. Even with numerous frameworks out there, this combination finds favor as a beginner-friendly choice among a large portion of the deep learning community. Personally, I admire Keras for being well designed, user-friendly, and a big part of democratizing access to deep learning methods.
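To give you a taste of why Keras is considered beginner-friendly, here is a minimal sketch of defining and compiling a small network, assuming the standalone keras package with a TensorFlow backend; the layer sizes are illustrative, not taken from the track.

```python
# A minimal sketch of the Keras Sequential API, assuming the standalone
# keras package with a TensorFlow backend; layer sizes are illustrative.
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(784,)))  # hidden layer
model.add(Dense(10, activation='softmax'))                    # 10-class output

# One call wires up the optimizer, loss, and metrics for training
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()  # prints the layer-by-layer architecture
```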
The track
The deep learning track currently comprises six sections:
- Intro to Deep Learning and Computer Vision: Starting off with a computer vision example is a great way to get acquainted with machine learning. This is the application that put deep learning in the limelight, and the data (images) is something most of us deal with on an everyday basis. The accompanying exercise lets you play around with basic convolutions and images (see the toy convolution sketch after this list).
- Building Models from Convolutions: Convolutional neural networks (ConvNets) have received wide praise and coverage for being extremely successful at image recognition tasks. The basics of ConvNets are discussed and the stage is set for their implementation.
- Programming in TensorFlow and Keras: You get to see TF+Keras in action for the first time, and you'll be amazed at the ease with which you can get up and running. There's a lot of hand-holding here, so merely getting the code to run won't teach you much. Try to understand the code, including the helper functions, as much as possible.
- Transfer Learning: Including this was a great decision by Dan Becker, and it is my favorite part of the tutorial. Before this, I perceived transfer learning as an advanced topic that would require a decent amount of know-how to even get started with. I am delighted to tell you that I couldn't have been more wrong: even if all you know are the very basics of NNs, the idea of transfer learning itself is fascinating (see the sketch after this list), and I've decided to spend some time in the near future researching the topic. Before starting this section, I went through a video on the subject by the one and only Andrew Ng.
- Data Augmentation: Simply put, data augmentation is a handy technique that increases the number of data points available to your machine learning algorithm by generating modified copies of existing examples. This section discusses the technique as well as its implementation in Keras (see the augmentation sketch after this list).
  Related: What you need to know about data augmentation for machine learning
- A Deeper Understanding of Deep Learning: The code used in the previous sections, particularly the various parameters, is discussed in more detail. Stochastic gradient descent and backpropagation are also discussed briefly (see the optimizer sketch after this list).
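The first exercise has you apply small kernels to images by hand. Here's a toy sketch of the same idea, assuming only numpy and scipy; the horizontal-edge kernel and the fake image are illustrative, not the track's exact code.

```python
# A toy convolution sketch in the spirit of the first exercise,
# assuming numpy and scipy; not the track's exact code.
import numpy as np
from scipy.ndimage import convolve

# A tiny fake grayscale "image": bright top half, dark bottom half
image = np.vstack([np.ones((4, 8)), np.zeros((4, 8))])

# A 3x3 kernel that responds to horizontal edges
horizontal_edge = np.array([[ 1,  1,  1],
                            [ 0,  0,  0],
                            [-1, -1, -1]])

response = convolve(image, horizontal_edge)
print(response)  # large values along the row where brightness changes
```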
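To show just how approachable transfer learning is in this setup, here is a minimal sketch in the same spirit as the track's exercise, assuming the standalone keras package with ImageNet weights for ResNet50; the two-class head and image size are illustrative.

```python
# A minimal transfer learning sketch, assuming the standalone keras
# package; the two-class head and image size are illustrative.
from keras.applications import ResNet50
from keras.models import Sequential
from keras.layers import Dense

num_classes = 2  # e.g. any two-way image classification task

model = Sequential([
    # Pretrained ImageNet features, minus the original 1000-class top layer;
    # pooling='avg' collapses the feature maps into a single vector
    ResNet50(include_top=False, pooling='avg',
             weights='imagenet', input_shape=(224, 224, 3)),
    # A new, trainable classifier head for our own classes
    Dense(num_classes, activation='softmax'),
])
model.layers[0].trainable = False  # freeze the pretrained weights

model.compile(optimizer='sgd', loss='categorical_crossentropy',
              metrics=['accuracy'])
```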
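And a minimal sketch of data augmentation with Keras's built-in generator, again assuming the standalone keras package; the specific transformations and the directory path are illustrative.

```python
# A minimal data augmentation sketch using Keras's ImageDataGenerator,
# assuming the standalone keras package; transforms and path are illustrative.
from keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    horizontal_flip=True,    # randomly mirror images left-right
    width_shift_range=0.2,   # shift up to 20% of the width
    height_shift_range=0.2)  # shift up to 20% of the height

# Yields batches of randomly transformed images from a directory
# (hypothetical path; expects one subdirectory per class)
train_generator = augmenter.flow_from_directory(
    'data/train', target_size=(224, 224), batch_size=32)
```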
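Finally, since the last section digs into stochastic gradient descent and its parameters, here's how those knobs surface in Keras: passing an explicit optimizer object instead of a string shorthand. The tiny model and the learning rate value are illustrative.

```python
# Making the optimizer explicit instead of the 'sgd' string shorthand,
# assuming the standalone keras package; model and learning rate are illustrative.
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

model = Sequential([Dense(10, activation='softmax', input_shape=(784,))])

# Backpropagation computes the gradients; SGD uses them to update the
# weights, with the learning rate controlling the size of each step.
model.compile(optimizer=SGD(lr=0.01),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```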
Conclusion and additional resources
When learning a new topic, I've always found it best to start with a high-level overview. That's precisely what this track aims to offer and, for the most part, delivers. For a considerable amount of time, setting up deep learning frameworks was a roadblock to getting started with the field. To that end, Kaggle leverages its platform to host the code and, in doing so, showcases its potential as a collaboration tool. All that being said, the track only scratches the surface, albeit better than most tutorials out there. You can plan your path from here on your own. If it helps, below are some of the resources I plan to dive into or explore over the next few weeks.
- Neural networks and deep learning by Michael Nielsen (e-book)
- Neural networks and deep learning by Andrew Ng and the deeplearning.ai team (MOOC)
- Practical deep learning for coders by Jeremy Howard and the fast.ai team (MOOC)
If there's any other useful resource you can think of, feel free to mention it in the comments below.
If you read and liked the article, sharing it would be a good next step.
Additionally, you can check out some of my open source projects on GitHub.
Drop me a mail, or hit me up on Twitter or LinkedIn in case you want to get in touch.
This post was originally published on Tech and Mortals.