10 Deep Learning Projects With Datasets (Beginner & Advanced)

Here are 10 deep learning projects, from beginner to advanced, that you can build with TensorFlow or PyTorch. Links to the datasets are included for each project.

1 MNIST

The MNIST dataset is a large set of handwritten digits, and the goal is to recognize the correct digit. This project is fairly easy; it should make you comfortable with your deep learning framework, and you will learn how to implement and train your first Artificial Neural Network. It also teaches you how to handle multiclass classification problems instead of just binary ones.
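The step from binary to multiclass classification boils down to two pieces: a softmax output with one unit per digit, and a cross-entropy loss over the class probabilities. Here is a minimal, framework-independent numpy sketch of just those two pieces (the logits are made-up numbers, not real model outputs):

```python
import numpy as np

def softmax(z):
    # subtract the row max for numerical stability before exponentiating
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # negative log-likelihood of the true class, averaged over the batch
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

# a batch of 2 "images" scored against the 10 digit classes
logits = np.array([[2.0, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1],
                   [0.1, 0.1, 0.1, 3.0, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]])
probs = softmax(logits)
predictions = probs.argmax(axis=1)

labels = np.array([0, 3])
loss = cross_entropy(probs, labels)
```

In Keras, for example, this corresponds to a `Dense(10, activation="softmax")` output layer trained with sparse categorical cross-entropy; PyTorch folds both pieces into `nn.CrossEntropyLoss`.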

MNIST can be loaded directly from within TensorFlow and PyTorch.

http://yann.lecun.com/exdb/mnist/

2 CIFAR-10

This project is similar to, but a little more difficult than, the first one. The dataset contains color images of 10 different classes such as airplanes, birds, dogs, and other objects, and it is a little harder to get a good classification model here. Instead of just using a simple neural net, you should now implement a Convolutional Neural Network and learn how it works.
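The building block of a Convolutional Neural Network is the convolution itself: a small filter slides over the image and computes a weighted sum at every position. A toy numpy sketch (no padding, stride 1) with a hand-made vertical-edge filter:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small 2D filter over a 2D image (valid convolution, stride 1).

    Like deep learning frameworks, this is technically cross-correlation."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = (image[i:i + kh, j:j + kw] * kernel).sum()
    return out

# a tiny "image" with a dark left half and a bright right half
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
edge_filter = np.array([[1, -1],
                        [1, -1]], dtype=float)
out = conv2d(image, edge_filter)
# the filter responds only at the vertical edge (middle column)
```

A CNN learns the values of many such filters from data instead of hand-crafting them, which is why it works so much better on images than a plain dense network.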

CIFAR-10 can be loaded directly from within TensorFlow and PyTorch.

https://www.cs.toronto.edu/~kriz/cifar.html

3 Dogs vs. Cats

The third project is the Dogs vs. Cats challenge on Kaggle. As the name suggests, the dataset only contains images of either a dog or a cat. This classification task is actually a little simpler than the previous one, because now we only deal with a binary classification problem. The challenging part may be learning how to download the data and load it in the correct format into your model. If you are ambitious, you can then submit your results to Kaggle and compete with other people.

To get really good performance, you could also have a look at a technique called Transfer Learning. This is a very important concept that you should learn sooner or later, so now would be a good time to try it. If you want to learn more about it, I have a tutorial for you here.
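The core idea of transfer learning fits in a few lines: keep a pretrained feature extractor frozen and train only a small classifier head on top of it. Here is a toy numpy illustration of just that idea; the "backbone" is a fixed random projection standing in for a real pretrained network, and the dogs-vs-cats data is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# "pretrained backbone": a frozen feature extractor that is never updated
# (in a real project this would be e.g. a ResNet without its final layer)
backbone = rng.normal(scale=0.3, size=(20, 8))

def features(x):
    return np.maximum(x @ backbone, 0.0)  # frozen ReLU features

# synthetic stand-in data for a binary dogs-vs-cats problem
x = rng.normal(size=(100, 20))
y = (x[:, 0] > 0).astype(float)

# trainable "head": logistic regression on top of the frozen features
w = np.zeros(8)
b = 0.0
losses = []
for _ in range(500):
    f = features(x)
    p = 1.0 / (1.0 + np.exp(-(f @ w + b)))  # sigmoid
    losses.append(-np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)))
    # gradients flow only into the head; the backbone stays frozen
    w -= 0.05 * f.T @ (p - y) / len(y)
    b -= 0.05 * np.mean(p - y)
```

In practice you would load a real pretrained backbone (e.g. from tf.keras.applications or torchvision.models), mark it as non-trainable, and train only the new output layer on your dog and cat images.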

Dogs vs. Cats | Kaggle

4 Breast Cancer Classification

The medical field is one of the most common use cases of deep learning. There are many applications out there that help to detect diseases and support physicians in making their diagnoses. Here you can help to improve these applications and put your knowledge to good use.

The particular project I selected for beginners is about Breast Cancer Classification. You have to train a model to classify cancer subtypes based on 2D medical histopathology images. Breast cancer is the most common form of cancer in women, and accurately identifying and categorizing breast cancer subtypes is an important clinical task. If you can come up with a reliable automated method here, it can be used to save time and reduce errors in hospitals.

Breast Cancer Classification: Breast Histopathology Images | Kaggle

5 Natural Language Processing with Disaster Tweets

Up until now we had four computer vision projects. Now let's switch fields and have a look at Natural Language Processing, or NLP for short. This is another field where deep learning is widely used. Here we don't deal with images but with words and sentences.

To get started, I recommend the Disaster Tweets project. Again, you can find this on Kaggle in the NLP getting-started category. You have to classify tweets and predict whether they are about real disasters or not.

This would be a nice time to learn about Recurrent Neural Networks (RNNs) and Long Short-Term Memory networks (LSTMs). These are two special types of neural networks that are extremely important when working with text data. You can find a tutorial about them here.
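What makes an RNN different from the networks in the earlier projects is that it processes a sentence one word at a time and carries a hidden state along. A minimal numpy sketch of a vanilla RNN step (with random stand-in weights and word embeddings):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One vanilla RNN step: new hidden state from current input and previous state."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
embed_dim, hidden_dim = 4, 3
W_xh = rng.normal(scale=0.1, size=(embed_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

# a "sentence" of 5 word embeddings, processed one word at a time
sentence = rng.normal(size=(5, embed_dim))
h = np.zeros(hidden_dim)
for x_t in sentence:
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
# h now summarizes the whole sequence and could feed a
# disaster / no-disaster classifier on top
```

An LSTM replaces this single tanh update with gated updates to an extra cell state, which makes it much better at remembering context over long sequences.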

Natural Language Processing with Disaster Tweets | Kaggle

6 Chatbot

Next I suggest a project I think almost everyone will enjoy: chatbots. Build your own chatbot from scratch and put it to the test with a simple chat application.

To get data for this task, I can point you to two large open-source datasets, which should be enough for the beginning. The first is the Conversational Question Answering (CoQA) dataset provided by Stanford NLP, and the other is the Google Natural Questions dataset.

If you don’t know how to get started, I can point you to my tutorial where we build a simple chatbot with an RNN together. Once you’ve understood the concepts of RNNs, creating a decent (if maybe not advanced) chatbot is not that hard anymore.

Google’s Natural Questions
CoQA: A Conversational Question Answering Challenge

7 Recommender System

Now let’s go to a task almost every company needs. Have a look at Netflix, YouTube, Instagram, Spotify, and all the other big names. They all need Recommender Systems. Based on the information they collect about each user, they want to recommend other content the user might enjoy.

To get started, I suggest building a movie recommender system. You can use either the MovieLens 100K dataset or the official Netflix Prize dataset on Kaggle.

This is also a good time to learn about a technique called collaborative filtering. You could solve this with "classical" data science techniques, but you can also build deep recommender systems using deep learning.
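A classic way to do collaborative filtering is matrix factorization: approximate the sparse user-item rating matrix as a product of low-rank user and item factors, fitting only the observed entries. A small numpy sketch on a made-up 4x4 rating matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# tiny user-item rating matrix; 0 marks "not rated yet"
ratings = np.array([[5, 3, 0, 1],
                    [4, 0, 0, 1],
                    [1, 1, 0, 5],
                    [0, 1, 5, 4]], dtype=float)
mask = ratings > 0  # only observed ratings contribute to the loss

# factorize R ~ U @ V.T with k latent factors per user / movie
k = 2
U = rng.normal(scale=0.1, size=(ratings.shape[0], k))
V = rng.normal(scale=0.1, size=(ratings.shape[1], k))
lr = 0.01
losses = []
for _ in range(2000):
    err = (U @ V.T - ratings) * mask      # error on observed entries only
    losses.append((err ** 2).sum())
    U -= lr * (err @ V)                   # gradient step on user factors
    V -= lr * (err.T @ U)                 # gradient step on item factors

predicted = U @ V.T  # predicted[i, j] estimates user i's rating of unseen item j
```

MovieLens or the Netflix data give you exactly such a matrix, just much larger and sparser; a deep variant replaces the dot product with a neural network on learned user and item embeddings.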

MovieLens 100K Dataset
Netflix Prize data | Kaggle

8 Forecasting

Next, let’s have a look at forecasting. This is another interesting field where we deal with time series, and you can practice your knowledge of RNNs again. The goal is to predict the future values of a time series. A very popular example is stock price prediction.

For the dataset, I actually encourage you to scrape or download the stock data yourself from Yahoo! Finance. This should not be too hard, and there is also a Python package (yfinance) that you can simply use.

So get some stock data, train your model only on data up to a certain point in the past, check how well it predicts the prices from there up to the present, and then build a system to predict prices in the future.
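To train a model this way, the raw price series first has to be turned into supervised (input window, next value) pairs. A minimal pure-Python sketch of that windowing step (the prices are made-up numbers):

```python
def make_windows(series, window):
    """Turn a 1D series into (input window, next value) training pairs."""
    x, y = [], []
    for i in range(len(series) - window):
        x.append(series[i:i + window])
        y.append(series[i + window])
    return x, y

prices = [10, 11, 12, 11, 13, 14, 13, 15]
x, y = make_windows(prices, window=3)
# x[0] is [10, 11, 12] and y[0] is 11:
# the model learns to predict the next price from the last 3 prices
```

The same windows feed directly into an RNN: each window is a short sequence, and the target is the value that follows it.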

Yahoo! finance
yfinance Python Package

9 Object Detection

The last two projects are advanced computer vision tasks. First, let’s have a look at Object Detection. The goal is to identify objects and mark their positions in the image. So you have to check whether there is an object, determine where it is, and also deal with possibly multiple objects in one image.

This is indeed a very advanced task. You could try to recreate the popular YOLO object detection model from scratch, but I recommend just using a pretrained model.

Then you still have to implement the whole object detection pipeline, and you should learn about OpenCV here, a very important computer vision library that is used, for example, to draw the bounding boxes.
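Drawing a bounding box is a one-liner in OpenCV, but it helps to see what actually happens to the pixels. Here is the idea in plain numpy, with made-up box coordinates on a blank test image:

```python
import numpy as np

def draw_box(image, x1, y1, x2, y2, color):
    """Draw an unfilled rectangle on an HxWx3 image by setting its border pixels."""
    image[y1, x1:x2 + 1] = color   # top edge
    image[y2, x1:x2 + 1] = color   # bottom edge
    image[y1:y2 + 1, x1] = color   # left edge
    image[y1:y2 + 1, x2] = color   # right edge
    return image

img = np.zeros((64, 64, 3), dtype=np.uint8)       # black 64x64 test image
draw_box(img, 10, 20, 40, 50, color=(0, 255, 0))  # green box around a "detection"
```

With OpenCV installed, cv2.rectangle(img, (10, 20), (40, 50), (0, 255, 0), 2) does the same thing, with a configurable line thickness; your detection model supplies the box coordinates.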

As datasets I can point you to the Raccoon dataset or the Annotated Driving Dataset that is used for self driving cars.

Raccoon Dataset
Annotated Driving Dataset

Helpful Articles:
Towardsdatascience - Build Your Own Object Detector
Towardsdatascience - Object Detection With TensorFlow Object Detector API

10 Style Transfer

As the last project, I suggest having a look at style transfer, a very interesting application of deep learning. You train a model on a style image, and after training it can apply this style to any other image you give it.

Here again you don’t have to implement everything from scratch; you can use an existing implementation like TensorFlow’s fast style transfer or PyTorch’s fast neural style example.

To retrain the model on your own style images, you should use the COCO dataset. COCO is a large-scale object detection, segmentation, and captioning dataset, and one of the most important computer vision datasets that you should definitely check out!
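Under the hood, these style transfer methods typically describe the "style" of an image with Gram matrices: correlations between the channels of a convolutional feature map, which capture textures while discarding spatial layout. A small numpy sketch of that computation (with random stand-in activations instead of real conv features):

```python
import numpy as np

def gram_matrix(features):
    """Style representation: correlations between feature channels.

    features: (channels, height, width) activation map from a conv layer."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return (f @ f.T) / (h * w)  # (channels, channels), independent of image size

rng = np.random.default_rng(0)
activations = rng.normal(size=(8, 16, 16))  # fake conv-layer features
g = gram_matrix(activations)
```

The style loss then compares the Gram matrices of the generated image with those of the style image across several layers, while a separate content loss keeps the generated image close to the input.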

TensorFlow Fast Style Transfer
PyTorch Fast Neural Style
COCO dataset

Final Words

I hope you will enjoy these projects! And if you need help you can always join our community in the Discord server: https://discord.gg/FHMg9tKFSN

Check out my Courses