
In this article, we will learn how to use YOLOv7: how to implement it, understand the results and use different weights! YOLOv7 is the second version of YOLO to be published in 2022. It is a Deep Learning model used for detection in images and videos.

The first version of YOLO was released in 2016. Since then, frequent updates have been made with the latest improvements: faster computation, better accuracy. We already have a tutorial on how to use YOLOv6, and this post will be pretty similar, as we want to stick to the basics.

We'll use the same image as a test to compare the performance of both models… but keep in mind that performance on a single image isn't the performance of the entire model. It's just a nice hint to start understanding both models.

Here are the results of YOLOv7 compared to other versions on the COCO dataset:

[Figure: YOLOv7 results]

The following lines of code work for any Notebook/Colab instance. If you want to run YOLOv7 in a terminal/locally, just remove the leading “!” or “%” from each line of code.

To use YOLOv7, we first need to download the GitHub repository! To do this, we'll use the git clone command to download it to our Notebook:

!git clone

Then, we place ourselves in the folder we just downloaded:

%cd yolov7

Next, we have to install all the libraries needed to use YOLOv7. Fortunately, only one line of code is needed to install all these dependencies:

!pip install -r requirements.txt
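As a minimal sketch, assuming the repository being cloned is the official WongKinYiu/yolov7 repo (the URL below is an assumption, since it isn't spelled out above), the whole setup cell looks like this:

# Clone the YOLOv7 code (repository URL assumed: the official WongKinYiu/yolov7 repo)
!git clone https://github.com/WongKinYiu/yolov7
# Move into the cloned folder
%cd yolov7
# Install the dependencies listed by the repository
!pip install -r requirements.txt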

With the git clone command, we've downloaded all the architecture of the Neural Network (layers of the model, functions to train it, use it, evaluate it, …) but to use it, we also need the weights. We then want to download the weights of the Neural Network.
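As a hedged sketch of that step, assuming the pretrained weights are fetched from the repository's GitHub releases page (the release tag and file name below are assumptions, not given in this article), the download can be done with wget from the Notebook:

# Download a set of pretrained YOLOv7 weights into the yolov7 folder
# (URL assumed: a .pt asset from the repository's releases page)
!wget https://github.com/WongKinYiu/yolov7/releases/download/v0.1/yolov7.pt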
