Name: Anas
Last Name: Mounsif
Mat: 59465
DARKNET
In this section we will configure our Darknet network.
If you don't have Colab Pro, you can deliberately crash the session to get about 5 GB of additional RAM (once activated, the session keeps the upgrade).
simulate_high_RAM_session = False #@param { type:"boolean" }

if simulate_high_RAM_session:
    print("Crashing Session, please wait...")
    # keep allocating memory until the runtime runs out of RAM and crashes
    a = []
    while True:
        a.append("1")
All the configuration files are stored on Drive, so we mount Google Drive in the Colab session.
ADVICE: copy the root folder to your Drive and, for the code to run correctly, leave the parameters as they are.
Mount the Drive and specify the root folder.
from google.colab import drive
print("mounting DRIVE...")
drive.mount('/content/gdrive')
root_folder = 'VisionePercezione_Progetto_AnasMounsif_mat:59465' #@param {type:"string"}
!ln -s /content/gdrive/My\ Drive/$root_folder /my_drive
Now we will clone the repository and set some build parameters in the Makefile, such as OPENCV, GPU, CUDNN, CUDNN_HALF and LIBSO.
The next step is to compile: proceed with the compilation after selecting the desired configuration parameters.
!git clone https://github.com/AlexeyAB/darknet
%cd darknet
OPENCV = True #@param {type:"boolean"}
GPU = True #@param {type:"boolean"}
CUDNN = True #@param {type:"boolean"}
CUDNN_HALF = True #@param {type:"boolean"}
LIBSO = False #@param {type:"boolean"}
print("setting properties...")
if OPENCV:
    print("activating OPENCV...")
    !sed -i 's/OPENCV=0/OPENCV=1/' Makefile
if GPU:
    print("checking CUDA version...")
    !/usr/local/cuda/bin/nvcc --version
    print("activating GPU...")
    !sed -i 's/GPU=0/GPU=1/' Makefile
if CUDNN:
    print("activating CUDNN...")
    !sed -i 's/CUDNN=0/CUDNN=1/' Makefile
if CUDNN_HALF:
    print("activating CUDNN_HALF...")
    !sed -i 's/CUDNN_HALF=0/CUDNN_HALF=1/' Makefile
if LIBSO: # work in progress
    print("activating LIBSO...")
    !sed -i 's/LIBSO=0/LIBSO=1/' Makefile

print("making...")
!make
print("FINISH!")
Next we will load the dataset so it can be used for training.
ADVICE: if you intend to use an external dataset, put all the images together with their corresponding .txt annotation files in a folder called obj, then compress the folder.
Enter the name of the Drive folder containing the dataset (by convention the archive must be named "obj.zip").
dataset_folder = 'dataset' #@param {type:"string"}
print("loading dataset...")
!cp /my_drive/$dataset_folder/obj.zip ../
print("unziping dataset...")
!unzip ../obj.zip -d data
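As a quick sanity check (a sketch, not part of the original notebook), you can verify that every image extracted into data/obj has a matching YOLO annotation file; each .txt line holds "class_id x_center y_center width height" with coordinates normalized to the image size.
import os

obj_dir = os.path.join("data", "obj")
missing = [f for f in os.listdir(obj_dir)
           if f.endswith(".jpg")
           and not os.path.exists(os.path.join(obj_dir, f[:-4] + ".txt"))]
print("images without an annotation file:", missing if missing else "none")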
It is also important to load the main yolo-obj.cfg configuration file, which contains the information needed to build the network, such as the input image size, the number of classes, the filters, any augmentation techniques and more.
- for more specific information: NET CFG Parameters, Layers CFG Parameters.
The main changes to be made to the .cfg concern exactly these parameters; a sketch of the usual values is shown below.
ADVICE: modify the parameters according to the objectives you want to achieve; changing them at random will lead to bad runs, if not runtime errors.
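As an illustration, a minimal sketch following the usual AlexeyAB/darknet recommendations (the number of classes is a hypothetical value to replace with your own) computes the class-dependent .cfg values:
num_classes = 4  # hypothetical number of classes, replace with yours

# recommended: max_batches = classes * 2000, but not less than 6000
max_batches = max(6000, num_classes * 2000)
steps = (int(max_batches * 0.8), int(max_batches * 0.9))
# filters in each [convolutional] layer immediately before a [yolo] layer
filters = (num_classes + 5) * 3

print("yolo-obj.cfg -> max_batches =", max_batches)
print("yolo-obj.cfg -> steps = {},{}".format(*steps))
print("yolo-obj.cfg -> classes =", num_classes, "(in every [yolo] layer)")
print("yolo-obj.cfg -> filters =", filters, "(in the [convolutional] layer before every [yolo] layer)")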
Darknet needs 2 more files:
obj.names, which contains the names of the classes:
ADVICE: the names must be in the same order as in the classes.txt file used during the dataset preparation phase.
class 0
class 1
class 2
class 3
...
obj.data, which contains information about the training files and the number of classes:
classes = number of classes
train = path/to/train.txt
valid = path/to/test.txt
names = path/to/obj.names
backup = path/to/backup_folder
ADVICE: Darknet saves a backup of your trained weights every 100 iterations; since the backup folder is on Google Drive, the file is automatically synced to the local drive on your computer as well. Every 1000 iterations Darknet additionally saves the weights to a separate numbered file.
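If you prefer to generate these two files programmatically instead of preparing them by hand, a minimal sketch could look like the following (the class names and paths are assumptions that mirror the examples above and the folders used in this notebook):
# hypothetical class names: replace with your own, in the same order as classes.txt
class_names = ["class 0", "class 1", "class 2", "class 3"]

with open("data/obj.names", "w") as f:
    f.write("\n".join(class_names) + "\n")

with open("data/obj.data", "w") as f:
    f.write("classes = {}\n".format(len(class_names)))
    f.write("train = data/train.txt\n")
    f.write("valid = data/test.txt\n")
    f.write("names = data/obj.names\n")
    f.write("backup = /my_drive/backup\n")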
Enter the name of the folder containing the configuration files:
configuration_folder = 'configuration_files' #@param {type:"string"}
print("loading yolo-obj.cfg...")
!cp /my_drive/$configuration_folder/yolo-obj.cfg ./cfg
print("loading obj.names...")
!cp /my_drive/$configuration_folder/obj.names ./data
print("loading obj.data...")
!cp /my_drive/$configuration_folder/obj.data ./data
Darknet needs a .txt file for training that contains the path of each training image, so I wrote a script that generates it:
import os

# collect the paths of all training images inside data/obj
image_files = []
os.chdir(os.path.join("data", "obj"))
for filename in os.listdir(os.getcwd()):
    if filename.endswith(".jpg"):
        image_files.append("data/obj/" + filename)
os.chdir("..")

# write one image path per line into data/train.txt (the with block closes the file)
with open("train.txt", "w") as outfile:
    for image in image_files:
        outfile.write(image)
        outfile.write("\n")
os.chdir("..")
The only thing left to do is to load the script and run it.
Enter the name of the folder containing the scripts and the name of the script that generates the .txt file.
script_folder = 'py_scripts' #@param {type:"string"}
script_file = 'generate_train.py' #@param {type:"string"}
print("loading script...")
!cp /my_drive/$script_folder/$script_file ./
print("performing script...")
!python $script_file
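Note that obj.data also references a validation list (valid = path/to/test.txt), which the script above does not generate. If you need a quick split, a hedged sketch (the 10% ratio and file names are assumptions) is to hold out part of the images, rewriting both lists so they do not overlap:
import os
import random

obj_dir = os.path.join("data", "obj")
images = sorted("data/obj/" + f for f in os.listdir(obj_dir) if f.endswith(".jpg"))

random.seed(0)
random.shuffle(images)
split = max(1, len(images) // 10)  # hold out ~10% for validation (assumption)

with open(os.path.join("data", "test.txt"), "w") as f:
    f.write("\n".join(images[:split]) + "\n")
with open(os.path.join("data", "train.txt"), "w") as f:
    f.write("\n".join(images[split:]) + "\n")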
Pre-trained weights are used to speed up training. This is possible thanks to transfer learning, which consists of reusing pre-trained layers to build a different network whose first layers are likely to learn similar, generic features.
Enter the name of the folder containing weights and the name of the pre-trained weights file.
weights_folder = 'backup' #@param {type:"string"}
pre_trained_weights_file = 'yolov4.conv.137' #@param {type:"string"}
print("loading pre_trained weights...")
!cp /my_drive/$weights_folder/$pre_trained_weights_file ./
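If the pre-trained weights are not already in your Drive folder, an alternative (a sketch: the URL below is the one published on the AlexeyAB/darknet releases page, so verify it is still current) is to download yolov4.conv.137 directly:
print("downloading pre_trained weights...")
!wget -q https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.conv.137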
After having configured the network correctly, we can proceed to the next section.
In this section we will train the network.
Choose whether to compute the mAP (mean average precision) during training and whether to start the training from scratch or resume it.
train_using_mAP = True #@param {type:"boolean"}
option = 'RESUME TRAINING' #@param ["START TRAINING FROM BEGINNING", "RESUME TRAINING"]
if option == 'START TRAINING FROM BEGINNING':
    if train_using_mAP:
        !./darknet detector train data/obj.data cfg/yolo-obj.cfg $pre_trained_weights_file -dont_show -map
    else:
        !./darknet detector train data/obj.data cfg/yolo-obj.cfg $pre_trained_weights_file -dont_show
else:
    if train_using_mAP:
        !./darknet detector train data/obj.data cfg/yolo-obj.cfg /my_drive/$weights_folder/yolo-obj_last.weights -dont_show -map
    else:
        !./darknet detector train data/obj.data cfg/yolo-obj.cfg /my_drive/$weights_folder/yolo-obj_last.weights -dont_show
Every 100 iterations Darknet also saves a chart (chart.png) that shows the progress of the training.
Save the chart to Drive, setting the range of iterations it covers.
initial_iteration_number = 100 #@param {type:"slider", min:100, max:10000, step:100}
final_iteration_number = 100 #@param {type:"slider", min:100, max:10000, step:100}
chart_name = "mAP-chart_iter:{}-{}.png".format(initial_iteration_number, final_iteration_number)
print("saving chart...")
!cp chart.png /my_drive/charts/$chart_name
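If you also want to preview the chart inside the notebook (an optional extra, not part of the original pipeline), IPython's display helpers can render it directly:
from IPython.display import Image, display

display(Image("chart.png"))  # show the training chart produced by Darknet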
Enter the name of the weights file on which to calculate the metrics.
weights_name = 'yolo-obj_1000.weights' #@param {type:"string"}
!cp /my_drive/backup/$weights_name ./
!./darknet detector map data/obj.data cfg/yolo-obj.cfg $weights_name
In this section we will perform object detection on the test videos and save the results to Drive.
video_test_folder = 'test_videos' #@param {type:"string"}
input_name = 'video_test.mp4' #@param {type:"string"}
weights_type = 'yolo best' #@param ["yolo best", "yolo last"]
predictions_folder = 'predictions' #@param {type:"string"}
output_name = 'video_test' #@param {type:"string"}
prediction_version = 9 #@param {type:"integer"}
prediction_name = "{}_prediction_version:{}.avi".format(output_name, prediction_version)
if weights_type == "yolo last":
    yolo_weights = "yolo-obj_last.weights"
else:
    yolo_weights = "yolo-obj_best.weights"
print("detecting...")
!./darknet detector demo data/obj.data cfg/yolo-obj.cfg /my_drive/backup/$yolo_weights -dont_show /my_drive/$video_test_folder/$input_name -i 0 -out_filename prediction.avi
print("copying prediction in Drive...")
!cp prediction.avi /my_drive/$predictions_folder/$prediction_name