dc.contributor.author |
Ali, Muhammad |
|
dc.date.accessioned |
2023-08-10T11:32:22Z |
|
dc.date.available |
2023-08-10T11:32:22Z |
|
dc.date.issued |
2019 |
|
dc.identifier.other |
00000119183 |
|
dc.identifier.uri |
http://10.250.8.41:8080/xmlui/handle/123456789/36273 |
|
dc.description |
Supervisor: Dr. Arslan Shaukat; Co-Supervisor: Dr. Usman Akram |
en_US |
dc.description.abstract |
A few decades ago, experienced marine biologists at the National Oceanic and Atmospheric
Administration (NOAA) started a campaign to safeguard Right Whales. They launched several
aerial surveys with the primary objective of studying the whales' health and counting their
population. Photographs were taken from helicopters and then compared against an online
database. Manual comparison of Right Whales was very time consuming and required a great
deal of training and experience.
To overcome this issue, NOAA, in collaboration with Kaggle, launched a competition whose
objective was to build a system to monitor Right Whales, so that efforts can be made to free a
Right Whale that has been accidentally caught in fishing gear. Our dataset contains only
4544 training images, and training deep convolutional neural networks on such a small number
is a real challenge. The photographs are taken from helicopters, so many are poorly focused.
They are taken at different times of day with cameras of varying quality, and some images of
Right Whales have very poor contrast and exposure. Moreover, the dataset is imbalanced: the
number of pictures per whale varies widely. About 20 whales have just one image, some whales
have around 40 images, and the average is around 10 images per whale.
With such a sparse distribution, training our deep convolutional neural networks was very
challenging. To minimize the effect of the small dataset, we divided the problem carefully.
Instead of training our neural network on whole images, which is of little use since most of
each image contains ocean waves, we first localize the head of the whale. In this way we can
focus on our desired feature, the callosity pattern on the whale's head. After localizing the
head, we find two points, the Bonnet and the Blowhead; the callosity pattern lies between
these two points. We then align the whale's head so that the Bonnet is on the right, the
Blowhead is on the left, and the head points east. With the images aligned, the classifier
can focus only on the area of interest. This improved our results significantly, and we
achieved an accuracy of 78.70%. The results could be improved further with more images in
the dataset, or with clearer, higher-resolution images in which the area of interest, i.e.
the callosity pattern, is more clearly visible. |
en_US |
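The head-alignment step described in the abstract can be sketched as follows. This is a minimal illustration only, assuming the Bonnet and Blowhead coordinates have already been predicted by a keypoint detector; the function name, the use of OpenCV, and the crop size are assumptions for illustration and are not claimed to be the thesis implementation.

```python
# Minimal sketch of aligning a whale head so the Blowhead->Bonnet axis
# points east (Bonnet on the right, Blowhead on the left), then cropping
# the region containing the callosity pattern. Keypoint coordinates are
# assumed to come from a separate detector.
import cv2
import numpy as np

def align_head(image, bonnet_xy, blowhead_xy, crop_size=256):
    bonnet = np.array(bonnet_xy, dtype=np.float32)
    blowhead = np.array(blowhead_xy, dtype=np.float32)

    # Angle of the Blowhead->Bonnet vector in image coordinates;
    # rotating by this angle makes the vector horizontal, pointing right.
    dx, dy = bonnet - blowhead
    angle_deg = float(np.degrees(np.arctan2(dy, dx)))

    # Rotate about the midpoint between the two keypoints.
    center = tuple(((bonnet + blowhead) / 2.0).tolist())
    M = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    h, w = image.shape[:2]
    rotated = cv2.warpAffine(image, M, (w, h))

    # Crop a fixed window around the midpoint so the callosity pattern
    # between Bonnet and Blowhead stays centred for the classifier.
    cx, cy = int(center[0]), int(center[1])
    half = crop_size // 2
    x0, y0 = max(cx - half, 0), max(cy - half, 0)
    return rotated[y0:y0 + crop_size, x0:x0 + crop_size]
```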
dc.language.iso |
en |
en_US |
dc.publisher |
College of Electrical & Mechanical Engineering (CEME), NUST |
en_US |
dc.subject |
Kaggle, deep convolutional neural networks, deep learning, fully convolutional network, Right Whales, callosity patterns, Bonnet and Blowhead |
en_US |
dc.title |
Deep Learning based Classification Framework to save Right Whales |
en_US |
dc.type |
Thesis |
en_US |