Covid RX

COVID-19

Introduction

COVID-19 was a global pandemic that spread across the whole world. Hospital resources fell into a deep crisis because supply chains were disrupted by regulations and controls.

In developing countries, at the beginning of the pandemic, detecting COVID-19 was a hard challenge. A single test typically took around 3 to 15 days to return a result, which did not let doctors diagnose patients quickly and effectively.

Using Artificial Intelligence (AI) through a classification algorithm on chest X-ray scans was proposed as an alternative to chemical tests such as PCR.

Data information:
  1. The data comes from the following Kaggle repository: https://www.kaggle.com/datasets/pranavraikokte/covid19-image-dataset/code?datasetId=627146&sortBy=dateRun&tab=profile
  2. The data was split into 80% for training and 20% for testing. A validation set was then taken from the training data (20% of it), and the remaining 80% of the training data formed the final training set; a loading sketch follows this list.
  3. Three labels were available: covid, normal, and viral pneumonia.
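
As a reference, here is a minimal sketch of how this split could be loaded with TensorFlow/Keras. The folder names follow the Kaggle dataset layout, but the paths, image size, batch size, and seed are assumptions, not values taken from the project.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)  # assumed input size

# 20% of the original training folder is held out for validation.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "Covid19-dataset/train",      # hypothetical path following the Kaggle layout
    validation_split=0.2,
    subset="training",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=32,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "Covid19-dataset/train",
    validation_split=0.2,
    subset="validation",
    seed=42,
    image_size=IMG_SIZE,
    batch_size=32,
)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "Covid19-dataset/test",
    image_size=IMG_SIZE,
    batch_size=32,
)
```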
Final model results
  • Loss = 0.65
  • Accuracy = 0.92
Data Treatment Process

Three approaches were tried to solve the classification problem. A Convolutional Neural Network (CNN) was built in every case.

Approach 1

A CNN was built with four convolutional layers, a flatten layer, and two dense layers. The convolutional blocks also included max pooling, batch normalization, and dropout layers. The model had roughly 851,000 trainable parameters. During training, a callback saved the model with the best validation accuracy.
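
Below is a minimal sketch of an architecture matching this description. The filter counts, dropout rate, dense layer width, and number of epochs are assumptions, and it reuses the train_ds and val_ds datasets from the loading sketch above.

```python
from tensorflow import keras
from tensorflow.keras import layers

def conv_block(x, filters):
    # One convolutional block: convolution, max pooling,
    # batch normalization, and dropout, as described above.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.BatchNormalization()(x)
    return layers.Dropout(0.25)(x)

inputs = keras.Input(shape=(224, 224, 3))
x = layers.Rescaling(1.0 / 255)(inputs)
for filters in (32, 64, 128, 128):  # four convolutional blocks (assumed sizes)
    x = conv_block(x, filters)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)
outputs = layers.Dense(3, activation="softmax")(x)  # covid / normal / viral pneumonia

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Callback that keeps only the weights with the best validation accuracy.
checkpoint = keras.callbacks.ModelCheckpoint(
    "best_model.keras", monitor="val_accuracy", save_best_only=True
)
model.fit(train_ds, validation_data=val_ds, epochs=30, callbacks=[checkpoint])
```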

However, the model converged at around 0.44 validation accuracy and did not improve over the epochs. When evaluated on the test dataset, it reached an accuracy of 0.39.

This approach did not work out, and overfitting was suspected to be a reason for the low accuracy.

Approach 2

Assuming that overfitting was present, this model used only one convolutional layer with max pooling and batch normalization, followed by a flatten layer and a dense layer. It had roughly 1,000,000 trainable parameters.
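
A minimal sketch of this reduced architecture follows; the filter count and input size are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Single convolutional block, then flatten and a dense output layer.
model = keras.Sequential([
    keras.Input(shape=(224, 224, 3)),
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(16, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.BatchNormalization(),
    layers.Flatten(),
    layers.Dense(3, activation="softmax"),  # covid / normal / viral pneumonia
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```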

Throughout the epochs, the model reached an accuracy of 0.8 on the validation data. To check whether the model was able to generalize what it had learned, it was evaluated on the test dataset, where it reached an accuracy of 0.71.

Even though the model improved in this approach, the accuracy obtained did not meet the requirement. Since this application could be used in medical diagnosis, the accuracy must be as high as possible.

Approach 3

Transfer Learning is the process of reusing a Neural Network trained on one task for another, similar task. In this case, the ResNet152V2 model was used. Its layer parameters were frozen to retain what the model had previously learned, and a Flatten layer followed by three dense layers were added on top. The final model had roughly 58 M non-trainable parameters and 13 M trainable parameters. At the end of training, the model reached an accuracy of 0.92 on the validation dataset.
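
Below is a sketch of such a transfer-learning setup: a frozen ResNet152V2 base with a Flatten layer and three dense layers on top. The dense layer sizes are assumptions chosen to roughly reproduce the reported parameter counts, not values confirmed by the project.

```python
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.applications import ResNet152V2
from tensorflow.keras.applications.resnet_v2 import preprocess_input

# Pretrained ImageNet base without its classification head.
base = ResNet152V2(include_top=False, weights="imagenet",
                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained weights (~58 M parameters)

inputs = keras.Input(shape=(224, 224, 3))
x = preprocess_input(inputs)       # ResNetV2-specific preprocessing
x = base(x, training=False)
x = layers.Flatten()(x)
x = layers.Dense(128, activation="relu")(x)  # assumed sizes; the head adds
x = layers.Dense(64, activation="relu")(x)   # roughly 13 M trainable parameters
outputs = layers.Dense(3, activation="softmax")(x)

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```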

Finally, the model's ability to generalize was verified: it reached an accuracy of 0.9242 on the test dataset. Transfer learning was the best solution in this case because not enough data was available to build a model from scratch.
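This check is a single evaluation call on the held-out data; a sketch, assuming the test_ds defined in the loading sketch above:

```python
# Evaluate generalization on the test set.
loss, acc = model.evaluate(test_ds)
print(f"Test accuracy: {acc:.4f}")  # reported as 0.9242 in this project
```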
