Skin Cancer Detection System

Tech Stack
- MobileNetV2 CNN backbone (pre-trained on ImageNet)
- HAM10000 dermoscopic image dataset
- Streamlit web interface
Overview
A deep-learning system for detecting melanoma from dermoscopic images using transfer learning. The system employs MobileNetV2, a lightweight CNN architecture pre-trained on ImageNet, fine-tuned on the HAM10000 dataset (10,015 dermoscopic images). The model achieves 86.6% test accuracy and an AUC of 0.827 on the binary benign-vs-melanoma classification task.
The training uses a two-phase transfer learning strategy: feature extraction (30 epochs, all backbone layers frozen) followed by fine-tuning (10 epochs, last 30 layers unfrozen). A balanced data augmentation strategy addresses the ~9:1 class imbalance using geometric transformations exclusively, intentionally preserving diagnostically critical color information. A Streamlit web interface is included for practical deployment.
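The two-phase schedule above can be sketched as a framework-agnostic freezing policy. This is a minimal illustration, not the project's actual training code; the function name, phase labels, and the layer count passed by the caller are assumptions (only the "last 30 layers unfrozen" rule comes from the text):

```python
def layer_trainable_flags(num_layers: int, phase: str, unfreeze_last: int = 30):
    """Per-layer trainable mask for the two-phase transfer-learning schedule.

    Phase "feature_extraction": every backbone layer stays frozen, so only
    the new classification head trains. Phase "fine_tuning": the last
    `unfreeze_last` backbone layers are unfrozen and train at a low rate.
    """
    if phase == "feature_extraction":
        return [False] * num_layers
    if phase == "fine_tuning":
        frozen = max(num_layers - unfreeze_last, 0)
        return [False] * frozen + [True] * (num_layers - frozen)
    raise ValueError(f"unknown phase: {phase!r}")
```

In a Keras-style workflow the returned mask would be applied by setting each backbone layer's `trainable` attribute before compiling, once per phase.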
Challenges
- Handling severe class imbalance (~90% benign vs ~10% melanoma) without degrading recall for the critical minority class
- Preserving diagnostically relevant color information while applying data augmentation to address imbalance
- Balancing model complexity with inference speed for practical clinical deployment
Solutions
- Applied asymmetric augmentation with stronger transforms for melanoma class and stratified sampling across splits
- Excluded color transformations (brightness, contrast, channel shift) from the augmentation pipeline, keeping geometric transforms only
- Chose MobileNetV2 as backbone for lightweight inference and deployed via Streamlit for accessible web-based screening
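The geometry-only, asymmetric augmentation described above can be sketched with NumPy. This is an illustrative sketch, not the project's pipeline; the function names, the flip/rotate choice of transforms, and the oversampling interface are assumptions (the source only states that color transforms were excluded and the melanoma class was augmented more heavily):

```python
import numpy as np

def augment_geometric(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Random geometric transform (flips and 90-degree rotations) on an
    HxWx3 image. Pixel values are only rearranged, never rescaled, so the
    diagnostically relevant color information is preserved exactly."""
    if rng.random() < 0.5:
        img = np.fliplr(img)       # horizontal flip
    if rng.random() < 0.5:
        img = np.flipud(img)       # vertical flip
    k = rng.integers(0, 4)         # rotate by 0, 90, 180, or 270 degrees
    return np.rot90(img, k=k)

def oversample_minority(images, labels, minority_label, factor, rng):
    """Asymmetric augmentation: add randomly transformed copies of each
    minority-class image until that class is roughly `factor`x its size."""
    extra = [augment_geometric(im, rng)
             for im, y in zip(images, labels) if y == minority_label
             for _ in range(factor - 1)]
    return images + extra, labels + [minority_label] * len(extra)
```

With a ~9:1 benign-to-melanoma ratio, `factor=9` on the melanoma class brings the two classes to roughly equal counts without touching brightness, contrast, or color channels.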