| dc.description.abstract |
In this work I developed a complete deep learning pipeline that classifies bean and eggplant leaf images into affected and healthy classes. I captured the source images myself using an iPhone and converted them to JPG before processing and modeling in VS Code with Python and TensorFlow/Keras. I standardized all images to 224×224 with square padding, addressed class imbalance through systematic data augmentation (rotations, flips, brightness scaling, Gaussian blur, and light noise), and applied photometric normalization (contrast stretching and gamma correction) to improve robustness to illumination differences. I then merged the augmented variants and created an 80/20 train–test split. I trained and compared four ImageNet-pretrained backbones—VGG19, MobileNetV2, InceptionV3, and DenseNet201—using a uniform classifier head (GlobalAveragePooling → Dense(1024, ReLU) → Dropout(0.5) → Softmax) and identical hyperparameters. On the 25,620-image test set, DenseNet201 performed best with 86.42% accuracy and a loss of 0.31, while MobileNetV2 and InceptionV3 achieved 83.80% and 80.83% accuracy respectively; VGG19 reached 73.72%. I provide per-class precision, recall, and F1 scores, a confusion matrix, and an error analysis that highlights remaining failure cases (subtle symptoms and challenging lighting). The work shows that a well-engineered preprocessing and augmentation pipeline, coupled with transfer learning, yields reliable leaf disease recognition without specialized hardware or mobile deployment. |
en_US |
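The preprocessing steps named in the abstract (square padding to a fixed input size, contrast stretching, and gamma correction) can be sketched in plain NumPy. This is a minimal illustrative sketch, not the author's actual code: the function names, the fill value, and the gamma setting are assumptions for demonstration.

```python
import numpy as np

def pad_to_square(img, fill=0):
    """Pad an H×W×C image with a constant border so that H == W
    (centering the original content), before resizing to 224×224.
    Illustrative sketch; fill value is an assumption."""
    h, w = img.shape[:2]
    size = max(h, w)
    top = (size - h) // 2
    left = (size - w) // 2
    out = np.full((size, size) + img.shape[2:], fill, dtype=img.dtype)
    out[top:top + h, left:left + w] = img
    return out

def contrast_stretch(img):
    """Linearly rescale pixel intensities so the darkest pixel maps
    to 0 and the brightest to 255 (a simple contrast stretch)."""
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    if hi == lo:  # flat image: nothing to stretch
        return img.astype(np.uint8)
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

def gamma_correct(img, gamma=0.8):
    """Apply power-law (gamma) correction on [0, 1]-normalized pixels;
    gamma < 1 brightens dark regions. The default gamma is an assumption."""
    norm = img.astype(np.float32) / 255.0
    return (np.power(norm, gamma) * 255).astype(np.uint8)
```

In a pipeline like the one described, these transforms would run before resizing to the 224×224 network input and before the backbone-specific ImageNet normalization applied by Keras preprocessing functions.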