Abstract:
Diabetic Foot Ulcer (DFU) is a major health complication of diabetes which, if not detected early and treated properly, may lead to amputation and sometimes to life-threatening conditions. Around 15% to 25% of diabetic patients are at risk of developing a DFU at some stage if proper foot care is not taken. Treating this disease is a global health care problem, and the currently available clinical procedures rely heavily on the vigilance of patients and doctors, resulting in high diagnostic costs and lengthy treatment. These clinical procedures involve a thorough evaluation of the patient's medical history as well as a careful examination of the foot wounds by a DFU specialist, and may require additional tests such as CT scans and X-rays. Although this approach gives the patient a positive outcome, it requires a significant amount of time and places a notable financial burden on the patient's family. Hence, the need for a cost-effective, remote and suitable DFU diagnosis technique is evident. In this paper, we propose a deep learning-based approach to detect diabetic foot ulcers from images of the patient's feet. Our proposed method is based on the Faster R-CNN algorithm, with modifications and changes to its parameter settings that make it perform better for DFU detection. The image dataset used in this work is part of the Diabetic Foot Ulcers Grand Challenge 2020 (DFUC2020). We used a total of 2000 images from this dataset and randomly divided them into 1600 images (80%) for the training set and 400 images (20%) for the testing set. The images were captured with different types of cameras with different focal lengths and from various viewing angles, resulting in varying levels of blurring and zooming. In addition, our training set contains only 1600 images, which is considered small for deep learning models, so we introduced data augmentation to overcome these issues. Furthermore, in DFU images the areas of interest are sometimes hard for Faster R-CNN with its default configuration to detect because of the small size of the lesions. To cover these cases, we modified the anchor sizes and ratios so that smaller regions are not missed and the accuracy of detecting them is increased. In other cases, the DFU-infected areas may not differ noticeably from the healthy skin of the foot, making them hard for the algorithm to detect. To address this problem and increase the number of accurately detected regions, we preprocess the images before passing them to the algorithm so that the borders of the infected regions are highlighted and those areas are not ignored.
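As an illustration of this kind of anchor modification, the minimal sketch below builds a Faster R-CNN detector with smaller anchor boxes. It assumes a recent torchvision implementation of Faster R-CNN, which this abstract does not specify, and the anchor sizes and ratios shown are illustrative placeholders rather than the exact values used in the experiments.

```python
import torchvision
from torchvision.models.detection.rpn import AnchorGenerator

# Illustrative anchor configuration: smaller sizes than torchvision's
# defaults so that small lesions are covered at every FPN level.
# (Hypothetical values; the exact sizes/ratios are not stated here.)
anchor_generator = AnchorGenerator(
    sizes=((8,), (16,), (32,), (64,), (128,)),
    aspect_ratios=((0.5, 1.0, 2.0),) * 5,
)

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None,                 # detection head trained from scratch
    weights_backbone="DEFAULT",   # ImageNet-pre-trained ResNet-50 backbone
    num_classes=2,                # background + ulcer
    rpn_anchor_generator=anchor_generator,
)
```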
Along with these improvements, to reduce the response time and increase the precision of the algorithm, 50 ROIs are used rather than the standard value of 300. In our experiments, a region detected by the algorithm is considered correct if its intersection over union (IoU) with the ground truth is greater than 0.5. The model is trained for 100 epochs using ResNet-50 weights pre-trained on the ImageNet dataset. Our proposed technique achieved a precision, recall, F1-score and mean average precision of 77.3%, 89.0%, 82.7% and 71.3% respectively for DFU detection, which is better than the results obtained by the original Faster R-CNN.
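To make the IoU criterion concrete, the following self-contained Python sketch implements that check; boxes are assumed to be in (x1, y1, x2, y2) pixel coordinates, and the example boxes are hypothetical.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A predicted ulcer region counts as a correct detection (true positive)
# when its IoU with a ground-truth box exceeds 0.5.
predicted = (120, 80, 200, 160)     # hypothetical predicted box
ground_truth = (130, 90, 210, 170)  # hypothetical annotation
print(iou(predicted, ground_truth) > 0.5)  # True (IoU is about 0.62)
```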