dc.description.abstract |
The manufacture of a wide variety of sweets is on the rise across Bengal (both Bangladesh and West Bengal). As a consequence, most people in our country cannot name every sweet they encounter. Advances in computer vision have made object recognition from photographs easier in recent years, yet automatically categorizing sweets remains challenging because of the visual similarity between varieties and confounding factors such as placement and lighting conditions. Sweet classification could be useful in a variety of domains, including autonomous commercial robots and mobile apps for identifying particular sweets on the market. In this article, we evaluated five deep convolutional neural network (DCNN) models for sweet detection: Inception-v3, ResNet-50, VGG15, AlexNet, and a baseline CNN, each trained on images of endemic Bengali delicacies. Our dataset comprised images of confections from thirteen distinct sweet categories and was divided into two portions: 80% for training and 20% for testing. The training portion was augmented to enlarge it and to ease training. With the Inception-v3 model, we attained a 100% accuracy rate on our dataset. |
en_US |
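
The 80/20 train/test split described in the abstract can be sketched as follows. This is a minimal illustrative example, not the authors' code: the function name, the seed, and the placeholder file names are all assumptions, and in practice the split would be applied to real image paths before augmentation.

```python
import random

def train_test_split(samples, train_frac=0.8, seed=42):
    """Shuffle and split a list of (image, label) samples.

    Hypothetical helper illustrating the 80%/20% split described
    in the abstract; all names here are assumptions.
    """
    rng = random.Random(seed)          # fixed seed for a reproducible split
    shuffled = samples[:]              # copy so the input list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Placeholder dataset: 13 sweet categories, 10 dummy images each
samples = [(f"img_{c}_{i}.jpg", c) for c in range(13) for i in range(10)]
train, test = train_test_split(samples)
print(len(train), len(test))  # 104 26
```

Only the training portion would then be passed to an augmentation pipeline, keeping the test set untouched so that the reported accuracy reflects unseen images.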