Abstract
Unmanned Aerial Vehicles (UAVs), commonly known as drones, have continued to gain popularity in recent years across various fields, ranging from the entertainment sector to the service industry. Their application areas expand every day, making them increasingly versatile. They are widely used in the defense industry, strengthening the strategic influence of the countries that possess them. In this context, object detection, tracking, and other customized tasks carried out on imagery obtained from UAVs have become significantly important. However, images obtained from UAVs are generally of low resolution and quality, as they must be captured from a safe flight distance, which is a disadvantage for object detection applications. To mitigate this disadvantage, various Super Resolution (SR) techniques have been developed. This paper addresses the critical importance of improvements in this field, especially within the defense sector, by utilizing ESRGAN and YOLO together to enhance the resolution of images captured from UAVs. The primary objective of this study is to improve the efficacy of object detection by simultaneously increasing the number of detected objects and improving the accuracy of the detection process. The research presents a comparative analysis of the outcomes achieved through two distinct approaches. First, object detection is performed with a pre-trained YOLO-V7 model on a low-resolution (LR) image extracted from the VisDrone Dataset. Then, the same YOLO-V7 model is applied, but object detection is carried out on the SR version of the same LR image obtained from the ESRGAN network. The findings demonstrate that conducting object detection on the SR image not only yields a notable increase in the number of detected objects but also leads to a significant improvement in the overall accuracy of the detection process.
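The comparative pipeline summarized above can be illustrated with the following minimal Python sketch. It assumes hypothetical wrapper objects for the pre-trained ESRGAN and YOLO-V7 models; the upscale and detect calls are illustrative placeholders rather than the exact APIs of the repositories used in the paper, and only the comparison logic is spelled out concretely.

# Minimal sketch of the LR-vs-SR detection comparison, assuming hypothetical
# model wrappers `esrgan_model` and `yolo_model` for the pre-trained networks.
from PIL import Image

def detections_summary(detections):
    """Return (count, mean confidence) for a list of (class_id, conf, bbox) tuples."""
    if not detections:
        return 0, 0.0
    mean_conf = sum(conf for _, conf, _ in detections) / len(detections)
    return len(detections), mean_conf

def compare_lr_vs_sr(lr_path, esrgan_model, yolo_model):
    # 1) Run the detector directly on the low-resolution VisDrone frame.
    lr_image = Image.open(lr_path).convert("RGB")
    lr_dets = yolo_model.detect(lr_image)        # hypothetical API

    # 2) Super-resolve the same frame with ESRGAN, then run the identical
    #    detector on the SR output.
    sr_image = esrgan_model.upscale(lr_image)    # hypothetical API
    sr_dets = yolo_model.detect(sr_image)

    # 3) Compare object counts and mean detection confidence.
    lr_count, lr_conf = detections_summary(lr_dets)
    sr_count, sr_conf = detections_summary(sr_dets)
    print(f"LR: {lr_count} objects, mean confidence {lr_conf:.2f}")
    print(f"SR: {sr_count} objects, mean confidence {sr_conf:.2f}")

The key design point reflected here is that the detector itself is held fixed: only the input resolution changes between the two runs, so any difference in object count or confidence can be attributed to the super-resolution step.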