General Computing Using CUDA Technology on NVIDIA GPU
DOI: https://doi.org/10.15584/jetacomps.2025.6.17

Keywords: CUDA, NVIDIA, GPU, Technology, Gaussian Blur, Parallel Computing

Abstract
The article presents a detailed analysis of the computing capabilities of the GPU (Graphics Processing Unit) using the NVIDIA Compute Unified Device Architecture (CUDA) compared to traditional sequential computing methods. For this purpose, an application implementing the Gaussian blur algorithm was developed. The paper then describes the methodology of a study comparing the efficiency of solving the problem across several test configurations, during which program execution times were measured and collected. The aim is to evaluate the computational capabilities of the GPU using NVIDIA CUDA against traditional sequential computing, with the comparison made through the developed application implementing the Gaussian blur algorithm. The article can serve as a valuable educational resource for teaching parallel programming and algorithm optimization using GPU and CUDA technologies. The analysis also provides a strong example of an educational project that combines algorithm theory with practical application in the context of improving computational performance.
License
Copyright (c) 2025 Journal of Education, Technology and Computer Science

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.