GPU-Accelerated Parallel FDTD on Distributed Heterogeneous Platform

Bibliographic Details
Main Authors: Ronglin Jiang, Shugang Jiang, Yu Zhang, Ying Xu, Lei Xu, Dandan Zhang
Format: Article
Language:English
Published: Wiley 2014-01-01
Series:International Journal of Antennas and Propagation
Online Access:http://dx.doi.org/10.1155/2014/321081
author Ronglin Jiang
Shugang Jiang
Yu Zhang
Ying Xu
Lei Xu
Dandan Zhang
author_facet Ronglin Jiang
Shugang Jiang
Yu Zhang
Ying Xu
Lei Xu
Dandan Zhang
author_sort Ronglin Jiang
collection DOAJ
description This paper introduces a finite difference time domain (FDTD) code written in Fortran and CUDA for realistic electromagnetic calculations, parallelized with the Message Passing Interface (MPI) and Open Multiprocessing (OpenMP). Because both Central Processing Unit (CPU) and Graphics Processing Unit (GPU) resources are utilized, a faster execution speed can be reached than with a traditional pure GPU code. In our experiments, 64 NVIDIA Tesla K20m GPUs and 64 Intel Xeon E5-2670 CPUs are used to carry out the pure CPU, pure GPU, and CPU + GPU tests. Relative to the pure CPU calculations for the same problems, the speedup ratio achieved by the CPU + GPU calculations is around 14. Compared to the pure GPU calculations for the same problems, the CPU + GPU calculations show a 7.6%–13.2% performance improvement. Because of the small memory size of GPUs, the FDTD problem size is usually severely limited. However, this code can enlarge the maximum problem size by 25% without reducing the performance of a traditional pure GPU code. Finally, using this code, a microstrip antenna array with 16×18 elements is calculated and the radiation patterns are compared with those obtained by the Method of Moments (MoM). Results show good agreement between them.
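For readers unfamiliar with the method the abstract names, the core of any FDTD solver is the leapfrog Yee update of interleaved electric and magnetic fields. The sketch below is not the authors' Fortran/CUDA implementation: it is a minimal 1-D Python/NumPy illustration in normalized units (Courant number 1), with a single process standing in for one subdomain of the paper's MPI-decomposed 3-D grid; the function name `fdtd_1d` and all parameters are hypothetical.

```python
import numpy as np

def fdtd_1d(nz=200, nt=300, src_pos=50):
    """Minimal 1-D free-space FDTD (Yee scheme) with a soft Gaussian source.

    Normalized units: dt = dz/c, so the update coefficients are 1.
    ez lives on integer grid points, hy on the half-grid between them;
    the fixed ez endpoints act as perfectly conducting walls.
    """
    ez = np.zeros(nz)        # electric field, integer grid points
    hy = np.zeros(nz - 1)    # magnetic field, half grid points
    for n in range(nt):
        hy += np.diff(ez)                               # H update: curl of E
        ez[1:-1] += np.diff(hy)                         # E update: curl of H
        ez[src_pos] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
    return ez

fields = fdtd_1d()
print(float(np.abs(fields).max()))
```

In the hybrid scheme the abstract describes, the grid would instead be partitioned into subdomains, each stepped on a CPU core (OpenMP) or a GPU (CUDA), with one-cell halo layers of field values exchanged over MPI after every update; assigning some subdomains to host memory is what lets the problem exceed the GPUs' memory capacity.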
format Article
id doaj-art-ed596d3fd7cb4694b4123feef97e967d
institution Kabale University
issn 1687-5869
1687-5877
language English
publishDate 2014-01-01
publisher Wiley
record_format Article
series International Journal of Antennas and Propagation
spelling doaj-art-ed596d3fd7cb4694b4123feef97e967d
2025-02-03T01:11:08Z
eng
Wiley
International Journal of Antennas and Propagation
1687-5869
1687-5877
2014-01-01
2014
10.1155/2014/321081
321081
GPU-Accelerated Parallel FDTD on Distributed Heterogeneous Platform
Ronglin Jiang: Research and Development Department, Shanghai Supercomputer Center, Shanghai 201203, China
Shugang Jiang: School of Electronic Engineering, Xidian University, Xi’an 710071, China
Yu Zhang: School of Electronic Engineering, Xidian University, Xi’an 710071, China
Ying Xu: Research and Development Department, Shanghai Supercomputer Center, Shanghai 201203, China
Lei Xu: Research and Development Department, Shanghai Supercomputer Center, Shanghai 201203, China
Dandan Zhang: Research and Development Department, Shanghai Supercomputer Center, Shanghai 201203, China
http://dx.doi.org/10.1155/2014/321081
spellingShingle Ronglin Jiang
Shugang Jiang
Yu Zhang
Ying Xu
Lei Xu
Dandan Zhang
GPU-Accelerated Parallel FDTD on Distributed Heterogeneous Platform
International Journal of Antennas and Propagation
title GPU-Accelerated Parallel FDTD on Distributed Heterogeneous Platform
title_full GPU-Accelerated Parallel FDTD on Distributed Heterogeneous Platform
title_fullStr GPU-Accelerated Parallel FDTD on Distributed Heterogeneous Platform
title_full_unstemmed GPU-Accelerated Parallel FDTD on Distributed Heterogeneous Platform
title_short GPU-Accelerated Parallel FDTD on Distributed Heterogeneous Platform
title_sort gpu accelerated parallel fdtd on distributed heterogeneous platform
url http://dx.doi.org/10.1155/2014/321081
work_keys_str_mv AT ronglinjiang gpuacceleratedparallelfdtdondistributedheterogeneousplatform
AT shugangjiang gpuacceleratedparallelfdtdondistributedheterogeneousplatform
AT yuzhang gpuacceleratedparallelfdtdondistributedheterogeneousplatform
AT yingxu gpuacceleratedparallelfdtdondistributedheterogeneousplatform
AT leixu gpuacceleratedparallelfdtdondistributedheterogeneousplatform
AT dandanzhang gpuacceleratedparallelfdtdondistributedheterogeneousplatform