TESLA GPUs versus MPI with OpenMP for the Forward Modeling of Gravity and Gravity Gradient of Large Prisms Ensemble
An implementation in CUDA for a single and for multiple graphics processing units (GPUs) is presented for the forward modeling of gravitational fields from a three-dimensional volumetric ensemble composed of unitary prisms of constant density. We compared the performance results obtained with the GPUs against a previous version coded in OpenMP with MPI...
Main Authors: | Carlos Couder-Castañeda, Carlos Ortiz-Alemán, Mauricio Gabriel Orozco-del-Castillo, Mauricio Nava-Flores |
---|---|
Format: | Article |
Language: | English |
Published: | Wiley, 2013-01-01 |
Series: | Journal of Applied Mathematics |
Online Access: | http://dx.doi.org/10.1155/2013/437357 |
_version_ | 1832552523915853824 |
---|---|
author | Carlos Couder-Castañeda Carlos Ortiz-Alemán Mauricio Gabriel Orozco-del-Castillo Mauricio Nava-Flores |
author_facet | Carlos Couder-Castañeda Carlos Ortiz-Alemán Mauricio Gabriel Orozco-del-Castillo Mauricio Nava-Flores |
author_sort | Carlos Couder-Castañeda |
collection | DOAJ |
description | An implementation in CUDA for a single and for multiple graphics processing units (GPUs) is presented for the forward modeling of gravitational fields from a three-dimensional volumetric ensemble composed of unitary prisms of constant density. We compared the performance results obtained with the GPUs against a previous version coded in OpenMP with MPI, and we analyzed the results on both platforms. Today, the use of GPUs represents a breakthrough in parallel computing that has led to the development of applications in various fields. Nevertheless, in some applications the decomposition of the tasks is not trivial, as can be appreciated in this paper. Instead of a trivial decomposition of the domain, we proposed to decompose the problem by sets of prisms and to use separate memory spaces per CUDA processing core, avoiding the performance decay that would result from the constant kernel-function calls required by a parallelization over observation points. The design and implementation created are the main contributions of this work, because the parallelization scheme implemented is not trivial. The performance results obtained are comparable to those of a small processing cluster. |
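The decomposition described in the abstract can be sketched in plain Python: each worker is assigned a subset of the prisms and accumulates its contribution into a private copy of the observation grid, which is reduced once at the end — mirroring the separate memory spaces per processing core that the authors use to avoid repeated kernel launches. The prism response here is the classical closed-form expression for the vertical attraction of a homogeneous right rectangular prism; the function names, axis convention (z positive downward), and worker count are illustrative assumptions, not taken from the paper.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2


def prism_gz(prism, obs):
    """Vertical gravity attraction of one homogeneous rectangular prism.

    prism = (x1, x2, y1, y2, z1, z2, density); obs = (px, py, pz).
    Classical closed-form solution; the overall sign depends on the
    chosen axis convention (here z is taken positive downward).
    """
    x1, x2, y1, y2, z1, z2, rho = prism
    px, py, pz = obs
    xs = (x1 - px, x2 - px)   # prism corners shifted so obs is the origin
    ys = (y1 - py, y2 - py)
    zs = (z1 - pz, z2 - pz)
    total = 0.0
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):
            for k, z in enumerate(zs):
                r = math.sqrt(x * x + y * y + z * z)
                mu = (-1) ** (i + j + k)  # alternating corner sign
                total += mu * (x * math.log(y + r)
                               + y * math.log(x + r)
                               - z * math.atan2(x * y, z * r))
    return G * rho * total


def forward_by_prisms(prisms, grid, nworkers=4):
    """Decompose by sets of prisms, as the abstract describes.

    Each worker gets a chunk of prisms and a private accumulator over the
    whole observation grid; on a GPU the chunks would run concurrently.
    """
    chunks = [prisms[w::nworkers] for w in range(nworkers)]
    partials = [[0.0] * len(grid) for _ in chunks]
    for acc, chunk in zip(partials, chunks):  # each pass = one worker
        for prism in chunk:
            for n, obs in enumerate(grid):
                acc[n] += prism_gz(prism, obs)
    # reduction step: sum the private grids into the final field
    return [sum(acc[n] for acc in partials) for n in range(len(grid))]
```

Decomposing over prisms keeps each observation point's loop inside a single pass, whereas decomposing over observation points would require one kernel invocation per batch of points — the launch overhead the abstract says the authors avoid.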
format | Article |
id | doaj-art-b480b568629a43268725bae3376e80c8 |
institution | Kabale University |
issn | 1110-757X 1687-0042 |
language | English |
publishDate | 2013-01-01 |
publisher | Wiley |
record_format | Article |
series | Journal of Applied Mathematics |
spelling | doaj-art-b480b568629a43268725bae3376e80c82025-02-03T05:58:33ZengWileyJournal of Applied Mathematics1110-757X1687-00422013-01-01201310.1155/2013/437357437357TESLA GPUs versus MPI with OpenMP for the Forward Modeling of Gravity and Gravity Gradient of Large Prisms EnsembleCarlos Couder-Castañeda0Carlos Ortiz-Alemán1Mauricio Gabriel Orozco-del-Castillo2Mauricio Nava-Flores3Mexican Petroleum Institute, Eje Central Lázaro Cárdenas 152, Colonia San Bartolo Atepehuacan, 07730 México, DF, MexicoMexican Petroleum Institute, Eje Central Lázaro Cárdenas 152, Colonia San Bartolo Atepehuacan, 07730 México, DF, MexicoMexican Petroleum Institute, Eje Central Lázaro Cárdenas 152, Colonia San Bartolo Atepehuacan, 07730 México, DF, MexicoDivisión de Ingeniería en Ciencias de la Tierra, Facultad de Ingeniería, Universidad Nacional Autónoma de México, Circuito Interior S/N, Colonia Ciudad Universitaria, 04510 México, DF, MexicoAn implementation in CUDA for a single and for multiple graphics processing units (GPUs) is presented for the forward modeling of gravitational fields from a three-dimensional volumetric ensemble composed of unitary prisms of constant density. We compared the performance results obtained with the GPUs against a previous version coded in OpenMP with MPI, and we analyzed the results on both platforms. Today, the use of GPUs represents a breakthrough in parallel computing that has led to the development of applications in various fields. Nevertheless, in some applications the decomposition of the tasks is not trivial, as can be appreciated in this paper. Instead of a trivial decomposition of the domain, we proposed to decompose the problem by sets of prisms and to use separate memory spaces per CUDA processing core, avoiding the performance decay that would result from the constant kernel-function calls required by a parallelization over observation points.
The design and implementation created are the main contributions of this work, because the parallelization scheme implemented is not trivial. The performance results obtained are comparable to those of a small processing cluster.http://dx.doi.org/10.1155/2013/437357 |
spellingShingle | Carlos Couder-Castañeda Carlos Ortiz-Alemán Mauricio Gabriel Orozco-del-Castillo Mauricio Nava-Flores TESLA GPUs versus MPI with OpenMP for the Forward Modeling of Gravity and Gravity Gradient of Large Prisms Ensemble Journal of Applied Mathematics |
title | TESLA GPUs versus MPI with OpenMP for the Forward Modeling of Gravity and Gravity Gradient of Large Prisms Ensemble |
title_full | TESLA GPUs versus MPI with OpenMP for the Forward Modeling of Gravity and Gravity Gradient of Large Prisms Ensemble |
title_fullStr | TESLA GPUs versus MPI with OpenMP for the Forward Modeling of Gravity and Gravity Gradient of Large Prisms Ensemble |
title_full_unstemmed | TESLA GPUs versus MPI with OpenMP for the Forward Modeling of Gravity and Gravity Gradient of Large Prisms Ensemble |
title_short | TESLA GPUs versus MPI with OpenMP for the Forward Modeling of Gravity and Gravity Gradient of Large Prisms Ensemble |
title_sort | tesla gpus versus mpi with openmp for the forward modeling of gravity and gravity gradient of large prisms ensemble |
url | http://dx.doi.org/10.1155/2013/437357 |
work_keys_str_mv | AT carloscoudercastaneda teslagpusversusmpiwithopenmpfortheforwardmodelingofgravityandgravitygradientoflargeprismsensemble AT carlosortizaleman teslagpusversusmpiwithopenmpfortheforwardmodelingofgravityandgravitygradientoflargeprismsensemble AT mauriciogabrielorozcodelcastillo teslagpusversusmpiwithopenmpfortheforwardmodelingofgravityandgravitygradientoflargeprismsensemble AT mauricionavaflores teslagpusversusmpiwithopenmpfortheforwardmodelingofgravityandgravitygradientoflargeprismsensemble |