
Integrating logical and physical file models in the MPI-IO implementation for Clusterfile
Type:

Conference paper

Year:

2006 

Author:

Florin Isailă, David Singh, Jesús Carretero, Félix Garcia, Gábor Szeder, Thomas Moschny 

Links: PDF

Abstract

This paper presents the design and implementation of the MPI-IO interface for the Clusterfile parallel file system. The approach offers the opportunity of achieving a high correlation between the file access patterns of parallel applications and the physical file distribution. First, any physical file distribution can be expressed by means of MPI data types. Second, mechanisms such as views and collective I/O operations are portably implemented inside the file system, unifying the I/O scheduling strategies of the MPI-IO library and the file system. The experimental section demonstrates performance benefits of more than one order of magnitude.


Bibtex

@inproceedings{isaila06integrating,
  author={Florin Isailă and David Singh and Jes{\'u}s Carretero and F{\'e}lix Garcia and G{\'a}bor Szeder and Thomas Moschny},
  title={Integrating logical and physical file models in the MPI-IO implementation for Clusterfile},
  year=2006,
  booktitle={Sixth IEEE International Symposium on Cluster Computing and the Grid (CCGRID)},
  publisher={IEEE Computer Society},
  url={https://ps.ipd.kit.edu/downloads/ka_2006_integrating_logical_physical_file_models_mpi_io.pdf},
  abstract={This paper presents the design and implementation of the MPI-IO interface for the Clusterfile parallel file system. The approach offers the opportunity of achieving a high correlation between the file access patterns of parallel applications and the physical file distribution. First, any physical file distribution can be expressed by means of MPI data types. Second, mechanisms such as views and collective I/O operations are portably implemented inside the file system, unifying the I/O scheduling strategies of the MPI-IO library and the file system. The experimental section demonstrates performance benefits of more than one order of magnitude.}
}