Journal Article
The International Journal of High Performance Computing Applications, vol. 29, no. 1, pp. 107-116, 2014
Authors
Lynn Wood, Jeff Daily, Michael Henry, Bruce Palmer, Karen Schuchardt, Donald Dazlich, Ross Heikes, David Randall
Abstract
Fine cell granularity in modern climate models can produce terabytes of data in each snapshot, causing significant I/O overhead. To address this issue, a method is presented for reducing the I/O latency of high-resolution climate models by identifying and selectively outputting regions of interest. Working with a global cloud-resolving model running on up to 10,240 processors of a Cray XE6, the method significantly reduces I/O bandwidth requirements, depending on the frequency of writes and the size of the region of interest. The implementation challenges of determining global parameters in a strictly core-localized model and of properly formatting output files that contain only subsections of the global grid are addressed, as are the overall bandwidth impact and benefits of the method. The gains in I/O throughput allow dual output rates for high-resolution climate models: a low-frequency global snapshot as well as a high-frequency regional snapshot when events of particular interest occur.
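The core idea of the abstract — each process writing only the cells of its local tile that fall inside a region of interest, instead of the full global snapshot — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the rectangular latitude/longitude bounding box, and the toy grid are all assumptions for demonstration.

```python
import numpy as np

# Hypothetical sketch of region-of-interest (ROI) output selection.
# Each rank owns a local tile of the global grid; only cells inside
# the ROI are kept, shrinking the data volume written to disk.

def roi_mask(lat, lon, lat_bounds, lon_bounds):
    """Boolean mask of cells inside a rectangular lat/lon ROI."""
    return ((lat >= lat_bounds[0]) & (lat <= lat_bounds[1]) &
            (lon >= lon_bounds[0]) & (lon <= lon_bounds[1]))

def select_roi(field, lat, lon, lat_bounds, lon_bounds):
    """Return field values and coordinates for cells inside the ROI."""
    mask = roi_mask(lat, lon, lat_bounds, lon_bounds)
    return field[mask], lat[mask], lon[mask]

# Toy local tile: a 4x4 patch of a global grid owned by one process.
lat = np.array([[10., 10., 10., 10.],
                [20., 20., 20., 20.],
                [30., 30., 30., 30.],
                [40., 40., 40., 40.]])
lon = np.tile(np.array([100., 110., 120., 130.]), (4, 1))
field = np.arange(16.).reshape(4, 4)

# An ROI of lat 15-35, lon 105-125 keeps 4 of the 16 cells,
# so the process writes a quarter of its tile instead of all of it.
vals, roi_lat, roi_lon = select_roi(field, lat, lon, (15., 35.), (105., 125.))
print(vals.size)  # prints 4
```

In practice the selected cells must still be assembled into a well-formed output file that records only a subsection of the global grid — one of the implementation challenges the abstract notes — but the per-process filtering step above is where the bandwidth saving originates.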