SDSC Thread Graphic Issue 13, April 2007

User Services Director:
Anke Kamrath

Editor:
Subhashini Sivagnanam

Graphics Designer:
Diana Diehl

Application Designer:
Fariba Fana


Featured Story Corner

New GPFS-WAN at SDSC

—Mahidhar Tatineni

Users now have access to the new TeraGrid GPFS-WAN (Global Parallel File System-Wide Area Network). The new GPFS-WAN is a 700-TB storage system mounted on several TeraGrid platforms across the country. GPFS-WAN is currently mounted on the TeraGrid Linux clusters at SDSC, NCSA, and UC/ANL, as well as all the DataStar p655 and p690 nodes and the Blue Gene at SDSC.

GPFS-WAN has three distinct partitions, each with its own policy for access, allocation, and data preservation.

  • The Long-term Collections Area is a 150-TB partition for data collections that need the unique functionality of a global file system. Users can request this collection space through the same peer-review process used for compute allocations, by submitting a Data Allocation Proposal via the Partnerships Online Proposal System (POPS).
  • The Project Area is a 475-TB partition intended as temporary, unpurged space for multi-site analysis. Any TeraGrid project (i.e., an active allocation award) that can benefit from the unique capabilities of the global file system can request space by submitting the GPFS-WAN Projects Space Request Form. Quotas are determined and enforced based on the space approved under this request, and the space remains available for the duration of the TeraGrid project specified in the request. Data will be removed from GPFS-WAN at the end of the project.
  • The Scratch Area is a 75-TB partition accessible to all TeraGrid users without submitting any request. It can be used for short-term analysis before moving data to archival storage. Use is not limited by quota; the 75 TB is shared among all active users. Users can simply access the partition from any of the resources that mount GPFS-WAN, create directories, and store their data. Inactive files in this partition are purged regularly once they are more than two weeks old; a short sketch for spotting files approaching the purge window appears after this list.
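
For users who want to keep an eye on the purge window, here is a minimal Python sketch that lists files in a scratch directory whose last-access time is more than two weeks old. The path /gpfs-wan/scratch/your_username and the use of access time as the purge criterion are illustrative assumptions, not a statement of the actual purge implementation.

    import os
    import time

    # Hypothetical path -- substitute your actual GPFS-WAN scratch directory.
    SCRATCH_DIR = "/gpfs-wan/scratch/your_username"
    PURGE_AGE = 14 * 24 * 3600   # two weeks, in seconds

    now = time.time()
    for root, dirs, files in os.walk(SCRATCH_DIR):
        for name in files:
            path = os.path.join(root, name)
            # Assumption: inactivity is judged by last-access time (atime).
            if now - os.path.getatime(path) > PURGE_AGE:
                print(path)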

Data on the old GPFS-WAN:
Since April 1st, 2007, the old GPFS-WAN has been mounted read-only at /oldgpfs-wan. During this time, SDSC staff migrated the data from the projects area of the old GPFS-WAN to the projects area of the new GPFS-WAN. Please e-mail us at consult@sdsc.edu if your data has not been migrated from the old /gpfs-wan projects area or if you have any questions regarding the migration.

Features of the new GPFS-WAN:
The new system consists of 16 p575 nodes, each with eight cores and 16 GB of memory. Each node is divided into two Logical Partitions (LPARs), for a total of 32 NSD servers, and each NSD server provides 2 Gb/s of bandwidth. The total capacity of GPFS-WAN has increased from ~250 TB (mirrored) to ~700 TB (RAID 6). Redundancy is built into many components of the system: all of the nodes are directly and redundantly connected to the storage; the storage controllers are redundant; and the RAID 6 configuration (two parity disks per RAID set) ensures that a set can survive the failure of up to two disks. The power, management, and Ethernet networks are redundantly connected as well.
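
As a back-of-the-envelope figure (assuming all 32 NSD servers can drive their 2 Gb/s links concurrently, and ignoring protocol and file-system overhead), the theoretical aggregate server bandwidth works out to:

    16 nodes × 2 LPARs = 32 NSD servers
    32 NSD servers × 2 Gb/s = 64 Gb/s, or roughly 8 GB/s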

Did you know ..?

that SDSC has limited the core file size to 32 MB?
To make good use of this limited core file size, we recommend using the MP_COREFILE_FORMAT environment variable (or its associated command-line flag, -corefile_format) to set the format of corefiles to lightweight corefiles.
See the Thread article in Issue 4, February 2006, for more details.
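
Below is a minimal sketch of launching a job with lightweight corefiles enabled, assuming a POE launch through the poe command. The value "light_core" and the executable name ./my_app are illustrative assumptions; check the IBM Parallel Environment documentation (or the Thread article above) for the exact settings on DataStar.

    import os
    import subprocess

    env = dict(os.environ)
    # Assumption: setting this to a name such as "light_core" selects
    # lightweight corefiles; the exact accepted values are in the IBM PE docs.
    env["MP_COREFILE_FORMAT"] = "light_core"

    # Equivalent to passing -corefile_format on the poe command line.
    subprocess.run(["poe", "./my_app"], env=env)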