LTS Research Computing Services

We offer a high-performance computing cluster, Sol, and a Ceph-based storage cluster for the Lehigh research community.

  • Sol: an 85-node High Performance Computing Cluster. Sol is a condominium cluster that interested research groups can expand by investing in the Condo Program.
  • Ceph: a 600TB storage cluster based on the Ceph distributed storage system.

The computing and data storage resources are available for classroom use in registrar-scheduled courses. See Teaching Uses below.

All users of Lehigh's research computing systems must obtain an account, associated with their standard Lehigh University userid, for their exclusive use.

Compute Resources

An HPC account provides access to Sol, an 85-node condo cluster for research. A faculty member may obtain access to Sol by purchasing a minimum allocation of 50,000 core-hours, or service units (SUs), for $500. Additional allocations can be purchased in increments of 10,000 SUs for $100 each. An allocation cycle begins on Oct. 1 and ends on Sep. 30 of the following year; unused SUs do not roll over to the next allocation cycle. At launch, there are 1,400,000 SUs available per allocation cycle. Faculty members who require more computing time, or who want a guaranteed share of the total available computing time each year, should consider a Condo Investment.

The faculty member or PI can request additional accounts sharing the same allocation for an annual charge of $50 each. Each Service Level 2 user has a home directory quota of 150GB.
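
As a minimal sketch of the arithmetic above (illustrative only; allocation_cost is a hypothetical helper, not an official calculator, and current pricing should be confirmed with Research Computing):

    # Cost of a Sol allocation under the rates quoted above.
    BASE_SU = 50_000        # minimum allocation: 50,000 SUs for $500
    BASE_COST = 500
    INCREMENT_SU = 10_000   # additional blocks: 10,000 SUs for $100 each
    INCREMENT_COST = 100
    USER_FEE = 50           # each additional account, per year

    def allocation_cost(total_su, extra_users=0):
        """Annual cost of a Sol allocation of total_su service units."""
        total_su = max(total_su, BASE_SU)
        extra_su = total_su - BASE_SU
        increments = -(-extra_su // INCREMENT_SU)  # ceiling division
        return BASE_COST + increments * INCREMENT_COST + extra_users * USER_FEE

    # Example: 120,000 SUs with 3 additional accounts
    # = $500 + 7 x $100 + 3 x $50 = $1,350.
    print(allocation_cost(120_000, extra_users=3))  # -> 1350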

Account Request: To obtain an HPC account, an allocation, or both on Sol, the faculty member sponsoring the account should contact the Manager of Research Computing for more information.

Ceph Storage

In Fall 2018, Research Computing deployed a refreshed 600TB Ceph storage solution. Faculty, departments, centers, and colleges can purchase a Ceph project in units of 1TB.

  • Data is replicated across three disks on three nodes, securing it against simultaneous failure of two full nodes in the EWFM cluster.
  • Ceph software performs self-healing (maintaining three replicas) if one or two replicas are lost due to disk or node failure.
  • Ceph software performs daily and weekly data scrubbing to ensure replicas remain consistent and to guard against bit rot.
  • Data which is deleted is NOT recoverable.
  • Data is NOT protected against catastrophic cluster failures or loss of the EWFM datacenter.

Research groups can opt to back up their data to a secondary Ceph cluster, in which case:

  • Data is stored on two distinct clusters in two locations: the primary cluster in the EWFM datacenter and the backup cluster in the Packard datacenter.
  • Data is replicated across three disks on three nodes in each cluster, securing it against simultaneous failure of two full nodes in either cluster or of five nodes across both clusters.
  • Ceph software performs self-healing (maintaining three replicas) if one or two replicas are lost due to disk or node failure.
  • Ceph software performs daily and weekly data scrubbing to ensure replicas remain consistent and to guard against bit rot.
  • Data is snapshotted and stored weekly.
  • Data is protected in the event of catastrophic failure of the primary cluster or loss of the EWFM datacenter, so long as the Packard site remains operational.

Ceph Charges

  • All Ceph projects are purchased for a 5-year duration at a rate of $375/TB. Snapshots and backups are not provided at this rate.
  • PIs can request snapshots and backups to a secondary cluster for an additional $375/TB (5-year duration). A sketch of the cost arithmetic follows this list.
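
As a minimal sketch of that arithmetic (illustrative only; ceph_cost is a hypothetical helper, not an official calculator):

    # Total 5-year cost of a Ceph project under the rates quoted above.
    RATE_PER_TB = 375         # $/TB for a 5-year project
    BACKUP_RATE_PER_TB = 375  # optional snapshots/backups, $/TB for 5 years

    def ceph_cost(tb, with_backup=False):
        """Total 5-year cost of a Ceph project of tb terabytes."""
        rate = RATE_PER_TB + (BACKUP_RATE_PER_TB if with_backup else 0)
        return tb * rate

    # Example: a 4TB project with backups
    # = 4 x ($375 + $375) = $3,000 over 5 years.
    print(ceph_cost(4, with_backup=True))  # -> 3000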

HPC Research Groups and Ceph

Research groups that purchase a Ceph project of at least 1TB for a 5-year duration have the option of using their Ceph project space as their home directory on Sol. PIs who opt to use a Ceph project as the home directory, forgoing the 150GB home storage quota provided by the $50/user account fee, will have their annual user fees waived for the life of the Ceph project. Such PIs can add users to their Sol allocation at no charge during the 5-year life of the Ceph project.

Account Request: To obtain a Ceph project, please contact the Manager of Research Computing for more information.

Condo Investments

Faculty, departments, centers, and colleges can invest in Sol by purchasing additional compute nodes to support their research, thereby increasing the overall capacity of Sol. Such investors (Condo Investors) will be provided with an annual allocation proportional to their investment that can be shared with their collaborators. Condo Investors who need more computing time than their investment provides can purchase additional allocation, if available, in blocks of 10,000 SUs for $100. These increments must be expended during the allocation cycle in which they were purchased and cannot be rolled over to the next cycle.

A Condo Investor can request additional accounts sharing their allocation for an annual charge of $50 (each).

Prospective investors should review the Condo Program before contacting HPC staff about investing in Sol.

Additional Storage: Additional storage (Level 1 & 2) is available by purchasing a Ceph project volume at $200/TB/year. To request additional home directory storage, please submit a request at http://www.lehigh.edu/go/rchelp.
Definition of One Core-hour/Service Unit/SU: 1 hour of computing on 1 core. The base compute node on Sol has 20 cores and therefore consumes 20 SUs per hour of computing; a worked example follows these notes.
More Information: The Research Computing wiki contains details on using the HPC resources and can be accessed from on-campus locations or via VPN. Polaris and Service Level 2 resources are accessible via SSH. SSH clients are available on all Linux distributions and on macOS (via Terminal.app). Windows SSH/SFTP clients are available from software.lehigh.edu.
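
As a worked example of the SU definition above (a minimal sketch; su_used is a hypothetical helper, not an official tool):

    # Service units consumed by a job: cores x wall-clock hours.
    CORES_PER_NODE = 20  # base compute node on Sol

    def su_used(nodes, hours, cores_per_node=CORES_PER_NODE):
        return nodes * cores_per_node * hours

    # A 12-hour job on 4 base nodes consumes 4 x 20 x 12 = 960 SUs.
    print(su_used(nodes=4, hours=12))  # -> 960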

Teaching Uses of Research Computing Systems

Faculty members considering use of research computing facilities for teaching purposes should submit a request at http://www.lehigh.edu/go/rchelp at least eight weeks prior to the class start date, with an anticipated enrollment count, a proposed syllabus, and details of their proposed use of HPC systems.

These accounts are typically associated with a rostered course and last for the duration of that course (up to one semester). The course instructor can request these accounts for their students, and the department offering the course is responsible for paying the associated fees.

A course allocation provides 1TB of Ceph space and an SU allocation based on the number of students in the course. The fee is broken down as follows (a cost sketch follows the list):

  • 1TB Ceph space: $200

  • Charge per student: $15

    • Provides 500 SUs per student.

    • A course with 10 students will thus have a total allocation of 5,000 SUs shared among all students in the course.

  • Additional allocations can be purchased in units of:

    • 1,000 SUs for $10 each, for totals up to 10K SUs, and

    • 10,000 SUs for $100 each, for allocation sizes above 10K SUs.
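
As a minimal sketch of that fee structure (illustrative only; course_fee is a hypothetical helper, and the tiering of additional SUs follows one reading of the list above):

    # Course fee: $200 for 1TB Ceph space plus $15 per student,
    # with optional additional SU blocks priced as listed above.
    CEPH_FEE = 200
    PER_STUDENT_FEE = 15  # provides 500 SUs per student

    def course_fee(students, extra_su=0):
        fee = CEPH_FEE + students * PER_STUDENT_FEE
        if extra_su <= 10_000:
            fee += -(-extra_su // 1_000) * 10    # $10 per 1,000-SU block
        else:
            fee += -(-extra_su // 10_000) * 100  # $100 per 10,000-SU block
        return fee

    # Example: 10 students (5,000 SUs included) plus 3,000 extra SUs
    # = $200 + 10 x $15 + 3 x $10 = $380.
    print(course_fee(students=10, extra_su=3_000))  # -> 380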

Instructors requiring assistance with estimating total or per-student SU requirements for the course should contact Research Computing staff at least four weeks prior to the beginning of the semester.

Usage Policy: Student accounts cannot be shared and will remain active until two weeks past the end of the semester. All compute-intensive tasks must be submitted via the batch scheduler; a compute-intensive task is defined as any operation on the HPC resource other than editing, copying, moving, or deleting files, submitting and monitoring jobs, and issuing simple commands such as ls, cp, mv, mkdir, rm, tail, tar, gzip/gunzip, more, cat, and less. On request, LTS Research Computing staff will guest lecture on how to use the resource, write and submit job scripts, and monitor jobs. All student data not saved in the Ceph project space will be purged when accounts are deactivated.

Ceph Storage Resource

Faculty can request a Ceph project to provide storage resources for coursework independent of accounts on the Research Computing clusters. The cost of a Ceph allocation is $200/TB/year and must be paid by the department offering the course. Any request for a Ceph volume for coursework must be accompanied by:

  1. The total storage space requested.
  2. A list of students (rostered or auditing), the instructor, teaching assistants, and support staff (if any) who need access.
  3. The type of project volume, Managed or Open (see the Ceph FAQ).
    By default, a Managed volume will be created, with each account given 10GB of personal space and any remaining space shared among all users listed in item 2 above.
  4. Access to the Ceph volume will be for the current semester only, plus an additional two weeks following the end of the semester to allow students to back up or transfer their data.

Acknowledging use of LTS Research Computing Services

Please acknowledge Lehigh University in publications, reports, and presentations that utilize LTS Research Computing Services with the following statement:

Portions of this research were conducted with research computing resources provided by Lehigh University.

Last Updated: June 30, 2020