
The Computational Facilities of the Biggs Institute NeuroImaging Core

All Biggs Institute high-performance computing equipment was consolidated into the Advanced Data Center (ADC), including the newly installed GENIE cluster system (see below). The ADC is managed by Information Management and Services (IMS).

The Genomics, Epigenetics, NeuroImaging, advanced computational analytics Environment (GENIE) HPC Resources

GENIE is a high-performance computing (HPC) cluster developed in July 2020. It is collaboratively funded by the Biggs Institute and the GCCRI.


Equipment Configuration and Description 

GENIE consists of two (2) head nodes, two (2) login nodes, one (1) storage node, eighteen (18) general-purpose compute nodes, two (2) high-memory nodes, five (5) NVIDIA T4 GPU nodes, and five (5) NVIDIA V100S GPU nodes. In total, the cluster provides 600 Xeon CPU cores, 9 NVIDIA T4 GPU cards, 9 NVIDIA V100S GPU cards, 12.3 TB of aggregate RAM, and 57 TB of mirrored SSD scratch space. Cluster communication runs over an InfiniBand switch that provides up to 200 Gb/s of full bidirectional bandwidth per port. GENIE combines cutting-edge technologies with massive computing resources and the technical sophistication required for advanced biomedical research: it handles enormous MRI and genomics datasets from hundreds of thousands of subjects, processing them under the most demanding fast-turnaround requirements with sophisticated deep learning and artificial intelligence applications for medical image processing and genomic data interpretation.
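To illustrate how work is typically submitted on a cluster of this kind, the sketch below shows a minimal batch script requesting a single GPU, assuming a Slurm scheduler; the partition name (gpu), module name, and script path are hypothetical, since this document does not specify GENIE's scheduler configuration.

    #!/bin/bash
    #SBATCH --job-name=mri-train        # descriptive job name
    #SBATCH --partition=gpu             # hypothetical GPU partition name
    #SBATCH --nodes=1                   # one GPU node
    #SBATCH --cpus-per-task=8           # CPU cores for data loading
    #SBATCH --gres=gpu:1                # request one GPU card (T4 or V100S)
    #SBATCH --mem=64G                   # memory for the job
    #SBATCH --time=24:00:00             # wall-clock limit

    # Load a site-provided Python environment (module name is an assumption)
    module load python

    # Train a deep learning model on the allocated GPU
    python train_model.py --data /scratch/$USER/mri_data

Once saved (for example, as train.slurm), such a script would be submitted with sbatch train.slurm and monitored with squeue.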

The Texas Advanced Computing Center (TACC)


The Biggs Institute Neuroimaging Core has established a strong and close collaboration with the Texas Advanced Computing Center (TACC), one of the most powerful academic computing resources in the world (more information about TACC is available online: https://www.tacc.utexas.edu). The Core's code and analysis pipelines take advantage of the supercomputing power and vast storage capacity at TACC to analyze cohort-based studies with very large imaging databases, such as the UK Biobank project. Briefly, the TACC HPC resources consist of several stand-alone clusters, including:

  • Stampede: One of the top 10 supercomputers in the world, with 10 petaFLOPS dedicated to scientific research. This Dell PowerEdge cluster is equipped with 6,400 nodes. Each node has two 8-core Xeon E5 processors, one 61-core Xeon Phi coprocessor, and 32 GB of RAM.

  • The Lonestar cluster has 1,252 Cray XC40 compute nodes, each with two 12-core Intel Xeon processors, for a total of 30,048 compute cores and 45 TB of aggregate RAM.

  • The Maverick cluster has 132 nodes, each with two Intel Xeon E5-2680 v2 (Ivy Bridge) sockets (10 CPU cores per socket) and 250 GB of RAM, dedicated to memory-intensive computation.

  • Ranch is a Sun Microsystems StorageTek mass storage facility. It has 2 petabytes of online storage for staging data transfers and capacity for 160 petabytes of offline tape storage (a brief transfer sketch follows this list).
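As a sketch of that archival workflow, large derived datasets are typically bundled and copied to Ranch over SSH. The paths, archive name, and account below are placeholders, and ranch.tacc.utexas.edu is assumed here as the archive's transfer endpoint.

    # Bundle a derived imaging dataset into a single archive (paths are placeholders)
    tar czf ukb_derivatives.tar.gz /scratch/$USER/ukb_derivatives

    # Copy the archive to Ranch's tape-backed storage over SSH
    scp ukb_derivatives.tar.gz $USER@ranch.tacc.utexas.edu:

Bundling many small files into one archive before transfer is the usual practice for tape-backed systems, which handle a few large files far more efficiently than many small ones.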

Other Server Computers at UTHSA for Neuroimage Analysis and Bioinformatics Tasks

  • 7x Linux servers (a mix of 96-core, 64-core, 40-core, and 16-core systems, totaling more than 200 computation cores), each with 512 GB of RAM and up to 30 TB of storage per system for complex computation (a brief usage sketch follows this list).

  • 1x 1 PB of shared data storage at the UTH Advanced Data Center, covered by a paid annual license agreement with the UTH Advanced Data Center.

  • 1x 10 Gb/s network connection to the University's central network and storage support, as well as to University of Texas system-wide resources, including UTSA's CBI facility.
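As an illustration of how per-subject neuroimaging tasks are commonly spread across the cores of such servers, the sketch below uses GNU parallel; the processing script, subject list, and job count are hypothetical.

    # Run a per-subject processing script on 40 cores at once
    # (process_subject.sh and subjects.txt are hypothetical placeholders)
    parallel -j 40 ./process_subject.sh {} < subjects.txt

Each line of subjects.txt is substituted for {}, so up to 40 subjects are processed concurrently on a single many-core server.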
