ExTreme Collaboration, Innovation, and Technology
The ExTreme Collaboration, Innovation, and Technology (xCITE) laboratory is a state-of-the-art software development and data/visual analytics innovation facility within the Atmospheric Sciences Research Center (ASRC). xCITE is a multidisciplinary collaboration space open to the UAlbany community as well as to public and private partners. With a unique combination of atmospheric scientists and computer engineers, the lab equips the scientific community with the tools and resources needed to take their research to the next level.
Facilities and Equipment
CPU/GPU-Based Desktop Scientific Visualization and ML Platforms
The xCITE laboratory contains three high-end CPU/GPU-based desktop scientific visualization and Machine Learning (ML) platforms. These three work systems are ideal for model development, testing, hyperparameter optimization, postprocessing, and visualization.
- These systems allow for the development and deployment of AI, machine learning, and deep learning applications across local GPU resources, from workstations to data center solutions, with eventual deployment to cloud-based infrastructure.
- Scientific visualization capabilities, with links to a 3x3 HD multi-tile display wall providing a pixel space of 18.7 million pixels.
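The quoted 18.7-million-pixel figure follows directly from the tile geometry; a quick check, assuming each HD tile is 1920x1080 (the source says only "HD", so the exact tile resolution is an assumption):

```python
# Sanity check of the display wall's quoted pixel space: a 3x3 grid of HD tiles.
tiles_wide, tiles_high = 3, 3
tile_px = 1920 * 1080                       # pixels per assumed 1080p tile
total_px = tiles_wide * tiles_high * tile_px
print(total_px)                             # 18662400, i.e. ~18.7 million pixels
```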
High-end water-cooled Linux servers
- 32 CPU cores
- 128GB of system memory
- 1-3 NVIDIA Titan RTX GPUs
- NVMe flash storage
- Containers to meet specialized needs
- Easy and secure access
- Web graphical user interface
AI Deep Learning Server
For final machine learning operations, such as training on bulk images, the xCITE Lab has a fourth, much more powerful “big iron” system: an ultra-high-end AI Deep Learning server (DGX-1). The server is housed in the University at Albany’s Tier-3 Data Center and is managed and maintained by the xCITE laboratory.
- Deep learning frameworks with pre-installed libraries
- Hundreds of pre-configured models available instantly
- Containerization tools to provide fully built and pre-configured environments for instant use
- Industry-leading deep learning and accelerated analytics applications
- Resources to implement deep learning algorithms, using weather data on some of the fastest machines currently available
- Access to massive parallel graphics processors providing higher throughput for compute intensive workloads at a fraction of the cost
- No hidden fees for data transfers and storage
- Docker swarm with resource management
- Docker image customization
- Isolation of individual containers and GPUs
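The per-container GPU isolation listed above is commonly expressed through Docker's NVIDIA device requests. A minimal, illustrative Compose fragment follows; the service name and image tag are hypothetical placeholders, not the lab's actual configuration:

```yaml
# Hypothetical sketch: pin one GPU to a single training service so that
# containers sharing the server do not contend for the same device.
services:
  train:
    image: nvcr.io/nvidia/pytorch:24.01-py3   # example NGC container image
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ["0"]               # restrict this container to GPU 0
              capabilities: [gpu]
```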
The DGX-1 V100 system contains 8 Tesla V100 GPUs with a combined total of 40,960 CUDA (graphics) cores, 5,120 Tensor Cores, 256GB of GPU memory, and 1TB of RAM/system memory, all linked by NVIDIA’s 300GB/s NVLink interconnect. The system employs dual 20-core Intel Xeon E5-2698 v4 CPUs running at 2.2GHz and has 68TB of SSD storage configured in RAID 6 for redundancy and speed.
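The aggregate figures above are consistent with eight Tesla V100s at their published per-GPU specifications (5,120 CUDA cores, 640 Tensor Cores, and 32GB of HBM2 each; the 32GB variant is implied by the 256GB total). A quick arithmetic check:

```python
# Verify the DGX-1 aggregate figures from per-GPU Tesla V100 specs.
gpus = 8
cuda_cores_per_gpu = 5120     # per Tesla V100
tensor_cores_per_gpu = 640    # per Tesla V100
hbm2_gb_per_gpu = 32          # 32GB variant, implied by the 256GB total

print(gpus * cuda_cores_per_gpu)    # 40960 CUDA cores
print(gpus * tensor_cores_per_gpu)  # 5120 Tensor Cores
print(gpus * hbm2_gb_per_gpu)       # 256 GB of GPU memory
```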