The Jetstream CloudCast Newsletter

The Jetstream2 AI Fellows Program announces its first cohort

Group photo of the AI Fellows

The Jetstream2 AI Fellows Program, supported by the National AI Research Resource Pilot, equips established and early-career researchers and educators with the skills, knowledge, and experience necessary to leverage Jetstream2 in pioneering AI-based projects.

The five individuals selected for the Jetstream2 AI Fellows Program Pilot represent a wide range of disciplines and research areas, including health sciences, genetics, applied physics, computational social sciences, and engineering. In addition to contributing to their respective fields, the Jetstream2 AI Fellows will deepen their technical expertise and gain experience working with AI tools. As part of the program, fellows can tap into the latest AI resources and techniques without the limitations of their local infrastructure. Throughout the program, the fellows will consult with the cyberinfrastructure professionals on the Jetstream2 team, who will serve as architects and help the fellows translate their domain-specific goals into computational AI blueprints.

Meet the cohort

Jetstream2 launches large language model (LLM) inference service

Jetstream2 has recently unveiled a Large Language Model (LLM) inference service tailored to Jetstream2 users. The service provides access to advanced open-weight LLMs through two primary interfaces: a browser-based, ChatGPT-style chat interface via Open WebUI, and OpenAI-compatible inference APIs for seamless integration into projects and applications.

The Jetstream2 inference service is especially valuable because it provides unlimited access to powerful LLMs at no cost to researchers, educators, and students within the Jetstream2 community. Unlike LLMs running on personal machines or standard Jetstream2 instances, this service uses significantly larger and more capable models. Whether refining research papers, debugging code, brainstorming for projects, or summarizing complex texts, the Jetstream2 inference service can boost productivity and facilitate innovation.

Users can explore AI-driven workflows with confidence, knowing that security and privacy are key considerations for the Jetstream2 inference service. All data is processed exclusively within the IU Bloomington Data Center, ensuring that user interactions remain confidential. Prompt and response data are encrypted in transit, and the system does not store conversations or use data for AI training.

Users can engage with the broader Jetstream2 community for support and collaboration by joining the inference-service channel in the Jetstream2 community chat. This space enables discussions of best practices, troubleshooting, and ideas for how these LLMs can be effectively applied across various domains.

As the state of the art advances rapidly, the models offered are subject to change. Current models available include:

  • DeepSeek R1, a 671-billion-parameter chain-of-thought reasoning model
  • Llama 4 Scout, the latest vision-language model available on the service
  • Llama-3.3-70B-Instruct, a general-purpose instruct-tuned model

To access the inference service and connect with other users, visit the Inference Service Overview page.
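
For those who want to integrate the service into their own code, the sketch below shows what a chat completion request might look like against an OpenAI-compatible endpoint, written in Python with the openai client library. The endpoint URL, credential variables, and exact model identifier here are placeholders for illustration, not the service's published settings; see the Inference Service Overview page for the actual connection details.

    # Minimal sketch of calling an OpenAI-compatible inference API from Python.
    # The environment variable names and model identifier are placeholders,
    # not the service's published settings.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url=os.environ["JETSTREAM2_INFERENCE_BASE_URL"],  # hypothetical endpoint variable
        api_key=os.environ["JETSTREAM2_INFERENCE_API_KEY"],    # hypothetical credential variable
    )

    response = client.chat.completions.create(
        model="Llama-3.3-70B-Instruct",  # the exact model ID on the service may differ
        messages=[
            {"role": "system", "content": "You are a helpful research assistant."},
            {"role": "user", "content": "Suggest three concise titles for a paper on cloud-based AI infrastructure."},
        ],
    )

    print(response.choices[0].message.content)

Because the APIs follow OpenAI conventions, the same pattern works with most tools and libraries that already speak that API, which is what makes the service straightforward to drop into existing projects and applications.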

Learn more and try out the Jetstream2 inference service

Jetstream2 announcements and system updates

The latest news, updates, and announcements from the Jetstream2 team. 

Jetstream2 resource availability dashboard

Jetstream2's resource offerings are robust, but in times of high demand, we may run low on larger instance flavors and public (floating) IP addresses.

To help keep our community informed of Jetstream2's current capacity, we've created a dashboard showing vacant resources available for immediate use. If a given resource shows a yellow or red value, you may not be able to acquire that resource at the moment, but check back later, as these numbers fluctuate throughout the day. Please note: this data is not perfectly real-time; there is a display delay of approximately 5 minutes.

View the dashboard

Coming soon: Multi-card GPUs

With an increasing number of projects in the AI space, the last few months have seen an uptick in demand for Jetstream2's GPU flavors, as well as increasing requests for multi-card GPUs.

To help meet this need, our engineers are currently installing new hardware that will expand the amount and variety of GPU resources Jetstream2 can offer. We'll be introducing new multi-card GPU flavors that will be available on a limited basis, subject to usage approval.

We anticipate these new GPUs being online in late spring 2025. Be on the lookout for a coming announcement with more details!
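
Since the new flavors will appear alongside Jetstream2's existing OpenStack flavors, one way to see what your allocation can access once they arrive is to list flavors programmatically. The sketch below uses the openstacksdk Python library and assumes your Jetstream2 credentials are already configured (for example, in a clouds.yaml file or OS_* environment variables); the names of the forthcoming multi-card flavors have not been published, so nothing here should be read as the final naming.

    # Minimal sketch: list the instance flavors visible to your allocation
    # using openstacksdk. Assumes credentials are configured via clouds.yaml
    # or OS_* environment variables.
    import openstack

    conn = openstack.connect()  # picks up your configured cloud credentials

    for flavor in conn.compute.flavors():
        # Print each flavor's name with its vCPU count and memory size.
        print(f"{flavor.name}: {flavor.vcpus} vCPUs, {flavor.ram} MB RAM")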

Sunsetting support for Rocky Linux 8

With the release of RHEL/Rocky Linux 10 on the horizon and Rocky Linux 9 nearly three years old, Jetstream2 will no longer update the Rocky Linux 8 build in our featured images pipeline.

Users can still create instances from the existing Rocky Linux 8 featured image and are welcome to bring in their own Rocky Linux 8 images; however, Jetstream2 will consider this version to be in "maintenance only" status after the retirement.

Calling all Jetstream2 Manila shared filesystems users

Are you using Manila shared filesystems with Jetstream2? We want to hear from you! Your feedback is incredibly valuable in helping us fine-tune this service. Please take a moment to fill out this brief, 3-question survey about your experience.

Take the survey

As always, the Jetstream2 team actively welcomes community feedback that can help us improve. If you have comments or questions about our features, services, or training, let us know at help@jetstream-cloud.org!

Upcoming events

  • May 21: 2025 RMACC HPC Symposium. Jetstream2 will be presenting two sessions at the Rocky Mountain Advanced Computing Consortium's HPC Symposium this year: Collaborative Cloud Science: Deploying The Littlest JupyterHub on Jetstream2 and Deploy & Manage Kubernetes on Jetstream2 using OpenStack Magnum.
  • May 28–30: 2025 MS-CC Annual Meeting. Jetstream2 is a sponsor for the third annual meeting of the Minority Serving Cyberinfrastructure Consortium. Stop by and say hello!
  • July 20–24: PEARC 2025. Jetstream2 will be hosting one workshop and two tutorials at this year's Practice and Experience in Advanced Research Computing conference. Watch the event schedule for details!

View Jetstream2 training and events

Jetstream2 AI Fellow Advances Renewable Energy Research

Colin Bundschu uses CPU and Large Memory nodes on Jetstream2 to advance work on transition metal oxide replacements for costly platinum catalysts in hydrogen technologies.  

Read the article

Minnesota Summer Camp Includes Intro to Jetstream2

Macalester College Associate Professor of Computer Science Getiria Onsongo and fellow researchers will use Jetstream2 during their 2025 “Food, Agriculture, and U” summer camp for middle school students.

Read the article

 

An Algorithmic Answer for Safer Medicine

Undergraduate student David Cooper and his professor David Minh at Illinois Institute of Technology used a number of ACCESS resources, including Jetstream2, to develop technology that could speed up drug research.

Read the article

Training Chemistry Students and Faculty with ACCESS Resources

The National Science Foundation has awarded Sonoma State University's Mark J. Perri a nearly $1 million grant to enhance ChemCompute, a free online chemistry platform he created using Jetstream2.

Read the article