Initiative Lead Q&A: Akanksha Menon on Designing the Future of Buildings

Stylized headshot of Akanksha Menon

Akanksha Menon leads the “Multifunctional materials for energy-efficient buildings: from homes to data centers” research initiative for the Institute for Matter and Systems at Georgia Tech. Her research in this role focuses on advancing the efficiency and sustainability of buildings (from homes to data centers) using a materials-to-systems approach. Menon is an assistant professor in the George W. Woodruff School of Mechanical Engineering.

In this brief Q&A, Menon discusses her research focus, how it relates to the Institute for Matter and Systems’ core research areas, and the national impact of this initiative.

What is your field of expertise and at what point in your life did you first become interested in this area? 

My expertise is in energy systems and thermal science/engineering, and I direct the Water-Energy Research Lab (WERL) at Georgia Tech. In high school I read “Storms of My Grandchildren” by climate scientist James Hansen and became aware of global warming and how we are contributing to climate change. That got me interested in the field of energy systems and sustainability — in undergrad I realized that thermodynamics forms the basis of how we use/convert energy, and that heat is the most dominant end-use of energy. Since then, I have focused my research on waste heat recovery, energy storage, and advanced separations that leverage thermally responsive materials for clean energy and water.

What questions or challenges sparked your current research? 

I am passionate about developing technology solutions to global grand challenges, and I believe that clean energy and clean water are the two critical resources that can unlock everything else. The key challenge lies in making these technologies efficient and low-cost, and this is where functional materials and novel phase transitions can play an important role. 

Matter and systems refer to the transformational technological and societal systems that arise from the convergence of innovative materials, devices, and processes. Why is your initiative important to the development of the IMS research strategy? 

My initiative focuses on the built environment, i.e., buildings ranging from homes to data centers, and this is one of the four IMS research areas. Buildings not only consume energy and water but also must maintain thermal comfort for humans and machines (servers). This requires an integrated approach of designing multifunctional materials and hybrid systems, as well as evaluating their performance in relevant operating environments. This understanding can transform buildings into dynamic systems with optimal energy-water use. 

What are the broader global and social benefits of the research you and your team conduct? 

This research contributes directly to the sustainability, efficiency, and resiliency of buildings. This is especially timely given the boom of data centers all around us that will significantly impact our energy and water resources. 

What are your plans for engaging a wider Georgia Tech faculty pool with the Institute for Matter and Systems research?

This research requires interdisciplinary expertise – from materials scientists and thermal engineers to architects and experts in manufacturing and life cycle/technoeconomic analysis. To bring these different faculty together, I will organize a series of lunch-and-learn sessions and brainstorming meetings. Given the energy and sustainability themes, I also plan to engage with the Strategic Energy Institute (SEI) and the Brook Byers Institute for Sustainable Systems (BBISS) to potentially grow this initiative into a program or center in the future. The growth of data centers and energy manufacturing in Georgia, as well as our unique water resources in the Southeast, make this the right time and place to pursue this initiative.

 
News Contact

Amelia Neumeister | Research Communications Program Manager

The Institute for Matter and Systems

From Socrates to ChatGPT: The Ancient Lesson AI-powered Language Models Have Yet to Learn


An Adobe Stock AI-generated image of Socrates, sculpted in marble, looking contemplatively at a laptop.

Although developed by some of the brightest minds of the 21st century, AI-powered large language models (LLMs) could learn something from one of the greatest minds of the 5th century BCE.

Socrates, widely regarded as the founder of Western philosophy, declared, "I know that I know nothing." This simple statement highlights the wisdom of acknowledging the limits of one's own knowledge.

A simple statement, yes, but like some people, LLMs struggle with saying “I don’t know.” In fact, LLMs often can't admit that they don't know something because of the way they are trained, according to a research team that includes a Georgia Tech computer science (CS) professor.

Pre-training an LLM involves teaching it to predict the next word correctly by training on massive datasets of text, images, or other data. Models are then evaluated and adjusted based on their performance against standard benchmarks and are "rewarded" for preferred outputs or answers.
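To make the objective concrete, here is a toy sketch of the next-word prediction score that pre-training optimizes; the example sentence and probabilities are invented for illustration.

```python
import math

# Toy illustration of the pre-training objective described above: the model
# is scored on how much probability it assigns to the true next word
# (cross-entropy loss). The sentence and probabilities are invented.
predicted = {"mat": 0.6, "dog": 0.3, "moon": 0.1}  # P(next word | "the cat sat on the")
actual_next_word = "mat"
loss = -math.log(predicted[actual_next_word])  # lower is better; 0 only if P = 1
print(f"next-word loss: {loss:.3f}")  # ~0.511
```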

Current evaluation protocols, however, penalize non-responses the same as incorrect answers and do not include an "I don't know" option.

According to CS Professor Santosh Vempala, these pre- and post-training shortcomings are what lead to LLMs providing seemingly plausible but false responses known as hallucinations.

Vempala is a co-author of Why Language Models Hallucinate, a research study from OpenAI released in September. He says that there is a direct correlation between an LLM's hallucination rate and its misclassification rate regarding the validity of a given response.

"This means that if the model can't tell fact from fiction, it will hallucinate," Vempala said. 

"The problem persists in modern post-training methods for alignment, which are based on evaluation benchmarks that penalize 'I don't know' as much as wrong answers."

Because of the penalties for knowing that it knows nothing – to paraphrase Socrates – guessing is a more rewarding option for current LLMs than admitting uncertainty or ignorance.
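A minimal sketch, using assumed scores, shows why guessing dominates under a benchmark that gives credit only for correct answers:

```python
# Minimal sketch with assumed scores: 1 point for a correct answer, 0 for a
# wrong answer, and 0 for "I don't know."
def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected score on one question the model is only p_correct sure about."""
    if abstain:
        return 0.0          # admitting ignorance earns nothing
    return p_correct * 1.0  # guessing earns p_correct on average

# Even a wild guess with a 10% chance of being right beats abstaining:
print(expected_score(0.10, abstain=False))  # 0.1
print(expected_score(0.10, abstain=True))   # 0.0
```

Under this scoring, always guessing can never do worse than abstaining, which is exactly the incentive described above.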

The OpenAI paper incorporates and builds on prior work from Vempala and Adam Kalai, an OpenAI researcher and lead author of the current paper. Their earlier work found that LLM hallucinations are mathematically unavoidable for arbitrary facts, given current training methodologies.

"We've been talking about this for about two years. One corollary of our paper is that, for arbitrary facts, despite being trained only on valid data, the hallucination rate is determined by the fraction of missing facts in the training data," said Vempala, Frederick Storey II Chair of Computing and professor in the School of CS.

To illustrate this point, imagine you have a huge Pokémon card collection. Pikachu is so familiar that you can confidently describe its moves and abilities. However, accurately remembering facts about Pikachu Libre, an extremely rare card, would likely be more difficult.

“More to the point, if your collection has a large fraction of unique cards, then it is likely that you are still missing a large fraction of the overall set of cards. This is known as the Good-Turing estimate,” Vempala said.
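For readers who want to see the estimate itself, here is a small sketch of the Good-Turing calculation applied to the card-collection analogy; the collection is synthetic and the numbers are illustrative only.

```python
from collections import Counter

def good_turing_missing_mass(observations) -> float:
    """Good-Turing estimate of the probability mass of never-seen items:
    the fraction of observations that occurred exactly once."""
    counts = Counter(observations)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(observations)

# Toy "card collection": Pikachu appears often, but 40 of 100 cards are unique.
cards = ["pikachu"] * 50 + ["charizard"] * 10 + [f"rare_{i}" for i in range(40)]
print(good_turing_missing_mass(cards))  # 0.4 -> expect ~40% of cards still unseen
```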

[OpenAI Blog: Why Language Models Hallucinate]

According to Kalai and Vempala, the same is true for LLMs based on current training protocols.

“Think about country capitals,” Kalai said. “They all appear many times in the training data, so language models don’t tend to hallucinate on those.

“On the other hand, think about the birthdays of people’s pets. When those are mentioned in the training data, it may just be once.

“So, pre-trained language models will hallucinate on those. However, post-training can and should teach the model not to guess randomly on facts like those.”

Vempala thinks tinkering with pre-training methods could be risky because, overall, they work well and deliver accurate results. However, he and his co-authors offered suggestions for reducing the occurrence of hallucinations with changes to the evaluation and post-training process.

Among the team's recommended changes is that more value be placed on the accuracy of an LLM's responses rather than on how comprehensive their responses are. The team also suggests implementing what it refers to as “behavioral calibration.”

Using this methodology, LLMs would answer only when their confidence exceeds a target threshold, with thresholds tuned for different user domains and prompts. Evaluations would also penalize “I don’t know” responses and appropriate expressions of uncertainty less heavily than wrong answers.
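As a rough illustration of behavioral calibration, the sketch below answers only when a hypothetical confidence score clears a per-domain threshold; the thresholds, domains, and answers are assumptions, not values from the paper.

```python
# Illustrative sketch of behavioral calibration as described above: answer
# only when a (hypothetical) confidence score clears a per-domain threshold.
# The thresholds, domains, and answers are assumptions, not from the paper.
def calibrated_response(answer: str, confidence: float, domain: str) -> str:
    thresholds = {"medical": 0.95, "trivia": 0.60, "default": 0.75}
    if confidence >= thresholds.get(domain, thresholds["default"]):
        return answer
    return "I don't know"

print(calibrated_response("Paris", 0.99, "trivia"))     # confident -> "Paris"
print(calibrated_response("March 3", 0.40, "general"))  # uncertain -> "I don't know"
```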

Vempala believes that implementing some of these modifications could result in LLMs that are trained to be more cautious and truthful. This shift could lead to more intelligent systems in the future that can handle nuanced, real-world conversations more effectively.

"We hope our recommendations will lead to more trustworthy AI," said Vempala. "However, implementing these modifications to how LLMs are currently evaluated will require acceptance and support from AI companies and users."

Georgia Tech CS Professor Santosh Vempala shares his insights during a conference at the College of Computing. Vempala is a co-author of a recent OpenAI paper exploring why LLMs hallucinate.

Photo by Terence Rushin, College of Computing

 
News Contact

Ben Snedeker, Comms. Mgr. II
Georgia Tech College of Computing
albert.snedeker@cc.gatech.edu

Lack of Charging Station Data Deters Widespread Adoption of Electric Vehicles

Omar Isaac Asensio

Electric vehicles (EVs) can be environmentally friendly and more cost-effective — until drivers plan a road trip. Charging stations aren’t as prevalent as traditional gas stations, and even if they can be found along the route, they may not be functioning or may already be occupied by other cars. 

While EV charging locator apps can show drivers where the nearest charger is, they aren’t always accurate enough to show real-time status, such as whether a charger is working and available. How are drivers supposed to hit the road when they aren’t sure where their next charge is coming from? This uncertainty can be enough to deter drivers from purchasing an EV altogether.

New research from Georgia Tech, Harvard University, and the Massachusetts Institute of Technology suggests that state governments should step in to help. The right policy could inspire data transparency by station hosts, ensuring that EV drivers have reliable networks — and thus encourage EV ownership. The researchers presented their findings in the paper “Charger Data Transparency: Curing Range Anxiety, Powering EV Adoption,” published by Brookings in September.

Data Deserts

The researchers conducted a field experiment to discover the extent of the problem. Their analysis of six major interstates crossing 40 U.S. states showed that just 34% of EV charging stations provide real-time status updates. The researchers found 150- to 350-mile stretches without real-time charger availability data, longer than the stated range of many EV models. This leaves thousands of miles of highways in a data desert.

“We just don't have real-time data infrastructure necessary to build confidence in the reliability of charging, especially in communities along transit corridors,” said Omar Asensio, an associate professor in the Jimmy and Rosalynn Carter School of Public Policy. “It's not that the capability isn’t there. It's that there aren't clear incentives to encourage EV charging station operators to do the right thing and share the data.” 

Charging Transparency

Government regulation is necessary to improve charging reliability, according to the researchers. State governments could offer funding for charging stations only if the station host agrees to data transparency. A simpler policy proposal would be for all fast chargers on highways to post their real-time status to an application programming interface, where software developers could access it. This approach would provide reliable information on whether a public charger is operational, and it can make government spending more efficient by leveraging network effects. The research team is already collaborating with state governments from Massachusetts to Georgia to discuss how to make this government regulation a reality. 
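To give a sense of what such a feed could contain, here is a hypothetical status payload a fast charger might publish; every field name is invented for illustration, and no existing standard is implied.

```python
import json

# Hypothetical payload a fast charger might post to a public status API.
# Every field name is invented for illustration; no real standard is implied.
charger_status = {
    "station_id": "GA-I75-0042",   # made-up identifier
    "connector_type": "CCS",
    "operational": True,           # is the hardware working?
    "in_use": False,               # is a vehicle currently charging?
    "max_power_kw": 150,
    "last_updated": "2025-09-15T14:32:00Z",
}

print(json.dumps(charger_status, indent=2))
```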

State governments will also benefit, as EVs can help them meet their goals for reducing carbon emissions.

“Electric vehicles are a key strategy for decarbonizing the transportation sector and delivering public health co-benefits, but consumers need to trust that public chargers will work when they need them,” Asensio said. “Until real-time data disclosure standards are addressed, reliable, widespread adoption will be hard. A data-centric approach can enhance the efficiency of existing transportation investments.”

Many states, including Georgia, have also supported EV manufacturing. EV brand Rivian recently broke ground on an assembly plant outside Atlanta. More widespread EV adoption is paramount to making these plants economic successes. Data transparency regulations could be a start toward finally making EVs the ideal road trip vehicle. 

 
News Contact

Tess Malone, Senior Research Writer/Editor

tess.malone@gatech.edu

Georgia Tech’s Growing Climate Innovation Footprint: Reflections From Climate Week NYC

Collage of four images taken at the New York Climate Exchange 2025 events with Georgia Tech participants.

Beril Toktay with Amanda Ehrenhalt, Rohan Datta, and Kjersti Lukens at Super South; Patritsia Stathatou and Xiao Liu presenting at Climate Tech Fellows; Nicole Kennard with NYU colleagues at the Climate Storytelling Workshop.


Beril Toktay, Regents’ Professor and Brady Family Chair, Scheller College of Business
Executive Director, Brook Byers Institute for Sustainable Systems
Board of Directors, New York Climate Exchange

I returned from Climate Week NYC energized by what I witnessed: Georgia Tech faculty, students, and startups showcasing the breadth and depth of our climate innovation work on one of the world's biggest stages.

Climate Week NYC brings together more than 900 events, but what stood out wasn’t the scale — it was the substance. Across five New York Climate Exchange partner events, the Georgia Tech community demonstrated something essential: Georgia Tech bridges research and real-world impact where it matters most — in people’s lives.

At the Super South event, we flipped the script on where climate innovation happens and demonstrated the Southeast as a climate tech powerhouse. Too often, conversations about climate tech center on coastal hubs. But Georgia Tech-affiliated entrepreneurs Tarek Rakha (Lamarr.AI), Mya Love Griesbaum (Mycorrhiza Fashion), Joe Metzler (Metzev), Laura Stoy (Ph.D. ECE 2021, Rivalia Chemical), Charlie Cichetti (MGT 2004, Skema), Joseph Mooney (research engineer, School of Civil and Environmental Engineering, WattAir), Lewis Motion (MBA 2017, WEAV3D), and Ramtin Motahar (IE 2004, ECON 2004, M.S. AE 2017, Joulea) showed that the Southeast isn’t just participating in the clean energy transition — we’re leading it.

The Climate Tech Fellowship Showcase was personal. Seeing two Georgia Tech teams — Patritsia Stathatou and Christos Athanasiou’s yeast-based water purification system, and Xiao Liu’s AI-powered wildfire management platform — selected for the inaugural cohort reminded me why partnerships like the New York Climate Exchange matter. These early-stage innovators need more than good ideas. They need networks, mentorship, and funding pathways. NYCE provides those connections.

From flooding to batteries, two symposia highlighted GT faculty doing research that matters. At Weathering the Future, Iris Tien joined experts from AECOM, NVIDIA, and the NYC Department of Environmental Protection to discuss integrating resilience into urban infrastructure. Her work on coastal adaptation and infrastructure resilience addresses real vulnerabilities that cities face today. The Global Battery Alliance Leadership Meeting and Urban Battery Forum brought Yuanzhi Tang into conversations about building sustainable, circular battery value chains. As EVs scale and stationary storage grows, how we manage battery lifecycles — from securing raw resources to manufacturing to second-life reuse/recycling — will determine how we balance electrification, sustainability, environmental considerations, and economics; more details can be found in the NYCE report on battery circularity co-authored by Wyatt Williams (M.S. CEE 2024, MBA 2024).

Nicole Kennard’s leadership in the Climate Storytelling Workshop reinforced something I believe deeply: Technical solutions alone won’t solve the climate crisis. We need approaches that center community voices, acknowledge environmental justice concerns, and build trust. This became particularly clear in Kennard’s lecture for NYU’s Center for Urban Science and Progress: "Food, Place, and Belonging: From Global Visions to Local Sustainability." Presented with Janelle Wright (M CP 2022) from the West Atlanta Watershed Alliance, this lecture demonstrated how sustainable food systems can draw on global frameworks but must center community values and honor the history of place.

A few insights emerged from the week:

1. Geography matters — and so does bridging it. Collaborative platforms like NYCE that create genuine partnerships across regions will be more effective in achieving Georgia Tech’s vision of doing climate work that is grounded in Georgia and global in impact.

2. Visibility accelerates impact. Several faculty and entrepreneurs told me that Climate Week NYC opened doors — to investors, to funders, to partners, and to media. Platforms like NYCE amplify work that might otherwise stay local.

3. Students are passionate about climate opportunities. Every conversation about internships, fellowships, and experiential learning generated immediate interest. We need to build more pathways for students like Rohan Datta and Amanda Ehrenhalt to engage in climate work across both New York and Atlanta ecosystems — creating opportunities for hands-on experience, knowledge diffusion across regions, and the professional networks that will define their careers.

4. Our community extends far beyond campus. Meeting alumnus Alan Warren (PHYS 1978) drove this message home. Alan brings a unique vantage point on coastal resilience challenges faced in New York — and he’s energized by what our partnership can achieve. His offer to serve as Georgia Tech’s “envoy” in NYC, connecting our climate work to networks and opportunities there, is exactly the kind of volunteer leadership that accelerates impact. Alan’s own inspirational story of resilience and regeneration makes his commitment to climate resilience work even more meaningful.

Looking ahead, I see Georgia Tech’s partnership with the NYCE creating a powerful platform: NYCE amplifies our work through capital and convening; Georgia Tech anchors deployment with Southeast roots and global reach. Working alongside a distinguished board led by incoming chair Andrea Goldsmith, president of Stony Brook University, gives me confidence in this direction.

President Ángel Cabrera met with Goldsmith this week and reaffirmed our shared vision for bridging research and impact. “Georgia Tech’s mission has always been about translating knowledge into progress that serves society,” said Cabrera. “The New York Climate Exchange partnership exemplifies this commitment to innovative solutions that can be scaled to create real human impact. By connecting our strengths in community-engaged climate research with networks that can amplify and accelerate solutions, we’re living our motto of Progress and Service as we address one of humanity’s most urgent challenges.”

The Brook Byers Institute for Sustainable Systems (BBISS) convenes faculty, students, and partners to address sustainability challenges through research, education, and collaboration. Connect with BBISS on LinkedIn to be part of the ongoing discussion and/or reach out to Susan Ryan (susan.ryan@gatech.edu) to be added to BBISS’ climate science and solutions community of practice.

 
News Contact

Brent Verrill, Research Communications Program Manager, BBISS

Georgia Tech’s Nakia Melecio Named Fulbright Scholar

Nakia Melecio

For more than two decades, Nakia Melecio has helped researchers and entrepreneurs translate discoveries into real-world impact across biotechnology, aerospace, defense, energy, and medical technology. He’s helped launch and scale more than 1,500 startups worldwide, delivered over 15,000 hours of mentorship and training, and contributed to securing more than $400 million in funding for research-driven ventures. He has also led collaborations with NIH, ARPA-H, DOE, NASA, USAID, and universities across the globe. All of that work has now culminated in his recent selection as a Fulbright Scholar.

“Being named a Fulbright Scholar is both an honor and an opportunity to continue the work I love, helping transform breakthrough research into real-world impact,” said Melecio, director of the NSF I-Corps Southeast Hub, director of Georgia Tech’s Center for MedTech Excellence, and principal at VentureLab. “This recognition allows me to collaborate with global partners, strengthen innovation ecosystems, and expand pathways that move discoveries out of the lab and into society.”

Expanding Georgia Tech’s Global Reach

With the Fulbright Scholar recognition, Melecio will share Georgia Tech’s Lab to Market framework with international partners. The seven-week program, which he designed at Georgia Tech, guides teams from lab validation to commercialization and prepares them with customer discovery insights, regulatory strategies, and investor readiness. While newly developed, the framework is already being used at Georgia Tech and will now be extended globally through the Fulbright program.

Through his Fulbright project, Melecio will strengthen global startup ecosystems, share best practices in technology transfer, and support the commercialization of breakthrough research to address urgent societal challenges. He aims to advance research translation, while also building sustainable systems that create industries, jobs, and new economies.

“Nakia’s Fulbright recognition underscores the global reach of Georgia Tech’s innovation ecosystem, and his leadership in international startup development exemplifies our commitment to creating technology that improves lives around the world,” said Raghupathy “Siva” Sivakumar, chief commercialization officer and vice president of Commercialization at Georgia Tech. “We are incredibly proud of Nakia for earning this prestigious honor and look forward to the continued impact of his work supporting entrepreneurs worldwide.”

The Fulbright Scholar Program is the U.S. government’s flagship international academic exchange initiative, designed to strengthen partnerships and foster cross-cultural collaboration. Through this award, Melecio will bring Georgia Tech’s commercialization expertise to global partners, working side by side with researchers and entrepreneurs to accelerate technologies that address urgent challenges in health, energy, and economic development. From Atlanta to Ghana, Melecio’s work demonstrates the global reach of Georgia Tech’s innovation community.


Once-in-a-Decade Conference Spotlights School of IC Researchers

Cindy Lin

Three School of Interactive Computing researchers were chosen for paper presentations at one of the most selective and unique computing conferences in the world.

The Aarhus Conference, hosted by Aarhus University in Denmark, has been held every decade since 1975, addressing the most urgent and vital issues in computing worldwide. 

The latest conference, titled Computing (X) Crisis, took place in August and featured presentations, critiques, and workshops that explored computing’s influence on the human condition in a world filled with crises.

Assistant Professor Cindy Lin, Associate Professor Lynn Dombrowski, and School of IC Professor and Chair Shaowen Bardzell authored the paper Whose, Which, and What Crisis? A Critical Analysis of Crisis in Computing Supply Chains. It was one of only 15 papers selected by conference organizers.

In the paper, in which Lin is credited as the lead author, the researchers advance a theoretical framework for understanding crises that impact the computing supply chain.

Bardzell, who served as program chair of the 2015 Aarhus Conference, approached Dombrowski and Lin about collaborating on a paper submission. Bardzell said the conference gets more than 100 submissions and has a minuscule acceptance rate.

“I knew I was going to go no matter what because I enjoyed it so much 10 years ago,” Bardzell said. “I was fortunate to come together with Lynn and Cindy. We spent six months reading, thinking, and debating together every week, and it was a pleasure to write it together.”

The authors identified common themes in areas they were already researching and examined how these themes affected the computing supply chain.

“We wanted to think about what this word means in relation to computing,” Dombrowski said. “Who gets to take advantage of a crisis, or who can construct a crisis in relation to computing? What’s not being talked about when we use that word?”

Lin is studying the rise of data centers and their impact on the environment and consumers. Dombrowski is an expert on the labor market and unjust labor practices. Bardzell has conducted extensive research on how chip manufacturing affects farming and agriculture in her homeland of Taiwan.

“We don’t often think about computing research as intergenerational colleagues working together,” Lin said. “I feel like the three of us represent very interesting generations of computing research that’s tied to critically thinking about the social and political aspects of computing. Each of us has different ways of thinking about those things.”

In the paper, the three authors discuss the concept of “against crisis thinking,” which emphasizes that crises affecting the computing supply chain aren’t self-evident phenomena. Human-computer interaction scholars, they say, should pay more attention to how the word “crisis” is introduced into public discourse and how it can be exploited by powerful actors and impact marginalized communities.

“Some players get to declare what is a crisis and whom it affects,” Lin said. “They create solutions to resolve the crisis, but they might not address what a chronic experience of a crisis may be.”

Although Bardzell said she considers it an honor to present at a conference that is so selective and is held only once a decade, she was encouraged to be among researchers dedicated to solving pressing societal and planetary issues.

“Academia can appear as a cutthroat environment where you’re trying to establish your brand and be known for XYZ,” Bardzell said. “At Aarhus, there was a strong sense of community and working alongside each other, and we’re better because of the people who work alongside us.”

Lin agreed and said that participating in Aarhus is different from the annual conferences where the researchers normally submit papers. 

“There’s something special about reflecting every 10 years,” Lin said. “It shows how much has changed but also how much things have remained the same.”

 

The Future of Antarctic Ice: New Study Reveals the Mathematics of Meltwater Lakes

A view of Greenland's ice sheet from the NASA/USGS Landsat 8 satellite showing meltwater lakes on a glacier. (Credit: NASA)


Georgia Tech researchers have developed a mathematical formula to predict the size of lakes that form on melting ice sheets — discovering that their depth and span are linked to the topography of the ice sheet itself.

The team leveraged physics, model simulations, and satellite imagery to develop simple mathematical equations that can easily be integrated into existing climate models. It’s a first-of-its-kind tool that is already improving those models.

“Melt lakes play an important role in ice sheet stability, but previously, there were no constraints on what we would expect their maximum size to be in Antarctica,” says study lead Danielle Grau, a Ph.D. student in the School of Earth and Atmospheric Sciences. “I was intrigued by the idea of quantifying how much of a role we could expect them to play in the future.”

The paper, “Predicting mean depth and area fraction of Antarctic supraglacial melt lakes with physics-based parameterizations,” was published in Nature Communications. In addition to Grau, the research team includes School of Earth and Atmospheric Sciences Professor Alexander Robel, who is Grau’s advisor, and Azeez Hussain (PHYS 2025).

Their predictions show that the majority of these lakes will be less than a meter deep and span up to 40% of the ice sheet surface area.

“Many models don’t include any data about lakes on the surface of ice sheets, while others simulate these melt lakes growing until the ice collapses,” Robel says. “Our results show that the reality is somewhere in between — and that the maximum size of these lakes can be predicted using these new equations. This gives us real, concrete numbers to use in climate models.”

From summer project to satellite discovery 

Grau first started working on the project as an undergraduate student when she applied for a Summer Research Experiences for Undergraduates program hosted by the School of Earth and Atmospheric Sciences.

Inspired by terrestrial lake research, Grau and Robel investigated the “self-affinity” of the Antarctic ice sheet — a property associated with surface roughness across various scales. For example, a landscape like Badlands National Park, with many rolling hills of a wide range of sizes, would have a different self-affinity than a flat prairie with three large volcanoes.

“A previous study had used this property to predict the size of terrestrial lakes and ponds, and we were curious if we could use a similar approach for supraglacial lakes in Antarctica,” Grau says. “Establishing that the Antarctic ice sheet also has this property was the first step in pursuing this research in more depth.” 
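One common way to quantify self-affinity is to measure how surface roughness grows with horizontal scale. The sketch below estimates this roughness (Hurst) exponent for a synthetic random-walk profile standing in for real ice-sheet elevation data; it illustrates the property, not the paper's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def roughness_exponent(surface: np.ndarray, lags=(1, 2, 4, 8, 16, 32)) -> float:
    """Estimate the Hurst (roughness) exponent of a 1D elevation profile:
    for a self-affine surface, RMS height differences grow as lag**H."""
    rms = [np.sqrt(np.mean((surface[lag:] - surface[:-lag]) ** 2)) for lag in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(rms), 1)
    return slope

# A random walk is the classic self-affine profile, with H = 0.5.
profile = np.cumsum(rng.standard_normal(100_000))
print(roughness_exponent(profile))  # approximately 0.5
```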

The mathematics of melt

Grau continued the investigation as a Ph.D. student in Robel’s lab. Together, they unraveled the physics of how meltwater moves across the ice surface, designing a ‘glacier in a computer’ that mimics meltwater accumulation and movement across various topographies.

“We designed an algorithm and integrated it into a model that the GT Ice & Climate Group has used in the past,” Grau says. “From that, we were able to see how lakes would form on different surfaces across thousands of scenarios. This was the foundation for the mathematical equations I developed, which can predict the lake depth and lake surface area based on the self-affinity property.”
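As a toy illustration of the idea (not the team's actual algorithm), the sketch below fills every depression on a synthetic 1D transect to its spill point and reports the mean lake depth and the fraction of the surface covered. Real meltwater routing is more complex, and the random-walk profile here merely stands in for ice topography.

```python
import numpy as np

rng = np.random.default_rng(1)

def lake_depths(elevation: np.ndarray) -> np.ndarray:
    """Water depth at each point if every depression on a 1D profile fills
    to its spill point (the classic 'trapped rainwater' calculation)."""
    left_max = np.maximum.accumulate(elevation)
    right_max = np.maximum.accumulate(elevation[::-1])[::-1]
    water_level = np.minimum(left_max, right_max)
    return water_level - elevation

# Synthetic self-affine transect (a random walk) standing in for ice topography.
ice = np.cumsum(rng.standard_normal(10_000))
depths = lake_depths(ice)
lakes = depths > 0
print(f"mean depth over lakes: {depths[lakes].mean():.2f}")
print(f"area fraction covered: {lakes.mean():.2%}")
```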

To check their results, Grau enlisted the help of Hussain — then an undergraduate in the School of Physics — to examine satellite data from the Landsat satellite program (which captures detailed photography of the Earth’s surface from space) to measure existing supraglacial lakes and surface topography. 

“It was exciting to see how our predictions lined up with what we were seeing in the satellite imagery,” Robel explains. “This shows that our solution is a concrete avenue for climate models to realistically incorporate supraglacial lakes.”

Grau is already working to incorporate the team’s equations into an atmospheric model used by NASA in addition to an ice sheet model developed by the NASA Jet Propulsion Laboratory and Dartmouth College. 

“By turning complicated models and satellite data into simple predictive equations, we’re giving climate models a new lens to see the future,” she says. “It’s a small piece of the puzzle, but one that helps us understand how ice sheets respond to a warming world.”

 

Funding: NASA Modeling, Analysis, and Prediction Program

DOI: https://doi.org/10.1038/s41467-025-61798-8 

 
News Contact

Written by Selena Langner