Antenna Testbed Will Help Boost Training for Military Aircrews

Researcher installing XPAT system in a test chamber

Radar systems using active electronically scanned array (AESA) technology are playing increasingly critical roles today, protecting warfighters and civilians alike as part of air and missile defense systems. These systems are also critical tools on modern test and training ranges, allowing aircrews to train against accurate simulations of the real-world threats they may face. 
 

To accelerate modernization of these systems, researchers at the Georgia Tech Research Institute (GTRI) have developed a novel system known as XPAT (X-band polarization-diverse AESA testbed). The system was developed for operation in airborne and ground-based applications that must be reconfigurable to meet a variety of mission requirements.
 

Now being tested in GTRI antenna ranges, XPAT includes advances such as state-of-the-art transmit-receive modules and patent-pending cold plates that are optimized to reduce thermal variances between electronic components. XPAT is designed to serve as a building block for future phased array radars in a variety of sizes and shapes, while allowing ease of assembly and maintenance using specialized interfaces for power, control, cooling, and radio frequency connections.

Read more in the GTRI newsroom


 

News Contact

Contact: gtri.media@gtri.gatech.edu

Rozell Named Inaugural Executive Director of New Neuroscience Institute

Christopher Rozell, a first-generation scholar and interdisciplinary researcher, serves as the inaugural executive director of Georgia Tech’s Institute for Neuroscience, Neurotechnology, and Society (INNS).

Christopher Rozell, Julian T. Hightower Chaired Professor in the School of Electrical and Computer Engineering, will serve as the inaugural executive director of Georgia Tech’s new Institute for Neuroscience, Neurotechnology, and Society (INNS). 

INNS is one of two new Interdisciplinary Research Institutes (IRIs) launched at Georgia Tech on July 1. Dedicated to advancing neuroscience and neurotechnology, the institute aims to drive societal progress through discovery, innovation, and public engagement. By bridging disciplines across the sciences, engineering, computing, ethics, policy, and the humanities, INNS will serve as a collaborative hub for exploring the brain in all its complexity — from molecular mechanisms to behavior and cognition, and from foundational research to clinical and technological applications.  

“Our neuro-related research community has built such a strong transdisciplinary vision for an IRI that I remain fully committed to its growth, even as we face a period of extreme uncertainty about federal research funding,” said Vice President for Interdisciplinary Research Julia Kubanek. “In fact, under Chris’s leadership I expect INNS to make our faculty more competitive and successful, bringing Georgia Tech closer to patient communities living with neurological conditions so that our research increasingly impacts people’s lives. INNS will also connect artists, social scientists, neuroscientists and engineers with entrepreneurial opportunities and non-traditional funding pipelines.” 

The launch of INNS builds on more than a decade of groundwork laid by Georgia Tech’s neuroscience community. Rozell has played a key role in shaping the vision for INNS as a member of the Neuro Next Initiative’s executive committee, and before that, as a steering committee member as the initiative was developed. The executive committee included Simon Sponberg, Dunn Family Associate Professor in the School of Physics and the School of Biological Sciences; Jennifer Singh, associate professor in the School of History and Sociology; and Sarah Peterson, Neuro Next Initiative program manager. 

“I'm excited to serve the INNS community in this next phase to build on the momentum generated across campus over many years,” said Rozell. “The brain is one of the great remaining frontiers, where discovery and innovation can unlock the future of human health and flourishing. INNS is uniquely positioned to lead in the modern interdisciplinary research necessary to address this grand challenge.” 

Rozell brings a unique blend of technical expertise, interdisciplinary leadership, and public engagement to his role as the inaugural executive director of INNS. His work spans neuroscience, data and computer science, neuroengineering, and cognitive science, with a particular focus on developing scalable brain stimulation therapies for treatment-resistant depression. Rozell also serves on advisory boards for organizations at the forefront of neuroethics and scientific rigor, reflecting his commitment to responsible innovation. 

Interdisciplinary from the outset, Rozell’s training in neuroscience has been shaped by a unique educational path that bridges engineering, the arts, machine learning, neuroscience, and translational research. He holds a Bachelor of Fine Arts in Music alongside his engineering degrees and has developed multiple initiatives that incorporate the arts into neuroscience research and public engagement.

Rozell’s research has been widely recognized, with over 130 peer-reviewed publications, multiple patents, and invitations to speak at high-profile venues, including a U.S. Congressional briefing celebrating the NIH BRAIN Initiative. A first-generation scholar, Rozell co-founded Neuromatch, a nonprofit dedicated to building an inclusive global neuroscience community. His contributions have earned him numerous honors, including the James S. McDonnell Foundation 21st Century Science Initiative Scholar Award, election as a Fellow of the American Institute for Medical and Biological Engineering, and Georgia Tech’s top teaching accolades, underscoring his impact both in and beyond the lab.

News Contact

Audra Davidson
Research Communications Program Manager
Institute for Neuroscience, Neurotechnology, and Society

Georgia Tech Students Help Illuminate Coffee County’s History

Georgia Tech student Brice Minix accepts the Award of Excellence from the American Association of State & Local History in September 2023.

Georgia Tech students played a pivotal role in the award-winning Coffee County Memory Project, an oral history initiative that preserves the stories of school desegregation in rural Georgia.

Launched in 2016, the project was supported by the Institute’s Sustainable Communities Summer Internship Program, run by the Center for Serve-Learn-Sustain (now the Center for Sustainable Communities Research and Education), in which students work full time with community partners across Atlanta and Georgia.

Beginning in 2017, trusted advisers contributed to the success of this work, including Vernon E. Jordan Jr., Christopher Lawton, Ann McCleary and G. Wayne Clough. Clough, who served as Georgia Tech’s president from 1994 to 2008, long advocated for public service, community-engaged research, and interdisciplinary teaching and learning.

In 2019, Georgia Tech students and participating interns Brice Minix and Nabil Patel combed through decades of local newspapers, digitized school board records, and conducted interviews with community members who lived in Coffee County during desegregation. In 2020, Kara Vaughan Adams and Bennett Bush transcribed countless interviews. Samina Patel’s contributions in 2020 and 2021 included graphic and web design.

All their work laid the foundation for two virtual museum exhibits: emergingVOICES of Coffee County and Overcoming Segregation: A Journey Through Coffee County’s Forgotten Stories. The latter received the 2023 Award of Excellence from the American Association of State and Local History. Further recognition came this year when the project earned the 2025 Georgia Association of Museums’ Special Project Award for the PLAYBACK & FASTFORWARD seminar series.

T. Cat Ford, project director, said, “The Serve-Learn-Sustain interns we partnered with from Georgia Tech were all graduates of Coffee High School. Their efforts turbo-charged our work—not only because they worked tirelessly but also because, as they preserved their own history, they offered valuable insights into their lived experience of this legacy.”

Learn more about SCoRE’s Sustainable Communities Internship Program.

News Contact

Jennifer Martin, Assistant Director of Research Communications Services

Study: New AI Tool Deciphers Mysteries of Nanoparticle Motion in Liquid Environments

Schematic showing nanoparticles in the microfluidic chamber of liquid-phase transmission electron microscopy

Nanoparticles – the tiniest building blocks of our world – are constantly in motion, bouncing, shifting, and drifting in unpredictable paths shaped by invisible forces and random environmental fluctuations. 

Better understanding their movements is key to developing better medicines, materials, and sensors. But observing and interpreting their motion at the atomic scale has presented scientists with major challenges.

However, researchers in Georgia Tech’s School of Chemical and Biomolecular Engineering (ChBE) have developed an artificial intelligence (AI) model that learns the underlying physics governing those movements. 

The team’s research, published in Nature Communications, enables scientists to not only analyze, but also generate realistic nanoparticle motion trajectories that are indistinguishable from real experiments, based on thousands of experimental recordings.

A Clearer Window into the Nanoworld

Conventional microscopes, even extremely powerful ones, struggle to observe moving nanoparticles in fluids. And traditional physics-based models, such as Brownian motion, often fail to fully capture the complexity of unpredictable nanoparticle movements, which can be influenced by factors such as viscoelastic fluids, energy barriers, or surface interactions.
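
The Brownian baseline mentioned above can be made concrete with a minimal simulation (illustrative only, not from the study): classical Brownian motion predicts that a particle's mean squared displacement (MSD) grows linearly in time, which is exactly the behavior that viscoelastic fluids, energy barriers, and surface interactions cause real nanoparticles to deviate from. All parameter values here are arbitrary.

```python
import numpy as np

# Illustrative sketch: classical Brownian motion in 2D predicts that mean
# squared displacement grows linearly in time, MSD(t) = 2*d*D*t = 4*D*t.
rng = np.random.default_rng(0)

D = 1.0            # diffusion coefficient (arbitrary units)
dt = 0.01          # time step
n_steps = 1000
n_particles = 500
d = 2              # 2D trajectories, as in a microscopy image plane

# Each step is Gaussian with variance 2*D*dt per dimension.
steps = rng.normal(scale=np.sqrt(2 * D * dt), size=(n_particles, n_steps, d))
trajectories = np.cumsum(steps, axis=1)

# MSD averaged over particles; the fitted slope should be close to 4*D.
t = np.arange(1, n_steps + 1) * dt
msd = np.mean(np.sum(trajectories**2, axis=2), axis=0)
slope = np.polyfit(t, msd, 1)[0]
print(f"fitted MSD slope: {slope:.2f} (theory: {4 * D:.2f})")
```

Anomalous diffusion of the kind LPTEM reveals shows up as an MSD that bends away from this straight line.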

To overcome these obstacles, the researchers developed a deep generative model (called LEONARDO) that can analyze and simulate the motion of nanoparticles captured by liquid-phase transmission electron microscopy (LPTEM), allowing scientists to better understand nanoscale interactions invisible to the naked eye. Unlike traditional imaging, LPTEM can observe particles as they move naturally within a microfluidic chamber, capturing motion down to the nanometer and millisecond.

“LEONARDO allows us to move beyond observation to simulation,” said Vida Jamali, assistant professor and Daniel B. Mowrey Faculty Fellow in ChBE@GT. “We can now generate high-fidelity models of nanoscale motion that reflect the actual physical forces at play. LEONARDO helps us not only see what is happening at the nanoscale but also understand why.”

To train and test LEONARDO, the researchers used a model system of gold nanorods diffusing in water. They collected more than 38,000 short trajectories under various experimental conditions, including different particle sizes, frame rates, and electron beam settings. This diversity allowed the model to generalize across a broad range of behaviors and conditions. 

The Power of LEONARDO’s Generative AI

What distinguishes LEONARDO is its ability to learn from experimental data while being guided by physical principles, said study lead author Zain Shabeeb, a PhD student in ChBE@GT. LEONARDO uses a specialized “loss function” based on known laws of physics to ensure that its predictions remain grounded in reality, even when the observed behavior is highly complex or random.

“Many machine learning models are like black boxes in that they make predictions, but we don’t always know why,” Shabeeb said. “With LEONARDO, we integrated physical laws directly into the learning process so that the model’s outputs remain interpretable and physically meaningful.”
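
The article does not reproduce the paper's actual loss function; as a purely hypothetical sketch of the general idea, a physics-based penalty can be added to an ordinary data-fit term so that generated trajectories are pulled toward a known law, such as the Brownian MSD scaling. The function name and weighting below are illustrative assumptions, not the study's implementation.

```python
import numpy as np

# Hypothetical sketch (not the paper's actual loss): a "physics-informed"
# loss combines a data-fit term with a penalty that pushes generated
# trajectories toward a known law -- here, 2D Brownian scaling MSD(t) = 4*D*t.
def physics_informed_loss(generated, observed, D, dt, weight=0.1):
    # Data term: mean squared error between generated and observed
    # positions; arrays have shape (n_particles, n_steps, 2).
    data_loss = np.mean((generated - observed) ** 2)

    # Physics term: penalize deviation of the generated trajectories'
    # mean squared displacement from the Brownian prediction.
    n_steps = generated.shape[1]
    t = np.arange(1, n_steps + 1) * dt
    msd = np.mean(np.sum(generated ** 2, axis=2), axis=0)
    physics_loss = np.mean((msd - 4.0 * D * t) ** 2)

    return data_loss + weight * physics_loss

# Stationary "trajectories" with D = 0 satisfy both terms exactly.
example = np.zeros((10, 50, 2))
print(physics_informed_loss(example, example, D=0.0, dt=0.01))  # 0.0
```

Minimizing such a combined loss keeps a model's outputs anchored to physics even when the training data are noisy or sparse.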

LEONARDO uses a transformer-based architecture, the same kind of model behind many modern language AI applications. Just as a language model learns grammar and syntax, LEONARDO learns the "grammar" of nanoparticle movement, identifying hidden patterns in the ways nanoparticles interact with their environment.

Future Impact

By simulating vast libraries of possible nanoparticle motions, LEONARDO could help train AI systems that automatically control and adjust electron microscopes for optimal imaging, paving the way for “smart” microscopes that adapt in real time, the researchers said.

“Understanding nanoscale motion is of growing importance to many fields, including drug delivery, nanomedicine, polymer science, and quantum technologies,” Jamali said. “By making it easier to interpret particle behavior, LEONARDO could help scientists design better materials, improve targeted therapies, and uncover new fundamental insights into how matter behaves at small scales."

CITATION: Zain Shabeeb, Naisargi Goyal, Pagnaa Attah Nantogmah, and Vida Jamali, “Learning the diffusion of nanoparticles in liquid phase TEM via physics-informed generative AI,” Nature Communications, 2025.

Vida Jamali, assistant professor in Georgia Tech's School of Chemical and Biomolecular Engineering

News Contact

Space: The Current Frontier

Composite image of Europa behind Azadeh Ansari holding a computer chip that combines many sensors into one small package.

Right now, about 70 million miles away, a Ramblin’ Wreck from Georgia Tech streaks through the cosmos. It’s a briefcase-sized spacecraft called Lunar Flashlight that was assembled in a Georgia Tech Research Institute (GTRI) cleanroom in 2021, then launched aboard a SpaceX rocket in 2022. 

The plan was to send Lunar Flashlight to the moon, where the spacecraft would shoot lasers at its south pole in a search for frozen water. Mission control for the flight was on Georgia Tech’s campus, where students in the Daniel Guggenheim School of Aerospace Engineering (AE) sat in the figurative driver’s seat. They worked for several months in 2023 to coax the craft toward its intended orbit in coordination with NASA’s Jet Propulsion Lab (JPL). 

A faulty propulsion system kept the CubeSat from reaching its goal. Disappointing, to be sure, but it opened a new series of opportunities for the student controllers. When it became clear that Lunar Flashlight wouldn’t reach the moon and would instead settle into an orbit around the sun, JPL turned ownership over to Georgia Tech. It’s now the only higher education institution that has controlled an interplanetary spacecraft.

Lunar Flashlight’s initial orbit, planned destination, and current whereabouts mirror much of the College of Engineering’s research in space technology. Some faculty are focused on projects in low Earth orbit (LEO). Others have an eye on the moon. A third group is looking well beyond our small area of the solar system. 

No matter the distance, though, each of these Georgia Tech engineers is working toward a new era of exploration and scientific discovery.

Meet them in the latest issue of Helluva Engineer magazine.

News Contact

Jason Maderer
College of Engineering

AI in Healthcare Could Save Lives and Money — But Change Won’t Happen Overnight

AI will help human physicians by analyzing patient data prior to surgery. Boy_Anupong/Moment via Getty Images

Imagine walking into your doctor’s office feeling sick – and rather than flipping through pages of your medical history or running tests that take days, your doctor instantly pulls together data from your health records, genetic profile and wearable devices to help decipher what’s wrong.

This kind of rapid diagnosis is one of the big promises of artificial intelligence for use in health care. Proponents of the technology say that over the coming decades, AI has the potential to save hundreds of thousands, even millions of lives.

What’s more, a 2023 study found that if the health care industry significantly increased its use of AI, up to US$360 billion annually could be saved.

But though artificial intelligence has become nearly ubiquitous, from smartphones to chatbots to self-driving cars, its impact on health care so far has been relatively low.

A 2024 American Medical Association survey found that 66% of U.S. physicians had used AI tools in some capacity, up from 38% in 2023. But most of it was for administrative or low-risk support. And although 43% of U.S. health care organizations had added or expanded AI use in 2024, many implementations are still exploratory, particularly when it comes to medical decisions and diagnoses.

I’m a professor and researcher who studies AI and health care analytics. I’ll try to explain why AI’s growth will be gradual, and how technical limitations and ethical concerns stand in the way of AI’s widespread adoption by the medical industry.

Inaccurate Diagnoses, Racial Bias

Artificial intelligence excels at finding patterns in large sets of data. In medicine, these patterns could signal early signs of disease that a human physician might overlook – or indicate the best treatment option, based on how other patients with similar symptoms and backgrounds responded. Ultimately, this will lead to faster, more accurate diagnoses and more personalized care.

AI can also help hospitals run more efficiently by analyzing workflows, predicting staffing needs and scheduling surgeries so that precious resources, such as operating rooms, are used most effectively. By streamlining tasks that take hours of human effort, AI can let health care professionals focus more on direct patient care.

But for all its power, AI can make mistakes. Although these systems are trained on data from real patients, they can struggle when encountering something unusual, or when data doesn’t perfectly match the patient in front of them.

As a result, AI doesn’t always give an accurate diagnosis. This problem is called algorithmic drift – when AI systems perform well in controlled settings but lose accuracy in real-world situations.

Racial and ethnic bias is another issue. If data includes bias because it doesn’t include enough patients of certain racial or ethnic groups, then AI might give inaccurate recommendations for them, leading to misdiagnoses. Some evidence suggests this has already happened.

Data-Sharing Concerns, Unrealistic Expectations

Health care systems are labyrinthian in their complexity. The prospect of integrating artificial intelligence into existing workflows is daunting; introducing a new technology like AI disrupts daily routines. Staff will need extra training to use AI tools effectively. Many hospitals, clinics and doctor’s offices simply don’t have the time, personnel, money or will to implement AI.

Also, many cutting-edge AI systems operate as opaque “black boxes.” They churn out recommendations, but even their developers might struggle to fully explain how. This opacity clashes with the needs of medicine, where decisions demand justification.

But developers are often reluctant to disclose their proprietary algorithms or data sources, both to protect intellectual property and because the complexity can be hard to distill. The lack of transparency feeds skepticism among practitioners, which then slows regulatory approval and erodes trust in AI outputs. Many experts argue that transparency is not just an ethical nicety but a practical necessity for adoption in health care settings.

There are also privacy concerns; data sharing could threaten patient confidentiality. To train algorithms or make predictions, medical AI systems often require huge amounts of patient data. If not handled properly, AI could expose sensitive health information, whether through data breaches or unintended use of patient records.

For instance, a clinician using a cloud-based AI assistant to draft a note must ensure no unauthorized party can access that patient’s data. U.S. regulations such as the HIPAA law impose strict rules on health data sharing, which means AI developers need robust safeguards.

Privacy concerns also extend to patients’ trust: If people fear their medical data might be misused by an algorithm, they may be less forthcoming or even refuse AI-guided care.

The grand promise of AI is a formidable barrier in itself. Expectations are tremendous. AI is often portrayed as a magical solution that can diagnose any disease and revolutionize the health care industry overnight. Unrealistic assumptions like that often lead to disappointment. AI may not immediately deliver on its promises.

Finally, developing an AI system that works well involves a lot of trial and error. AI systems must go through rigorous testing to make certain they’re safe and effective. This takes years, and even after a system is approved, adjustments may be needed as it encounters new types of data and real-world situations.

Incremental Change

Today, hospitals are rapidly adopting AI scribes that listen during patient visits and automatically draft clinical notes, reducing paperwork and letting physicians spend more time with patients. Surveys show over 20% of physicians now use AI for writing progress notes or discharge summaries. AI is also becoming a quiet force in administrative work. Hospitals deploy AI chatbots to handle appointment scheduling, triage common patient questions and translate languages in real time.

Clinical uses of AI exist but are more limited. At some hospitals, AI is a second eye for radiologists looking for early signs of disease. But physicians are still reluctant to hand decisions over to machines; only about 12% of them currently rely on AI for diagnostic help.

Suffice it to say that health care’s transition to AI will be incremental. Emerging technologies need time to mature, and the short-term needs of health care still outweigh long-term gains. In the meantime, AI’s potential to treat millions and save trillions awaits.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

News Contact
Authors:

Turgay Ayer, professor of Industrial and Systems Engineering, Georgia Institute of Technology

Media Contact:

Shelley Wunder-Smith
shelley.wunder-smith@research.gatech.edu

New AI Tool Deciphers Mysteries of Nanoparticle Motion in Liquid Environments

Computer generated model of nanoparticles

A study from Georgia Tech’s School of Chemical and Biomolecular Engineering introduces LEONARDO, a deep generative AI model that reveals the hidden dynamics of nanoparticle motion in liquid environments. By analyzing over 38,000 experimental trajectories captured through liquid-phase transmission electron microscopy (LPTEM), LEONARDO not only interprets but also generates realistic simulations of nanoscale movement. This innovation marks a major leap in understanding the physical forces at play in nanotechnology, with promising implications for medicine, materials science, and sensor development.

Read the full story.

News Contact
Brad Dixon | Communications Manager

School of Chemical and Biomolecular Engineering

 

Pancaked Water Droplets Help Launch Europe’s Fastest Supercomputer

ExaMFlow Droplet

JUPITER became the world’s fourth fastest supercomputer when it debuted last month. Though housed in Germany at the Jülich Supercomputing Centre (JSC), Georgia Tech played a supporting role in helping the system land on the latest TOP500 list.

In November 2024, JSC granted Assistant Professor Spencer Bryngelson exclusive access to the system through the JUPITER Research and Early Access Program (JUREAP).

By preparing Europe’s fastest supercomputer for launch, the joint project yielded valuable simulation data on the effects of shock waves in medicine and transportation.

“The shock-droplet problem has been a hallmark test problem in fluid dynamics for some decades now. It is sufficiently challenging to study that it keeps me scientifically interested, though the results are manifestly important,” Bryngelson said. 

“Understanding the droplet behavior in some extreme regimes remains an open scientific problem of high engineering value.”

Through JUREAP, JSC engineers tested Bryngelson’s Multi-Component Flow Code (MFC) on their computers. The project simulated how liquid droplets behave when struck by a large, high-velocity shock wave moving much faster than the speed of sound.

Tests produced visualizations of droplets deforming into pancake shapes before ejecting vortex rings as they broke apart from the shock wave. The experiments measured the swirls of air flow formed behind the droplets, known as vorticity.

Vorticity is one variable aerospace engineers consider when building aircraft designed to fly at supersonic and hypersonic speeds. Small droplets and vortices pose significant hazards for high-Mach vessels.
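
For readers curious how vorticity is actually quantified: it is the curl of the velocity field, computable on a grid by finite differences. The short sketch below is purely illustrative (it is not the MFC solver) and uses a solid-body rotation, whose vorticity is uniform and known analytically.

```python
import numpy as np

# Illustrative sketch (not the MFC code): for a 2D velocity field (u, v) on
# a grid, vorticity is the curl component w = dv/dx - du/dy. A solid-body
# rotation u = -y, v = x has uniform vorticity w = 2 everywhere.
x = np.linspace(-1.0, 1.0, 101)
y = np.linspace(-1.0, 1.0, 101)
X, Y = np.meshgrid(x, y, indexing="xy")

u = -Y   # velocity components of a solid-body rotation
v = X

dx = x[1] - x[0]
dy = y[1] - y[0]

# np.gradient differentiates along grid axes (axis 0 = y rows, axis 1 = x cols).
dv_dx = np.gradient(v, dx, axis=1)
du_dy = np.gradient(u, dy, axis=0)
vorticity = dv_dx - du_dy

print(f"mean vorticity: {vorticity.mean():.2f}")  # ~2.00 for this flow
```

In a shock-droplet simulation the same computation, applied to the solver's velocity output, picks out the vortex rings shed as the droplet breaks apart.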

These computer models reduce the risk and cost associated with physical test runs. By simulating extreme scenarios, the JUREAP project demonstrated a safer and more efficient way to evaluate aerospace systems.

The human body is another fluid space where fast, high-energy flows can occur.

Simulations help medical researchers create less invasive shock wave treatments. This technology can be further applied for uses ranging from breaking up kidney stones to treating inflammation. 

MFC’s versatility for large- and small-scale applications made it suitable for testing JUPITER in its early stages. The project’s success even earned it a JUREAP certificate for scaling efficiency and node performance.

“The use of application codes to test supercomputers is common. We’ve participated in similar programs for OLCF Frontier and LLNL El Capitan,” said Bryngelson, a faculty member with Georgia Tech’s School of Computational Science and Engineering.

“Engineers at supercomputer sites usually find and sort most problems on their own. But deploying workloads characteristic of what JUPITER will run in practice stresses it in new ways. In these instances, we usually end up identifying some failure modes.”

The JSC and Georgia Tech researchers named their joint project Exascale Multiphysics Flows (ExaMFlow).

ExaMFlow helps keep JUPITER on pace to become Europe’s first exascale supercomputer. This designation refers to any machine capable of computing one exaflop, or one quintillion (“1” followed by 18 zeros) calculations per second. 

All three systems that rank ahead of JUPITER are exascale supercomputers. They are El Capitan at Lawrence Livermore National Laboratory, Frontier at Oak Ridge National Laboratory, and Aurora at Argonne National Laboratory. 

JUPITER calculates more than 60 billion operations per watt. This makes the supercomputer the most energy-efficient system among the top five. 
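
Taken together, those two figures imply a rough power budget. The back-of-the-envelope arithmetic below is illustrative only, using the numbers as reported:

```python
# Illustrative arithmetic from the figures above: exascale means 1e18
# operations per second, and JUPITER's reported efficiency is about
# 60 billion (6e10) operations per watt.
exascale_ops_per_s = 1e18
ops_per_watt = 60e9

# Power = (operations per second) / (operations per watt) -> watts.
power_watts = exascale_ops_per_s / ops_per_watt
print(f"implied power draw at exascale: {power_watts / 1e6:.1f} MW")  # 16.7 MW
```

That tens-of-megawatts scale is why operations-per-watt efficiency matters as much as raw speed for exascale machines.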

ExaMFlow ran Bryngelson’s software on JSC’s JUWELS Booster and JUPITER Exascale Transition Instrument (JETI). The two modules form the backbone of JUPITER’s full design.

ExaMFlow’s report showed that MFC performed with near-ideal scaling behavior on JUWELS and JETI compared to similar systems based on NVIDIA A100 GPUs.

Access to NVIDIA hardware at Georgia Tech played a key role in ExaMFlow’s success.

The Institute hosts the Phoenix Research Computing Cluster, which includes A100 GPUs among its arsenal of components. Bryngelson’s lab owns NVIDIA A100 GPUs and four GH200 Grace Hopper Superchips.

Since JUPITER is equipped with around 24,000 Grace Hopper Superchips, Bryngelson’s work with the hardware proved especially insightful for the ExaMFlow project.   

“The Grace Hopper chip is interesting. It’s not challenging to use like a regular GPU device when one is familiar with running NVIDIA hardware. The more fun part is using its tightly coupled CPU to GPU interconnect to make use of the CPU as well,” Bryngelson said. 

“It’s not immediately obvious how to best do this, though we used a few tricks to tune its use to our application. They appear to work nicely.”

JSC researchers Luis Cifuentes, Rakesh Sarma, Seong Koh, and Sohel Herff played important roles in running Bryngelson’s MFC software on early JUPITER modules. 

The ExaMFlow team included NVIDIA scientists Nikolaos Tselepidis and Benedikt Dorschner.

The pair observed their company’s hardware used in the field. They return to NVIDIA with notes that help the corporation build the next devices tailored to the needs of scientific computing researchers. 

“We try to be prepared for the latest, biggest computers. Being able to take immediate advantage of the largest systems is a valuable capability,” Bryngelson said. 

“When the early access systems arrive, it’s a great opportunity for the teams involved to test the machines, demonstrate and tune scientific software, and meet very capable new collaborators.”

JSC JUPITER Booster
Spencer Bryngelson
News Contact

Bryant Wine, Communications Officer
bryant.wine@cc.gatech.edu

Fueling Young Minds: Georgia Tech Summer Campers Explore Energy Systems With Oglethorpe Power, Green Power EMC, and Georgia System Operations Corporation

High school students who participated in the Energy Unplugged Summer Camp at Georgia Tech during a field trip to Oglethorpe Power, Green Power EMC, and Georgia System Operations Corporation in June 2025.

In June, Georgia Tech’s Strategic Energy Institute (SEI) and the Energy Policy and Innovation Center hosted Energy Unplugged, a weeklong summer camp focused on science, technology, engineering, art, and mathematics (STEAM) for high school students. 

Led by Richard Simmons, SEI’s director of research and studies and a principal research engineer, the camp introduced students to energy fundamentals and highlighted STEAM-related careers and undergraduate pathways valuable in today’s workforce. The curriculum included energy resources, energy production and consumption, conversion and delivery, electric circuits, battery storage, environmental impacts, and data analytics. 

As a featured part of this year’s program, students visited the headquarters of Oglethorpe Power, Green Power EMC, and Georgia System Operations Corporation in Tucker, Georgia. The companies are owned by and serve 38 of Georgia’s not-for-profit electric membership cooperatives (EMCs), which provide retail electricity to approximately 4.7 million of Georgia’s more than 11 million residents. 

“As electricity demand continues to rise, so does the need to grow a skilled and capable workforce for the future. We are proud to partner with Georgia Tech on this inspiring program, supporting the growth and development of the next generation of leaders who will help power Georgia’s future,” said George Mathai, Oglethorpe Power performance and reliability engineer.

The site visit included a tour of Georgia System Operations’ generation and transmission control centers and presentations by Oglethorpe Power and Green Power EMC experts.

The tour began in the generation control center, where students observed operators continuously monitoring demand to make real-time decisions to increase or decrease electricity generation. Students learned that Georgia System Operations dispatches a wide array of energy sources and generation technologies to ensure a stable, reliable, secure, and efficient power grid. 

The group then visited the transmission control center, where a series of massive screens showed the web of transmission lines across the state. Students learned that the transmission system relies on extremely high-voltage lines to minimize loss across long distances. The voltage is then stepped down at substations as lines approach population centers, making the power suitable for use by residences, businesses, and industrial facilities. The operators in the transmission center monitor the grid for disturbances and respond to alarms, maintaining the integrity of the state’s power infrastructure. 
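
The physics behind high-voltage transmission can be illustrated with hypothetical numbers (not figures from the tour): for a fixed delivered power, line current falls as voltage rises, and resistive loss falls with the square of the current.

```python
# Illustrative numbers only: why high-voltage lines minimize loss. For a
# fixed power P delivered through line resistance R, current I = P / V,
# so resistive loss I**2 * R falls with the square of the voltage.
P = 100e6      # 100 MW delivered (hypothetical)
R = 10.0       # line resistance in ohms (hypothetical)

for V in (115e3, 500e3):                 # two common transmission voltages
    I = P / V                            # line current in amperes
    loss = I**2 * R                      # resistive loss in watts
    print(f"{V/1e3:>5.0f} kV: loss = {loss/1e6:.2f} MW "
          f"({100 * loss / P:.2f}% of delivered power)")
```

With these assumed values, raising the voltage from 115 kV to 500 kV cuts the loss by roughly a factor of nineteen, which is the ratio of the voltages squared.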

The tour offered a behind-the-scenes look at how electricity generation and transmission are integrated and managed across the state. 

Over lunch, Oglethorpe Power’s George Mathai and Shane Tolbert, Green Power EMC’s distributed energy resources manager, led discussions highlighting the roles of various generation sources and the benefits of a diverse portfolio in balancing cost, reliability, sustainable resources, and environmental impact. 

“Learning about how Oglethorpe Power, Green Power EMC, and Georgia System Operations work together was a highlight of the Energy Unplugged camp, as it reinforced many of the tabletop demonstrations and hands-on activities we had conducted in the days leading up to the visit. When students then get a chance to visualize energy production, conversion, and delivery concepts at full scale, lots of light bulbs start clicking on,” Simmons said.

Jointly contributed by:
Oglethorpe Power Corporation 
Georgia Tech Strategic Energy Institute (Destin Smyth)

Oglethorpe Power’s George Mathai and Shane Tolbert, Green Power EMC’s distributed energy resources manager, discussing the roles of various generation sources and the benefits of a diverse portfolio with the campers.

News Contact

Priya Devarajan, Communications Program Manager, 
Georgia Tech Strategic Energy Institute

Blair Romero, Director, Corporate Communications
Oglethorpe Power Corporation

‘Biochar’ Can Naturally Clean the Pollution that Rain Washes Off Georgia’s Roads

Professor Yongsheng Chen (left) and Ph.D. student Ahmed Yunus work with a wastewater reactor system in the lab. (Photo: Candler Hobbs)

A charcoal-like material made from leaves and branches that collect on forest floors could be a cheap, sustainable way to keep pollution from washing off roadways and into Georgia’s lakes and rivers.

Engineers at Georgia Tech and Georgia Southern University have found that this biological charcoal, or biochar, can be mixed with soil and used along roadways to catch grimy rainwater and filter it naturally before it pollutes surface water.

Their tests found the biochar effectively cleans contaminants from the rainwater and works just as well in the sandy soils of the coastal plain as in the clays of north Georgia. Their biochar-soil mixture can be easily substituted for expensive material mined from the earth that’s typically used on roads. 

Though they focused on Georgia, the researchers said the findings could easily apply across the U.S., providing a simple, natural way to keep road pollutants out of water sources. They published their approach in the Journal of Environmental Management.

Learn about their system on the College of Engineering website.

News Contact

Joshua Stewart
College of Engineering