Georgia Tech Hosts Opening Event for Atlanta Science Festival

A child takes part in a demonstration at the Atlanta Science Festival.

This weekend, the annual Atlanta Science Festival kicks off with Georgia Tech Science and Engineering Day. From 10 a.m. to 2 p.m. on Saturday, March 9, the Institute will host STEAM activities for the whole family. A two-week celebration of STEAM education and career opportunities, ASF offers more than 100 events and demonstrations hosted around the city for people of all ages.

Science and Engineering Day aims to inspire the next generation of engineers and scientists and share the breadth and impact of Georgia Tech’s research with the local community. This year’s event has a space theme and includes an appearance by former NASA astronaut Shane Kimbrough, M.S. ISyE 1998, who has spent an astounding 388 days in space.

Attendees will lift off from the Launch Pad — The Kendeda Building for Innovative Sustainable Design — where they will receive a “Galactic Passport” to guide their journey through the Georgia Tech universe. Each planet, or building, will house multiple hands-on activities, all led by Georgia Tech students, researchers, faculty, and staff. There will be more than 50 interactive exhibits for attendees to experience.

Following Science and Engineering Day, the Neuro Next Initiative at Georgia Tech will co-host with Emory University a film screening and live performance that combines neuroscience research with movement and insights into the experience of people living with Parkinson’s and dementia. These interactive events include acrobats from the Centre for Circus Arts Research, Innovation and Knowledge Transfer (CRITAC), the research arm of the Canadian National Circus School.

Atlanta Science Festival is engineered by Science ATL, a 501(c)(3) nonprofit organization dedicated to bringing people together through the wonder of science. Founded in 2014 by Emory University, Georgia Tech, and Metro Atlanta Chamber, the organization produces year-round science events, including the Atlanta Science Festival, a youth STEM leadership program, school STEM partnership initiatives, self-guided family activities such as Passport and Discovery Walks, and more.

News Contact

researchevents@gatech.edu

Researchers Reach New AI Benchmark for Computer Graphics

Bo Zhu is an assistant professor in Georgia Tech's School of Interactive Computing

Georgia Tech Assistant Professor Bo Zhu worked on a multi-institutional team to develop a new AI benchmark for computer graphics. Photo by Eli Burakian/Dartmouth College.

Computer graphic simulations can represent natural phenomena such as tornadoes, underwater vortices, and liquid foams more accurately thanks to an advancement in creating artificial intelligence (AI) neural networks.

Working with a multi-institutional team of researchers, Georgia Tech Assistant Professor Bo Zhu combined computer graphic simulations with machine learning models to create enhanced simulations of known phenomena. The new benchmark could lead to researchers constructing representations of other phenomena that have yet to be simulated.

Zhu co-authored the paper “Fluid Simulation on Neural Flow Maps.” The Association for Computing Machinery’s Special Interest Group on Computer Graphics and Interactive Techniques (SIGGRAPH) gave it a best paper award in December at the SIGGRAPH Asia conference in Sydney, Australia.

The authors say the advancement could be as significant to computer graphic simulations as the introduction of neural radiance fields (NeRFs) was to computer vision in 2020. Introduced by researchers at the University of California-Berkeley, the University of California-San Diego, and Google, NeRFs are neural networks that easily convert 2D images into 3D navigable scenes.

NeRFs have become a benchmark among computer vision researchers. Zhu and his collaborators hope their creation, neural flow maps, can do the same for simulation researchers in computer graphics.

“A natural question to ask is, can AI fundamentally overcome the traditional method’s shortcomings and bring generational leaps to simulation as it has done to natural language processing and computer vision?” Zhu said. “Simulation accuracy has been a significant challenge to computer graphics researchers. No existing work has combined AI with physics to yield high-end simulation results that outperform traditional schemes in accuracy.”

In computer graphics, simulation pipelines play the role that neural networks play in AI: they are the frameworks through which simulations take shape. Traditionally, they are constructed from mathematical equations and numerical schemes.

Zhu said researchers have tried to design simulation pipelines with neural representations to construct more robust simulations. However, efforts to achieve higher physical accuracy have fallen short. 

Zhu attributes the problem to the inability of traditional simulation pipelines to accommodate the capacities of modern AI algorithms within their structures. To solve the problem and give machine learning real influence, Zhu and his collaborators proposed a new framework that redesigns the simulation pipeline.

They named these new pipelines neural flow maps. The maps use machine learning models to store spatiotemporal data more efficiently. The researchers then align these models with their mathematical framework to achieve a higher accuracy than previous pipeline simulations.
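As a rough illustration of that idea, consider using a small learned model as a compact, queryable store for a spatiotemporal field. The sketch below is a generic example under our own assumptions, not the architecture or code from the paper: it fits a random Fourier-feature model to samples of a toy field and then queries the fitted model at new coordinates.

```python
# A generic sketch of a learned "buffer" for spatiotemporal data (illustrative
# only; this is not the neural flow maps implementation). A random
# Fourier-feature model is fit to samples of a toy field and queried afterward.
import numpy as np

rng = np.random.default_rng(0)

def field(x, t):
    """Toy spatiotemporal field standing in for simulation data."""
    return np.sin(4 * np.pi * x) * np.cos(2 * np.pi * t)

# Sample the field at scattered (x, t) coordinates.
coords = rng.uniform(0.0, 1.0, size=(2000, 2))
values = field(coords[:, 0], coords[:, 1])

# Random Fourier features map coordinates into a richer basis.
B = rng.normal(scale=6.0, size=(2, 64))
def features(c):
    proj = c @ B
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=1)

# "Training" here is a linear least-squares fit of the model weights.
w, *_ = np.linalg.lstsq(features(coords), values, rcond=None)

# Query the stored representation at new coordinates.
test = rng.uniform(0.0, 1.0, size=(5, 2))
print("model :", features(test) @ w)
print("truth :", field(test[:, 0], test[:, 1]))
```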

Zhu said he does not believe machine learning should be used to replace traditional numerical equations. Rather, it should complement them to unlock new, advantageous paradigms.

“Instead of trying to deploy modern AI techniques to replace components inside traditional pipelines, we co-designed the simulation algorithm and machine learning technique in tandem,” Zhu said. 

“Numerical methods are not optimal because of their limited computational capacity. Recent AI-driven capacities have lifted many of these limitations. Our task is redesigning existing simulation pipelines to take full advantage of these new AI capacities.”

In the paper, the authors state that these once-unattainable algorithmic designs could unlock new research possibilities in computer graphics.

Neural flow maps offer “a new perspective on the incorporation of machine learning in numerical simulation research for computer graphics and computational sciences alike,” the paper states.

“The success of Neural Flow Maps is inspiring for how physics and machine learning are best combined,” Zhu added.

News Contact

Nathan Deen, Communications Officer

Georgia Tech School of Interactive Computing

nathan.deen@cc.gatech.edu

Department of Energy Awards $4.2 Million to Guard Power Grid from Cyber Threats

Saman Zonouz is a Georgia Tech associate professor and lead researcher for the DerGuard project.

Georgia Tech is developing a new artificial intelligence (AI)-based method to automatically find and stop threats to renewable energy resources and local generators serving energy customers across the nation’s power grid.

The research will concentrate on protecting distributed energy resources (DER), which are most often used on low-voltage portions of the power grid. They can include rooftop solar panels, controllable electric vehicle chargers, and battery storage systems. 

The cybersecurity concern is that an attacker could compromise these systems and use them to cause problems across the electrical grid, like overloading components and causing voltage fluctuations. These issues are a national security risk and could cause massive customer disruptions through blackouts and equipment damage.

“Cyber-physical critical infrastructures provide us with core societal functionalities and services such as electricity,” said Saman Zonouz, Georgia Tech associate professor and lead researcher for the project. 

“Our multi-disciplinary solution, DerGuard, will leverage device-level cybersecurity, system-wide analysis, and AI techniques for automated vulnerability assessment, discovery, and mitigation in power grids with emerging renewable energy resources.”

The project’s long-term outcome will be a secure, AI-enabled power grid solution that can find and protect the DERs on its network from cyberattacks.

“First, we will identify sets of critical DERs that, if compromised, would allow the attacker to cause the most trouble for the power grid,” said Daniel Molzahn, assistant professor at Georgia Tech. 

“These DERs would then be prioritized for analysis and for patching any identified cyber problems. Identifying the critical sets of DERs would require information about the DERs themselves, like size or location, and about the power grid. This way, the utility company or other aggregator would be in the best position to use this tool.”

Additionally, the team will establish a testbed with industry partners. They will then develop and evaluate technology applications to better understand the behavior between people, devices, and network performance.

Along with Zonouz and Molzahn, Wenke Lee, professor and John P. Imlay Jr. Chair in Software at Georgia Tech, will help lead the team of researchers from across the country.

The researchers are collaborating with the University of Illinois at Urbana-Champaign, the Department of Energy’s National Renewable Energy Laboratory, Idaho National Laboratory, the National Rural Electric Cooperative Association, and Fortiphyd Logic. Industry partners Network Perception, Siemens, and PSE&G will advise the researchers.

The work will be carried out at Georgia Tech’s Cyber-Physical Security Lab (CPSec) within the School of Cybersecurity and Privacy (SCP) and the School of Electrical and Computer Engineering (ECE). 

The U.S. Department of Energy (DOE) announced a $45 million investment at the end of February for 16 cybersecurity initiatives. The projects will develop new cybersecurity tools and technologies designed to reduce cyber risks for energy infrastructure, followed by technology-transfer initiatives. The DOE’s Office of Cybersecurity, Energy Security, and Emergency Response (CESER) awarded $4.2 million to the Institute’s DerGuard project.

News Contact

JP Popham, Communications Officer II

Georgia Tech School of Cybersecurity & Privacy

john.popham@cc.gatech.edu

IRIM Director Delivers Keynote at Hyundai Meta-Factory Conference

IRIM Director Seth Hutchinson at Hyundai Meta Factory Conference Delivering Keynote

Hyundai Motor Group Innovation Center Singapore hosted the Meta-Factory Conference Jan. 23 – 24. It brought together academic leaders, industry experts, and manufacturing companies to discuss technology and the next generation of integrated manufacturing facilities.

Seth Hutchinson, executive director of the Institute for Robotics and Intelligent Machines at Georgia Tech, delivered a keynote lecture on “The Impacts of Today’s Robotics Innovation on the Relationship Between Robots and Their Human Co-Workers in Manufacturing Applications” — an overview of current state-of-the-art robotic technologies and future research trends for developing robotics aimed at interactions with human workers in manufacturing.

In addition to the keynote, Hutchinson also participated in the Hyundai Motor Group's Smart Factory Executive Technology Advisory Committee (E-TAC) panel on comprehensive future manufacturing directions and toured the new Hyundai Meta-Factory to observe how digital-twin technology is being applied in their human-robot collaborative manufacturing environment.

Hutchinson is a professor in the School of Interactive Computing. He received his Ph.D. from Purdue University in 1988, and in 1990 joined the University of Illinois Urbana-Champaign, where he was professor of electrical and computer engineering until 2017 and is currently professor emeritus. He has served on the Hyundai Motor Group's Smart Factory E-TAC since 2022.

Hyundai Motor Group Innovation Center Singapore is Hyundai Motor Group’s open innovation hub to support research and development of human-centered smart manufacturing processes using advanced technologies such as artificial intelligence, the Internet of Things, and robotics.

- Christa M. Ernst

Related Links

IRIM Director Seth Hutchinson at Hyundai Meta Factory Conference on Panel Discussion

News Contact

Christa M. Ernst - Research Communications Program Manager

christa.ernst@research.gatech.edu

The Who's Who of Bacteria: A Reliable Way to Define Species and Strains

A photo of a saltern site with structured ponds in the foreground and large mounds of salt in the background.

A photo of the saltern site in Spain where a significant portion of the research was done. A saltern is used to produce salt for human consumption and is a natural environment for the bacterium Salinibacter ruber.

What’s in a name? A lot, actually.

For the scientific community, names and labels help organize the world’s organisms so they can be identified, studied, and regulated. But for bacteria, there has never been a reliable method to cohesively organize them into species and strains. It’s a problem, because bacteria are one of the most prevalent life forms, making up roughly 75% of all living species on Earth.

An international research team sought to overcome this challenge, which has long plagued scientists who study bacteria. Kostas Konstantinidis, Richard C. Tucker Professor in the School of Civil and Environmental Engineering at the Georgia Institute of Technology, co-led a study to investigate natural divisions in bacteria with a goal of determining a scientifically viable method for organizing them into species and strains. To do this, the researchers let the data show them the way.

Their research was published in the journal Nature Communications.

“While there is a working definition for species and strains, this is far from widely accepted in the scientific community,” Konstantinidis said. “This is because those classifications are based on humans’ standards that do not necessarily translate well to the patterns we see in the natural environment.”

For instance, he said, “If we were to classify primates using the same standards that are used to classify E. coli, then all primates — from lemurs to humans to chimpanzees — would belong to a single species.”

There are many reasons why a comprehensive organizing system has been hard to devise, but it often comes down to who gets the most attention and why. More scientific attention generally leads to those bacteria becoming more narrowly defined. For example, bacteria species that contain toxic strains have been extensively studied because of their associations with disease and health. This has been out of the necessity to differentiate harmful strains from harmless ones. Recent discoveries have shown, however, that even defining types of bacteria by their toxicity is unreliable.

“Despite the obvious, cornerstone importance of the concepts of species and strains for microbiology, these remain, nonetheless, ill-defined and confusing,” Konstantinidis said.

The research team collected bacteria from two salterns in Spain. Salterns are built structures in which seawater evaporates to form salt for consumption. They harbor diverse communities of microorganisms and are ideal locations to study bacteria in their natural environment. This is important for understanding diversity in populations because bacteria often undergo genetic changes when cultured in lab environments.

The team recovered and sequenced 138 random isolates of Salinibacter ruber bacteria from these salterns. To identify natural gaps in genetic diversity, the researchers then compared the isolates against themselves using a measurement known as average nucleotide identity (ANI) — a concept Konstantinidis developed early in his career. ANI is a robust measure of relatedness between any two genomes and is used to study relatedness among microorganisms and viruses, as well as animals. For instance, the ANI between humans and chimpanzees is about 98.7%.

The analysis confirmed the team’s previous observations that microbial species do exist and could be reliably described using ANI. They found that members of the same species of bacteria showed genetic relatedness typically ranging from 96 to 100% on the ANI scale, and generally less than 85% relatedness with members of other species.

The data revealed a natural gap in ANI values around 99.5% ANI within the Salinibacter ruber species that could be used to differentiate the species into its various strains. In a companion paper published in mBio, the flagship journal of the American Society for Microbiology, the team examined about 300 additional bacterial species based on 18,000 genomes that had been recently sequenced and become available in public databases. They observed similar diversity patterns in more than 95% of the species.
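To make the reported cutoffs concrete, the short sketch below classifies hypothetical genome pairs using the thresholds described above, roughly 96% ANI to place two genomes in the same species and 99.5% to place them in the same strain. The genome names, pairwise values, and helper function are illustrative assumptions, not the team's software; in practice, pairwise ANI would be computed with a dedicated tool before applying any cutoff.

```python
# Minimal sketch: apply the species (~96% ANI) and strain (~99.5% ANI) cutoffs
# reported in the article to hypothetical pairwise ANI values. Illustrative only.
from itertools import combinations

# Hypothetical pairwise ANI values between four isolates (percent identity).
ani = {
    ("g1", "g2"): 99.7,   # at or above 99.5: likely the same strain
    ("g1", "g3"): 97.2,   # 96-99.5: same species, different strains
    ("g2", "g3"): 97.0,
    ("g1", "g4"): 82.4,   # below 96 (typically < 85): different species
    ("g2", "g4"): 82.1,
    ("g3", "g4"): 83.0,
}

SPECIES_ANI = 96.0   # within-species relatedness reported as roughly 96-100%
STRAIN_ANI = 99.5    # natural gap within Salinibacter ruber used for strains

def relationship(a, b):
    """Classify a genome pair using the ANI cutoffs described in the study."""
    value = ani.get((a, b), ani.get((b, a)))
    if value is None:
        return "unknown"
    if value >= STRAIN_ANI:
        return "same strain"
    if value >= SPECIES_ANI:
        return "same species, different strains"
    return "different species"

for pair in combinations(["g1", "g2", "g3", "g4"], 2):
    print(pair, relationship(*pair))
```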

“We think this work expands the molecular toolbox for accurately describing important units of diversity at the species level and within species, and we believe it will benefit future microdiversity studies across clinical and environmental settings,” Konstantinidis said.

The team expects their research will be of interest to any professional working with bacteria, including evolutionary biologists, taxonomists, ecologists, environmental engineers, clinicians, bioinformaticians, regulatory agencies, and others. It is available online through Konstantinidis’ website and GitHub to facilitate access and use by scientific and regulatory communities.

“We hope that these communities will embrace the new results and methodologies for the more robust and reliable identification of species and strains they offer, compared to the current practice,” Konstantinidis said.

 

Note: Tomeu Viver and Ramon Rossello-Mora from the Mediterranean Institutes for Advanced Studies also led the research. Additional researchers from the Georgia Institute of Technology, University of Innsbruck, University of Pretoria, University of Las Palmas de Gran Canaria, University of the Balearic Islands, and the Max Planck Institute also contributed. 

Citation: Viver, T., Conrad, R.E., Rodriguez-R, L.M. et al. Towards estimating the number of strains that make up a natural bacterial population. Nat Commun 15, 544 (2024).

DOI: https://doi.org/10.1038/s41467-023-44622-z

Funding: Spanish Ministry of Science, Innovation and Universities, European Regional Development Fund, U.S. National Science Foundation.

A microscopy image of bacteria highlighted in green, pink, and indigo colors.

A microscopy photo of Salinibacter ruber, a bacterium that thrives in salterns.

A screenshot of a video conference with 12 people

A screenshot from a team meeting. The study's international team has researchers based in the U.S., Spain, Germany, Austria, and South Africa.

News Contact

Catherine Barzler

catherine.barzler@gatech.edu

Researchers Reveal Roadmap for AI Innovation in Brain and Language Learning

Anna (Anya) Ivanova

One of the hallmarks of humanity is language, but now, powerful new artificial intelligence tools also compose poetry, write songs, and have extensive conversations with human users. Tools like ChatGPT and Gemini are widely available at the tap of a button — but just how smart are these AIs? 

A new multidisciplinary research effort co-led by Anna (Anya) Ivanova, assistant professor in the School of Psychology at Georgia Tech, alongside Kyle Mahowald, an assistant professor in the Department of Linguistics at the University of Texas at Austin, is working to uncover just that.

Their results could lead to innovative AIs that are more similar to the human brain than ever before — and also help neuroscientists and psychologists who are unearthing the secrets of our own minds. 

The study, “Dissociating Language and Thought in Large Language Models,” is published this week in the scientific journal Trends in Cognitive Sciences. The work is already making waves in the scientific community: an earlier preprint of the paper, released in January 2023, has been cited more than 150 times by fellow researchers. The research team has continued to refine the work for this final journal publication.

“ChatGPT became available while we were finalizing the preprint,” Ivanova explains. “Over the past year, we've had an opportunity to update our arguments in light of this newer generation of models, now including ChatGPT.”

Form versus function

The study focuses on large language models (LLMs), which include AIs like ChatGPT. LLMs are text prediction models that create writing by predicting which word comes next in a sentence, much as a cell phone keyboard or an email service like Gmail suggests the next word you might want to type. However, while this type of language learning is extremely effective at creating coherent sentences, that doesn’t necessarily signify intelligence.

Ivanova’s team argues that formal competence (creating a well-structured, grammatically correct sentence) should be differentiated from functional competence (answering the right question, communicating the correct information, or communicating appropriately in a given situation). They also found that while LLMs trained on text prediction are often very good at formal skills, they still struggle with functional skills.

“We humans have the tendency to conflate language and thought,” Ivanova says. “I think that’s an important thing to keep in mind as we're trying to figure out what these models are capable of, because using that ability to be good at language, to be good at formal competence, leads many people to assume that AIs are also good at thinking — even when that's not the case.

"It's a heuristic that we developed when interacting with other humans over thousands of years of evolution, but now in some respects, that heuristic is broken,” Ivanova explains.

The distinction between formal and functional competence is also vital in rigorously testing an AI’s capabilities, Ivanova adds. Evaluations often don’t distinguish formal and functional competence, making it difficult to assess what factors are determining a model’s success or failure. The need to develop distinct tests is one of the team’s more widely accepted findings, and one that some researchers in the field have already begun to implement.

Creating a modular system

While the human tendency to conflate functional and formal competence may have hindered understanding of LLMs in the past, our human brains could also be the key to unlocking more powerful AIs. 

Leveraging the tools of cognitive neuroscience while a postdoctoral associate at Massachusetts Institute of Technology (MIT), Ivanova and her team studied brain activity in neurotypical individuals via fMRI, and used behavioral assessments of individuals with brain damage to test the causal role of brain regions in language and cognition — both conducting new research and drawing on previous studies. The team’s results showed that human brains use different regions for functional and formal competence, further supporting this distinction in AIs. 

“Our research shows that in the brain, there is a language processing module and separate modules for reasoning,” Ivanova says. This modularity could also serve as a blueprint for how to develop future AIs.

“Building on insights from human brains — where the language processing system is sharply distinct from the systems that support our ability to think — we argue that the language-thought distinction is conceptually important for thinking about, evaluating, and improving large language models, especially given recent efforts to imbue these models with human-like intelligence,” says Ivanova’s former advisor and study co-author Evelina Fedorenko, a professor of brain and cognitive sciences at MIT and a member of the McGovern Institute for Brain Research.

Developing AIs in the pattern of the human brain could help create more powerful systems — while also helping them dovetail more naturally with human users. “Generally, differences in a mechanism’s internal structure affect behavior,” Ivanova says. “Building a system that has a broad macroscopic organization similar to that of the human brain could help ensure that it might be more aligned with humans down the road.” 

In the rapidly developing world of AI, these systems are ripe for experimentation. After the team’s preprint was published, OpenAI announced their intention to add plug-ins to their GPT models. 

“That plug-in system is actually very similar to what we suggest,” Ivanova adds. “It takes a modularity approach where the language model can be an interface to another specialized module within a system.” 

While the OpenAI plug-in system will include features like booking flights and ordering food, rather than cognitively inspired features, it demonstrates that “the approach has a lot of potential,” Ivanova says.
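As a concrete, if simplified, picture of that modular approach, the toy sketch below is our own illustration rather than the authors' proposal or OpenAI's plug-in interface: a small "language interface" routes each request to a specialized module instead of trying to handle every task itself.

```python
# Toy illustration of a modular system (assumed example, not a real plug-in API):
# the language-facing layer dispatches requests to specialized modules.

def arithmetic_module(expression: str) -> str:
    """Specialized module for simple arithmetic of the form 'a op b'."""
    a, op, b = expression.split()
    a, b = float(a), float(b)
    result = {"+": a + b, "-": a - b, "*": a * b, "/": a / b}[op]
    return str(result)

def booking_module(request: str) -> str:
    """Placeholder for a task-specific module such as booking a flight."""
    return f"[stub] would handle: {request}"

def language_interface(user_input: str) -> str:
    """Stand-in for a language model acting as an interface to modules."""
    if any(op in user_input for op in "+-*/"):
        return arithmetic_module(user_input)
    return booking_module(user_input)

print(language_interface("17 * 3"))                    # routed to arithmetic
print(language_interface("book a flight to Austin"))   # routed to booking stub
```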

The future of AI — and what it can tell us about ourselves

While our own brains might be the key to unlocking better, more powerful AIs, these AIs might also help us better understand ourselves. “When researchers try to study the brain and cognition, it's often useful to have some smaller system where you can actually go in and poke around and see what's going on before you get to the immense complexity,” Ivanova explains.

However, since human language is unique, animal and other model systems are difficult to relate to it. That's where LLMs come in.

“There are lots of surprising similarities between how one would approach the study of the brain and the study of an artificial neural network” like a large language model, she adds. “They are both information processing systems that have biological or artificial neurons to perform computations.” 

In many ways, the human brain is still a black box, but openly available AIs offer a unique opportunity to see a synthetic system's inner workings, modify its variables, and explore these corresponding systems like never before.

"It's a really wonderful model that we have a lot of control over," Ivanova says. "Neural networks — they are amazing."

 

Along with Anna (Anya) Ivanova, Kyle Mahowald, and Evelina Fedorenko, the research team also includes Idan Blank (University of California, Los Angeles), as well as Nancy Kanwisher and Joshua Tenenbaum (Massachusetts Institute of Technology).

 

DOI: https://doi.org/10.1016/j.tics.2024.01.011

Researcher Acknowledgements

For helpful conversations, we thank Jacob Andreas, Alex Warstadt, Dan Roberts, Kanishka Misra, students in the 2023 UT Austin Linguistics 393 seminar, the attendees of the Harvard LangCog journal club, the attendees of the UT Austin Department of Linguistics SynSem seminar, Gary Lupyan, John Krakauer, members of the Intel Deep Learning group, Yejin Choi and her group members, Allyson Ettinger, Nathan Schneider and his group members, the UT NLL Group, attendees of the KUIS AI Talk Series at Koç University in Istanbul, Tom McCoy, attendees of the NYU Philosophy of Deep Learning conference and his group members, Sydney Levine, organizers and attendees of the ILFC seminar, and others who have engaged with our ideas. We also thank Aalok Sathe for help with document formatting and references.

Funding sources

Anna (Anya) Ivanova was supported by funds from the Quest Initiative for Intelligence. Kyle Mahowald acknowledges funding from NSF Grant 2104995. Evelina Fedorenko was supported by NIH awards R01-DC016607, R01-DC016950, and U01-NS121471 and by research funds from the Brain and Cognitive Sciences Department, McGovern Institute for Brain Research, and the Simons Foundation through the Simons Center for the Social Brain.

News Contact

Written by Selena Langner

Editor and Press Contact:
Jess Hunt-Ralston
Director of Communications
College of Sciences
Georgia Tech

Renters Need Better Policies To Cope With Natural Disasters, New Research Shows

After Hurricane Katrina, 29% of single-family homes in Louisiana were damaged, compared with 35% of rental units. Yet while 62% of homeowners received disaster recovery assistance, only 18% of renters got similar aid.

Louisiana isn’t unique. Renters are an especially vulnerable population after natural disasters. They are generally less able to afford to move but are more likely to pay exorbitant markups when rental options are depleted. How renters are affected after a disaster is a key indicator of climate vulnerability, yet most political discourse and public policies focus on single-family homeowners.

New joint research from the Georgia Institute of Technology and the Brookings Institution is some of the first to expose the disaster impact on rental housing markets, examining whether the U.S. Department of Housing and Urban Development (HUD) disaster recovery policies work effectively to protect renters.

The researchers sought to answer the question of how the rental market shifts after a natural disaster and whether rents increase. They used data from 2000 to 2020 that broke down rental rates by ZIP code and quarter in major metro areas in California, Michigan, Arkansas, Georgia, and Florida. Any presidentially declared major disasters were included, totaling 180,000 ZIP code-based rental rate data points. The researchers combined federal emergency information with proprietary rental price data and stakeholder interviews. Brookings also hosted a workshop to provide qualitative insights and lived experiences from federal, state, and local government officials; tenants; and nonprofit organizations.

“I think the marriage between the two, the quantitative analysis and qualitative insights, is something that really makes this report unique,” said Brian An, assistant professor in the School of Public Policy in Georgia Tech’s Ivan Allen College of Liberal Arts.

Their quantitative analysis showed that rents do increase after disasters, rising 4 to 6% after the first disaster. Rents remain elevated for the first three years and then stabilize over the next few years, but they do not revert to the initial rate for at least five years. Subsequent disasters increase rents by an additional 2 to 3 percentage points and have lasting impacts. On average, ZIP codes in high climate-risk areas, where multiple disasters often occur in succession, experience 12% higher rents.
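As a rough, hypothetical sketch of the kind of ZIP-by-quarter comparison involved (not the researchers' actual code or statistical model), the example below merges disaster declarations with rent observations and compares average rents before and after a declaration, assuming integer-coded quarters and illustrative file and column names.

```python
# Hypothetical sketch of an event-window rent comparison by ZIP code
# (illustrative file names, column names, and integer-coded quarters assumed).
import pandas as pd

rents = pd.read_csv("zip_quarter_rents.csv")        # columns: zip, quarter, rent
disasters = pd.read_csv("declared_disasters.csv")   # columns: zip, disaster_quarter

panel = rents.merge(disasters, on="zip", how="left")
panel["quarters_since"] = panel["quarter"] - panel["disaster_quarter"]

# Average rent in the 4 quarters before vs. the 12 quarters after a declaration;
# ZIP codes with no declared disaster drop out of both windows.
pre = panel[(panel["quarters_since"] >= -4) & (panel["quarters_since"] < 0)]
post = panel[(panel["quarters_since"] >= 0) & (panel["quarters_since"] < 12)]

change = post.groupby("zip")["rent"].mean() / pre.groupby("zip")["rent"].mean() - 1
print(change.describe())  # distribution of post-disaster rent changes by ZIP
```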

Renters need more representation, according to the qualitative analysis. Stakeholders wanted better renter protections even outside of disasters but also noted that disaster relief needed to be more equitable and come from federal resources.

Policy typically favors single-family homeowners over renters in multifamily apartments. But after Hurricane Katrina, HUD imposed new rental development requirements in the Community Development Block Grant – Disaster Recovery (CDBG-DR) program, the largest source of federal funding for disaster recovery. For their analysis, the researchers scanned thousands of pages of Federal Register notices to track what types of rental development requirements were attached to the largest federal funds and whether those stipulations improved renters’ situations. They found rents appreciated at a much slower rate in areas with CDBG-DR grants than in those without.

The An group’s analysis suggests that the new requirements embedded in CDBG-DR seem to be working, but a more thorough investigation is needed. For example, in its follow-up study, the team is investigating which requirements are more effective than others and whether similar policies could be more broadly applied to help renters. The authors also note that more fundamental policy changes for renters are imperative, such as universal renter protections, disaster assistance prioritizing low-income renters, and state and local government requirements to enforce tenant protections and rental housing in exchange for federal funding.

“We embarked on this research for evidence-based policymaking,” An said. “While CDBG-DR has played a major role in disaster recovery, no one has examined their efficacy on rental housing until now. A major part of the challenge was due to the complexities in tracking the fund allocation notices and understanding nuanced rental requirements. But rigorous efforts and evaluations can inform policy design, our research demonstrates.”

After the report was published by Brookings, HUD’s Office of Policy Development and Research invited the researchers to present their findings to its staff.

“The unique academic-practitioner collaboration with philanthropic support is what made our research policy relevant, with actionable insights,” An said. “We need more policy research and evidence to build a resilient future for all renter populations.”

Martin, C., Drew, R., Orlando, A., Moody, J., Rodnyansky, S., An, B., Jakabovics, A., Patton, N., and Donoghoe, M. (2023). “Disasters and the rental housing community: Setting a research and policy agenda.” Brookings Institution. https://www.brookings.edu/wp-content/uploads/2023/09/Disasters-and-the-Rental-Housing_final.pdf

News Contact

Tess Malone, Senior Research Writer/Editor

tess.malone@gatech.edu

Carter Center and Georgia Institute of Technology Commemorate New Joint Fellowship

Pictured left-to-right: Georgia Tech President Ángel Cabrera, Daniel Nkemelu, and Carter Center CEO Paige Alexander.

ATLANTA (Feb. 23, 2024) — The Carter Center and Georgia Institute of Technology today commemorated the new joint Governance and Technology Fellowship.

The Center’s Democracy Program and Georgia Tech’s Institute for People and Technology are supporting one fellowship during the spring 2024 academic semester for a doctoral candidate researching the intersection of technology and democratic governance.

“I am thrilled to visit Georgia Tech again and celebrate our strong partnership,” said Carter Center CEO Paige Alexander. “There is an important relationship between technology and democracy. Together, we are committed to promoting secure and transparent technologies that reinforce democratic principles.”

The fellow, Daniel Nkemelu, who is from Nigeria, is working closely with the Carter Center’s Democracy Program director, data scientist, and members of the digital threats to democracy initiative.

The fellowship builds on the institutions’ long collaboration, including with Michael Best, executive director of the Institute for People and Technology, who played an important role in establishing this fellowship.

“From social media platforms to computer-based voting machines, technologies today are profoundly impacting democracies across the globe,” said Georgia Tech President Ángel Cabrera. “This new fellowship and our ongoing partnership with The Carter Center express a shared commitment to strong democracies supported by secure technologies.”

The fellowship began in January. It aims to advance the fellow’s research agenda and provide access to experts in democratic elections and participatory democracy. The fellow will also connect the Carter Center’s Democracy Program with research at Georgia Tech’s Institute for People and Technology.

###

Contact: In Atlanta, Maria Cartaya, maria.cartaya@cartercenter.org

The Carter Center
Waging Peace. Fighting Disease. Building Hope.

A not-for-profit, nongovernmental organization, The Carter Center has helped to improve life for people in over 80 countries by resolving conflicts; advancing democracy, human rights, and economic opportunity; preventing diseases; and improving mental health care. The Carter Center was founded in 1982 by former U.S. President Jimmy Carter and former First Lady Rosalynn Carter, in partnership with Emory University, to advance peace and health worldwide.

Visit our website CarterCenter.org | Follow us on X @CarterCenter | Follow us on Instagram @thecartercenter | Like us on Facebook Facebook.com/CarterCenter | Watch us on YouTube YouTube.com/CarterCenter


About the Georgia Institute of Technology
The Georgia Institute of Technology, or Georgia Tech, is one of the top public research universities in the U.S., developing leaders who advance technology and improve the human condition. The Institute offers business, computing, design, engineering, liberal arts, and sciences degrees. Its more than 47,000 undergraduate and graduate students, representing 50 states and more than 148 countries, study at the main campus in Atlanta, at campuses in Europe and Asia, and through distance and online learning.

As a leading technological university, Georgia Tech is an engine of economic development for Georgia, the Southeast, and the nation, conducting more than $1.2 billion in research annually for government, industry, and society. 

 

 

Pictured left-to-right: Daniel Nkemelu, Paige Alexander, and Michael Best, executive director of IPaT

News Contact

Walter Rich