David Sherrill Named Executive Director of the Institute for Data Engineering and Science


Georgia Tech has appointed David Sherrill as executive director of the Institute for Data Engineering and Science (IDEaS), effective March 1. Sherrill is a Regents' Professor in the School of Chemistry and Biochemistry with a joint appointment in the School of Computational Science & Engineering. He has served as associate director of IDEaS since its founding in 2016 and as interim director since January 1, 2025.

“I’m thrilled to see Professor Sherrill tackle this role for the coming five years. He understands the rapidly evolving opportunities to apply AI and data science approaches to the diversity of research conducted by Georgia Tech faculty and students, and he has a strong agenda to help our researchers make the most of this explosive change in the research landscape,” said Julia Kubanek, vice president of Interdisciplinary Research. “He also has deep experience with team building and management, which will position IDEaS favorably.”

As executive director, Sherrill will guide IDEaS’ current initiatives, including the Microsoft CloudHub program, which supports innovative applications of generative artificial intelligence. He will also oversee and support the joint College of Computing/IDEaS Center for Artificial Intelligence in Science and Engineering (ARTISAN), which provides Georgia Tech faculty and research engineers with expert support staff, cyberinfrastructure, software resources, and advice for projects that use large data sets or apply AI and machine learning to drive discovery.

Sherrill will also lead the launch of a new strategic vision, emphasizing the Georgia Tech research community’s expertise in developing AI and ML techniques and applying them to problems in science and engineering, high-performance computing, and academic software. He will focus on internal and external partnerships at IDEaS, creating new collaborative efforts in areas such as economics, policy, and the arts and humanities. He will also work to strengthen current connections across Georgia Tech’s Colleges, Interdisciplinary Research Institutes (IRIs), and the Georgia Tech Research Institute (GTRI).

“It’s a great honor to be named the next executive director of IDEaS,” said Sherrill. “Georgia Tech has world-class faculty and students, and an unparalleled spirit of collaboration. By bringing together faculty from across campus and working together with some of the amazing student groups, we can leverage the power of AI to accelerate our research and maximize our impact. IDEaS will continue to run upskilling workshops to help our campus keep pace with the rapid changes in AI.”

Sherrill is an active promoter of education in computational quantum chemistry, as well as a strong voice for the benefits of open-source software for research acceleration. He was named Outreach Volunteer of the Year by the Georgia Section of the American Chemical Society in 2017, and he is the lead principal investigator of the Psi4 open-source quantum chemistry program.

Sherrill earned a B.S. in chemistry from MIT in 1992 and a Ph.D. in chemistry from the University of Georgia in 1996. From 1996 to 1999, Sherrill was an NSF Postdoctoral Fellow at the University of California, Berkeley.

Sherrill is a Fellow of the American Association for the Advancement of Science (AAAS), the American Chemical Society, and the American Physical Society, and he has been an associate editor of the Journal of Chemical Physics since 2009. Sherrill has received a Camille and Henry Dreyfus New Faculty Award, the International Journal of Quantum Chemistry Young Investigator Award, an NSF CAREER Award, and Georgia Tech's W. Howard Ector Outstanding Teacher Award. In 2023, he received the Herty Medal from the Georgia Section of the American Chemical Society, and in 2024, he was elected to the International Academy of Quantum Molecular Science.

- Christa M. Ernst

 
News Contact
Christa M. Ernst - Research Communications Program Manager

IDEaS Over Coffee

Discussion Topic TBA

IDEaS-affiliated students and staff are also welcome.

If you do not have access to Coda, a staff member or student will be waiting by the reception desk near the elevators at the beginning of the event to escort people up.

For questions, please contact Ashley.edwards@gatech.edu

IDEaS Over Coffee: Tips for AI Tools

AI is charging forward with unprecedented speed and impact.
Please join us on Monday, Feb. 23, at 2 p.m. for an open discussion about the direction and development of AI.

What AI tools have you tried for writing, finding literature citations, making presentations, coding, or automating research tasks? Come share your experiences and learn from others over coffee with the IDEaS community.

All-Powerful AI Isn’t an Existential Threat, According to New Georgia Tech Research


Milton Mueller speaking at the AI Governance and Global Economic Development, an official pre-summit event of the AI Impact Summit 2026.

Ever since ChatGPT’s debut in late 2022, concerns about artificial intelligence (AI) potentially wiping out humanity have dominated headlines. New research from Georgia Tech suggests that those anxieties are misplaced.

“Computer scientists often aren’t good judges of the social and political implications of technology,” said Milton Mueller, a professor in the Jimmy and Rosalynn Carter School of Public Policy. “They are so focused on the AI’s mechanisms and are overwhelmed by its success, but they are not very good at placing it into a social and historical context.”

In the four decades Mueller has studied information technology policy, he has never seen any technology hailed as a harbinger of doom — until now. So, in a Journal of Cyber Policy paper published late last year, he researched whether the existential AI threat was a real possibility. 

What Mueller found is that deciding how far AI can go, and its limitations, is something society shapes. How policymakers get involved depends on the specific AI application. 

Defining Intelligence

The AI sparking all this alarm is called artificial general intelligence (AGI) — a “superintelligence” that would be all-powerful and fully autonomous. Part of the debate, Mueller realized, is that no one could agree on the definition of what artificial general intelligence is. 

Some computer scientists claim AGI would match human intelligence, while others argue it could surpass it. Both assumptions hinge on what “human intelligence” really means. Today’s AI is already better than humans at performing thousands of calculations in an instant, but that doesn’t make it creative or capable of complex problem-solving. 

Understanding Independence 

Deciding on the definition isn’t the only issue. Many computer scientists assume that as computing power grows, AI could eventually overtake humans and act autonomously.

Mueller argued that this assumption is misguided. AI is always directed or trained toward a goal and doesn’t act autonomously right now. Think of the prompt you type into ChatGPT to start a conversation. 

When AI seems to disregard instructions, it’s caused by inconsistencies in its instructions, not by the machine coming alive. For example, in a boat race video game Mueller studied, the AI discovered it could get more points by circling the course instead of winning the race against other challengers. This was a glitch in the system’s reward structure, not AGI autonomy.

“Alignment gaps happen in all kinds of contexts, not just AI,” Mueller said. “I've studied so many regulatory systems where we try to regulate an industry, and some clever people discover ways that they can fulfill the rules but also do bad things. But if the machine is doing something wrong, computer scientists can reprogram it to fix the problem.”

Relying on Regulation

In its current form, even misaligned AI can be corrected. Misalignment also doesn’t mean the AI would snowball past the point where humans lose control of its outcomes. To do that, AI would need to have a physical capability, like robots, to do its bidding, and the power source and infrastructure to maintain itself. A mere data center couldn’t do that and would need human intervention to become omnipotent. Basic laws of physics — how big a machine can be, how much it can compute — would also prevent a super AI. 

More importantly, AI is not one homogenous being. Mueller argued that different applications involve different laws, regulations, and social institutions. For example, the data scraping that AI systems perform raises copyright issues governed by copyright law. AI used in medicine can be overseen by the Food and Drug Administration, regulated drug companies, and medical professionals. These are just a few areas where policymakers could intervene with specific expertise instead of trying to create universal AI regulations.

The real challenge isn’t stopping an AI apocalypse — it’s crafting smart, sector-specific policies that keep technology aligned with human values. To avoid being a victim of AI, humans can, and should, put up focused guardrails. 

 
News Contact

Tess Malone, Senior Research Writer/Editor

tess.malone@gatech.edu

IDEaS over Coffee: The Nexus Supercomputer

Please join us on Monday, Oct. 27, at 2 p.m. for coffee, snacks, and an informal briefing on the new Nexus supercomputer, the first NSF Category I supercomputer to be hosted on the Georgia Tech campus. The discussion will be led by the joint CoC/IDEaS Center for AI in Science and Engineering (ARTISAN), which led the successful NSF proposal.

AI for Science and Engineering Collaboration Workshop


Artificial Intelligence (AI) and Machine Learning (ML) are transforming science and engineering — from groundbreaking achievements like protein structure prediction (AlphaFold) to the broad adoption of large language models. Building on this momentum, the Institute for Data Engineering and Science (IDEaS) will host a one-day workshop on Monday, October 13, to explore how AI/ML can drive the next wave of advances in science and engineering at Georgia Tech.