Illness Is More Than Just Biological – Medical Sociology Shows How Social Factors Get Under the Skin and Cause Disease

Lack of access to safe and affordable housing is harmful to health. Robert Gauthier/Los Angeles Times via Getty Images

Health and medicine are about more than just biology – societal forces can get under your skin and cause illness. Medical sociologists like me study these forces by treating society itself as our laboratory. Health and illness are our experiments in uncovering meaning, power and inequality, and how they affect all parts of a person’s life.

For example, why do low-income communities continue to have higher death rates, despite improved social and environmental conditions across society? Foundational research in medical sociology reveals that access to resources like money, knowledge, power and social networks strongly affects a person’s health. Medical sociologists have shown that social class is linked to numerous diseases and to mortality, as well as to risk factors that influence health and longevity. These include smoking, overweight and obesity, stress, social isolation, access to health care and living in disadvantaged neighborhoods.

Yet social class alone cannot explain such health inequalities. My own research examines how inequalities related to social class, race and gender affect access to autism services, particularly among single Black mothers who rely on public insurance. This work helps explain delays in autism diagnosis among Black children, who often wait three years after initial parent concerns before they are formally diagnosed. White children with private insurance typically wait 9 to 22 months, depending on their age at diagnosis. This is just one of numerous examples of inequalities that are entrenched in and deepened by medical and educational systems.

Medical sociologists like me investigate how all of these factors interact to affect a person’s health. This social model of illness sees sickness as shaped by social, cultural, political and economic factors. We examine both individual experiences and societal influences to help address the health issues affecting vulnerable populations through large-scale reforms.

By studying the way social forces shape health inequalities, medical sociology helps address how health and illness extend beyond the body and into every aspect of people’s lives.

Protesters standing in front of a federal building, holding signs in the shape of graves reading '16 MILLION LIVES' and 'R.I.P. DEATH BY A THOUSAND CUTS,' wearing shirts that read 'MEDICAID SAVES LIVES'

Access to health insurance is a political issue that directly affects patients. Here, care workers gathered in June 2025 to protest Medicaid cuts. Tasos Katopodis/Getty Images for SEIU

Origins of Medical Sociology in the US

Medical sociology formally began in the U.S. after World War II, when the National Institutes of Health started investing in joint medical and sociological research projects. Hospitals began hiring sociologists to address questions like how to improve patient compliance, doctor-patient interactions and medical treatments.

However, the focus of this early work was on issues specific to medicine, such as quality improvement or barriers to medication adherence. The goal was to study problems that could be directly applied in medical settings rather than challenging medical authority or existing inequalities. During that period, sociologists viewed illness mostly as a deviation from normal functioning leading to impairments that require treatment.

For example, the concept of the sick role – developed by medical sociologist Talcott Parsons in the 1950s – saw illness as a form of deviance from social roles and expectations. Under this idea, patients were solely responsible for seeking out medical care in order to return to normal functioning in society.

In the 1960s, sociologists began critiquing medical diagnoses and institutions. Researchers criticized the idea of the sick role because it assumed illnesses were temporary and did not account for chronic conditions or disability, which can last for long periods of time and do not necessarily allow people to deviate from their life obligations. The sick role assumed that all people have access to medical care, and it did not take into account how social characteristics like race, class, gender and age can influence a person’s experience of illness.

Patient wearing surgical mask sitting in chair of exam room, talking to a doctor

Early models of illness in medical sociology discounted the experience of the patient. Paul Bersebach/MediaNews Group/Orange County Register via Getty Images

Parsons’ sick role concept also emphasized the expertise of the physician rather than the patient’s experience of illness. In contrast, sociologist Erving Goffman showed that the way care was structured in asylums shaped how patients were treated. He also examined how the experience of stigma is an interactive process that develops in response to social norms. This work influenced how researchers understood chronic illness and disability and laid the groundwork for later debates on what counts as pathological or normal.

In the 1970s, some researchers began to examine medicine as an institution of social control. They critiqued how medicine’s jurisdiction expanded over many societal problems – such as old age and death – which came to be defined and treated as medical problems. Researchers were critical of the tendency to medicalize and apply labels like “healthy” and “ill” to increasing parts of human existence. This shift emphasized how a medical diagnosis can carry political weight and how medical authority can affect social inclusion or exclusion.

This critical perspective aligns with critiques from disability studies. Unlike medical sociology, which emerged out of the medical model of disease, disability studies emerged from disability rights activism and scholarship. Rather than viewing disability as pathological, this field sees disability as a variation of the human condition rooted in social barriers and exclusionary environments. Instead of seeking cures, researchers focus on increasing accessibility, human rights and autonomy for disabled people.

A contemporary figure in this field was Alice Wong, a disability rights activist and medical sociologist who died in November 2025. Her work amplified disabled voices and helped shape how the public understood disability justice and access to technology.

Structural Forces Shape Health and Illness

By focusing on social and structural influences on health, medical sociology has contributed significantly to programs addressing issues like segregation, discrimination, poverty, unemployment and underfunded schools.

For example, sociological research on racial health disparities informs neighborhood interventions that can improve overall quality of life, such as increasing the availability of affordable, nutritious foods in underserved neighborhoods or prioritizing equal access to education. At the societal level, large-scale social policies such as guaranteed minimum incomes or universal health care can dramatically reduce health inequalities.

People carrying boxes of food under a tent

Access to nutritious food is critical to health. K.C. Alfred / The San Diego Union-Tribune via Getty Images

Medical sociology has also expanded the understanding of how health care policies affect health, helping ensure that policy changes take into account the broader social context. For example, a key area of medical sociological research is the rising cost of and limited access to health care. This body of work focuses on the complex social and organizational factors of delivering health services. It highlights the need for more state and federal regulatory control as well as investment in groups and communities that need care the most.

Modern medical sociology ultimately considers all societal issues to be health issues. Improving people’s health and well-being requires improving education, employment, housing, transportation and other social, economic and political policies.

 

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 
News Contact
Author:

Jennifer Singh, Associate Professor of Sociology, Georgia Institute of Technology

Media Contact:

Shelley Wunder-Smith
shelley.wunder-smith@research.gatech.edu

Georgia Tech Climbs to No. 2 University in Federally Sponsored Research Expenditures

Two Georgia Tech researchers looking at a biomedical chip.

University research drives U.S. innovation, and Georgia Institute of Technology is leading the way.  

The latest Higher Education Research and Development (HERD) Survey from the National Science Foundation (NSF) places Georgia Tech as No. 2 nationally for federally sponsored research expenditures in 2024. This is Georgia Tech’s highest-ever ranking from the NSF HERD survey and a 70% increase over the Institute's 2019 numbers.  

In total expenditures from all externally funded dollars (including the federal government, foundations, industry, etc.), Georgia Tech is ranked at No. 6.  

Tech remains ranked No. 1 among universities without a medical school — a major accomplishment, as medical schools account for a quarter of all research expenditures nationally. 

“Georgia Tech’s rise to No. 2 in federally sponsored research expenditures reflects the extraordinary talent and commitment of our faculty, staff, students, and partners. This achievement demonstrates the confidence federal agencies have in our ability to deliver transformative research that addresses the nation’s most critical challenges,” said Tim Lieuwen, executive vice president for Research.   

Overall, the state of Georgia maintained its No. 8 position in university research and development, and for the first time, the state topped the $4 billion mark in research expenditures. Georgia Tech accounts for $1.5 billion of that total, the largest contribution among the state’s universities. In the last five years, federal funding for higher education research in the state of Georgia has grown an astounding 46% — 10 points higher than the U.S. rate. 

Lieuwen said, “Georgia Tech is proud to lead the state in research contributions, helping Georgia surpass the $4 billion mark for the first time. Our work doesn’t just advance knowledge — it saves lives, creates jobs, and strengthens national security. This growth reflects our commitment to drive innovation that benefits Georgia, our country, and the world.” 

About the NSF HERD Survey 

The NSF HERD Survey is an annual census of U.S. colleges and universities that expended at least $150,000 in separately accounted for research and development (R&D) in the fiscal year. The survey collects information on R&D expenditures by field of research and source of funds and also gathers information on types of research, expenses, and headcounts of R&D personnel. 

About Georgia Tech's Research Enterprise 

The research enterprise at Georgia Tech is led by the Executive Vice President for Research, Tim Lieuwen, and directs a portfolio of research, development, and sponsored activities. This includes leadership of the Georgia Tech Research Institute (GTRI), the Enterprise Innovation Institute, 11 interdisciplinary research institutes (IRIs), the Office of Commercialization, and the Office of Corporate Engagement, as well as research centers and related research administrative support units. Georgia Tech routinely ranks among the top U.S. universities in volume of research conducted.

 
News Contact

Angela Ayers
Assistant Vice President of Research Communications

AI Shouldn’t Try to Be Your Friend, According to New Georgia Tech Research

Sidney Scott-Sharoni at Ph.D. commencement, December 2025

Would you follow a chatbot’s advice more if it sounded friendly? 

That question matters as artificial intelligence (AI) spreads into everything from customer service to self-driving cars. These autonomous agents often have human names — Alexa or Claude, for example — and speak conversationally, but too much familiarity can backfire. Earlier this year, OpenAI rolled back a “sycophantic” ChatGPT update that could cause problems for users with mental health issues. 

New research from Georgia Tech suggests that users may like more personable AI, but they are more likely to obey AI that sounds robotic. While following orders from Siri may not be critical, many AI systems, such as robotic guide dogs, require human compliance for safety reasons. 

These surprising findings are from research by Sidney Scott-Sharoni, who recently received her Ph.D. from the School of Psychology. Despite years of previous research suggesting people would be socially influenced by AI they liked, Scott-Sharoni’s research showed the opposite. 

“Even though people rated humanistic agents better, that didn't line up with their behavior,” she said. 

Likability vs. Reliability 

Scott-Sharoni ran four experiments. In the first, participants answered trivia questions, saw the AI’s response, and decided whether to change their answer. She expected people to listen to agents they liked.

“What I found was that the more humanlike people rated the agent, the less they would change their answer, so, effectively, the less they would conform to what the agent said,” she noted.
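One way to quantify that kind of conformity is the fraction of trials on which a participant switched answers after seeing the agent’s response, computed separately for each agent style. The Python sketch below uses hypothetical trial records and style labels; the study’s actual coding scheme is not described in this article.

```python
# Minimal sketch: a "conformity rate" per agent style from trivia-task trials.
# The trial tuples and style labels are hypothetical, for illustration only.
from collections import defaultdict

# Each trial: (agent_style, participant_switched_answer_after_seeing_AI)
trials = [
    ("humanlike", False), ("humanlike", True), ("humanlike", False),
    ("robotic", True), ("robotic", True), ("robotic", False),
]

switched, total = defaultdict(int), defaultdict(int)
for style, did_switch in trials:
    total[style] += 1
    switched[style] += did_switch          # True counts as 1

for style in total:
    print(f"{style}: conformity rate = {switched[style] / total[style]:.2f}")
```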

Surprised, Scott-Sharoni next studied moral judgments with an AI voice agent. For example, participants decided how to handle being undercharged on a restaurant bill. 

Once again, participants liked the humanlike agent better but listened to the robotic agent more. The unexpected pattern led Scott-Sharoni to explore why people behave this way.

Bias Breakthrough

Why the gap? Scott-Sharoni’s findings point to automation bias — the tendency to see machines as more objective than humans.

Scott-Sharoni tested this idea with a third experiment built on the prisoner’s dilemma, a game in which players repeatedly choose whether to cooperate with or retaliate against a partner. In her task, participants played the game against an AI agent. 

“I hypothesized that people would retaliate against the humanlike agent if it didn’t cooperate,” she said. “That’s what I found: Participants interacting with the humanlike agent became less likely to cooperate over time, while those with the robotic agent stayed steady.”
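For readers unfamiliar with the game, here is a minimal sketch of an iterated prisoner’s dilemma with a player who retaliates after a defection. The payoff values are the textbook ones, and the agent’s defection schedule is an assumption for illustration; neither comes from the study itself.

```python
# Iterated prisoner's dilemma sketch. PAYOFF[(me, other)] -> (my, other) points;
# values are the standard textbook matrix, not the study's parameters.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(rounds, agent_defects_every=4):
    """A participant who mirrors the agent's last move (retaliation)."""
    participant, score, coop = "C", 0, 0
    for r in range(rounds):
        agent = "D" if (r + 1) % agent_defects_every == 0 else "C"
        score += PAYOFF[(participant, agent)][0]
        coop += participant == "C"
        participant = agent      # next round, retaliate if the agent defected
    return coop / rounds, score

print(play(20))   # -> (cooperation rate, participant's total payoff)
```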

The final study, a self-driving car simulation, was the most realistic and the most troubling from a safety standpoint. Participants didn’t consistently obey either agent type, but across all experiments, humanlike AI proved less effective at influencing behavior.

Designing the Right AI

The implications are pivotal for AI engineers. As AI grows, designers may cater to user preferences — but what people want isn’t always best.

“Many people develop a trusting relationship with an AI agent,” said Bruce Walker, a professor of psychology and interactive computing and Scott-Sharoni’s Ph.D. advisor. “So, it’s important that developers understand what role AI plays in the social fabric and design technical systems that ultimately make humans better. Sidney's work makes a critical contribution to that ultimate goal.” 

When safety and compliance are the point, robotic beats relatable.

 
News Contact

Tess Malone, Senior Research Writer/Editor

tess.malone@gatech.edu

Energy Insecurity Linked to Higher Rates of Anxiety, Depression, Georgia Tech Study Finds

A woman wearing a hat and warm clothing prepares food in her kitchen.

Energy insecurity is a significant financial problem, and potentially a major mental health issue, for millions of Americans.

A new study from the Jimmy and Rosalynn Carter School of Public Policy identifies energy insecurity — the inability to meet basic household energy needs — as a critical, yet often overlooked, social determinant of health.

“While we often talk about food and housing insecurity, fewer people recognize energy as a basic necessity that shapes not only comfort, but also safety and stress,” said Assistant Professor Michelle Graff, who co-authored the paper published in JAMA Network Open.

Analyzing data from the U.S. Census Bureau’s Household Pulse Survey, the researchers found that 43% of households experienced energy insecurity in the past year. Among respondents who reduced spending on necessities to cover energy bills, nearly 39% reported symptoms of anxiety and 32% reported symptoms of depression — more than twice the incidence among respondents who didn’t need to make that tradeoff.

“Being able to afford your home does not guarantee you can afford to safely heat, cool, or power it,” Graff said.

Such instability disproportionately affects Black and Hispanic households, renters, and families dependent on electronic medical devices, Graff said.

And while the study was not designed to determine whether energy insecurity causes mental health issues or whether some other dynamic is at work, Graff said it’s incontrovertible that these groups face compounding stressors. Living in inefficient housing can lead to higher bills and unsafe temperatures, disrupting sleep and health. When combined with the financial anxiety of potential utility shutoffs and the need to sacrifice food or medicine to pay bills, these trade-offs create a cycle of chronic stress, she said.

Among other recommendations, Graff said healthcare providers should start screening for energy insecurity just as they do for food insecurity.

“We view this primarily as a data-collection initiative designed to generate the evidence needed to inform future policy recommendations and program improvements,” Graff said.

Graff is continuing to explore these issues with Carter School graduate students, including recent work on state-level aid implementation with Ph.D. student Ryan Anthony and upcoming research with other students on how energy insecurity impacts eviction rates.

The article, “Energy Insecurity and Mental Health Symptoms in US Adults,” was published Oct. 27, 2025, in JAMA Network Open. It is available at https://doi.org/10.1001/jamanetworkopen.2025.39479.

Assistant Professor Michelle Graff.

 
News Contact

Michael Pearson
Ivan Allen College of Liberal Arts

Pascal Van Hentenryck Delivers Keynote on AI for Engineering Optimization at AI Festival in Austrian Capital

Pascal Van Hentenryck, A. Russell Chandler III Chair and Professor in the H. Milton Stewart School of Industrial and Systems Engineering (ISyE) at Georgia Tech, director of Tech AI, and director of NSF AI4OPT, was a keynote speaker at AI Festival 2025, held December 1–3 at TU Wien Informatics in Vienna, Austria.

The three-day international festival convened leading researchers, industry experts, and members of the public to explore how artificial intelligence is shaping science, technology, and society. Through keynote talks, panels, and interactive sessions, the event fostered dialogue around emerging AI research, real-world applications, and societal impact.

Van Hentenryck delivered a keynote on “AI for Engineering Optimization” during Day 1: Research, which focused on recent advances in foundational and applied AI. His talk highlighted how AI and optimization methods can be integrated to address complex engineering challenges, with implications for domains such as energy systems, mobility, and large-scale decision-making. 

The session was chaired by Nysret Musliu of TU Wien and the Cluster of Excellence Bilateral AI (BilAI).

The research-focused first day of the festival featured discussions on topics including neurosymbolic AI, large language models, explainable AI, AI in science, and automated problem solving and decision-making. Van Hentenryck’s keynote contributed to these conversations by emphasizing the role of AI-driven optimization in advancing engineering design and operational efficiency.

AI Festival 2025 was co-organized by TU Wien, the Center for Artificial Intelligence and Machine Learning (CAIML), BilAI—funded by the Austrian Science Fund (FWF)—the Vienna Science and Technology Fund (WWTF), and TU Austria. The event underscored the importance of international collaboration across academia and industry in advancing responsible and impactful AI research.

Van Hentenryck’s participation reflects Georgia Tech’s leadership in artificial intelligence, as well as the missions of Tech AI and AI4OPT to advance AI-enabled optimization and decision-making for complex, real-world systems.

 

The Age of Autonomous Supply Chains Is Here

Andre Calmon, associate professor of operations management

Andre Calmon, associate professor of operations management

Supply chain management is poised to enter a new era. The Harvard Business Review has published a groundbreaking article co-authored by Andre Calmon, associate professor of operations management, alongside Flavio Calmon, Harvard University; Carol Long, Harvard University; and David Simchi-Levi, Massachusetts Institute of Technology. “The Age of Autonomous Supply Chains Has Arrived” explores how generative AI is transforming supply chain management from automated systems to truly autonomous operations.
 

Based on data collected at the Scheller College of Business, Calmon’s research demonstrates how AI models like Llama 4 Maverick 17B—equipped with optimized prompts, data-sharing rules, and guardrails—can outperform human teams in managing complex supply chains. Using the classic MIT Beer Distribution Game as a testbed, the authors benchmarked AI agents against more than 100 Georgia Tech students. The results were striking: AI-driven systems reduced total supply chain costs by up to 67% compared to human performance.
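The Beer Distribution Game itself is straightforward to sketch in code. Below is a deliberately simplified Python simulation of its four-role chain: each role fills downstream orders, pays holding and backlog costs, and reorders with a plain base-stock rule standing in for an AI agent’s decision. The cost values, delays, and demand pattern are classroom-style assumptions, not the study’s configuration.

```python
HOLD, BACK = 0.5, 1.0        # $/case/week holding and backlog costs (classic values)
WEEKS = 36                   # simulation horizon

class Role:
    def __init__(self):
        self.inv, self.backlog = 12, 0
        self.pipeline = [4, 4]          # shipments in transit (2-week delay)
        self.target = 20                # base-stock level; stands in for an
                                        # AI agent's ordering decision
    def step(self, demand):
        self.inv += self.pipeline.pop(0)            # receive arriving shipment
        owed = demand + self.backlog
        shipped = min(self.inv, owed)               # fill as much as possible
        self.inv -= shipped
        self.backlog = owed - shipped
        position = self.inv - self.backlog + sum(self.pipeline)
        return shipped, max(0, self.target - position)   # order up to target

roles = [Role() for _ in range(4)]      # retailer, wholesaler, distributor, factory
cost = 0.0
for week in range(WEEKS):
    demand = 4 if week < 4 else 8       # demand step that triggers the bullwhip
    for i, role in enumerate(roles):
        shipped, order = role.step(demand)
        if i > 0:
            roles[i - 1].pipeline.append(shipped)   # ships downstream, arrives later
        demand = order                  # upstream role faces this order next
    roles[-1].pipeline.append(demand)   # factory schedules its own production
    cost += sum(HOLD * r.inv + BACK * r.backlog for r in roles)

print(f"total chain cost over {WEEKS} weeks: ${cost:.2f}")
```

Swapping the base-stock rule for an order proposed by a language model, plus the guardrails described below, is the kind of substitution the article’s autonomous agents perform.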
 

Traditional automated systems rely on rigid, human-designed rules. Calmon and his co-authors employed autonomous agents that learn, adapt, and coordinate across functions in real time. The study highlights four critical factors for success: selecting capable reasoning models, implementing guardrails to prevent costly errors, curating data through orchestration, and refining prompts for optimal performance.
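The guardrails factor, in particular, lends itself to a short sketch: validate whatever the agent proposes before it touches the (simulated) supply chain, and fall back to a simple heuristic when the output is unparseable or wildly out of range. The thresholds and fallback rule below are illustrative assumptions, not the authors’ settings.

```python
# Sketch of a "guardrail": sanity-check an agent's proposed order before use.
def guarded_order(agent_proposal, recent_demand: list[float]) -> int:
    """Clamp an AI agent's order to a plausible range around recent demand."""
    fallback = round(sum(recent_demand) / len(recent_demand))   # naive forecast
    try:
        order = int(agent_proposal)
    except (TypeError, ValueError):
        return fallback                   # unparseable output -> safe fallback
    if order < 0 or order > 4 * max(recent_demand):
        return fallback                   # reject wildly out-of-range orders
    return order

print(guarded_order(9, [4, 4, 8, 8]))     # plausible -> passed through: 9
print(guarded_order(500, [4, 4, 8, 8]))   # outlier   -> fallback: 6
```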
 

“This breakthrough positions the Scheller College of Business as a thought leader at the intersection of AI and supply chain innovation,” said Calmon. “World-class supply chain management is becoming a plug-and-play capability. Businesses that understand how to guide generative AI agents with the right data and policies will gain a decisive competitive edge.”
 

The implications extend beyond cost savings. By delegating operational decisions to autonomous systems, human managers can focus on strategic priorities such as network design and supplier relationships. In an era of global volatility, this research emphasizes how future supply chain success depends on the strategic use of AI-driven technology.
 

Read More: Harvard Business Review 

 
News Contact

Kristin Lowe (She/Her)
Content Strategist
Georgia Institute of Technology | Scheller College of Business
kristin.lowe@scheller.gatech.edu

Gazing Into the Mind’s Eye With Mice – How Neuroscientists Are Seeing Human Vision More Clearly

Mice have complex visual systems that can clarify how vision works in people. Westend61/Getty Images

Despite the nursery rhyme about three blind mice, mouse eyesight is surprisingly sensitive. Studying how mice see has helped researchers discover unprecedented details about how individual brain cells communicate and work together to create a mental picture of the visual world.

I am a neuroscientist who studies how brain cells drive visual perception and how these processes can fail in conditions such as autism. My lab “listens” to the electrical activity of neurons in the outermost part of the brain called the cerebral cortex, a large portion of which processes visual information. Injuries to the visual cortex can lead to blindness and other visual deficits, even when the eyes themselves are unhurt.

Understanding the activity of individual neurons – and how they work together while the brain is actively using and processing information – is a long-standing goal of neuroscience. Researchers have moved much closer to achieving this goal thanks to new technologies aimed at the mouse visual system. And these findings will help scientists better see how the visual systems of people work.

The Mind in the Blink of an Eye

Researchers long thought that vision in mice was sluggish and blurry. But it turns out visual cortex neurons in mice – just like those in humans, monkeys, cats and ferrets – require specific visual features to trigger activity and are particularly selective in alert and awake conditions.

My colleagues and I and others have found that mice are especially sensitive to visual stimuli directly in front of them. This is surprising, because mouse eyes face outward rather than forward. Forward-facing eyes, like those of cats and primates, naturally have a larger area of focus straight ahead compared to outward-facing eyes.

Microscopy image of stacks of neurons

This image shows neurons in the mouse retina: cone photoreceptors (red), bipolar neurons (magenta), and a subtype of bipolar neuron (green). Brian Liu and Melanie Samuel/Baylor College of Medicine/NIH via Flickr

This finding suggests that the specialization of the visual system to highlight the frontal visual field appears to be shared between mice and humans. For mice, a visual focus on what’s straight ahead may help them be more responsive to shadows or edges in front of them, helping them avoid looming predators or better hunt and capture insects for food.

Importantly, the center of view is most affected in aging and many visual diseases in people. Since mice also rely heavily on this part of the visual field, they may be particularly useful models to study and treat visual impairment.

A Thousand Voices Drive Complicated Choices

Advances in technology have greatly accelerated scientific understanding of vision and the brain. Researchers can now routinely record the activity of thousands of neurons at the same time and pair this data with real-time video of a mouse’s face, pupil and body movements. This method can show how behavior interacts with brain activity.

It’s like spending years listening to a grainy recording of a symphony with one featured soloist, but now you have a pristine recording where you can hear every single musician with a note-by-note readout of every single finger movement.
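As a rough illustration of what pairing neural recordings with behavioral video involves, the Python sketch below resamples a toy pupil-size trace from video frame times onto the spike-bin clock, so that each bin of neural activity has a matching behavioral measurement. The frame rate, bin size, and toy signals are assumptions, not any lab’s actual pipeline.

```python
import numpy as np

spike_bins = np.arange(0, 10, 0.01)            # 10 s of 10-ms bins
spikes = np.random.default_rng(1).poisson(0.8, spike_bins.size)

video_t = np.arange(0, 10, 1 / 30)             # 30-fps behavior video timestamps
pupil = 2 + 0.5 * np.sin(video_t)              # toy pupil-diameter trace (mm)

# Resample pupil size onto the spike-bin clock so each 10-ms bin of neural
# activity has a matching behavioral measurement.
pupil_on_bins = np.interp(spike_bins, video_t, pupil)
corr = np.corrcoef(spikes, pupil_on_bins)[0, 1]
print(f"spike-count vs. pupil correlation: {corr:+.3f}")
```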

Using these improved methods, researchers like me are studying how specific types of neurons work together during complex visual behaviors. This involves analyzing how factors such as movement, alertness and the environment influence visual activity in the brain.

For example, my lab and I found that the speed of visual signaling is highly sensitive to what actions are possible in the physical environment. If a mouse rests on a disc that permits running, visual signals travel to the cortex faster than if the mouse views the same images while resting in a stationary tube – even when the mouse is totally still in both conditions.
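A common way to estimate such a visual response latency is to find the first post-stimulus time bin where the firing rate climbs above the pre-stimulus baseline. The sketch below does this for spike times pooled across trials; the bin width, threshold, and toy data are assumptions for illustration.

```python
import numpy as np

def response_latency(spike_times_ms, window=(-100, 200), bin_ms=5, n_sd=3):
    """First post-stimulus bin whose count exceeds baseline mean + n_sd * SD."""
    edges = np.arange(window[0], window[1] + bin_ms, bin_ms)
    counts, _ = np.histogram(spike_times_ms, bins=edges)
    baseline = counts[edges[:-1] < 0]                 # pre-stimulus bins
    threshold = baseline.mean() + n_sd * baseline.std()
    post = edges[:-1] >= 0
    above = np.where(post & (counts > threshold))[0]
    return edges[above[0]] if above.size else None    # latency in ms, or None

# Toy data: spikes pooled over trials, stimulus onset at t = 0.
rng = np.random.default_rng(0)
spont = rng.uniform(-100, 200, 60)                    # background spikes
evoked = rng.normal(55, 5, 40)                        # burst ~55 ms after onset
print(response_latency(np.concatenate([spont, evoked])), "ms")
```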

In order to connect electrical activity to visual perception, researchers also have to ask a mouse what it thinks it sees. How have we done this?

The last decade has seen researchers debunking long-standing myths about mouse learning and behavior. Like other rodents, mice are surprisingly clever and can learn how to “tell” researchers about the visual events they perceive through their behavior.

For example, mice can learn to release a lever to indicate they have detected that a pattern has brightened or tilted. They can rotate a Lego wheel left or right to move a visual stimulus to the center of a screen like a video game, and they can stop running on a wheel and lick a water spout when they detect the visual scene has suddenly changed.

Mouse drinking from a metal water spout

Mice can be trained to drink water as a way to ‘tell’ researchers they see something. felixmizioznikov/iStock via Getty Images Plus

Mice can also use visual cues to focus their visual processing to specific parts of the visual field. As a result, they can more quickly and accurately respond to visual stimuli that appear in those regions. For example, my team and I found that a faint visual image in the peripheral visual field is difficult for mice to detect. But once they do notice it – and tell us by licking a water spout – their subsequent responses are faster and more accurate.

These improvements come at a cost: If the image unexpectedly appears in a different location, the mice are slower and less likely to respond to it. These findings resemble those found in studies on spatial attention in people.
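In code, that comparison looks like a Posner-style cueing analysis: mean reaction time and hit rate at expected versus unexpected stimulus locations. The trial records and numbers below are hypothetical, chosen only to mirror the qualitative pattern described above.

```python
from statistics import mean

# Each trial: (stimulus_appeared_at_expected_location, reaction_time_ms, detected)
trials = [
    (True, 310, True), (True, 295, True), (True, 330, True), (True, 300, True),
    (False, 420, True), (False, 445, False), (False, 410, True), (False, 460, False),
]

for expected in (True, False):
    rts = [rt for e, rt, hit in trials if e == expected and hit]
    hits = [hit for e, _, hit in trials if e == expected]
    label = "expected location" if expected else "unexpected location"
    print(f"{label}: mean RT {mean(rts):.0f} ms, hit rate {mean(hits):.0%}")
```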

My lab has also found that particular types of inhibitory neurons – brain cells that prevent activity from spreading – strongly control the strength of visual signals. When we activated certain inhibitory neurons in the visual cortex of mice, we could effectively “erase” their perception of an image.

These kinds of experiments are also revealing that the boundaries between perception and action in the brain are much less separate than once thought. This means that visual neurons will respond differently to the same image in ways that depend on behavioral circumstances – for example, visual responses differ if the image will be successfully detected, if it appears while the mouse is moving, or if it appears when the mouse is thirsty or hydrated.

Understanding how different factors shape how cortical neurons rapidly respond to visual images will require advances in computational tools that can separate the contribution of these behavioral signals from the visual ones. Researchers also need technologies that can isolate how specific types of brain cells carry and communicate these signals.

Data Clouds Encircling the Globe

This surge of research on the mouse visual system has led to a significant increase in the amount of data that scientists can not only gather in a single experiment but also share publicly.

Major national and international research centers focused on unraveling the circuitry of the mouse visual system have been leading the charge in ushering in new optical, electrical and biological tools to measure large numbers of visual neurons in action. Moreover, they make all the data publicly available, inspiring similar efforts around the globe. This collaboration accelerates the ability of researchers to analyze data, replicate findings and make new discoveries.

Technological advances in data collection and sharing can make the culture of scientific discovery more efficient and transparent – a major data informatics goal of neuroscience in the years ahead.

If the past 10 years are anything to go by, I believe such discoveries are just the tip of the iceberg, and the mighty and not-so-blind mouse will play a leading role in the continuing quest to understand the mysteries of the human brain.

 

This article is republished from The Conversation under a Creative Commons license. Read the original article.

 
News Contact
Author:

Bilal Haider, Associate Professor of Biomedical Engineering, Georgia Institute of Technology

Media Contact:

Shelley Wunder-Smith
shelley.wunder-smith@research.gatech.edu