Maria Molina, Assistant Professor of Advertising and Public Relations, has been awarded the National Science Foundation (NSF) CAREER Award. The five-year grant, totaling more than $500,000, will support Molina's groundbreaking research on human-AI interaction, specifically how people navigate trust and privacy in their use of generative AI systems.
"This grant is a major milestone in my career," Molina said. "It is the most prestigious grant for early scholars from the National Science Foundation. It is also a very competitive grant, with early scholars across disciplines applying every year. Receiving the award is a recognition of my strong research trajectory."
The NSF CAREER Award recognizes educators who exemplify the role of teacher-scholars, combining innovative research with a deep commitment to education and mentorship. Molina's award places her among an elite group of emerging leaders in science and engineering nationwide.
Investigating Human Trust in Generative AI
Molina's project addresses a critical issue: the ways in which design choices in generative AI systems—such as chatbots—shape user trust. Developers often rely on anthropomorphic design features, like conversational styles that mimic human communication, to make AI interactions more intuitive. But these same features can unintentionally encourage users to place unwarranted trust in AI, leading to risky behaviors such as oversharing personal information.
Through focus groups and controlled experiments, Molina's team will identify which cognitive heuristics, or rules of thumb, are most readily triggered by AI design cues and will test interventions to reduce unfounded trust. The goal is to develop strategies that keep trust aligned with a system's actual capabilities, ultimately protecting users from privacy risks.
Educating the Next Generation of Ethical AI Designers
Beyond its research contributions, the project also emphasizes education and workforce development. Molina will create a chatbot toolbox—a resource that outlines common heuristics, the cues that trigger them, and practical design strategies to mitigate risks. The toolbox will serve as both a guide for AI developers and a teaching tool to help train future professionals, including students outside of computer science who are increasingly tasked with deploying AI tools in their fields.
"We are all using AI chatbots, like ChatGPT or Gemini," Molina said. "But we are not very careful about the information we share with these technologies, from personal information to sensitive information (e.g., health-related). While computer scientists build these technologies, it is also crucial to communicate about these technologies to help users protect privacy by design and help users be more mindful about the information they are sharing. Communicate is the key word here. As communication scholars we are equipped to take over this problem. And as professor, my goal is teaching my students about AI technologies and the role we as communication professionals can play in the design of these technologies."
Looking ahead, Molina sees this grant as an opportunity to expand student engagement and shape the role of communication professionals in the future of AI.
"This grant is a five-year grant in which I hope to engage undergraduate and graduate students in research about human-AI interaction," Molina said. "I hope to inspire ComArtSci students to think about AI as not only a tool to use for a communication process (e.g., using AI to write or create an image), but also how our expertise as communication professionals is crucial in the ethical and responsible design of AI technologies. After all, user experience is about communicating the technology to the user, and who can do that better than us in ComArtSci. Beyond ComArtSci, I hope that the tools that result from the grant can be used broadly to train non-computer scientists in ethical and responsible AI practices."
Positioning MSU at the Forefront of Ethical AI
Molina's CAREER Award underscores MSU's leadership in advancing ethical, human-centered approaches to emerging technologies. By investigating how cognitive heuristics shape trust in AI and developing solutions that safeguard user privacy, Molina's work will help ensure that AI systems are designed responsibly and transparently—while preparing the next generation of communicators to lead in this critical space.