Ask the Faculty: Experts on Impact of AI

Bryan Terry, Assistant Director of Content Marketing & Communications
Graphic image of classroom with technological imagery overlaid.
Image created using Adobe Firefly.
 

Ask the Faculty is an Inside Marist recurring series that features top experts on the most pressing and interesting topics of our time, providing insight into societal issues, cultural touchstones, technological developments and other hot topics.

Today's Topic: 

  • Marist’s new Minor in Applied Artificial Intelligence aims to help students across disciplines learn to engage with a rapidly changing, AI-driven world. What is one aspect of AI in your field that you think is critically important for students to understand, no matter what career path they take?
     

Dr. Eitel Lauria: Professor of Data Science & Information Systems 

Image of Dr. Eitel Lauria.
Dr. Eitel Lauria is a Professor of Data Science & Information Systems and Director of Graduate Programs in the School of Computer Science & Mathematics. He is an expert in data analytics and enterprise information systems with over 20 years of global consulting experience. Photo by Nelson Echeverria/Marist University.

No matter what career path students take, they must recognize that AI systems are fundamentally shaped by the data they are trained on. The "garbage in, garbage out" principle is especially relevant in systems that learn from data: poor-quality data, whether incomplete, erroneous, or biased, will inevitably lead to flawed AI systems.

Students must understand that AI doesn't create knowledge out of the blue; it extracts patterns from existing data. Even AI systems with advanced reasoning capabilities, like the ones we will encounter soon, will remain constrained by input data quality. Professionals in every field must critically assess the data used by the AI tools they employ.

Sci-fi author Arthur C. Clarke once noted that "any sufficiently advanced technology is indistinguishable from magic." That’s precisely how many people perceive much of AI today. Students must move beyond this perception. Data preparation often represents a substantial share of the work in successful AI implementation, even though it tends to remain invisible to end users. By focusing on this data-AI relationship, we can help students become not just AI consumers, but informed AI users who can evaluate AI tools' appropriateness, limitations, and potential risks in their respective domains, a critical skill for navigating our increasingly AI-driven world.
 

Page break with Marist "M"

Dr. Anne Zahradnik: Associate Professor of Health Care Administration

Image of Dr. Anne Zahradnik.
Dr. Anne Zahradnik is an Associate Professor of Health Care Administration whose research centers on program evaluation and communication in health care policy. Photo by Al Nowak/On Location.

In healthcare administration, AI is already being used to predict patient needs, manage staffing, and streamline operations. That’s exciting. But it also raises important questions about how those decisions are shaped, and whom they’re ultimately serving.

At Marist, I work with students to unpack those questions. In one class, we examine an AI tool used by hospitals to flag patients likely to miss follow-up appointments. It sounds helpful, and often is—but then we ask: What data is the system using? Could it unintentionally penalize patients who don’t have stable housing or easy access to transportation? That’s when the conversation shifts from how it works to who it’s really working for.

That’s why I’m so glad Marist is launching the Applied AI minor. It’s not about turning everyone into a data scientist—it’s about helping students in any major think critically about the systems shaping their fields. Whether you’re going into healthcare, business, communications, or policy, understanding how AI influences decision-making is quickly becoming part of being professionally fluent.
 

Page break with Marist "M"

Professor Brian Gormanly: Senior Professional Lecturer of Computer Science 

Image of Brian Gormanly.
Professor Brian Gormanly is a Senior Professional Lecturer of Computer Science with over 15 years of industry experience as a software engineer and entrepreneur, specializing in scalable systems, cloud computing, and IoT (Internet of Things) applications. Photo by Nelson Echeverria/Marist University.

I believe it is critically important for students to understand how to thoughtfully and strategically integrate AI tools into their work. I've encouraged Marist students to actively engage with AI so they can genuinely appreciate both its strengths and its limitations. My goal is for students to develop a clear understanding of when AI can effectively amplify their abilities and when it’s best to rely on their own judgment and creativity.
 
To support this, I have actively contributed to our new interdisciplinary Applied AI minor and created a new special topics course called "Applied AI in Software Development," which I am teaching for the first time during the Spring ’25 semester. This course addresses emerging trends and real-time developments in AI, many of which are unfolding even as we study them, highlighting the dynamic nature of the field.

It is essential for students to cultivate a balanced approach to AI, recognizing it as an assistant that enhances their unique voice and capabilities rather than as a replacement for critical thinking and personal insight. Understanding how AI shapes workflows, influences decision-making, and interacts with human values is essential, no matter what professional path they pursue.

Page break with Marist "M"

Dr. Sasha Biro: Lecturer of Philosophy and Religious Studies 

Image of Dr. Sasha Biro.
Dr. Sasha Biro (center) is a Lecturer of Philosophy and Religious Studies. Her research focuses on the role of myth in contemporary philosophical thought and its cultural significance. Photo courtesy of Sasha Biro.

 

One aspect of AI that I think is critically important for students to understand, no matter what career path they take, is responsible usage. As AI becomes increasingly integrated into our lives, it’s critical to understand AI’s capabilities and limitations to ensure we are applying this technology ethically. What makes AI especially fascinating is its ability to aggregate responses to philosophical dilemmas humans have wrestled with for centuries. 

Take the classic thought experiment, the trolley problem, which poses the question: Is it morally permissible to actively cause harm to one person if doing so will save a greater number of people? With the emergence of autonomous vehicles, this dilemma isn’t just hypothetical; it offers a striking example of utilitarian consequentialism in practice.

Who decides how a vehicle should respond in a moral crisis? And on what basis? Who is best positioned to determine such outcomes? These kinds of questions highlight the need for understanding the ethical dimensions of AI, from agency and responsibility to bias and fairness. AI doesn’t "think" in a human sense. Instead, it predicts outcomes based on past data patterns. That makes it all the more important to examine the human choices behind AI development and use. At the heart of it all is the need to center the human to ensure that our values, judgments, and social responsibilities guide how these technologies are designed and implemented in the world.
 

Page break with Marist "M"

Dr. Gissella Bejarano: Assistant Professor of Computer Science 

Image of Dr. Gissella Bejarano.
Dr. Gissella Bejarano is an Assistant Professor of Computer Science whose research focuses on machine learning for sequential data, including sign language processing and smart city forecasting. Photo by Carlo de Jesus/Marist University.

As AI systems increasingly generate code and automate decisions, it's essential for students to develop both computational thinking skills and soft skills. By learning how to break down problems and assess logic, they become capable of understanding, reviewing, and correcting AI-generated outputs. This empowers them to see themselves as professionals who can collaborate with AI tools to enhance creativity, productivity, initiative, and responsibility.

From my perspective, these abilities need to be nurtured from the ground up, in thoughtful and controlled learning environments—after all, we don’t teach a child to do arithmetic by simply handing them a calculator.

Finally, it's critical that students are encouraged to share their knowledge and support diverse participation, recognizing that technology reflects the perspectives of its creators. We must ensure that no one is left out of the design or implementation process, especially as AI continues to shape our shared future.

Page break with Marist "M"

To book an interview with a Marist faculty expert, contact the Media Relations team.

3399 North Road, Poughkeepsie, NY 12601 | (845) 575-3430 | mediarelations@marist.edu

To subscribe to Inside Marist, click here.
