[Image: students at a poster presentation fair]

Eitel J.M. Lauría
Buenos Aires
Academic School: Computer Science and Math
Campus: New York
Dr. Eitel J.M. Lauría, Professor of Data Science & Information Systems, Director of Graduate Programs
Abstract: Dr. Eitel J.M. Lauría is a Data Science & Information Systems Professor and the Director of Graduate Programs (MSIS, MSCS) at the School of Computer Science & Mathematics at Marist College. Born and raised in Buenos Aires, Argentina, Dr. Lauría holds an Electrical Engineering degree from Universidad de Buenos Aires, an MBA from Universidad del Salvador, Argentina, and a PhD in Information Science from University at Albany, SUNY. His broad research interests cover the fields of data science and artificial intelligence, in particular machine learning, natural language processing, and data & information quality, focusing on the application of these disciplines in a variety of domains, including IT implementation, educational technology, learning analytics, health informatics, network security, and business analytics. His research on early detection of academically at-risk students has been widely cited, starting with the Open Academic Analytics Initiative (OAAI), a project funded by the Bill & Melinda Gates Foundation aimed at increasing college student retention using data mining methods. Prof. Lauría is also actively involved in research and development of question-answering and conversational systems using deep learning architectures and large language models.
Give me a little background about yourself – what do you teach, what inspired you to pursue this path?
My broad research interests cover the fields of data science and artificial intelligence, in particular machine learning, data mining and predictive analytics, natural language processing, and data & information quality. I come from the IT industry, where I worked for 20+ years, first at Apple Computer and then running a software consulting company, working closely with IBM, Microsoft, ExxonMobil, Reuters, STET France Telecom, GE Global Research, and the World Bank, among other global organizations. I have drafted and implemented much of the data science, analytics, and artificial intelligence curriculum at the School, at both the undergraduate and graduate levels. I am the 2015 recipient of the Board of Trustees Distinguished Teaching Award.
What has driven your interest in AI?
I took a graduate course in machine learning at SUNY Albany many years ago, and it was an eye-opener. Most of my work up to that point had been centered around decision support systems and data-driven applications. The course really sparked my interest and helped me narrow the topic of my doctoral dissertation to Bayesian networks, a form of probabilistic expert system.
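For readers who have not encountered Bayesian networks, the toy calculation below shows the kind of belief updating they perform, reduced to a single parent-child relationship between a hidden at-risk state and an observed engagement signal; the probabilities are invented purely for illustration and are not drawn from the dissertation or from any Marist data.

```python
# Toy Bayesian inference: update the belief that a student is at risk
# after observing low course engagement. All numbers are invented.

# Prior probability that a student is academically at risk.
p_risk = 0.20

# Likelihoods: probability of observing low engagement in each case.
p_low_given_risk = 0.70
p_low_given_no_risk = 0.15

# Total probability of observing low engagement.
p_low = p_low_given_risk * p_risk + p_low_given_no_risk * (1 - p_risk)

# Bayes' rule: posterior probability of being at risk given low engagement.
p_risk_given_low = p_low_given_risk * p_risk / p_low

print(f"P(at risk | low engagement) = {p_risk_given_low:.2f}")  # -> 0.54
```

A full Bayesian network chains many such conditional probability tables together and propagates evidence through the entire graph.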
What unique perspective do you bring to Marist?
As I said, I am a researcher in the fields of data science and artificial intelligence with 20+ years of experience in the tech industry, working with large corporations. I have helped develop the field of learning analytics, with seminal work that placed Marist among the first institutions to develop and implement systems of this kind.
How are you making use of AI in your field?
AI is a very broad term that has changed its meaning over time. In the last twenty-something years it has shifted from rule-based systems to machine learning, also a broad term, but one that more accurately reflects many of today's advancements. For more than a decade now I have worked and done research in the fields of learning analytics, educational data mining, and educational technology, developing and implementing machine learning and data-driven models to a) detect students at academic risk early, and b) predict and explain student attrition, in particular freshman attrition. Also, since the inception of the first large language models, I have taken an interest in question-answering and conversational systems. I have worked on and published this research with some of my students and colleagues. Earlier this year, for example, together with Amanda Damiano (SCA) and one of my students, I published a paper on early perceptions of ChatGPT, analyzing data from a survey of Marist College students and instructors.
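The minimal sketch below illustrates, in broad strokes, the shape of an early-alert model of the kind described above: a classifier trained on engagement and performance signals that flags students whose predicted risk exceeds a threshold. The feature names, synthetic data, and 0.5 cutoff are hypothetical and are not taken from Dr. Lauría's published models or from the OAAI project.

```python
# Minimal sketch of an early-alert classifier trained on synthetic data.
# Feature names, data, and the 0.5 threshold are hypothetical, chosen only
# to illustrate the general approach, not any specific published model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 1000

# Hypothetical engagement and performance signals per student.
lms_logins = rng.poisson(lam=8, size=n)                   # weekly LMS logins
assignment_avg = rng.normal(78, 12, size=n).clip(0, 100)  # average assignment score
partial_gpa = rng.normal(2.9, 0.6, size=n).clip(0, 4)     # GPA so far

X = np.column_stack([lms_logins, assignment_avg, partial_gpa])

# Synthetic "at risk" label: lower engagement and grades raise the risk probability.
logit = 5.5 - 0.15 * lms_logins - 0.04 * assignment_avg - 0.8 * partial_gpa
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Flag students whose predicted risk probability exceeds the threshold.
risk_prob = model.predict_proba(X_test)[:, 1]
flagged = (risk_prob > 0.5).astype(int)
print(classification_report(y_test, flagged))
```

In practice, models of this kind are trained on institutional LMS and registrar data and judged by how early and how reliably they flag students who later struggle, so that instructors and advisors can intervene in time.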
What excites you most about the potential of AI in your field?
Well, my field is closely connected to data-driven AI. I think the research on causal machine learning and explainability is extremely important, as models become more complex. Of course, the exponential growth in Generative AI opens a large number of research avenues.
Could you provide an example of how AI has made a tangible difference in your classroom or research?
I presume you refer to Generative AI and large language models. In terms of research, I would say that it has become a natural, very knowledgeable assistant with which I dialog, reason, and elicit information; it helps with tedious tasks such as writing standard code, testing new approaches, debugging and troubleshooting technical problems, summarizing information, and so on. In that regard, it is a time saver. Models are increasingly powerful and tend to hallucinate much less.
In the classroom, it is still an open question. I tell my students to use GenAI wisely, for two reasons: a) they take the class to learn, and the learning process implies involvement in the subject matter, so anything produced entirely by GenAI does not help the learning process; b) they have to understand that when they go to a job interview, they are not going to have GenAI helping them solve the technical problems they are presented with; they need to know the material themselves.
Why should the everyday person care about advancements in AI research related to your field?
In the fields of learning analytics, educational data mining, and educational technology, current advances in AI research will help develop better predictive models, learned from broader datasets. This should benefit students, instructors, and educational institutions. In my work on Q&A/conversational systems, progress in AI is evident and grows with the release of new and more powerful models.
Why is it important for educational institutions like Marist to be involved in AI development? How should the institution be approaching it?
I think the AI initiative at Marist is a very wise decision. There is a saying out there that goes like this: “You will not lose your job to AI; you will lose your job to someone trained in the use of AI.” This applies to individuals and organizations alike. We cannot deny what is happening; the AI wave is moving, and moving fast, very fast. I come from the tech industry, which has historically been driven by trends and fads. The current progress in AI R&D is no fad; it is probably the most important development in the history of humankind. It will affect individuals, culture, organizations, society, and humanity as a whole. We need to help our students navigate these new and uncertain waters, and for that, the whole organization and Marist community must get educated in the use of AI. This is reflected in Marist100’s pillars: academic vibrancy, student centrality, and expansive community. We won’t be able to achieve these goals if we disregard AI.
Why is it important for information systems teachers to embrace AI in the classroom?
It is not just information systems teachers. As I mentioned before, all teachers must embrace the use of AI; denying its existence is not a valid proposition. But having said that, the question is how teachers should embrace AI in the classroom. With a few exceptions, in some types of courses or some types of assignments, it is still an open question and an ongoing challenge, currently addressed through trial and error. That’s why benchmarking other institutions and developing best practices, policies, and recommendations is a very important task. Teaching and learning are about to change, but we still don’t know how much.
My major concern is not the ethical issue of academic integrity. Cheating is an issue, but a much bigger problem is lack of incentive. If you have a large language model that can produce creative work as good as or better than the average individual’s, why bother trying to create? This affects everyone (cell phones are a good example of how technology can affect our neural pathways, shaping attention spans, memory, and cognitive behaviors). But it is much worse for learners, who don’t know what they don’t know.
What are some of the top ethical concerns regarding the use of AI that we should all be thinking about?
First and foremost, bias and discrimination: if we want a better, more diverse, inclusive, and equitable society, we should find ways to contain AI systems that can perpetuate or amplify biases.
Disinformation: large language models are increasingly persuasive, and AI-generated audio, images, and video are increasingly realistic; malicious actors can cause substantial damage with these technologies, making it difficult to discern the true from the fake.
Surveillance and lack of privacy: Shoshana Zuboff wrote a very comprehensive book called “The Age of Surveillance Capitalism”. That was before the current GenAI explosion and large language models. It can get much worse.
Equal access to these technologies: a disparity could deepen the divide between the “haves” and “have-nots”, in a world which is not particularly fair.
Economic, cultural, and societal impact: in a way, this is related to what I mentioned in the previous question, and there is ongoing research that points in that direction: over-reliance on these technologies can limit creativity. There is also the question of job loss, but I addressed that before.
Long term: impossible to discern, but existential risk is certainly one of them. There is a lot of debate about whether we are going to reach Artificial General Intelligence, aka AGI, in a few years (AGI is an advanced form of AI capable of performing any task that a human can do). I don’t think the question is whether we will reach AGI soon; perhaps a more relevant question is whether we will develop AI systems that are superior to us (they already are, in a number of tasks). Superior is key: it does not have to be human intelligence, just superior. Stuart Russell, the legendary Berkeley AI professor, wrote the following in a recent book: AI is like a superior alien civilization. But if a superior alien civilization sent us email saying, “We’ll arrive in 30-50 years”, would we just reply, “OK, call us when you get here, we’ll leave the light on”? That’s a call to action. And it used to be 30 to 50 years; it could now be considerably less, according to AI industry leaders and researchers.
Should AI be feared or embraced?
The problem is not AI; it’s humanity itself. AI is being developed without guardrails, if you compare the amount of investment in AI research to that in AI safety research. I don’t agree with those who argue that placing a regulatory framework on AI R&D will curtail innovation.
What about the future of AI excites you?
Well, if humankind can control its self-destructive tendencies, AI can and will help produce a better world. Just one example: half of this year’s Nobel Prize in Chemistry was awarded to two DeepMind researchers (DeepMind is a division of Google) who developed a deep learning system, AlphaFold, to solve a 50-year-old problem: predicting proteins’ 3D structures. That is a major breakthrough and a testament to human creativity and initiative.