[Image: students at a research fair]

[Image: Gissella Bejarano]

Gissella Bejarano
Peru
Academic School: Computer Science and Math
Campus: New York

Dr. Gissella Bejarano and her use of AI in her research on sign language


Abstract: Dr. Bejarano is a professor of computer science at Marist College with industry, government, and research expertise in artificial intelligence. At Marist, she brings a unique perspective from Peru and keeps her focus on bridging gaps for minority groups. Her own AI research centers on sign language processing, but as a professor she believes she also has a responsibility to teach students how to use AI as a mentor rather than just a source of information. She champions a human-centric approach to AI and believes we need to apply the lessons of past technological shifts, such as social media, to navigate the constantly evolving nature of AI.

What has driven your interest in AI and how to use it in your field? 

As an undergraduate, I was first interested in data mining and knowledge discovery from huge amounts of data. I was very interested in going deeper, but in industry I couldn’t do that very much. So I decided to get a PhD in computer science focused on machine learning, enter a research career, and become an academic.

How does racial diversity come into play with AI? 

Machine learning models are based on huge amounts of data. We can arrive at the conclusion that people who are more exposed to the internet have more access to putting their knowledge and opinions about the world on the internet, which is usually the source of data for these models. We need diversity to ensure most of us are represented in the data.

How are you making use of AI in relation to sign language? 

Mostly sign language processing, which is not only recognizing individual signs, but being able to translate a sequence of signs as a sentence and translate that into Spanish or English. I’m working with two teams, one for Peruvian Sign Language and one at Baylor for American Sign Language. I like that because we are an interdisciplinary team, which I think is key in these kinds of projects so we won’t have only the machine learning perspective or the engineering perspective, which sometimes is not enough. All of us work together because we want this humanities perspective to complement and help build the technical models.
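
To make the idea concrete, here is a minimal sketch of what a sign-to-text translation model can look like, in the spirit of the pipeline described above. This is not Dr. Bejarano's actual system; the keypoint-based input format, model sizes, and vocabulary size are assumptions for illustration, and a real model would also need positional encodings and stronger video features.

import torch
import torch.nn as nn

class SignToTextModel(nn.Module):
    """Toy encoder-decoder: per-frame keypoint vectors in, text tokens out."""

    def __init__(self, keypoint_dim=150, vocab_size=8000, d_model=256):
        super().__init__()
        # Project each video frame's pose/keypoint vector into the model space.
        self.frame_proj = nn.Linear(keypoint_dim, d_model)
        # Embeddings for the target-language tokens (Spanish or English).
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Encoder reads the sign sequence; decoder generates the sentence.
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=3, num_decoder_layers=3,
            batch_first=True,
        )
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, frames, tokens):
        # frames: (batch, num_frames, keypoint_dim); tokens: (batch, sent_len)
        src = self.frame_proj(frames)
        tgt = self.token_emb(tokens)
        # Causal mask so each predicted word only attends to earlier words.
        mask = self.transformer.generate_square_subsequent_mask(tokens.size(1))
        hidden = self.transformer(src, tgt, tgt_mask=mask)
        return self.out(hidden)  # (batch, sent_len, vocab_size) logits

# Toy usage: 8 clips of 60 frames each, target sentences of 12 tokens.
model = SignToTextModel()
frames = torch.randn(8, 60, 150)
tokens = torch.randint(0, 8000, (8, 12))
print(model(frames, tokens).shape)  # torch.Size([8, 12, 8000])

Training a model like this requires video clips paired with sentence-level annotations, which is one place where the interdisciplinary collaboration with sign-language experts that she describes becomes essential.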

Could you provide an example of how AI has made a difference in your classroom or research? 

I keep telling my students to use AI very carefully. I don’t prevent them from using it, but I tell them to try not to use AI at the beginning, because they’re learning how to code, how to think, how to develop computational thinking. Unless they’re very careful and ask ChatGPT to provide its rationale, they might end up not learning.

Why should the everyday person care about advancements in AI research related to language? 

For students, professors, and the average person, we need to know how this technology works so we don’t end up just consuming it without reflection.
Even though AI has great potential to help us with lots of things, we need to focus on people. We need to bring in the humanities perspective to guarantee that AI is designed to serve everyone.

Why is it important for teachers to embrace AI in the classroom? How can it help with the nuance of sign language? 

I think it’s important for faculty to embrace it, because students might be using it anyway, and we have a great opportunity to discuss how to use it much better. It’s good to discuss it with students to learn their perspective and to question our own. I try to tell my students to engage more in their learning process, so they’re not only receivers of information. A good way to use AI is as a mentor that provides step-by-step rationale, instead of an entity that just gives you answers.

What are some of the top ethical concerns regarding the use of AI that we should all be thinking about?  

We want everyone to be represented in the data, and we want to make sure that models deployed in the real world work for every single group. I worry about the biases that arise when models are trained on very closed or non-diverse data sets.

Why is it important for educational institutions like Marist to be involved in AI development? How should the institution be approaching it? 

It will help us all think about AI, discuss it, and bring together different perspectives from different academic disciplines. Having those humanities and academic perspectives is so important in the advancement of AI. It’s a great initiative.

Should AI be feared or embraced? 

I think it’s not about taking a side, embracing or fearing. It’s about having both, because as with any other technology or aspect of our lives, we find good things and bad things. We need to embrace the good things and the potential of AI, and of course restrict the bad uses, or at least make people aware that some people are using AI in bad ways.

AI should be human-centric. We should think: what I wouldn’t like this model to do to me, I shouldn’t allow the model to do to others.

What about the future of AI excites you? 

Bridging the gaps, especially for people who can’t access education. I think we can bridge that gap by providing AI tools that bring teaching, information, and education to groups that haven’t been able to access them.

 

Interview conducted by Trevor McCormick
