We can do a lot with AI... but who's considering the ethics?


AI researcher Dr. Stella George focuses on how policy can promote responsible AI, and on ways to educate and empower the public

Dr. Stella George spends a lot of time thinking about AI as part of her research program, but not exactly in the same way many other Athabasca University researchers do. 

Rather than focusing on developing new algorithms to improve AI performance, or developing new applications for AI, the assistant professor at AU's School of Computing and Information Systems in the Faculty of Science and Technology focuses on the ethical implications of the current boom in AI technologies. 

In other words, she spends less time asking questions like “Can we do this?” and “How can we do this?” and more time asking questions like, “Should we do this?” and “How can we do this responsibly?” 

“I’m interested in the social impacts of AI on a macro scale, and how the use of AI tools is changing society and changing our focus as a society on where we’re putting our resources,” George explained. “AI is not free. Even if there’s no fee to use it, it’s very resource intensive.” 

Many publications, such as Scientific American, have pointed out that generative AI can use huge amounts of energy. This raises questions about how we’re using water, energy, and tax dollars. It also raises the question of how the race to keep pace impacts the safeguards society has put in place around things like environmental change.

For George, a key part of addressing those questions is educating everyday people to become part of the discourse around these bigger AI issues. How can we use AI ethically in our daily lives? How can we educate people about what the issues are, and empower them to become participants in a meaningful discussion around the social impacts of AI?

I’m interested in the social impacts of AI on a macro scale, and how the use of AI tools is changing society and changing our focus as a society on where we’re putting our resources.

Dr. Stella George, assistant professor, Faculty of Science and Technology

Early experiences working with data and AI

George began working with AI and machine learning in the 1990s, when she completed her doctorate. At that time, there were serious limitations on what AI could do because of limits on both computing power and data. The internet existed, but e-commerce was still in its infancy and social media was still decades away from the kind of mass adoption we see today.

“It wasn’t really AI in those days, it was sort of advanced statistics with that big data component,” she said. “It’s really very obvious now with the generative AI, just how fast this domain is changing and evolving.” 

She ended up working in one of the few places where massive amounts of data could be found in those days: a software development company that worked with financial institutions. The software focused on things like fraud detection and how to do banking and e-commerce digitally, but in a safe and secure way.

“It felt really positive, and it had really positive social impacts. Like how your bank could be aware of what’s going on in your account even if you’re not looking at your account,” George said. 

Meanwhile, some of the technical limitations that had held AI back, such as the lack of computing power and data, began to disappear. But as she watched AI and big data evolve, she grew concerned that the social controls on how the technology was developing were limited or inadequate.

“There were more and more tech-savvy and computer-savvy people around, including what are known as bad actors: people looking to work outside the moral compass of general society,” she said. “So I started really watching the speed of development as those barriers started to lift.” 

With that, George felt the focus was moving away from using AI to benefit society and toward using AI to make money. This prompted her to leave industry and return to academia, with an eye to doing research and supporting policy that could mitigate some of the negative impacts of technology on society. 

“We’re talking about huge amounts of money to sustain these developments in AI,” she said. “And now it’s not just the money, it’s the energy, which creates a whole new world of challenges.”


How can, and should, we use AI at AU?

In her current role at AU, George has co-chaired an academic integrity and AI working group and been involved in discussions around how and when instructors and students can, and should, use AI, with two specific focuses.

The first focus is understanding who can benefit from the use of AI within the educational system. Students are already using AI to varying degrees, so there’s an opportunity to understand who is using it and how it’s benefiting them—particularly students with accommodations and diverse needs. 

“It’s important for us to both recognize how students are self-selecting to use these technologies, but also how we might support students in use of these technologies to be sure we’re still managing equitability,” she said. 

The second focus is how the use of AI changes students’ responsibility for their own education. With such a powerful tool, it’s important for students to understand how they’re using it, and to use it in a way that doesn’t shortcut their own learning.

For example, if the assignment is to demonstrate understanding by summarizing a particular text, then asking an AI to summarize the text for you would not help you to reach the learning goal. But in other cases, it may be appropriate for a student to get a high-level summary of a text to see the big picture before diving in to understand the text at a deeper level, which could in fact support the student’s learning. 

It’s this distinction that forms the basis of the high-level advice George gives students: “AI tools can support the work around learning, but should not do the work of learning.”

It’s important for us to both recognize how students are self-selecting to use these technologies, but also how we might support students in use of these technologies to be sure we’re still managing equitability.

Dr. Stella George

Challenges with researching AI ethics

While George’s recent focus has been more on policy, she continues to develop research proposals related to AI, and is happy to work with students who are interested in researching the social uses and impacts of AI.

“AI is applicable in many places,” she said. “If you find a social problem, chances are you have some data around it, and the chances are you can apply AI to the problem.” 

One of the projects she hopes to work on during an upcoming sabbatical is a crowd-sourcing mechanism that would allow the public to report AI issues: in other words, something that empowers people to be responsible users. This bottom-up approach pairs with the top-down approach of regulation, which is also needed. Work is ongoing in that area to understand how to put limits and controls on organizations using AI without killing the commerce and business that drives some of the decision-making.

“That’s how society’s going to have to work with this,” she said. “We can put some policy in place to make sure the larger organizations driving the change aren’t doing anything illegal, but the nuanced control has to come from the ground up, from the people impacted.”

Explore AI with a master's degree

Athabasca University’s Master of Science in Information Systems (MScIS) is a graduate program like no other, with many routes and options to meet your unique educational goals.  
