Young Innovator Aifric Deasy Earns Oracle Award for AI Research
By Tara Maher
Aifric Deasy, a 5th-year student at Kinsale Community School, combined her interests in politics and computer science for her project. She set out to tackle the growing problem of bias in news media by using artificial intelligence, which earned her the Oracle Academy special award and 3rd place in the Senior Technology Individual category. “Bias in news is a bigger problem today than ever before, and I wanted to use my interest in computer science to help address it,” Aifric told The Carrigdhoun.
“Even though we think we can recognise bias when we see it, humans are actually very bad at identifying it on our own.”
During her research, Aifric found that only just over a quarter of adults can accurately identify bias in statements presented as factual information. As a result, many people consume biased news without realising it, allowing disinformation to spread unnoticed. This motivated her to develop an AI tool that could warn readers about possible bias in the news they read.

Aifric wrote all the code for the project herself and trained her AI models independently. She learned machine learning and natural language processing through online courses and resources. Because she did not have access to a GPU at home, she trained her models using free GPU services available online.
“Although this made the project much more challenging, it was the best way to approach it,” she said. “It forced me to learn a lot about AI and machine learning, which are skills I might not have learned otherwise. AI is becoming an essential part of computer science, and I know I’ll be grateful in the future that this project helped me develop those skills.”
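The article does not name the services or frameworks Aifric used, but a common route for students without their own hardware is a free hosted notebook (such as Google Colab) running PyTorch. The sketch below, which assumes that kind of setup rather than reflecting her actual code, simply checks whether a free GPU is available and trains a toy text classifier on it, falling back to the CPU otherwise.

```python
# Minimal sketch (not Aifric's code): training a small text classifier
# on a free hosted GPU, assuming a PyTorch notebook environment.
import torch
import torch.nn as nn

# Use the free GPU if the notebook session provides one, otherwise the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Training on: {device}")

# Toy stand-in for a bias classifier: bag-of-words features -> 2 classes.
model = nn.Sequential(
    nn.Linear(5000, 128),   # 5000 hypothetical vocabulary features
    nn.ReLU(),
    nn.Linear(128, 2),      # biased vs. not biased
).to(device)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random placeholder data; a real run would load labelled news sentences.
features = torch.randn(64, 5000, device=device)
labels = torch.randint(0, 2, (64,), device=device)

for epoch in range(5):
    optimiser.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimiser.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```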
Among her findings, Aifric discovered that her model, built on a novel architecture focused on analysing linguistic traits, outperformed existing state-of-the-art solutions. It also showed much better generalisability, meaning it worked well on content different from what it had been trained on.
“Improving generalisability was the main goal of my project, so I was pleasantly surprised that test performance improved as well,” she explained. “These two measures often conflict with each other, so it was fascinating to see both improve at the same time. My results highlight the potential of this linguistic approach for identifying bias and for other tasks in computational linguistics.”
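The article does not describe Aifric's architecture in detail, but the general idea of classifying text on linguistic traits, and of probing generalisability by testing on material unlike the training data, can be illustrated with a simple sketch. The word lists, features, and example sentences below are purely hypothetical placeholders, not her model or her data.

```python
# Minimal illustration (not the award-winning model): score sentences on a
# few hand-picked linguistic traits, fit a simple classifier, then test it
# on differently phrased text as a rough check of generalisability.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical trait lexicons; a real system would use far richer features.
HEDGES = {"reportedly", "allegedly", "apparently", "claim", "claims"}
LOADED = {"disaster", "outrageous", "heroic", "shameful"}

def linguistic_features(sentence: str) -> list[float]:
    words = sentence.lower().split()
    return [
        sum(w in HEDGES for w in words),      # hedging / attribution terms
        sum(w in LOADED for w in words),      # emotionally loaded terms
        sum(w.endswith("!") for w in words),  # exclamatory punctuation
        len(words),                           # sentence length
    ]

# Placeholder training sentences with made-up labels (1 = biased).
train_sentences = [
    "The minister reportedly ignored the outrageous report",
    "The committee published its annual figures on Tuesday",
    "Critics claim the shameful decision was a disaster",
    "The council approved the budget by twelve votes to three",
]
train_labels = [1, 0, 1, 0]

X = np.array([linguistic_features(s) for s in train_sentences])
clf = LogisticRegression().fit(X, train_labels)

# "Out-of-domain" check: sentences phrased differently from the training set.
test_sentences = [
    "Allegedly the heroic campaigners exposed a shameful cover-up",
    "Rainfall for the month was slightly above the seasonal average",
]
print(clf.predict(np.array([linguistic_features(s) for s in test_sentences])))
```

Because the features describe how a sentence is written rather than which words it memorised during training, a classifier of this general kind can, in principle, transfer more readily to unseen sources, which is the property Aifric set out to improve.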
Biased news is a widespread issue that affects everyone who consumes media.
“My project provides a useful tool to help people recognise potential bias in the news they read, which can help reduce the spread of disinformation,” Aifric said.
“The approach I used is very novel and has strong potential for future applications in computational linguistics.”
