
Jessica Soho on AI: Scary but exciting

Published Aug 04, 2025 2:15 pm

Jessica Soho described artificial intelligence as both scary and exciting, hoping it will be used for good. 

During her speech at the Global Youth Summit held on August 3, the seasoned broadcast journalist emphasized the potential of AI for the youth. 

As digital natives, she said, they can create their own opportunities as long as AI is used responsibly. 

"AI is scary, yes! But I am also excited with what you can do with it, for as long as you use it for good."

She also talked about how AI can be used to cross-check and fact-check information in an online landscape where everyone has their own platform.

According to a November 2024 report by Jobstreet, 46% of Filipinos use generative AI for work or in their personal lives at least once a month, higher than the global average of 39% and the Southeast Asian average of 44%.

"Huwag kayong maniniwala sa deepfakes ko na naglalako ng kung ano-anong produkto at investment," she added.

In her speech, Soho also urged young people to be storytellers who are responsible citizens and online users.

"Bawat isa sa atin may kulang o kahinaan, may baggage o krus na pinapasan—bubog, sabi ng premyadong manunulat at National Artist for Film and Broadcast Arts na si Ricky Lee. At ang payo niya, sa buhay man o sa pagsusulat, gamitin mo ang bubog na building block o pambuo ng iyung kuwento o pagkatao," she said.

"Gaano man kasakit o kahirap, hugutin ang bubog o kung anuman ang iyong dinadala as a motivation to write your story or to make something of yourself. Kung hindi, ang bubog ay mananatiling bubog na laging magbibigay ng pahirap at pasakit. Pagtagumpayan niyo sana sa buhay ang inyong mga bubog."

Other journalists on AI

Several journalists have also voiced their thoughts about the use of AI. In a previous article for PhilSTAR L!fe, media advisor and technologist Jaemark Tordecilla highlighted how the use of AI for schoolwork, in particular, can be problematic.

He said that AI chatbots like ChatGPT are prone to errors called hallucinations and are trained more heavily on data from the West. Still, Tordecilla noted that AI is a good tool for processing information.

"For example, you can use it to process a large document in seconds and to analyze a big budget dataset, but you need to be able to ensure the accuracy of its results along the way. In short, you need to be able to check whether its output is correct. This means that you need to already have an idea about the answers you’ll get even before you ask your questions," he wrote.

For her part, award-winning digital journalist Jacque Manabat believes the future of journalism is "AI-enhanced, not AI-driven."

"Stories need human connection, which is something AI cannot replicate," she wrote in an article for The Philippine STAR. "AI should work for us, not the other way around. In the end, it's not about replacement, but adaptation."

In a speech for One Young World, Nobel Peace Prize laureate Maria Ressa talked about how AI is "manipulating" people. 

"One of the greatest dangers that technology has done today is it's robbing us of history, of context, of nuance," she said.

"If you're using ChatGPT, be careful... And you can use their 10 foundational models, all of [which] are not transparent, all of [which] have taken our content as publishers, your content, anything on the internet, and fed it—and you know garbage in is garbage out—it's also thrown in the social media the toxic sludge that is insidiously manipulating all of us. That's in the mix, so the probability is based on that none of this (AI-generated content) is anchored in facts," she continued.

Ressa also shared that her company, Rappler, has "embraced" GenAI in some ways. "I want you to be careful... Like us, use the tech, but be aware. [Thinking] slow is never going to fight thinking fast."

According to its website, Rappler labels all content that has been produced by AI tools. It uses GenAI for tasks like summarizing, transcribing, data sorting, grammar and style checking, and translating, but "still with human oversight."

A recent study from the Massachusetts Institute of Technology suggested that the use of large language models could lead to a decline in critical thinking.

Researchers found that ChatGPT users "consistently underperformed at neural, linguistic, and behavioral levels" when asked to write essays about different topics for under 20 minutes.

According to the researchers, the essays produced with ChatGPT lacked originality and critical depth, and human teachers perceived them as "soulless" and lacking "personal nuances."