by Keren Setton
JERUSALEM, March 12 — It started as a game. Or Levi, an Israeli and former scientist at eBay, challenged his friends: could they tell automatically generated fake news from real news?
The game was a success, and Levi realized that if fake news could be generated automatically, artificial intelligence (AI) could also be used to recognize fake news in circulation.
“I wanted to use AI to break the problem into smaller problems that are solvable by AI,” Levi told Xinhua.
Thus Adverif.ai was born in 2017, the year in which the Collins Dictionary crowned “fake news” as its word of the year.
Levi is a 29-year-old Israeli based in the Netherlands, where the company is headquartered. He has a team working in both Israel and the United States.
The algorithm takes existing content and first checks whether it has already been discredited by fact-checking sources.
According to Levi, fake news can be identified partly by the language used in such publications: short sentences filled with sentiment, exclamation marks and question words, features that are uncommon in credible news reporting.
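To make those linguistic cues concrete, here is a minimal, illustrative heuristic in Python. This is not Adverif.ai's actual model; the word lists, thresholds and weights are all assumptions chosen for the sketch, scoring only the signals the article names: short sentences, sentiment-laden words, exclamation marks and question words.

```python
import re

# Illustrative word lists -- assumptions for this sketch, not Adverif.ai's.
SENTIMENT_WORDS = {"shocking", "unbelievable", "outrageous", "amazing", "disgusting"}
QUESTION_WORDS = {"why", "what", "how", "who"}

def clickbait_score(text: str) -> float:
    """Return a 0.0-1.0 score; higher means more clickbait-like language."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    if not sentences or not words:
        return 0.0

    avg_len = len(words) / len(sentences)                      # short sentences
    exclaim = text.count("!") / len(sentences)                 # exclamation density
    sentiment = sum(w in SENTIMENT_WORDS for w in words) / len(words)
    questions = sum(w in QUESTION_WORDS for w in words) / len(words)

    score = 0.0
    if avg_len < 8:                       # unusually short sentences
        score += 0.3
    score += min(exclaim, 1.0) * 0.3      # exclamation marks
    score += min(sentiment * 10, 1.0) * 0.2
    score += min(questions * 10, 1.0) * 0.2
    return round(score, 2)

print(clickbait_score("SHOCKING! You won't believe what happened! Why did they hide it?"))
print(clickbait_score("The central bank held interest rates steady on Wednesday, citing stable inflation data."))
```

A production system would combine many such features with a trained classifier and the fact-check lookup described above; this sketch only shows why the cues Levi cites are cheap to measure at scale.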
In an age when news is generated by many sources and spread easily, not only by reputable news organizations, the job is immense.
Currently, firms like Google and Facebook employ thousands of people tasked with identifying fake news and malicious content.
Facebook found itself in the midst of controversy after denying almost any presence of fake news on its platform.
“The problem with manual screening is they are dealing with amounts of data that are impossible for humans to process,” Levi told Xinhua.
Levi claims his software has a 90 percent accuracy rate, saying 100 percent is probably impossible to reach.
“We don’t know where the glass ceiling is yet,” he added.
The tool Levi is offering will not replace human screening completely, but will expedite the process.
In an attempt to perhaps rectify past mistakes, last week Facebook announced a partnership with the Associated Press (AP) aimed at debunking “false and misleading stories” related to the upcoming mid-term elections in the United States.
The main markets targeted by Adverif.ai are advertising, media and social media organizations, governments and non-governmental organizations.
The advertising market is perhaps the most lucrative, but also the most vulnerable to fake news.
Clickbait, content designed to attract attention and clicks, is a leading source of malicious content.
The “click performance model” provides a huge incentive for publishing information that is not entirely true but attracts web traffic.
“What we are trying to do is to cut the channel between the advertising networks and fake news movers,” Levi explained, adding, “once they don’t have the incentive, we hope they will move forward to other ventures.”
Whether advertisers will be willing to give up on traffic in order to enhance their credibility remains a question.
A study published just days ago by the Massachusetts Institute of Technology’s (MIT) Media Lab found that false news was 70 percent more likely to be retweeted than true news.
The study attributed this partly to the linguistic features that Adverif.ai’s algorithm also uses to identify such content.
Levi declined to name the firms working with Adverif.ai, saying only that he has clients in the social media and advertising markets. He also mentioned working pro bono with certain fact-checking organizations.
Israeli financial newspaper Calcalist recently reported that the European Union (EU) is using the software to target fake news and misinformation circulating on the web.
With fake news not going away, there is clearly a need for means to target it and reduce its scope.