Researchers from the Monash University Malaysia campus have announced they are developing a platform that aims to moderate and verify content shared to online forums and discussion boards.

The platform was developed by the Monash University Malaysia School of Information Technology. According to the researchers, it uses a combination of graph algorithms and machine learning to extract tacit information from platforms like Reddit, StackExchange, and Quora, and to assign each post a score that estimates its reliability.

The university explained that with people increasingly relying on social media for information, the dissemination of fake news related to issues like health and politics will "remain a constant challenge unless there is an urgent application that can appropriately moderate and verify content online".

See also: QUT develops algorithm aiming to block misogyny from Twitter

"By assigning numbers to users of various online discussion forums we're able to reward those people who are sharing credible and trustworthy content, while punishing others who are pushing incorrect and misinformed content. The reward or punishment aspect is tied to the visibility and engagement of someone's profile or content," Dr Ian Lim Wern Han from the university's School of IT said.

"If users are credible, their content will be placed higher up on the page for more visibility and their Reddit votes will be worth more when they vote on other threads or comments. If a user is deemed untrustworthy, their post will be placed lower on the page or even in some cases hidden from the public altogether and their votes have less worth."
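The ranking behaviour Lim describes could be sketched roughly as follows. This is an illustrative sketch only, not the researchers' actual system (which reportedly combines graph algorithms and machine learning); the function names, credibility values, and the hide threshold are all assumptions made for the example.

```python
# Hypothetical sketch: each vote is weighted by the voter's credibility,
# and posts from authors below a cutoff are hidden entirely.

HIDE_THRESHOLD = 0.2  # assumed cutoff below which an author's post is hidden

def weighted_score(votes, credibility):
    """Sum of votes (+1/-1), each scaled by the voter's credibility score."""
    return sum(v * credibility.get(user, 0.5) for user, v in votes)

def rank_posts(posts, credibility):
    """Order posts by weighted score; drop posts from untrusted authors."""
    visible = [p for p in posts
               if credibility.get(p["author"], 0.5) >= HIDE_THRESHOLD]
    return sorted(visible,
                  key=lambda p: weighted_score(p["votes"], credibility),
                  reverse=True)

# Example data (entirely invented): a trusted user's vote counts for more,
# and the low-credibility author's post is removed from the listing.
credibility = {"alice": 0.9, "bob": 0.5, "mallory": 0.1}
posts = [
    {"id": 1, "author": "bob", "votes": [("alice", +1), ("mallory", +1)]},
    {"id": 2, "author": "alice", "votes": [("bob", +1)]},
    {"id": 3, "author": "mallory", "votes": [("mallory", +1)]},
]
ranked = rank_posts(posts, credibility)
print([p["id"] for p in ranked])  # mallory's post is hidden
```

Here alice's +1 contributes 0.9 to a post's score while mallory's contributes only 0.1, capturing the idea that "votes will be worth more" for credible users.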

Lim collected over 700,000 threads from almost two million users across several online forums. His research assigned each individual a rating, and these ratings were then used to predict a user's contribution on a subsequent day.

Read more from our friends at ZDNet