The internet is a vital part of all of our lives. It has been around for almost three decades and has grown increasingly advanced and complex in that time. But it isn't foolproof. Although more of the processes behind the internet are being automated, recent cases have shown that the human touch is more important than ever.

Quality and safety issues

As more of our lives have gone online, numerous quality and safety issues on the internet have arisen. There are concerns about the spread of fake news and other misinformation, a development that has reportedly played a role in both Brexit and Trump's election. Advancements in artificial intelligence (AI), specifically a branch of it known as deep learning, have created deepfakes. These images and videos depict acts that never occurred and have been used to spread confusion over political events. They are getting harder to spot, and only a combination of knowledge about key societal issues and critical thinking skills can help humans identify what's real and what isn't.

Rudimentary knowledge

Therefore, the internet needs human management. The AI technology currently used by the likes of Google and Facebook is still relatively rudimentary. It is mostly powered by machine learning, another subset of AI, which is limited in what it can do. General AI that matches human abilities is still a long way off.

The American developmental psychologist Howard Gardner described seven types of human intelligence: linguistic, logical-mathematical, spatial, musical, bodily-kinaesthetic, interpersonal and intrapersonal. Naturally, AI will be better in some of these areas and worse in others. Human intelligence, therefore, is needed to plug the gaps in current AI capabilities: to understand different cultures, what constitutes hate speech, humour, and the nuances of different languages.

Mixing human and AI capabilities

With this understanding, leading tech firms use a mix of human and AI capabilities to protect and police our activities online. Google, for example, recently appointed an external advisory council to guide how it uses AI in its services. This will help unpick potential biases in algorithms that may affect search results. A right-wing person is more likely to respond to right-wing content, so they will mostly be served those results. In that way, they become more biased, because they are rarely exposed to a different view.

Exacerbating hate crime

Then there’s the thorny issue of fake news and how this is shaping our political and economic climate. Zuckerberg, in particular, has been hauled in front of Congress to answer for Facebook’s role in sharing Russian misinformation during the 2016 presidential election and the platform’s use of personal data. Fake news becomes more worrisome when you consider its potential to change the course of elections, cause riots and incite hate crime.

Because of this, Facebook and other tech companies (Google, YouTube and Twitter, specifically) have turned to human content moderators, supplemented with AI, to flag fake news and harmful content. But this has created scandals of its own, with content moderators reportedly suffering from PTSD-like symptoms after just a few months of viewing disturbing content online.

The Napalm Girl problem

Despite the personal risks, human content moderators play a vital role in policing our online world. They can understand the historical and societal context of specific content in a way that AI's black-and-white rulings cannot. The 'Napalm Girl' image, for example, would normally be banned online because it depicts child nudity. But the historical importance of the image means that it remains.

Quality control

Humans also play a part in evaluating search engine results. Search engine evaluators give feedback on the accuracy, timeliness and breadth of those results. They ensure that results are spam-free and relevant to the searcher's query. Some feedback can be gathered through machine learning, by telling an algorithm whether a user clicked on its results. But, again, human intelligence is needed to supplement this with experience and critical thinking.
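To make the click-feedback idea concrete, here is a minimal sketch of how clicks can feed back into a relevance estimate. This is a hypothetical illustration, not how any real search engine is implemented: it simply treats the click-through rate for a (query, result) pair as a crude relevance proxy, exactly the kind of signal that still needs human evaluators to sanity-check.

```python
from collections import defaultdict

# Hypothetical sketch: clicks as a naive relevance signal.
class ClickFeedbackRanker:
    def __init__(self):
        # (query, result) -> number of clicks observed
        self.clicks = defaultdict(int)
        # query -> number of times results were shown
        self.impressions = defaultdict(int)

    def record_impression(self, query: str) -> None:
        self.impressions[query] += 1

    def record_click(self, query: str, result: str) -> None:
        self.clicks[(query, result)] += 1

    def score(self, query: str, result: str) -> float:
        # Click-through rate: clicks divided by impressions.
        shown = self.impressions[query]
        return self.clicks[(query, result)] / shown if shown else 0.0

ranker = ClickFeedbackRanker()
for _ in range(10):
    ranker.record_impression("historic photographs")
ranker.record_click("historic photographs", "example.org/archive")
ranker.record_click("historic photographs", "example.org/archive")
print(ranker.score("historic photographs", "example.org/archive"))  # 0.2
```

The limitation is visible even in this toy: clicks measure attention, not quality, so a misleading but sensational result scores well. That gap between what users click and what is actually accurate is precisely where human evaluators come in.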

Therein lies the rub for the internet. It needs humans to fill gaps in knowledge that technology and the algorithms powering it cannot yet provide. That’s not to say that AI won’t advance to a point where it can make better judgements on online content, but humanity will always be required. The internet can only reach its full potential by capitalising on the strengths of both machine and mankind.