1. Artificial intelligence is not smart yet
The pace of AI development has been exaggerated. The applications of artificial intelligence are not smart yet, claims Teemu Roos. He leads a University of Helsinki research group on machine learning, which focuses on big data and applications of AI in quantum physics and medicine.
When a computer wins a game of chess against a human, it does not mean that artificial intelligence has surpassed human intelligence. It just means that the programme has been optimised for chess. One programme can predict the movements of the markets, another can recognise faces, and still another can retrieve relevant documents from huge amounts of data.
– This is to say that current methods can only handle fairly narrow tasks. For example, the much-advertised Watson from IBM is a collection of individual methods all doing their own thing, not a single, general-purpose artificial intelligence, says Roos.
However, different methods can be combined under a single application, such as self-driving cars. Roos believes that they will become a routine mode of transportation in Helsinki within a decade.
– The risk of accidents exists, but it will probably be smaller than with human drivers, says Roos.
2. Artificial intelligence has no culture
According to Roos, artificial intelligence is not an entity that becomes increasingly smart, gains self-awareness on its own and then takes over the world. Even though the capacity of computers is growing exponentially, their problem-solving skills are not.
For an artificial intelligence to develop itself independently, the machine would have to be able to solve increasingly complex problems. People have had to adapt to the fact that progress in science becomes harder as the amount of information grows and problems grow more complicated. According to Roos, we have become accustomed to this, and we also draw on our cultural understanding to solve problems.
– It’s unlikely for artificial intelligence to surpass the collective intelligence of people, says Roos.
Another often-cited dystopian vision is the thought experiment known as the paperclip factory. It proposes a factory controlled by AI, instructed to create as many paperclips as possible as cost-effectively as possible. At some point, the AI will examine statistics and find that the fewer humans are competing with it for raw materials, the more paperclips it can produce. It then begins to kill people to optimise production.
According to Roos, this is an unrealistic scenario.
– For AI to escape human control, it should also be able to understand humans well enough to realise that paperclips are not our sole goal in life.
3. Artificial intelligence may discriminate
Roos believes that right now, algorithmic bias is a pressing issue. For example, we can teach an algorithm to select potential employees from thousands of CVs. The algorithm will inspect previous recruitment data and find that people of certain nationalities are less likely to be chosen. It will then begin to screen them out. This means that if the raw data is discriminatory, the system will also learn to discriminate.
– Even if we remove the applicant’s name, gender and nationality from the CV, the algorithm may still learn to discriminate. It can draw conclusions on the applicant’s gender and ethnicity based on specific vocabulary or other small hints, says Roos.
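Roos's point, that anonymising CVs does not remove bias when other features act as proxies for the protected attribute, can be illustrated with a toy simulation. The data, probabilities and the "screener" below are all hypothetical, a minimal sketch of the mechanism rather than any real recruitment system:

```python
import random

random.seed(0)

def make_cv():
    """Generate one synthetic CV record.

    group: the protected attribute (removed before 'training').
    proxy: a vocabulary cue that correlates with group 90% of the time.
    hired: biased historical outcome -- one group was hired far less often.
    """
    group = random.random() < 0.5
    proxy = group if random.random() < 0.9 else not group
    hired = random.random() < (0.3 if group else 0.7)
    return proxy, group, hired

data = [make_cv() for _ in range(10_000)]

def hire_rate(proxy_value):
    # A naive screener: estimate P(hired | proxy) from the biased history.
    # It never sees the group label, only the proxy feature.
    outcomes = [hired for proxy, _, hired in data if proxy == proxy_value]
    return sum(outcomes) / len(outcomes)

# The scores differ sharply by proxy, and the proxy tracks the protected
# group -- so the historical bias survives the anonymisation.
print(hire_rate(False), hire_rate(True))
```

Because the proxy correlates with the removed attribute, the screener's scores reproduce roughly the same gap that was present in the raw hiring history, exactly the effect Roos describes.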
However, Roos believes that it will be easier to eradicate discrimination from data than from human behaviour, as data cannot lie to make itself look better.
In one study, Google's ad system showed women advertisements for lower-paid jobs than it showed men. It's possible that women had been clicking on such ads previously, so the algorithm learned to associate them with those searches. Ultimately, it becomes harder for women to find high-paying open positions if the system never suggests them.
– These phenomena are the result of living in a society that has discrimination. It’s good that the new EU General Data Protection Regulation, which will come into force in late May 2018, means that companies must be able to justify the use of machine-learning algorithms. This will help us recognise the reasons for discrimination, says Roos.
Roos finds it unfortunate that the most vocal participants in the artificial intelligence discussion are the extremes: the foolhardy optimists and the doomsday prophets.
– We don’t need buzzwords or dystopias; we need a careful understanding of the possibilities afforded by AI, and this is also our aim in the Elements of AI course.