AI technology is more advanced than ever – but how far has it still got to go?

AI: What does the future hold for artificial intelligence?

Artificial Intelligence (AI) has found its way into almost every corner of our lives, from our transport and food to our finances and relationships, but some of the biggest leaps still to be made are more abstract.

Up until now, all AI has been what is known as ‘weak’ or ‘applied’ AI. Simply stated, this means that current artificial intelligences are restricted to solving specific problems and must be pre-programmed to perform specific tasks. Whilst these AIs have been exceedingly useful, they fall short of ‘true’ AI: they cannot achieve general computer vision or genuine natural language understanding, nor cope with unexpected circumstances in real-world scenarios.

Many Strong AI researchers hope to combine current Narrow AI methods, such as artificial neural networks, data mining and weak computer vision, into a unified, universally capable AI. This area of research has exploded in recent years, and consequently the main focus of Strong AI research has been on developing the relevant architectures, with particular weight given to integrated agent, cognitive and subsumption architectures.
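
To make the integrated-agent idea concrete, here is a minimal, hypothetical sketch in Python: several narrow modules sit behind a common interface, and a simple controller routes each input to whichever specialist claims it. The module names and routing rule are invented for illustration; real architectures are far more elaborate.

```python
# Hypothetical sketch of an "integrated agent": narrow AI modules behind
# a shared interface, with a controller that routes inputs to the first
# specialist that claims the task. Purely illustrative, not a real system.

class VisionModule:
    def handles(self, task): return task == "image"
    def process(self, data): return f"objects detected in {data!r}"

class LanguageModule:
    def handles(self, task): return task == "text"
    def process(self, data): return f"parsed meaning of {data!r}"

class IntegratedAgent:
    """Routes each input to the first narrow module that claims it."""
    def __init__(self, modules):
        self.modules = modules

    def perceive(self, task, data):
        for module in self.modules:
            if module.handles(task):
                return module.process(data)
        return "no module available for this task"

agent = IntegratedAgent([VisionModule(), LanguageModule()])
print(agent.perceive("text", "hello world"))  # parsed meaning of 'hello world'
print(agent.perceive("image", "photo.png"))   # objects detected in 'photo.png'
```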

The development of Strong AI is essential for many of the problems we want AI to solve, but none of those problems is as monumental and paradigm-shifting as brain emulation. Once an AI powerful enough to emulate a human brain is created, our concept of death will change as brains are scanned and emulated, and the rate of technological growth will accelerate into what is known as a technological singularity, resulting in unfathomable changes to human civilisation. Whether this would benefit or harm the human race, or even whether such an AI is possible at all, remains to be seen – and is perhaps beyond the scope of a blog post.

A more manageable problem raised by the advent of Strong AI would be determining who is human and who is not. Currently, methods such as CAPTCHAs are used to differentiate humans from computers, enabling internet security companies to automatically block access by data-mining bots and to stop brute-force attacks by password-guessing bots. As AIs become ever more capable of computer vision and creative thinking, digital security engineers will have to develop correspondingly more sophisticated methods of confirming your humanity.
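
As a toy illustration of the principle behind a text CAPTCHA (not any real service's implementation), the sketch below generates a random challenge and verifies a response, with a crude attempt limit standing in for brute-force protection. In practice the challenge would be rendered as a distorted image rather than compared as plain text.

```python
# Toy sketch of a text CAPTCHA: the server stores a short random
# challenge and grants access only on an exact match, limiting attempts
# as a crude brute-force defence. Real CAPTCHAs are far more elaborate.

import secrets
import string

def new_challenge(length=6):
    alphabet = string.ascii_uppercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def verify(expected, submitted, attempts_remaining):
    if attempts_remaining <= 0:
        return False
    return submitted.strip().upper() == expected

challenge = new_challenge()
# In a real deployment `challenge` would be shown as a distorted image;
# here we simply compare strings.
print(verify(challenge, challenge, attempts_remaining=3))  # True
print(verify(challenge, "WRONG1", attempts_remaining=3))   # False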

A key test for ascertaining humanity is the Turing Test. Proposed by the mathematician Alan Turing in his 1950 paper ‘Computing Machinery and Intelligence’, the Turing Test asks a panel of human question masters to hold typed conversations with unseen operators, some human and some machine. If enough of the question masters believe that the agent on the other side of the chatbox is human – the commonly used 30% threshold comes from a prediction in Turing’s paper rather than from the test itself – then the agent is deemed to have passed. Proponents argue that a machine which passes must be capable of independent thought, and perhaps even truly sentient.
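
The pass criterion described above reduces to simple arithmetic: compute the fraction of question masters who were convinced and compare it with the 30% threshold. The verdict data in this sketch are invented for illustration.

```python
# Sketch of the pass criterion: an agent "passes" if the fraction of
# judges who believed it was human meets the 30% threshold drawn from
# Turing's prediction. The verdicts here are made up.

def passes_turing_test(verdicts, threshold=0.30):
    """verdicts: list of booleans, True if a judge thought the agent human."""
    fooled = sum(verdicts) / len(verdicts)
    return fooled >= threshold, fooled

# 10 of 30 judges convinced -> 33%, just over the threshold.
passed, rate = passes_turing_test([True] * 10 + [False] * 20)
print(f"fooled {rate:.0%} of judges, passed: {passed}")
```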

In 2014, a 13-year-old Ukrainian boy called Eugene Goostman passed the Turing Test on the 60th anniversary of Turing’s death. This would not be a remarkable feat, were it not for the fact that Eugene was not a human but a computer program. By using his supposed age and nationality to excuse broken English and gaps in general knowledge, Eugene convinced 33% of the panel that he was actually a 13-year-old boy from Ukraine, exposing a gap in the Turing Test’s methodology that had been anticipated as far back as Turing’s first conception of it.

John Searle outlined the central problem in his 1980 paper ‘Minds, Brains, and Programs’ with his ‘Chinese Room’ thought experiment. In the experiment, Searle imagines himself sealed in a room with a slot through which sheets of paper enter and exit. As sheets of Chinese characters arrive through the slot, Searle follows a rule book that tells him, step by step, how to respond to particular sequences of characters. He writes out what the rules indicate is the correct response and pushes it back through the slot. The Chinese-speaking question master outside can only conclude that whatever is in the room is sentient and understands Chinese – yet Searle, inside, understands no Chinese at all; he is merely manipulating symbols. This, Searle argues, is a close approximation of the Turing Test and damning evidence that it is not a true test of consciousness or Strong AI. Of course, Searle has his detractors, and it is well worth reading both his full paper and a selection of the responses, but the Chinese Room thought experiment succinctly demonstrates the problems we have with even classifying AI in the first place.
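
The mechanics of the room can be caricatured in a few lines of code: a rule book mapping input symbols to output symbols, consulted with no comprehension whatsoever. The rule-book entries below are invented, and the translations in the comments are given only for the reader’s benefit; nothing in the lookup depends on meaning, which is precisely Searle’s point.

```python
# Tiny model of Searle's room: a rule book maps input symbols to output
# symbols, and the "operator" replies by pure lookup, with no
# understanding of what the symbols mean. Entries are invented examples.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How's the weather?" -> "It's lovely."
}

def room(slip_of_paper):
    """Follow the instructions mechanically; no comprehension required."""
    return RULE_BOOK.get(slip_of_paper, "请再说一遍。")  # "Please repeat that."

print(room("你好吗？"))  # A convincing reply, produced without understanding.
```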

Jackson Hogg is a specialist recruitment firm based in the North East of England. We specialise in finding top candidates in the engineering, manufacturing and technology industries, handling all stages of the recruitment process, from entry-level roles to executive search, across disciplines including electrical engineering and IT. If you are a candidate with experience in machine learning, robotics or electronics, or a company looking to identify top talent, please contact Margaret Celgarz by emailing margaret.celgarz@jacksonhogg.com or by calling 07375287739.
