A word or two about Artificial Intelligence

Artificial: not originating naturally; made by human skill in imitation of something natural.

To build a definition of Artificial Intelligence on that Oxford Dictionary entry [above] may already need a rethink, given the rate of knots at which we are broadening and redefining the concept.

But first things first: what do we know about AI? Well, we know that there are several types of AI, and that these types mark the boundaries that still separate machines from us, and indeed us from them.

We know that current intelligent systems can handle huge amounts of data and make complex calculations very quickly, but they still lack the element needed to build sentient machines, which remains the goal for the future.

Intelligence, generally described, is the ability to perceive information and retain it as knowledge that can be applied to adaptive behaviour within a context or environment. Essentially it involves learning, understanding, and being able to apply the learned knowledge in the achievement of one or more goals.

You could say that AI is simply intelligence exhibited by machines; thus we have machine intelligence versus the natural intelligence displayed by humans and other animals.

You may remember past postings here about IBM’s Deep Blue computer that beat the reigning world chess champion [1997], or Google’s AlphaGo that beat the world’s top experts at Go in recent times. Such machines are known as reactive machines because they do not remember or form memories; for each move they recalculate the possible next moves and then choose, i.e. they act only on what they see. And whilst Google’s machine used a neural network to evaluate how the game was developing, such methods can’t readily be applied to other situations.

As these machines will behave in exactly the same way every time they meet the same situation, they could be good for, say, autonomous cars, because the system will be reliable and will never be bored, distracted or sad. But a machine of that type alone cannot truly engage with, or respond to, the world.
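
If you like to see rather than just read, here is a minimal Python sketch of that ‘reactive’ idea; the game, the scoring rule and every name in it are invented for illustration, but the key point shows through: nothing is stored between turns, so the same position always produces the same choice.

```python
# A minimal sketch of a "reactive" agent: it stores nothing between turns,
# just looks at the current position, scores every legal move afresh and
# picks the best one. The game, the scoring rule and all names are invented.

def score_move(position, move):
    # Placeholder evaluation; a real system would use a hand-crafted
    # heuristic or, as AlphaGo did, a neural network judging the position.
    return position.get(move, 0.0)

def choose_move(position, legal_moves):
    # No memory and no learning: the same position always gives the same choice.
    return max(legal_moves, key=lambda move: score_move(position, move))

# Toy "position": candidate moves mapped to their estimated value.
position = {"a": 0.2, "b": 0.9, "c": 0.5}
print(choose_move(position, ["a", "b", "c"]))  # always "b"
```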

Then there is another type, the one with limited memory: machines that can look a little way into the past, and self-driving cars can already do some of this. Observations are added to preprogrammed representations of the world, such as lane markings, curves, traffic lights and other important elements, along with rules for what to do, for example changing lanes when about to be hit by another car. A human driver builds on experience and retains that knowledge, whereas the car’s information is transient, i.e. not saved so it can be learned from in a new situation.

This is apparently very difficult to do, and the connection speeds needed for a self-driving car to interact with its surroundings in real time are beyond us at present, so we may have to wait until the machine learns to build its own representations.
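
For the same reason, here is a toy Python sketch of the ‘limited memory’ idea, with made-up names and thresholds: a short buffer of recent observations sits next to some fixed, preprogrammed rules, and nothing is kept once the moment has passed.

```python
from collections import deque

# A toy sketch of "limited memory": a short, transient buffer of recent
# observations sits alongside fixed, preprogrammed knowledge, and nothing
# is kept for later. All names and thresholds here are invented.

PREPROGRAMMED = {"lane_width_m": 3.5, "safe_gap_s": 2.0}  # built-in knowledge

class LimitedMemoryDriver:
    def __init__(self, horizon=5):
        # Only the last few observations are remembered, then discarded.
        self.recent_gaps = deque(maxlen=horizon)

    def observe(self, gap_to_car_behind_s):
        self.recent_gaps.append(gap_to_car_behind_s)

    def should_change_lane(self):
        # Decide from the short-lived buffer plus the fixed rules; nothing
        # learned here carries over to the next journey.
        if not self.recent_gaps:
            return False
        return min(self.recent_gaps) < PREPROGRAMMED["safe_gap_s"]

driver = LimitedMemoryDriver()
for gap in [3.0, 2.5, 1.5]:          # the car behind keeps closing in
    driver.observe(gap)
print(driver.should_change_lane())   # True: react now, remember nothing later
```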

Machines of the future of a more advanced type will need to understand that each of us has thoughts, feelings and expectations about how we want to be treated, and they will have to adjust their behaviour accordingly. It is our social interactions that allowed us to form human societies, so if these machines are ever to become part of society then there is a long way to go.

Lastly there is the type we have yet to succeed in developing: systems that can form representations about themselves, the self-aware type. That means the developer needs to understand consciousness and build machines that have it.

Conscious beings are aware of themselves: they know about their internal states and are able to predict the feelings of others. If someone in the car behind is tooting at you, you know that the person is impatient, or wants to draw your attention to something, because that is how you might react in the same circumstance. In other words, this understanding of other people, creatures and objects in the world is what psychology calls ‘theory of mind’, and without it we could not make those kinds of inferences.

We already use AI in thousands of applications, but because such applications have become mainstream we no longer consider them AI, just ordinary computing. Think of speech recognition software [processing, generation and understanding] and other recognition tasks [patterns, text, audio, image, video, facial], autonomous vehicles, medical diagnosis, gaming, search engines, spam filtering, crime fighting, marketing, robotics, remote sensing, computer vision, transportation, music recognition, classification and lots more.

Hand on heart, how many did you recognize as AI applications?

Google alone went from sporadic use of AI in its software in 2012 to some 2,700 projects three years later.

As I alluded to above, the requirement that our AI robot of the future be able to understand speech, emotions and reasoning, what they mean, and how humans react, is crucial if machines are to evolve an understanding of intelligence of their own. It is indeed a hot topic at the moment and one the big corporations are investing in heavily, for some key reasons. Among those are the ability to automatically provide deep insights, recognize unknown patterns and create high-performing predictive models from data, all without requiring explicit programming instructions.

In a nutshell, machine learning is all about automatically learning a highly accurate predictive or classifier model, or finding unknown patterns in data, by leveraging learning algorithms and optimization techniques.
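
For the curious, here is a minimal Python sketch of that loop using the classic perceptron update; the ‘spam’ numbers are invented and a real system would use far richer features and algorithms, but the principle of learning a classifier from labelled examples is the same.

```python
# A tiny sketch of that loop: a perceptron-style classifier learns weights
# from labelled examples by correcting itself whenever it gets one wrong.
# Each example is (suspicious words, links in the email), label 1 = spam;
# the numbers are made up purely for illustration.
examples = [((5, 3), 1), ((4, 4), 1), ((0, 1), 0), ((1, 0), 0)]

weights, bias, learning_rate = [0.0, 0.0], 0.0, 0.1

def predict(features):
    activation = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if activation > 0 else 0

for _ in range(20):                       # a few passes over the data
    for features, label in examples:
        error = label - predict(features)
        if error:                         # only adjust when we are wrong
            weights = [w + learning_rate * error * x
                       for w, x in zip(weights, features)]
            bias += learning_rate * error

print([predict(f) for f, _ in examples])  # [1, 1, 0, 0] once it has learned
```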

If you are wondering whether that really is the end of it, the short answer is no: we haven’t even begun to talk about things like artificial neural networks, modelled on their biological counterparts, which I won’t bore you with and which are beyond me. Suffice it to say it gets very complex, especially when we get into Deep Learning. A simple example would be to train a neural network with real-world data to spot a spam email, transcribe spoken words into a text message or recognize a cat, some of which you will recognize already.
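
Purely to give a flavour of what sits underneath, here is a toy one-hidden-layer network written with numpy and trained by gradient descent; it is my own illustration rather than anything from a real product, and the sizes and settings are arbitrary choices.

```python
import numpy as np

# A toy one-hidden-layer neural network trained by gradient descent. It
# learns XOR, chosen only because no single straight line can separate it,
# which is why extra layers help. Sizes, learning rate and step count are
# arbitrary choices for this illustration.

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))   # input  -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    # Forward pass: the network's current guesses.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: gradients of the squared error, layer by layer.
    d_output = (output - y) * output * (1 - output)
    d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)

    # Gradient-descent updates.
    W2 -= 0.5 * hidden.T @ d_output
    b2 -= 0.5 * d_output.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0, keepdims=True)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(pred.round(3).ravel())   # should end up close to [0, 1, 1, 0]
```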

Your smartphone may already have an AI accelerator chip [iPhone X, Samsung Galaxy S9], and such chips will no doubt appear in mid-range phones shortly to speed up these processes. Those of you familiar with, say, Adobe’s Photoshop and Premiere Pro already push your processors to the limit, and Adobe has now introduced its Sensei AI technology to speed up tasks like photo editing.

Cars will get AI brains as autonomous technology improves, and we have spoken of the IoT [Internet of Things] and smart speakers such as Google Home and Amazon’s Echo [with its Alexa assistant]. These speakers are at the cutting edge of AI, even if they rely on network links for most of their brains. With built-in AI chips they could understand your voice commands faster, so you don’t have to wait for your instruction to be recognized and carried out.

AI is undoubtedly an exciting and very powerful field that will become more important and will continue to develop, quite likely beyond what we can dream of for now.

At the speed it moves now we can expect to see some amazing things in our lifetime. You may think that bad, good or indifferent, a Pandora’s box maybe, but you cannot remain untouched.

The more we know about something, the less threatening it usually turns out to be, and that knowledge works to our ultimate advantage.

Congratulations if you read this far, knowledge is power. Be happy.