
The Rise of AI – The Impact on Family Offices – Part 2

From Human Intelligence to Artificial Intelligence

The previous, introductory article observed that a pervasive intelligence revolution is underway, one that will eventually and greatly affect our professional and personal lives.

It is therefore imperative that we try to understand the nature of this transformation. What exactly do we mean by artificial intelligence (AI) and how does it work?

This article reviews the attributes of human intelligence, enquires to what extent our current technology can replicate it in machines, and indicates some of the barriers to progress.

These aspects are of intrinsic interest. In considering them, it is apparent that, for the foreseeable future, humans will continue to supervise economic activity, but at an increasingly creative level.

On Human Intelligence 

Think of Human Intelligence as:

Human intelligence = Biochemical intelligence + Mind

For our purpose, biochemical intelligence is defined here as a baseline ability to understand, reason and make predictions, implemented in our human, biochemical substrate. All our other subjective attributes are grouped under Mind (or consciousness). They include our feelings, perceptions, hopes, desires, beliefs, intentions and self-awareness. These also surely influence our intelligent behaviour. They give us purpose. Without minds we are just robots, no matter how complicated, without a primary motive. It is why humans will determine the future if AI does not become conscious.

Apart from the lack of consciousness, there are other reasons for expecting that even replicating our biochemical intelligence on conventional hardware and software is out of the reach of current technology. They have to do with the formidable capacity of the brain, with how we think and learn, and with implementing plain common sense. We evaluate these next.

The Brain

Our brain, a small, convoluted, three-pound mass of grey matter, has an impressive biochemical complexity, efficiency, power and memory:

1. Structure: It has nearly 90 billion nerve cells, or neurons, which are linked together by about 10 trillion connections called synapses. Collectively, this enables brain signals to pass through hundreds of trillions of different pathways. The cerebrum, the main driver of “intelligence”, contains roughly a quarter of all the neurons. A single neuron is probably more complex than our latest AI technology − a deep learning, convolutional neural network.
2. Memory: Scientists now believe that the capacity of the human brain is about a petabyte – roughly the size of the entire Internet.
3. Processing power: It is at least two million times faster than the fastest computer on earth (as of 2013).
4. Efficiency: The human brain can store this volume of information with just enough power to run a dim light bulb. By contrast, a computer with the same memory and processing power would require 1 gigawatt of power, or as Tom Bartol, a neuroscientist at the Salk Institute put it, “basically a whole nuclear power station to run one computer that does what our ‘computer’ does with 20 watts”.

According to Tim Dettmers (a deep learning specialist) computing technology may only achieve this capability by about the year 2100.

Thinking and Learning

Human thinking occurs in two principal modes that alternate between reasoning and understanding. When we learn a task or skill, with practice, our reasoning gradually transitions to understanding. Whereas reasoning is a conscious, laborious process, understanding is spontaneous and fast. It is performed subconsciously. Understanding, supervised and supplemented by reasoning and common sense, enables us to function efficiently in a dynamic, complex environment.

Example: Driving a Car
Consider how one’s expertise develops in driving a car. During learning, recall how it was necessary to concentrate on making assessments and analysing many individual tasks: coordinating the steering, clutch, gears (in older cars!), accelerator and brake; evaluating the response of the car; gauging distances and position on the road in relation to obstacles, boundaries, other cars and pedestrians.

One employed a combination of innate common sense and reasoning in anticipating the behaviour of cars and pedestrians just ahead to assess the risk and make decisions; and, importantly, to learn to correct errors in decision making and control.

The result, after numerous trials, is that no deliberation is eventually needed on many individual actions, or sequences of actions, and on making predictions. They have progressively morphed into sub-conscious, expert assessments, integrated with the smoothly coordinated, physical tasks involved in navigating the car through traffic. For the most part, driving becomes instinctive, a complex but holistic activity, easy to perform yet difficult to explain. But, at the same time, periodically, as the situation demands, a switch to conscious analysis occurs. Unexpected encounters, parking, following signs, road junctions, and so forth, override our automated behaviour.

This interplay between instinct and reason, interspersed with common sense, is how we generally learn, think and act from day to day.

In an article “Why AI Works” Monica Anderson, an independent AI researcher and educator, refers to the book “Thinking Fast and Slow”, by Daniel Kahneman. The book discusses how the mind employs two complementary ways of thinking, which Kahneman calls System 1 and System 2. System 1 is fast, automatic, intuitive and a largely unconscious mode. System 2, in contrast, is our slow, deliberate, analytical and consciously effortful mode of reasoning about the world. The driving example illustrates these two modes in action.

The chart below (Figure 1) is reproduced from Anderson’s article. Appropriately, she calls System 1 “Understanding” and System 2 “Reasoning”.

Figure 1. Source: Monica Anderson – independent AI researcher, implementer, epistemologist and educator exploring Deep Neural Networks since 2001; ex-Googler; founder of Syntience Inc. and Sens.AI.

We can characterize these as follows:

Reasoning is a process: It involves analysis. It is reductive. It cannot cope with complexity, but it offers insights. It provides rules.

Understanding, on the other hand, is a state: you look at, or pay attention to, an object or a situation and you understand or know what it is without having to analyze it. It is spontaneous and holistic. It can look through complexity, seeing the wood for the trees. It is pattern recognition.

During learning, our thinking migrates along a continuous spectrum. Reasoning, analysis and deliberation are at one end of this continuum; spontaneous understanding, pattern recognition, intuition and instinct are at the other. With practice in a particular domain, our skill moves from slow and deliberate to fast, efficient and subconscious, spanning increasingly complex cognition and activity.

The extent to which our actions can become automatic, of course, depends on the level of complexity. Consider the measured effort required when parking in a tight space compared to our intuitive driving when the traffic is flowing.

As we transfer from deliberate to instinctive mode, so our cognition transitions from a precise and laborious focus on details to a more efficient, holistic awareness (seeing the wood for the trees), but still at a level that is sufficient to carry out the activities in question. Driving would not be possible if we always had to examine details and employ reason.

It would also be problematic if we did not employ common sense.

Common Sense

Common Sense is the third mode of thinking.

Our capacity to understand often requires an awareness of some context. This awareness is necessary for a proper, nuanced and not always literal, interpretation of the object of interest. This might be a situation, an image or a sentence. It is a capacity that (most) humans possess. We apply it to act “sensibly” in all sorts of new situations. It is the ability to fill in the blanks – to infer the state of the world from partial information; to forecast the future from the past and the present.

For example:
• Does the driver who’s edging out into the road intend to merge into traffic?
• Is a pedestrian standing on the curb, looking down at his smartphone, about to step into the street while absorbed in his phone?
• “The man couldn’t lift his son because he was so weak.” Here, “he” could logically refer to either the man or his son. But as humans, we know that in this context it is the man who is weak. For computers, the “he” is equally valid for both.

This facility comes from a basic knowledge about how the world of human beings works. It is not, strictly, rule-based or logical, but a set of heuristics that is both inherent in us and also accumulated and refined during our daily interaction with the world.

On the other hand, “Common sense is nothing more than a deposit of prejudices laid down in the mind before age eighteen” (Albert Einstein). But not many people have the ability to question common sense and creatively think out of the box!

To what extent can we model our biochemical intelligence in hardware and software?

On Artificial Intelligence

Figure 2 is a modest impression of how current technology translates, or, more aptly, dilutes, human intelligence to create artificial intelligence.

Figure 2

We have seen how the three attributes required to simulate human thinking are Reasoning, Understanding and Common Sense. To what extent can a machine be trained to replicate them?

Reasoning – Rules
The implementation of reasoning in a machine is straightforward, in principle. It can be accomplished with conventionally coded, logical rules – computer instructions specified by domain experts. And whereas for humans reasoning is a slow, laborious process, computers can process rules much faster.

However, there are practical limits to applying reasoning alone in very complex settings, because a very large ensemble of internally consistent and complete instructions would be required, which is difficult to specify and maintain.
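To make the idea concrete, a rule-based system is simply a set of hand-coded conditions supplied by an expert. The sketch below is purely illustrative – the function name, thresholds and advice strings are invented, not taken from any real driving system:

```python
def steering_advice(obstacle_distance_m: float, speed_kmh: float) -> str:
    """Toy rule-based 'reasoning': expert-specified rules, applied in order.

    The thresholds here are hypothetical, chosen only for illustration.
    """
    if obstacle_distance_m < 10:
        return "brake hard"
    if obstacle_distance_m < 30 and speed_kmh > 50:
        return "slow down"
    return "continue"

# An obstacle five metres ahead triggers the first rule.
print(steering_advice(5, 40))   # -> brake hard
```

The weakness noted above is visible even here: covering every combination of distance, speed, weather and road type would require an ever-growing, mutually consistent rule set.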

Understanding – Pattern Recognition
The way around this is to realise that, often, we do not need to analyse the details of a complicated situation, environment or object in order to recognize it and make assessments or decisions.

An obvious example is the futility of a detailed analysis of the relationship between individual pixels to comprehend the image of a face. Depending on the level of analysis required, one could progressively step back and focus on the nose, the eyes, and so on until the whole picture is effortlessly seen. In many instances, all we need is such a holistic, high-level understanding.

It turns out that the recent dramatic growth in computer processing power has enabled machine learning techniques, particularly Deep Neural Networks (DNN), to recognize very complex patterns, at many levels of complexity, in all kinds of data. When properly trained, such machines outperform humans at some tasks, such as classifying objects in images. On digital data (for example, cross-sectional and time-series financial data) containing complicated, hidden, non-linear relationships, they perform better than even the most advanced statistical procedures. This sophisticated pattern recognition capability is therefore an expedient surrogate for human understanding.
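The essence of this approach can be shown with the simplest ancestor of a deep network: a perceptron that learns to separate two groups of points from examples, with no hand-coded rules at all. The data and labels below are invented for illustration; real DNNs have millions of parameters, but the learn-from-examples principle is the same:

```python
# A minimal perceptron learning a pattern from labelled examples
# instead of expert-written rules. Data points are invented.
data = [((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.8, 0.9), 1), ((0.9, 0.7), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1     # weights, bias, learning rate

def predict(x):
    """Classify a point using the current weights."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(100):                # repeated trial and error
    for x, y in data:
        err = y - predict(x)        # compare output with the true label
        w[0] += lr * err * x[0]     # nudge the weights toward the answer
        w[1] += lr * err * x[1]
        b += lr * err

print([predict(x) for x, _ in data])   # -> [0, 0, 1, 1]
```

No rule ever states what distinguishes the two groups; the boundary emerges from the adjustments alone. Deep networks layer many such units to pick out patterns far too complex to describe by hand.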

Common sense – Heuristics?
Unfortunately, a principled approach to implementing common sense in a machine is still missing. Pattern Recognition is thus, largely, a literal interpretation of the world, without nuance and context.

The human capacity for common sense seems to originate from the way we learn, by discovering and gathering “rules of thumb” – heuristics – through experience and trial and error. This suggests that a similar process might be possible in machine learning.

Machine Learning

In a recent automotive experiment, NVIDIA trained convolutional neural networks (CNNs – a form of DNN) to map the raw pixels from a front-facing camera to the steering commands for a self-driving car. After training, the car could navigate freeways, country roads and gravel driveways, and even steer in the rain, after only 3,000 miles of training data.

Training data was collected by placing video cameras on a car and driving on a wide variety of roads, in a diverse set of lighting and weather conditions. The cameras gathered “surface street data in central New Jersey and highway data from Illinois, Michigan, Pennsylvania, and New York. Other road types include two-lane roads (with and without lane markings), residential roads with parked cars, tunnels, and unpaved roads. Data was collected in clear, cloudy, foggy, snowy, and rainy weather, both day and night. In some instances, the sun was low in the sky, resulting in glare reflecting from the road surface and scattering from the windshield”.

The training consisted of presenting the sequence of training video images to the CNN and getting it to learn to output the proper amount of rotation required on the steering wheel for each image. A method known as supervised learning was employed, in which, for each image presented, a prescribed rotation was also shown. The training algorithm employed a systematic trial and error procedure to adjust the system to output the correct rotation for each of the training road conditions. Thereafter it was able to “steer” correctly for images not previously recorded.
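The adjust-by-error loop at the heart of supervised learning can be caricatured in a few lines. The sketch below fits a single parameter so that an invented “road curvature” feature maps to a prescribed steering rotation; NVIDIA’s actual system trains a deep convolutional network on raw images, so only the principle, not the model, is shared:

```python
# Toy supervised learning. Each (curvature, rotation) pair stands in for
# a (camera image, prescribed steering rotation) training example; the
# numbers are invented and imply rotation = 30 * curvature.
samples = [(-1.0, -30.0), (-0.5, -15.0), (0.0, 0.0), (0.5, 15.0), (1.0, 30.0)]

w = 0.0                                # the single adjustable parameter
for _ in range(200):                   # repeated presentation of examples
    for curvature, target in samples:
        predicted = w * curvature      # model's steering output
        error = predicted - target     # compare with the prescribed rotation
        w -= 0.1 * error * curvature   # adjust to reduce the error

print(round(w, 2))   # -> 30.0: the system has learnt the mapping
```

The systematic trial-and-error adjustment described above is exactly this inner loop, repeated over millions of parameters and thousands of video frames rather than one parameter and five numbers.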

This capability should not be underestimated. Without explicitly learning the pattern in each video frame, the CNN nevertheless automatically focused on the required level of detail from each image to make the steering decision.

Quite human-like, one could say!

However, learning performed by humans is mostly unsupervised, discovering regularities in the world and building predictive models through observation and action, trial and error, across a voluminous number and diversity of everyday experiences. It is this that gives us a general ability to exercise common sense, which we naturally transfer to learning specialized skills.

For a fully driverless car – one requiring human intervention no more than once every 5,000 miles – a recent estimate of the training required is:

• 3 million miles of live test drives
• 1 billion miles of simulated test drives

This suggests that, with a sufficiently large number and variety of training examples, machine learning could emulate operationally useful “common sense”.

In closing, one should not minimize the utility of reasoning. Wherever intentions, directions and constraints are needed, users must specify rules. AI applications will typically combine rules and advanced pattern recognition algorithms with active expert participation to provide common sense and supervision!

The next article will look at machine learning technology in more detail and its applications, particularly as they relate to family offices.
