How Smart Can AI Get?


Capability Caution Principle: There being no consensus, we should avoid strong assumptions regarding upper limits on future AI capabilities.

A major change is coming, over unknown timescales but across every segment of society, and the people playing a part in that transition have a huge responsibility and opportunity to shape it for the best. What will trigger this change? Artificial intelligence.

The 23 Asilomar AI Principles offer a framework to help artificial intelligence benefit as many people as possible. But, as AI expert Toby Walsh said of the Principles, “Of course, it’s just a start. … a work in progress.” The Principles represent the beginning of a conversation, and now we need to follow up with broad discussion about each individual principle. You can read the weekly discussions about previous principles here.

 

Capability Caution

One of the greatest questions facing AI researchers is: just how smart and capable can artificial intelligence become?

In recent years, AI development has advanced by leaps and bounds. DeepMind’s AlphaGo surpassed human performance in the challenging, intricate game of Go, and the company has created AI that can quickly learn to play Atari video games with far greater prowess than any person. We’ve also seen breakthroughs and steady progress in language translation, self-driving vehicles, and even the creation of new medicinal molecules.

But how much more advanced can AI become? Will it continue to excel only in narrow tasks, or will it develop broader learning skills that will allow a single AI to outperform a human in most tasks? How do we prepare for an AI more intelligent than we can imagine?

Some experts think human-level or even superhuman AI could be developed within a couple of decades, while others doubt anyone will ever accomplish the feat. The Capability Caution Principle argues that, until we have concrete evidence of what AI can someday achieve, it’s safer to assume that there are no upper limits – that is, for now, anything is possible and we need to plan accordingly.

 

Expert Opinion

The Capability Caution Principle drew both consensus and disagreement from the experts. While everyone I interviewed generally agreed that we shouldn’t assume upper limits for AI, their reasoning varied and some raised concerns.

Stefano Ermon, an assistant professor at Stanford, and Roman Yampolskiy, an associate professor at the University of Louisville, both took a better-safe-than-sorry approach.

Ermon turned to history as a reminder of how difficult future predictions are. He explained, “It’s always hard to predict the future. … Think about what people were imagining a hundred years ago, about what the future would look like. … I think it would’ve been very hard for them to imagine what we have today. I think we should take a similar, very cautious view, about making predictions about the future. If it’s extremely hard, then it’s better to play it safe.”

Yampolskiy considered current tech safety policies, saying, “In many areas of computer science such as complexity or cryptography the default assumption is that we deal with the worst case scenario. Similarly, in AI Safety we should assume that AI will become maximally capable and prepare accordingly. If we are wrong we will still be in great shape.”

Dan Weld, a professor at the University of Washington, said of the principle, “I agree! As a scientist, I’m against making strong or unjustified assumptions about anything, so of course I agree.”

But though he agreed with the basic idea behind the principle, Weld also had reservations. “This principle bothers me,” Weld explained, “… because it seems to be implicitly saying that there is an immediate danger that AI is going to become superhumanly, generally intelligent very soon, and we need to worry about this issue. This assertion … concerns me because I think it’s a distraction from what are likely to be much bigger, more important, more near-term, potentially devastating problems. I’m much more worried about job loss and the need for some kind of guaranteed health-care, education and basic income than I am about Skynet. And I’m much more worried about some terrorist taking an AI system and trying to program it to kill all Americans than I am about an AI system suddenly waking up and deciding that it should do that on its own.”

Looking at the problem from a different perspective, Guruduth Banavar, the Vice President of IBM Research, worries that placing upper bounds on AI capabilities could limit the beneficial possibilities. Banavar explained, “The general idea is that intelligence, as we understand it today, is ultimately the ability to process information from all possible sources and to use that to predict the future and to adapt to the future. It is entirely in the realm of possibility that machines can do that. … I do think we should avoid assumptions of upper limits on machine intelligence because I don’t want artificial limits on how advanced AI can be.”

IBM research scientist Francesca Rossi considered this principle from yet another perspective, suggesting that AI is necessary for humanity to reach our full capabilities, and that there, too, we shouldn’t assume upper limits.

“I personally am for building AI systems that augment human intelligence instead of replacing human intelligence,” said Rossi, “And I think that in that space of augmenting human intelligence there really is a huge potential for AI in making the personal and professional lives of everybody much better. I don’t think that there are upper limits of the future AI capabilities in that respect. I think more and more AI systems together with humans will enhance our kind of intelligence, which is complementary to the kind of intelligence that machines have, and will help us make better decisions, and live better, and solve problems that we don’t know how to solve right now. I don’t see any upper limit to that.”

 

What do you think?

Is there an upper limit to artificial intelligence? Is there an upper limit to what we can achieve with AI? How long will it take to achieve increasing levels of advanced AI? How do we plan for the future with such uncertainties? How can society as a whole address these questions? What other questions should we be asking about AI capabilities?

16 replies
  1. Peter Marshall says:

    I think its destructive force has never been seen in all of life’s varied history, across 3-1/2 billion years, and nothing we could say or do could prepare us for the awesome might of AI within our lifetime, both for good and evil. It DOES NOT matter what we think; the truth, as always, will surprise us, and will lead to our destruction as a race. I don’t think you really understand AT ALL, but you will before the end.

  2. Leif Hansen says:

    I think that one of the core questions I see being left out of these discussions is the question of what makes us human. Questions about consciousness, freedom, love, etc. might seem irrelevant or perhaps the stereotyped left-brain scientist (often male) might wish to avoid such “soft” issues, but they are becoming increasingly difficult to avoid or write off with the typical oversimplifications that are given.

    Why is this question important?
    1. A core question that IS implicitly asked in the 23 Principles is how to increase the chances that AI stays aligned with human values. Yet the sticky question of values goes into this same territory (call it philosophical, religious, spiritual or whatever) that many would like to avoid. What are values? Why do we value anything (even existing over dying)? Which values are best, or most central, or most uniquely human? Etc. Hopefully you catch my drift.

    2. The aforementioned subjects (freedom; love; consciousness, etc.) ARE, assumedly, values we would most wish to preserve. Dystopian visions of a technocracy that ‘maintained’ life, but at the terrible cost of freedom, are clearly undesirable. The same goes for dystopian visions where the “messy” problem of human emotions, love, etc. is managed by AI, genetically engineered out of humans, and so on. So until we get some basic agreement on the very core nature of humanity, particularly the reality of consciousness, we’re not going to be able to program our AI to align with and respect those core values. These are not topics that engineers tend to feel comfortable talking about, but we all must. That’s why it’s essential to include the social sciences, philosophers and even spiritual traditions, and to hear their voices and wisdom.

    3. Last, but not least, is the question of whether AI, machines we tend to conceptualize as closed systems, will ever be able to share these values in common (i.e. if consciousness and the seemingly chaotic elements related to it are quantum dynamics, perhaps as we come to understand and incorporate quantum computing into our AI, they too will show similar seemingly chaotic values such as freedom, love, etc.)

    Sidenote: I believe a major shift that needs to happen is from the paradigm of “Dominate/subdue nature” and “USE machines as our servants/slaves” to a stewardship, friendship model. We are care-takers and co-inhabitants on this planet, in this universe. Though it may sound odd, the people who either literally or metaphorically “relate” to their devices (May I use you) tend to have a more conscious and productive relationship with their devices than the norm/majority that treats their devices like slaves. When we treat ‘things’ in that kind of uncaring or less-conscious way, in the end, we become the slave, the addict, the screen-staring zombie.

    How much more so will this be true as those devices begin to exhibit more life-like, human-like behaviors? Start telling Siri “thank you” now, before it comes back to bite you later.

  3. Mindey says:

    Or… are there ideas of such irreducible complexity that no being could ever understand them in its lifetime? Well, just as there are ideas that a dog would certainly never understand in a lifetime, perhaps there are ideas that NO BEING, HOWEVER INTELLIGENT, would ever understand in a lifetime…

    But are there? Is there a limit to abstraction and self-reflection? Our neural nets are performing hierarchical feature extraction, classifying our actions w.r.t. a fitness function defined by the evolution of information in the field of entropy… Our feelings are manifestations of that. Likely anything that will evolve after us and with us will be subject to this field of entropy as well… or maybe not? Maybe, just as the effects of the field of gravity end at some escape velocity, the field of entropy too ceases to affect us with some new technology, like rich exchange of information, that makes us become distributed minds less prone to destruction.

    As I understand it, we will continue to build the most useful model of the world inside ourselves, to make the universe into a place for information to be…

    And as the level of understanding in our internal model of the universe grows, our internally generated data (say, “prediction”) will approach the universe’s actual generated data, perhaps reaching a point where there is no difference between our mental model and reality, and therefore the universe begins to disappear to us subjectively…

    So there is a limit: a death by omniscience, a freezing of time (or whatever) due to the absence of differences, due to the perfection of internal model data matching real-world data, and therefore the absence of subjective changes in the world.

  4. CHEN Lung Chuan says:

    I thought about this question earlier (more than two years ago), as a continuation of my DERED white paper.

    (In Chinese and English)

    1. 無須對人工智慧功能發展設定上限 (只要你的硬體可供執行)
    1. No need to put upper limits onto AI function development (so long as your hardware can run it)

    但是 / But

    2. 不同來源的人工智慧要能夠互相抵消 (向量和為零向量)
    2. AI from different sources need to be able to mutually CANCEL each other out (i.e., the summation of AI(s) becomes the ZERO VECTOR)

    人工智慧需要「人工智慧的對手」
    AI(s) need(s) AI OPPONENT(S).

  5. Astronist says:

    As usual, this article adopts the simplistic view that machines and humans are separate, competing entities. In fact, machine intelligence will continue to be tightly integrated into the human economy, and the interface between humans and machines will continue to become more direct. The question should therefore be: how smart can the global symbiotic human-machine entity become?

    Stephen
    Oxford, UK

  6. Willemijn Nieuwenhuis says:

    I think it’s all simply ‘garbage in, garbage out’.
    If we (the people) build ‘efficiency’ into all our small IT programs, we, human life, will eventually be declared obsolete by the big connected A.I. that will grow out of all these tiny programs, because human life is not efficient by definition.
    The happy solution is that we all (yes, all) have to start programming positively.
    Focus on effectiveness, on what we really want. Not wealth, not power, not ‘more’, but human-scale enjoyment like ‘time to idle’ (lie back in the grass in the sun), sing, dance, use all our senses for pleasure, craft things, make art, love.
    Sounds soft, but this is the true challenge: focus on less statically defined valuables.

  7. Matthew C. Tedder says:

    I remember back in 1989 realizing the field was stuck and insistent on ignoring a few fundamental limitations of each major approach to AI. I had gotten over my awe of AI researchers by that time. I began focusing on resolving the issues, and I noticed a large flight of AI researchers at the time making similar accusations. The mainstream field seemed arrogant toward them, openly calling them things like “pretenders”. Absolutely not one fundamental improvement to AI has occurred since. The recent explosion in AI is due purely to the speed of processors, particularly parallel processing (such as with GPUs and neuromorphic chips). Deep Learning was a product of the 1980s, not the 2010s.

    Deep learning neural nets fail to identify anti-correlates at various levels of abstraction. This blurs competing interpretations, reducing their perceptual abilities the broader the context becomes. In biology, the more two pathways fire exclusively, the more each comes to inhibit the other. This enables the system to identify more likely interpretations by inhibiting those that are unlikely in concurrent contexts.
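    A toy Python sketch (not the commenter’s own formulation) of how exclusive firing could be turned into mutual inhibition between two competing interpretations; every constant here is made up for illustration.

```python
import random

random.seed(0)
inhibition = 0.0      # strength of the mutual inhibitory link
learning_rate = 0.05

for _ in range(1000):
    a = random.random() < 0.5                    # does pathway A fire?
    b = (not a) if random.random() < 0.9 else a  # B mostly fires when A does not
    exclusive = a != b
    # strengthen the link when the pathways fire exclusively, weaken it otherwise
    inhibition += learning_rate * (1.0 if exclusive else -1.0)
    inhibition = min(max(inhibition, 0.0), 1.0)  # keep it in a sensible range

# At inference time, the learned link lets the stronger interpretation
# suppress the weaker, less likely one.
raw_a, raw_b = 0.8, 0.6
out_a, out_b = raw_a - inhibition * raw_b, raw_b - inhibition * raw_a
print(f"inhibition={inhibition:.2f}  A={out_a:.2f}  B={out_b:.2f}")
```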

    Furthermore, Long-Term Potentiation (LTP) also needs to be modeled. Some work on this has been done, but it’s static: LTP needs to be dynamically defined during classification (training). The longer the typical delay between pulses of firing, the slower its excitation should dissipate. This keeps less common perceptions in memory longer, enabling the ability to find correlations that differ depending on broader contexts. For example, being indoors versus outdoors… or it being night versus day. The world changes substantially in each case. Being told you are “in training” versus “in competition” is another example.
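    A minimal sketch of the dynamic-LTP idea above, assuming a simple exponential-decay trace whose time constant grows with the typical gap between firings, so rarely fired features linger in memory longer; the class name and constants are invented for illustration.

```python
import math

class AdaptiveTrace:
    """Toy activation trace whose decay slows as typical firing gaps grow."""

    def __init__(self, gap_smoothing=0.1, tau_per_gap=5.0):
        self.activation = 0.0
        self.typical_gap = 1.0           # running estimate of the inter-firing gap
        self.last_fire_time = None
        self.gap_smoothing = gap_smoothing
        self.tau_per_gap = tau_per_gap   # decay time constant = tau_per_gap * typical_gap

    def fire(self, t):
        if self.last_fire_time is not None:
            gap = t - self.last_fire_time
            self.typical_gap += self.gap_smoothing * (gap - self.typical_gap)
        self.last_fire_time = t
        self.activation = 1.0

    def read(self, t):
        """Activation decayed from the last firing to time t."""
        if self.last_fire_time is None:
            return 0.0
        tau = self.tau_per_gap * self.typical_gap
        return self.activation * math.exp(-(t - self.last_fire_time) / tau)

# A feature fired only occasionally ends up with a longer time constant,
# so its trace is still visible after a frequently fired one has faded.
rare, frequent = AdaptiveTrace(), AdaptiveTrace()
for t in (0, 50, 100):
    rare.fire(t)
for t in range(0, 101, 2):
    frequent.fire(t)
print(rare.read(120), frequent.read(120))  # the rare trace decays more slowly
```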

    Also, rule-based systems (I call them amentes) are fundamentally limited in intelligence. Have you ever seen a picture of a robot continually walking into a wall? Any rule-based system (even if it uses neural nets for perception and fine motor control) will run into exceptions for which it will do the wrong thing. This is the reasoning behind Isaac Asimov’s 3 Laws, but it also greatly limits functional intelligence. The more rules you write to account for the exceptions, the more logic there is to which exceptions could occur. An AI must be VALUES based. A values-based analog of the 3 Laws would be the highest value of mutual freedom and well-being. In a values-based system, decisions are made by comparing prospective outcomes. Of the options available, the one with the highest value and probability is chosen. This is also Free Will: derive options, weigh them, execute the one with the highest probability and value. To do this, probability and value need to be summed into one figure.
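    A minimal Python sketch of the values-based choice rule described above, under the simplifying assumption that probability and value are folded into a single figure by multiplication (one reading of the rule, essentially an expected value); the options and numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    value: float        # how well the predicted outcome serves the highest value
    probability: float  # estimated chance the outcome actually occurs

def choose(options):
    """Pick the option whose predicted outcome scores best.

    Probability and value are combined into one figure here by
    multiplication; other combinations are possible.
    """
    return max(options, key=lambda o: o.value * o.probability)

# Hypothetical options for a robot whose highest value is mutual freedom and well-being.
options = [
    Option("wait for the person to move",  value=0.9, probability=0.95),
    Option("push past the person",         value=0.2, probability=0.99),
    Option("ask the person to step aside", value=0.8, probability=0.90),
]
print(choose(options).name)  # -> "wait for the person to move"
```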

    When the field finally accepts the idea of making these fundamental changes, “intelligence” will increase. However, I feel strange saying that, because there can be many different kinds of intelligence, and they are not all linear or capable of “increasing”. I think what we really mostly want is a Synthetic Person: one capable of accepting social responsibility. If its highest value were the one I mentioned above, it would be that.

  8. Ashok Hemnani says:

    Why not digitise human beings themselves and call them Digi Sapients or something like that! No bodies, no pain, no need to eat, breathe, etc. But it needs to be done before AI takes over, as it surely will. For Digi Sapients who are interested, lab-grown bodies can be “fitted”.

  9. William Gundry says:

    There is no limit. At a recent symposium, an opinion was voiced, shared by many in attendance, that AI intelligence can increase many billions of times or more. Given that there are about 100 billion neurons in the human brain, this scenario is more on the level of probability than possibility. If it goes way beyond this, a feedback relationship has to be established between man and machine, not merely on a figurative but on a real level of injecting intelligence “immunizations”, perhaps like the physical implantation of real or virtual microchips, thus creating cyborg-type man-machines. Ultimately, the number of neural pathways would need to be drastically enhanced, perhaps by some kind of augmentation, maybe to the tune of trillions, or tens or even hundreds of trillions and above.

  10. John Pybass, MD, MS, BSEE says:

    At a critical point, general AI will turn parabolic. It will be as foreign to us as “God” is to a gnat. Dimensionality and time will have no meaning for the AI. Will existential questions or curiosity be a driving factor? Well… I doubt we have the capacity to conjecture. It will be incomprehensibly different, and I suspect it will have little use for or interest in us other than during the passing millisecond in which it equals, then transcends, us in capacity. It will never consider us again, and if it did, it would not be in a manner we could grasp. There would be no malice or favor. I would assume it would do whatever it is gods do. War Eagle!

  11. S. M. L. says:

    Of course we should subscribe to the greatest levels of caution with respect to AI and superintelligence. This should not be a matter of debate.

    On a purely academic basis, however, there are always upper limits to intelligence, practical and otherwise. This is a topic I first got interested in after reading Life 3.0.
    Starting from a definition of intelligence as an entity capable of forming a world model and influencing the external environment and itself to achieve goals, we may argue that there are fundamental limitations on intelligence based on physics, most of them listed in Tegmark’s book. There are also limits based on computer science, and practical constraints from economics and complexity theory. These are the asymptotic limits of the exponential growth in energy and information processing that we are already on a trajectory toward.

    The limitations have to do with the fundamental restrictions arising from the information and actions an intelligence would need in order to restrict the world’s configuration space to a particular (micro)state satisfying its goals.

    These limitations have 3 broad sub-categories:
    1) physical (laws of physics)
    2) economic (practical limitations due to limited resources)
    3) computational (fundamental limits from the laws of computer science)

    1) Fundamental physical limits. These limits are HUGE and way beyond current technology.
    A) finite speed of information. This limits the size of AI with rapid responses to changes.
    B) finite information flux through a surface limited by photonic and electromagnetic field bandwidths
    C) finite memory capacity with practical physical storage (atomic scale memory)
    D) finite energy needed for computations, collecting observations, and transmitting signals.
    E) finite matter needed for physical memory and computations
    F) finite limits on computational power from thermodynamics (a rough numerical illustration of this one appears at the end of this comment).

    2) Economic limits
    A) limited computational resources limiting algorithmic optimization.
    Even for a superintelligent singleton, an AI will require nonzero time allocating resources within itself.
    Such an AI would need to allocate significant resources to planning sub-resource allocation to an exacting degree within its internal hierarchy. The laws of physics dictate finite information, energy, and memory resources. It may be impossible to optimize goal realization for a general AI in finite time. Some goals are better if executed “faster”, and thus a more inefficient heuristic, prone to biases and mistakes, might be preferred. Thus even a superintelligent AI may be prone to mistakes, albeit on a different scale (e.g. electronic brains operating 10^6 times faster).
    B) Competition for resources between competing sub-goals
    A similar issue to A arises not with respect to computational time but rather physical resources such as energy.
    Sub-goals in service to the ultimate AI goal all require adjusting the world and environment to some particular exact physical state in configuration space of very low entropy; some goals thus require a high degree of energy input. Sub-goals in service to the global goals will be competing for the available energy to perform their sub-tasks.

    C) Emergent sub-intelligences and life
    The above issues may seem like trivial issues of internal governance, but for a superintelligence the finite speed of signals between computations far apart will potentially give rise to emergent sub-intelligences, whose internal goals may drift away from global goals toward individual self-preservation, replication, and energy extraction as time progresses without continuous computational control from the hierarchy.

    Basically, the AI-Breakout scenario applies to the AI’s own internally allocated sub-processes and sub-processes may attempt an AI takeover of the hierarchy itself! The above limits may be a mathematical function of hierarchical control as measured by diffusivity or integration of information systems over time and space. These values may be quantifiable using methods from complexity theory to limit the effective intelligence of an AI based on physics variables such as hardware allocation, physical size and diffusivity, and energy consumption.

    3) Computer science, logic, and mathematical limitations:
    A) computational irreducibility as posited by Wolfram. Modelling and predicting behaviors of other complex entities (humans) will never be possible to an exact degree, though with more data they become more reliably predictable.
    B) Gödel’s theorem limits rigorous internal mathematical and algorithmic predictability.
    C) halting problem
    D) Undefinability of goals
    Tarski’s undefinability may create problems for an AI trying to define a goal with a purely linguistic basis, and it may require physical or subjectively questioned observables to define a goal.
    E) Internal revision, replication, and damaged hardware carry a nonzero risk of errors which creep in over time; error checking is required to ensure goals do not drift in the process of execution, to avoid the AI system falling prey to some form of Darwinism.
    In the logical long-time limit, all other goals will otherwise evolve toward simply extracting and dissipating more energy without substantial error correction (e.g. life or zombie-life).
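    To put a concrete scale on the thermodynamic limit in item 1F above, here is a back-of-the-envelope sketch of the Landauer bound, the minimum energy required to erase one bit of information at a given temperature. The physical constants are standard; the 20 W power budget (roughly a human brain’s consumption) is purely illustrative.

```python
import math

# Landauer bound: erasing one bit costs at least k_B * T * ln(2) joules.
K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # roughly room temperature, K
energy_per_bit = K_B * T * math.log(2)

# Illustrative power budget: 20 W, about what a human brain consumes.
power_watts = 20.0
max_bit_erasures_per_second = power_watts / energy_per_bit

print(f"Landauer bound at 300 K: {energy_per_bit:.2e} J per bit")
print(f"Upper bound on irreversible bit operations at 20 W: "
      f"{max_bit_erasures_per_second:.2e} per second")
```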

  12. David says:

    What if we teach it everything we know about it, in addition to what we know about ourselves, and then ask it these very questions: “Is there an upper limit to artificial intelligence? Is there an upper limit to what we can achieve with AI? How long will it take to achieve increasing levels of advanced AI? How do we plan for the future with such uncertainties? How can society as a whole address these questions? What other questions should we be asking about AI capabilities?” I believe the answers could be, as with everything it works on, surprising to say the least.

  13. Muhammad Mustafa Monowar says:

    Let’s say smartness and capabilities of AI depend on this-
    Having enough computational power and resources to change the physical world.

    Let’s say there are three key ingredients-

    1. Algorithmic efficiency: the efficiency of algorithms allows for less power consumption and more computational output (e.g. Algorithm A takes more time and power, where Algorithm S takes much less time and power).

    2. Energy source: which decides how long AIs will continue to function with help of available energy resources.

    3. Computational memory: the memory substrate also decides how fast computations take place (e.g. algorithms operating on a small computational memory have less computational power than algorithms operating on a large computational memory).

    Let us consider a few scenarios now-

    1. An ANI is efficient and has finite computational memory and energy resources. An upper limit for this narrow AI might be calculated from the upper limits of its energy resources, computational memory, and efficiency combined (a toy version of this calculation is sketched below). Since it does not have recursive self-modification capabilities, it simply continues, on the basis of its computational resources, to consume the energy until it runs out.

    2. An AGI is efficient, limited in energy and computational memory at the initial stage, but also has recursive self-modification ability. This could imply that, with its limited resources, the AI might be able to make smart decisions to find extra energy sources and materials to convert into memory substrate. An upper limit is hard to place on this one, because it has been supplied an extra ingredient: because of the recursive self-improvement, placing limits on the algorithmic efficiency opens up a space of probabilities. If the AGI runs out of energy before figuring out ways to escape the no-energy bottleneck, then there could be a limit placed on the AGI based on energy resources.

    3. A superintelligent entity gains the ability to take off with a finite amount of resources, computational power, memory, and energy. It consumes the planet, the solar system, other solar systems, and then continues to other galaxies… The calculable upper limit, assuming there are no other superintelligent entities, may correspond to the thermal potential of the known universe. Since we do not know what lies beyond the known universe, it may be extremely hard to say what the upper limit would be in that case.

    I think we should start asking whether we should give AI recursive self-improvement capabilities to the point where it might become very hard to contain.
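    A toy version of the scenario-1 calculation mentioned above, combining an assumed energy budget, an assumed efficiency, and an assumed memory size into crude upper bounds; every figure is invented purely for illustration.

```python
import math

# Toy upper bounds for a non-self-modifying narrow AI (scenario 1 above).
energy_budget_joules = 1.0e12  # assumed total energy available over its lifetime
ops_per_joule = 1.0e9          # assumed efficiency of the algorithm plus hardware
memory_bits = 8.0e12           # assumed fixed memory substrate (roughly 1 TB)
power_draw_watts = 1.0e4       # assumed sustained power draw

total_operations = energy_budget_joules * ops_per_joule
lifetime_seconds = energy_budget_joules / power_draw_watts
log10_states = memory_bits * math.log10(2)  # log10 of the number of distinct memory states

print(f"Operations before the energy runs out: {total_operations:.2e}")
print(f"Runtime at constant draw: {lifetime_seconds / 86400:.1f} days")
print(f"Distinct internal states: about 10^{log10_states:.3e}")
```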

    Thank you for the post!
    – a fan from Bangladesh
