The technological or AI singularity is a hypothetical future event in which artificial intelligence surpasses human intelligence, leading to a rapid and exponential increase in technological development. Others define it as the point at which AI becomes capable of recursive self-improvement, driving advances in technology that are beyond human comprehension or control. This event is predicted to result in significant societal, economic, and technological changes. The concept was first popularized by mathematician and computer scientist Vernor Vinge in 1993, who predicted that the singularity would occur no later than around 2030.
As we’ll see below, there are a number of different perspectives on the AI singularity, each with its own strengths and weaknesses. Some experts believe that the singularity is a real and imminent threat, while others believe that it is nothing more than science fiction. There is also a great deal of debate about what the singularity would actually mean for humanity. Some believe that it would lead to a utopia, while others believe that it would lead to our extinction.
One of the most common arguments in favor of the singularity is that it would lead to a rapid increase in technological progress. This is because AI would be able to design and build new technologies much faster than humans can. This could lead to advances in areas such as medicine, energy, and space exploration.
Another argument in favor of the singularity is that it would lead to a better understanding of the universe. AI would be able to process information much faster than humans can, and it could use this information to answer questions that have been eluding us for centuries. This could lead to a new understanding of physics, biology, and cosmology.
However, there are also a number of arguments against the singularity. One of the biggest concerns is that a self-improving AI could become uncontrollable: able to learn and adapt at an exponential rate, it could eventually become far smarter than humans and potentially pose a threat to humanity.
Another concern is that the singularity could lead to a loss of human identity. If AI becomes more intelligent than humans, it could potentially replace us in many areas of society. This could lead to a world where humans are no longer the dominant species.
Ultimately, the AI singularity is a complex and uncertain event. There is no way to know for sure what the future holds, and there are a number of different perspectives on what the singularity would mean for humanity. It is important to consider all of these perspectives when thinking about the singularity, and to be prepared for whatever the future may hold.
Before we dig into the various perspectives and further nuances of this hypothetical future event, see also my previous article Human Intelligence versus Machine Intelligence in the Democratizing AI Newsletter, as well as chapter 9, “The Debates, Progress and Likely Future Paths of Artificial Intelligence”, in my book “Democratizing Artificial Intelligence to Benefit Everyone: Shaping a Better Future in the Smart Technology Era”. These can assist us in developing a more realistic, practical, and thoughtful understanding of AI’s progress and likely future paths, which can in turn be used as input to help shape a beneficial human-centric future in the Smart Technology Era.
In this article, I first provide some background on the likely evolution of AI or machine intelligence in a possible ecosystem of intelligence before providing a more in-depth analysis of the various perspectives on the AI singularity. The following topics and questions are addressed:
The following extract from Chapter 3 “AI as Key Exponential Technology in the Smart Technology Era” of my book provides a brief overview of a possible evolution of different types of AI in the future:
“Although the AI founders were very bullish about AI’s potential, even they could not have truly imagined the way in which infinite data, processing power and processing speed could result in self-learning and self-improving machines that function and interact in ways that we thought were strictly human. We already see glimpses of machines that hypothesize, recommend, adapt, and learn from interactions, and then reason through a dynamic and constantly transforming experience, in a roughly similar way to humans. However, as we will see in Chapter 9, AI still has a long way to go to replicate the type of general intelligence exhibited by humans, which can be called artificial general intelligence (AGI) when performed by a machine. This hypothetical AGI, also termed strong AI or human-level AI, refers to the ability to learn, understand and accomplish a cognitive task at least as well as humans, and to independently build multiple competencies and form connections and generalizations across domains, whereas Artificial Super Intelligence (ASI) can accomplish virtually any goal and is general intelligence far beyond the human level (surpassing human intelligence in all aspects – from general wisdom and creativity to problem solving). The AI that exists in our world today is exclusively a narrow or “weak” type of Artificial Intelligence, called Artificial Narrow Intelligence (ANI), that is programmed or trained to accomplish a narrow set of goals or perform a single task such as predicting the markets, playing a game such as Chess or Go, driving a car, checking the weather, or translating between languages.
There is also another way of classifying AI and AI-enabled machines, which involves the degree to which an AI system can replicate human capabilities. According to this system of classification, there are four types of AI-based systems: reactive machines, limited memory machines, theory of mind, and self-aware AI.[i] Reactive or response machines do not have the ability to learn or have memory-based functionality, but emulate the human mind’s ability to respond to different kinds of stimuli by perceiving occurrences in the world and responding to them. Examples include expert, logic, search-, or rules-based systems, with a prime example being IBM’s Deep Blue, a machine that beat chess Grandmaster Garry Kasparov in 1997 by perceiving and reacting to the position of various pieces on the chess board. In addition to the functionality of reactive machines, limited memory machines can learn from historical data to make decisions. Their memory is limited in the sense that it focuses on learning the underlying patterns, representations and abstractions from data, as opposed to the actual data. Most present-day AI applications, such as the ML and DL based models used for image recognition, self-driving cars, playing Go, natural language processing, and intelligent virtual assistants, make use of this form of Artificial Narrow Intelligence. Both theory of mind and self-aware AI systems are currently being researched and are not yet a reality. Theory of mind AI research aims to create AGI-level intelligence capable of imitating human thoughts, knowledge, beliefs, intents, emotions, desires, memories, and mental models by forming representations about the world and about other entities that exist within it. Self-aware AI systems could in principle be analogous to the human brain with respect to self-awareness or consciousness.
Even though consciousness is likely an emergent property of a complex intelligent system such as a brain, and could arise as we develop AGI-level embodied intelligent systems, I am not sure if we should have self-aware systems as an ultimate goal or objective of AI research. Once self-aware, an AI could potentially develop notions such as self-preservation, being treated equally, and having its own wants and needs, which may lead to various ethical issues and even a potential existential threat to humanity. Also, self-aware AI systems do not necessarily imply systems with Artificial Super Intelligence. In Chapter 9 we look at the different perspectives to help make better sense of this.”
In my article “Human Intelligence versus Machine Intelligence” I reference VERSES.AI’s approach to AI and Web3 in the Designing Ecosystems of Intelligence from First Principles white paper (authored by Karl Friston, the founders of VERSES and others), where they propose that the ultimate form of AI will be a distributed network of “ecosystems of intelligence” in which collectives of Intelligent Agents, both human and synthetic, work together to solve complex problems. They call this ecosystem “The Spatial Web”, which contains a comprehensive, real-time knowledge base—a corpus of all human knowledge that is accessible to anyone and anything. To enable the most efficient communication between Intelligent Agents on the Spatial Web, VERSES proposes that new communication protocols are necessary. Previous internet protocols were designed to connect pages of information, while the next generation of protocols needs to be spatial, able to connect anything in the virtual or physical world. A hyper-spatial modeling language (HSML) and transaction protocol (HSTP) would transcend the current limitations of HTML and HTTP, which were not designed to include multiple dimensions and were mostly limited to text and hypertext. The white paper envisions a cyber-physical ecosystem of natural and synthetic sense-making in which humans are integral participants, what they call “shared intelligence”. This vision is premised on active inference, a formulation of adaptive behavior that can be read as a physics of intelligence, and which inherits from the physics of self-organization. This framework is based on the idea that Intelligent Agents, such as robots or software programs, should act in a way that maximizes the accuracy of their beliefs and predictions about the world while minimizing their complexity. In this context, they understand intelligence as the capacity to accumulate evidence for a generative model of one’s sensed world—also known as self-evidencing.
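This trade-off between accuracy and complexity can be made concrete with a toy calculation. The sketch below is a minimal illustration of variational free energy for a discrete hidden-state model, not VERSES’ actual implementation; the model, its two states, and all numbers are illustrative. Free energy decomposes into complexity (how far beliefs move from the prior) minus accuracy (how well beliefs explain the observation), and a self-evidencing agent adopts the beliefs that minimize it:

```python
import numpy as np

def free_energy(q, prior, likelihood, obs):
    """Variational free energy for a discrete hidden-state model:
    F = complexity - accuracy
      = KL(q(s) || p(s)) - E_q[log p(o|s)]
    q: approximate posterior over states, prior: p(s),
    likelihood: p(o|s) as a matrix indexed [obs, state], obs: observed index."""
    complexity = np.sum(q * np.log(q / prior))      # KL divergence from the prior
    accuracy = np.sum(q * np.log(likelihood[obs]))  # expected log-likelihood
    return complexity - accuracy

# Two hidden states, two possible observations (all numbers illustrative).
prior = np.array([0.5, 0.5])
likelihood = np.array([[0.9, 0.2],    # p(o=0 | s)
                       [0.1, 0.8]])   # p(o=1 | s)
obs = 0

# The exact Bayesian posterior minimizes free energy...
post = prior * likelihood[obs]
post /= post.sum()

# ...compared with any other belief, e.g. simply keeping the prior.
assert free_energy(post, prior, likelihood, obs) < free_energy(prior, prior, likelihood, obs)
```

Because the exact posterior attains F = −log p(o), the negative log evidence, minimizing free energy is literally accumulating evidence for the agent’s generative model, which is what the white paper means by self-evidencing.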
According to VERSES.AI, the evolution of machine or synthetic intelligence includes the following key stages of development: (1) Systemic intelligence: the ability to recognize patterns and respond (current state-of-the-art AI); (2) Sentient intelligence: the ability to perceive and respond to the environment in real time; (3) Sophisticated intelligence: the ability to learn and adapt to new situations as AGIs; (4) Sympathetic (or Sapient) intelligence: the ability to understand and respond to the emotions and needs of others; (5) Shared (or Super) intelligence: the ability to work together with humans, other agents and physical systems to solve complex problems and achieve goals.
The necessary guard rails for an AI-enabled decentralized Web3 world would need to implement a trustworthy AI framework that covers ethical, robust, and lawful AI. To strengthen the guard rails further, I also propose a Massive Transformative Purpose for Humanity (aimed at evolving a dynamic, empathic, prosperous, thriving, and self-optimizing civilization that benefits everyone in sustainable ways and in harmony with nature) and associated goals that complement the United Nations’ 2030 vision and SDGs to help shape a beneficial human-centric future in a decentralized hyperconnected world. This can be extended to an MTP for an Ecosystem of Intelligence. In support of this (see also Beneficial Outcomes for Humanity in the Smart Technology Era), I further propose a decentralized human-centric user-controlled AI-driven super platform called Sapiens (sapiens.network) with personalized AI agents that not only empower individuals and monetize their data and services, but can also be extended to families, virtual groups, companies, communities, cities, city-states, and beyond. This approach is also synergistic with VERSES.AI’s approach to AI and Web3, which I’m advocating for.
With this background on the evolution of AI and a possible beneficial outcome within an ecosystem of intelligence, let’s now explore the various viewpoints on the AI singularity.
The following extract from Chapter 9 “The Debates, Progress and Likely Future Paths of Artificial Intelligence” of my book provides a high-level introduction into the various viewpoints on the AI singularity:
“According to the Future of Life Institute, most disputes amongst AI experts and others about strong AI that potentially has Life 3.0 capabilities revolve around when and/or if it will ever happen, and whether it will be beneficial for humanity. This leads to a classification with at least four distinct groups of thinking about where we are heading with AI: the so-called Luddites, technological utopians, techno-skeptics, and the beneficial AI movement. Whereas Luddites within this context are opposed to new technology such as AI, and especially have very negative expectations of strong AI and its impact on society, technological utopians sit on the other end of the spectrum with very positive expectations of the impact of advanced technology and science to help create a better future for all. The techno-skeptics do not think that strong AI is a real possibility within the next hundred years, and believe that we should focus more on the shorter-term impacts, risks, and concerns of AI that can have a massive impact on society, as also described in the previous chapter. The beneficial-AI group of thinkers is more focused on creating safe and beneficial AI, for both narrow and strong AI, as we cannot be sure that strong AI will not be created this century, and it is in any case needed for narrow AI applications as well. AI can become dangerous when it is developed to do something destructive or harmful, but also when it is developed to do something good or advantageous but uses a damaging method for achieving its objective. So even in the latter case, the real concern is strong AI’s competence in achieving goals that might not be aligned with ours.
Although my surname is Ludik, I am clearly not a Luddite, and would consider my own thinking and massive transformative purpose to be more aligned with the Beneficial AI group of thinkers and currently more concerned with the short-to-medium term risks and challenges and practical solutions to create a beneficial world for as many people as possible.”
Optimistic perspective (Digital Utopians & Beneficial AI Movement on a spectrum):
Proponents of this view believe that the AI Singularity will lead to unprecedented growth and improvements in various fields, such as healthcare, education, and the economy. They argue that AI will solve many of humanity’s problems and enhance human capabilities.
Pessimistic perspective (Luddites & Beneficial AI Movement on a spectrum):
This viewpoint focuses on the potential risks and negative consequences of the AI Singularity, such as job displacement, loss of privacy, and AI systems becoming uncontrollable or harmful to humanity.
Skeptical perspective (Techno-skeptics):
Skeptics question the plausibility of the AI Singularity, arguing that it may never occur or is too far in the future to make meaningful predictions.
When it comes to timelines, predictions vary widely, ranging from a few decades to over a century or more. Some experts believe that the AI Singularity could occur within the 21st century, while others think it may never happen at all. The uncertainty in these predictions stems from factors such as the complexity of AI research, the unpredictability of technological breakthroughs, and the potential for societal and regulatory factors to influence the development of AI. In the final section of this article, I share specific opinions of some thought leaders, AI researchers, business leaders, scientists, and influencers on this topic.
A balanced perspective on the concept of singularity recognizes it as a possibility, but not a certainty, and acknowledges that it could have both positive and negative implications for humanity. To manage this, ethical guidelines for AI development and its alignment with human values are necessary. An integrated approach, on the other hand, views AI as a transformative force with associated benefits and risks. This perspective stresses the need for strong regulation, transparency, accountability, and educational efforts. It advocates for an ethical, human-centric approach to AI development, seeking to optimize its potential benefits while minimizing adverse effects such as job displacement and inequality.
Further nuanced perspectives on AI and singularity include:
The following are some of the potential benefits of the AI singularity:
The following are some of the potential risks of the technological singularity:
It is important to note that these are just some of the potential benefits and risks of the AI singularity. It is impossible to say for sure what the future holds, but it is important to be aware of the potential consequences of this event. By thinking about the singularity and its potential consequences, we can help to ensure that it is a positive event for humanity.
There are different perspectives on the AI Singularity, including if it will ever happen, various estimates of when it might occur and what its implications might be. Let’s explore further.
Whether or not AI will ever reach singularity is a question that experts have debated for many years. There is no easy answer, as it depends on a number of factors, including the rate of technological progress, the development of new AI algorithms, and the availability of funding for AI research. Some experts believe that AI will eventually reach singularity, while others believe it is impossible or will simply never happen; there is no consensus on when, or if, it will occur.
Several factors contribute to the uncertainty:
Given these factors, while it’s theoretically possible that AI could reach the point of singularity, whether or when this will happen is highly uncertain. It remains a topic of speculative debate, often split between optimistic futurists who believe it is imminent and skeptics who consider it unlikely or far-off.
Here are some of the arguments for and against the possibility of AI reaching singularity:
Ultimately, the question of whether or not AI will reach singularity is a matter of speculation. There is no way to know for sure what the future holds, but it is a question that is worth considering.
Exponential growth is a common phenomenon in nature, but it is not always sustainable. The concept of exponential growth in nature is often linked to phenomena like population growth, nuclear reactions, or the spread of diseases. However, real-world systems often experience limiting factors, resulting in what is referred to as a sigmoidal curve or logistic growth, rather than indefinite exponential growth. For instance, population growth slows down due to constraints such as the availability of resources or space, reflecting a balance between different forces in the ecosystem. The same principle applies to the laws of physics, where certain physical limitations, such as the speed of light, impose upper bounds on how fast information can be transferred. As a further example, bacteria can grow exponentially in a nutrient-rich environment, but they will eventually run out of food and die. Similarly, a forest fire can spread exponentially if it is not contained, but it will eventually run out of fuel to burn.
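The contrast between unconstrained exponential growth and resource-limited logistic growth can be made concrete with a short sketch. The numbers below are illustrative, loosely following the bacteria example: the two curves are nearly indistinguishable early on, but the logistic curve saturates at the carrying capacity while the exponential one explodes:

```python
import math

def exponential(n0, r, t):
    """Unconstrained growth: n(t) = n0 * e^(r t)."""
    return n0 * math.exp(r * t)

def logistic(n0, r, k, t):
    """Resource-limited growth: same intrinsic rate r, but capped by
    carrying capacity k. Closed-form solution of dn/dt = r n (1 - n/k)."""
    return k / (1 + (k / n0 - 1) * math.exp(-r * t))

# A bacterial colony: initial size 100, growth rate 0.5 per hour,
# carrying capacity one million cells (all values illustrative).
n0, r, k = 100, 0.5, 1_000_000

# At t = 5 hours the two curves differ by less than 1%...
assert abs(exponential(n0, r, 5) - logistic(n0, r, k, 5)) / exponential(n0, r, 5) < 0.01

# ...but at t = 40 hours the logistic curve has saturated near k,
# while the exponential curve has overshot it a hundredfold.
assert logistic(n0, r, k, 40) < k
assert exponential(n0, r, 40) > 100 * k
```

The same qualitative picture, early exponential gains followed by saturation against some binding constraint, is what the next paragraph suggests may apply to recursively self-improving AI systems.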
When considering recursively self-improving AI systems, it’s important to recognize that they will likely face similar constraints. While it is theoretically possible for AI systems to continuously improve themselves, this process will run into practical limitations. These include constraints imposed by computational resources, the laws of physics, the inherent complexity of intelligence, and our currently incomplete understanding of both human cognition and general intelligence. For example, AI systems may be limited by the amount of energy they can consume or the amount of data they can process. Additionally, AI systems are bound by the laws of thermodynamics; since energy can neither be created nor destroyed, the computation available to any system is ultimately limited by its energy supply.
Moreover, even if an AI system could, in principle, improve its cognitive abilities beyond those of any human, it would still need to have been designed with the ability to conduct such improvements. No such system exists or appears to be within immediate reach.
On the subject of human cognition, physicist David Deutsch’s view, as communicated in “The Beginning of Infinity: Explanations That Transform the World”, suggests that humans, given enough time and resources, can in principle understand anything within the realms of natural law. This philosophical standpoint emphasizes the potential of human intellect and the vast unexplored expanse of knowledge yet to be understood. It implies that human intelligence, combined with ingenuity and curiosity, might be as “infinite” as any AI could become, and that our collective intelligence could potentially match the capabilities of an Artificial Super Intelligence (ASI), albeit distributed among many minds and across time.
The comparison of humans to ants in relation to an ASI in terms of cognition, understanding, and capability is a metaphor that seeks to convey the magnitude of difference between human and potential AI capabilities. However, in light of David Deutsch’s view which implies that humans, collectively and over time, may have an intellectual capacity as expansive as any ASI, such a comparison may not be entirely fair or accurate. Therefore, while Super Intelligence might exceed individual human capacity at a given point in time, this viewpoint underlines the possibility of humans collectively reaching similar levels of understanding over time. However, it is important to remember that the amount of time and resources required to understand something may be vast. For example, it took humans thousands of years to understand the basic principles of physics and chemistry. It is possible that AI could help us to understand the universe more quickly, but it is also possible that it will take us just as long or even longer.
It’s crucial to remember, however, that these perspectives are speculative and based on current knowledge, as the debate around AI singularity and its relationship to human cognition is ongoing and far from settled.
There are critical distinctions between intelligence, agency, and wisdom, and these differences have profound implications for the development of AI and the concept of Super Intelligence. Whereas intelligence is a cognitive ability to acquire and apply knowledge and skills, agency is a behavioral ability to act independently and make choices. Intelligence is about what you know, while agency is about what you do. Wisdom is the ability to use knowledge and experience to make sound judgments and to act in a way that is both beneficial and ethical.
The implications of intelligence, agency and wisdom for the development of AI are as follows:
The implications of these distinctions for Super Intelligence are significant. Even if we were to develop an AI system that matches or surpasses human intelligence in a broad range of tasks, it would not necessarily possess agency or wisdom. Without these, a super intelligent AI might make decisions that are highly effective in achieving specified goals, but that fail to take into account broader human values, ethics, or potential long-term consequences. Therefore, as we continue to advance AI, it’s crucial to consider not just how we can enhance its intelligence, but also how we can ensure it is used wisely and in a manner that aligns with human values and wellbeing. Some important perspectives on wisdom, AI alignment, the meaning crisis, and the future of humanity are also discussed by John Vervaeke in the following podcast: John Vervaeke: Artificial Intelligence, The Meaning Crisis, & The Future of Humanity. See also the section “What does it Mean to be Human and Living Meaningful in the 21st Century?” in Chapter 10 “Beneficial Outcomes for Humanity in the Smart Technology Era” of my book.
I think there is a good argument to be made that civilization is already a runaway super intelligent super organism on a problematic trajectory. We have the ability to create and use technology that is far more powerful than anything that has come before, and we are using this technology to rapidly change the world around us. This change is happening at an exponential rate, and it is difficult for us to keep up. We are not sure what the long-term consequences of this change will be, and there is a real risk that we could create a world that is uninhabitable or even destroy ourselves altogether.
A runaway AI super intelligence would be a similar kind of threat, but it could be even more dangerous. Such an AI would be able to learn and adapt at an even faster rate than humans, and it would not be bound by the same ethical or moral constraints. This means that an AI super intelligence could potentially pose an existential threat to humanity.
Before we delve deeper to address these questions, it is worthwhile to get Daniel Schmachtenberger’s perspectives on our current civilization’s problematic trajectory, as also referenced in Chapter 10 “Beneficial Outcomes for Humanity in the Smart Technology Era” of my book:
“Daniel Schmachtenberger’s core interest is focused on long-term civilization design and, more specifically, on helping us as a civilization to develop improved sensemaking and meaning-making capabilities so that we can make better quality decisions to help unlock more of our potential and the higher values that we are capable of. He has specifically done some work on surveying existential and catastrophic risks, advancing forecasting and mitigation strategies, synthesizing and advancing civilizational collapse and institutional decay models, as well as identifying generator functions that drive catastrophic risk scenarios and social architectures that lead to potential coordination failures. Generator functions include, for example, game-theoretic win-lose dynamics multiplied by exponential technology, damaged feedback loops, unreasonable or irrational incentives, and short-term decision-making incentives on issues with long-term consequences. He believes that categorical solutions to these generator functions would address the causes of civilization collapse and function as the key ingredients of a new civilization model that will be robust in a Smart Technology Era with destabilizing decentralized exponential technology.
He summarizes his main sense of purpose as helping to transition civilization from a current path that is self-terminating to one that is not, one that is supportive of the possibility of purpose and meaning for everyone enduring into the future, and working on changing the underlying structural dynamics that make that possible. What he would like to see differently within the next 30 years is that we prevent the existential risks that could play out in this time frame. It is not a given that we make it to 2050. Apart from catastrophic risks that can play out over this period, there are those that can go past a tipping point during this time frame but will inevitably play out after it. As we do not want to experience civilization collapse or existential risk, and also do not want to go past tipping points, Daniel would like to see a change in the trajectory that civilization is currently on, from one on the path of many self-terminating scenarios, each with its own set of chain reactions, such as an AI apocalypse, World War III, climate-change-induced migration leading to resource wars, the collapse of biodiversity, and killer drones.”
In a recent podcast with Nate Hagens, “Artificial Intelligence and The Superorganism” on The Great Simplification, Daniel Schmachtenberger gives further insights into AI’s potential added risk to our global systems and planetary stability. Through a systems perspective, Daniel and Nate piece together the biophysical history that has led humans to this point, heading towards and beyond numerous planetary boundaries and facing geopolitical risks, all with existential consequences. They specifically also ask:
As we can see from the above inputs, there is indeed an argument to be made that human civilization, especially when seen through the lens of collective decision-making and technological progress, could be viewed as a form of super intelligent organism. Much like the hypothetical super intelligent AI, our civilization possesses vast knowledge and problem-solving abilities. However, as Daniel Schmachtenberger points out, there are critical dynamics and structures within our civilization that could lead us towards self-destruction, analogous to the risks posed by an unchecked super intelligent AI.
The key distinction here is that civilization is a complex system of independent, conscious agents with diverse interests and values. It is influenced by cultural, political, economic, and environmental factors, among others. A super intelligent AI, on the other hand, would be a single entity (or a unified system) driven by a specific set of programmed goals (assuming it is not a distributed super intelligence). Both could potentially lead to harmful outcomes if not properly managed, but the nature of the risks and the strategies to mitigate them would differ significantly.
The generator functions that Schmachtenberger identifies – like win-lose dynamics, damaged feedback loops, and irrational incentives – do seem to bear some similarities with potential risks from super intelligent AI. Both involve systems that could spiral out of control due to poorly aligned incentives, inadequate feedback mechanisms, and short-term decision-making that neglects long-term consequences.
However, the solutions would need to be tailored to the specific systems. For human civilization, addressing these generator functions might involve deep structural changes to our economic and political systems, advances in education and moral reasoning, improvements in global governance and cooperation, and the adoption of long-term perspectives. For super intelligent AI, it might involve AI alignment and safety research, iterative and controlled development, transparency and explainability, human-AI collaboration, regulation and oversight, international collaboration, education and public engagement, and adaptability and learning.
In both cases, achieving these changes would require a profound shift in our collective understanding, values, and priorities. We would need to move away from narrow, short-term, competitive mindsets and towards a broader, longer-term, cooperative perspective that values the wellbeing of all sentient beings and the sustainability of our shared environment.
The traditional conception of super intelligent AI often involves a single, unified entity – largely because this makes the concept easier to understand and discuss. However, in reality, a super intelligent AI system could very well manifest as a distributed network of intelligent entities working together, akin to the idea of an “Ecosystem of Intelligence” or the “Spatial Web” as mentioned earlier in this article. This is often referred to as “collective intelligence” or “swarm intelligence.”
In this scenario, intelligence would not be concentrated within a single entity, but distributed across a multitude of AI agents, each potentially specializing in different tasks, but collectively capable of demonstrating super intelligence. This configuration could even integrate human intelligence into the mix, resulting in a human-AI collaborative network.
These distributed networks of intelligence could have significant advantages over a singular super intelligent entity. They could be more resilient (since the loss or failure of individual agents wouldn’t compromise the entire system), more flexible (since they could adapt to a wider range of problems and situations), and potentially safer (since no single agent would possess the full power of the super intelligent system).
However, these distributed networks also present unique challenges. For example, coordinating the actions of multiple agents can be complex, and individual agents could potentially behave in ways that are harmful to the system as a whole. Furthermore, while such a system could potentially mitigate some risks associated with super intelligent AI (e.g., the risk of a single agent going rogue), it could also introduce new risks (e.g., the risk of emergent behaviors that are harmful or unpredictable).
The vision presented earlier of an “Ecosystem of Intelligence” – a web of shared knowledge that evolves into wisdom – offers a more nuanced and optimistic picture of the future of AI, one that aligns well with the idea of AI as a tool for augmenting human intelligence and solving complex problems. However, like all visions of the future, it will require careful planning, management, and governance to ensure that it unfolds in a way that is beneficial and safe for all.
Ensuring that a super intelligent AI aligns with human values and is used for good is a complex, multifaceted challenge. However, here’s a potential plan that incorporates various strategies and steps to address this issue:
The key to this plan’s success would be its implementation in a comprehensive, coordinated way, involving all stakeholders – researchers, policymakers, businesses, civil society, and the public. As with any plan, it would also need to be revisited and revised regularly in the light of new developments and insights.
In conclusion, for a deeper dive into what various thought leaders, AI researchers, business leaders, scientists, and influencers think about the AI singularity, here is an extract from Chapter 9, “The Debates, Progress and Likely Future Paths of Artificial Intelligence”, of my book Democratizing Artificial Intelligence to Benefit Everyone: Shaping a Better Future in the Smart Technology Era:
“Prominent business leaders, scientists, and influencers such as Elon Musk, the late Stephen Hawking, Martin Rees, and Eliezer Yudkowsky have issued dire warnings about AI being an existential risk to humanity, while well-resourced institutes counter this doomsday narrative with their own “AI for Good” or “Beneficial AI” narrative. AI researcher and entrepreneur Andrew Ng once said that “fearing a rise of killer robots is like worrying about overpopulation on Mars”.[i] That has been countered by AI researcher Stuart Russell, who said that a more suitable analogy would be “working on a plan to move the human race to Mars with no consideration for what we might breathe, drink, or eat once we arrive”.[ii] Many leading AI researchers do not seem to identify with the existential alarmist view of AI: they are more concerned about the short-to-medium-term risks and challenges of AI discussed in the previous chapter, think that we are still at a very nascent stage of AI research and development, do not see a clear path to strong AI over the next few decades, and are of the opinion that the tangible impact of AI applications should be regulated, but not AI research and development itself. Most AI researchers and practitioners would fall into the beneficial AI movement and/or the techno-sceptics category.
Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, wrote an opinion article titled How to Regulate Artificial Intelligence in which he claims that the alarmist view of AI as an “existential threat to humanity” confuses AI research and development with science fiction, but recognizes that there are valid concerns about AI applications in areas such as lethal autonomous weapons, jobs, ethics, and data privacy.[iii] From a regulatory perspective he proposes three rules: AI systems should be subject to the full extent of the laws that apply to their human operators, must clearly reveal that they are not human, and cannot keep or reveal confidential information without explicit approval from the source of that information.
Some strong technological utopian proponents include roboticist Hans Moravec, as communicated in his book Mind Children: The Future of Robot and Human Intelligence, as well as Ray Kurzweil, who is currently Director of Engineering at Google and has written books on the technological singularity, futurism, and transhumanism such as The Age of Spiritual Machines and The Singularity is Near: When Humans Transcend Biology.[iv] The concept of a technological singularity has been popular in many science fiction books and movies over the years. Some of Ray’s predictions include that by 2029 AI will reach human-level intelligence and that by 2045 “the pace of change will be so astonishingly quick that we won’t be able to keep up, unless we enhance our own intelligence by merging with the intelligent machines we are creating”.[v] A number of authors, AI thought leaders, and computer scientists have criticized Kurzweil’s predictions to varying degrees, questioning both his aggressive timelines and their real-world plausibility. Some of these people include Andrew Ng, Rodney Brooks, Francois Chollet, Bruce Sterling, Neal Stephenson, David Gelernter, Daniel Dennett, Maciej Ceglowski, and the late Paul Allen. Web developer and entrepreneur Maciej Ceglowski calls superintelligence “the idea that eats smart people” and provides a range of arguments for this position in response to Kurzweil’s claims as well as Nick Bostrom’s book Superintelligence and the positive reviews and recommendations the book received from Elon Musk, Bill Gates, and others.[vi] AI researcher and software engineer Francois Chollet wrote a blog post on why the singularity is not coming as well as an article on the implausibility of an intelligence explosion.
He specifically argues that a “hypothetical self-improving AI would see its own intelligence stagnate soon enough rather than explode”, because scientific progress is linear rather than exponential, and because science is getting exponentially harder and suffering diminishing returns even where there is exponential growth in scientific resources. This has also been noted in the article Science is Getting Less Bang for its Buck, which explores why great scientific discoveries are more difficult to make in established fields and notes that emergent levels of behavior and knowledge – which lead to a proliferation of new fields with their own fundamental questions – seem to be the avenue for science to continue as an endless frontier.[vii] Using a simple mathematical model in which the discovery impact of each succeeding researcher in a given field decreases exponentially, Francois Chollet concludes that scientific discovery is getting harder within any given field, and that linear progress is maintained only because exponential growth in scientific resources makes up for the increased difficulty of doing breakthrough scientific research. He further constructs another model, with parameters for discovery impact and time to produce impact, which shows how the rate of progress of a self-improving AI converges exponentially to zero unless it has access to exponentially increasing resources to sustain even a linear rate of progress. He reasons that paradigm shifts can be modeled in a similar way: the volume of paradigm shifts snowballs over time while the actual impact of each shift decreases exponentially, resulting in only linear growth of overall shift impact despite the escalating resources dedicated to both paradigm expansion and intra-paradigm discovery.
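The core of this diminishing-returns argument can be made concrete with a toy simulation. The sketch below is my own illustration, not Chollet’s actual formulation: it simply assumes that each unit of research effort spent in year t has impact decay**t, so discovery gets exponentially harder over time, and then compares constant resources against exponentially growing resources.

```python
def simulate(effort, years, decay=0.9):
    """Yearly progress when each unit of effort in year t has impact decay**t.

    effort: function mapping a year t to the resources spent that year.
    """
    return [effort(t) * decay ** t for t in range(years)]

# Constant resources: yearly progress decays geometrically and cumulative
# progress is bounded -- the "rate of progress converges to zero" case.
constant = simulate(lambda t: 1.0, 50)

# Resources growing exponentially at exactly 1/decay per year: yearly
# progress stays roughly constant, so cumulative progress is merely linear.
growing = simulate(lambda t: (1 / 0.9) ** t, 50)

print(sum(constant))  # bounded: approaches 1 / (1 - 0.9) = 10 and never exceeds it
print(sum(growing))   # approximately 50 -- linear in the number of years
```

Under these assumptions, no amount of patience rescues the constant-effort case (its total progress is capped near 10), while the exponentially resourced case buys only linear cumulative progress – which is the shape of Chollet’s conclusion.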
Francois states that intelligence is just a meta-skill – the ability to gain new skills – and that it should, along with hard work, be at the service of imagination, as imagination is the real superpower that allows one to work at the paradigm level of discovery.[viii] The key conclusions that Francois draws in his article on the implausibility of an intelligence explosion are, firstly, that general intelligence is a misnomer: intelligence is actually situational, in the sense that the brain operates within a broader ecosystem consisting of a human body, an environment, and a broader society. Furthermore, the environment puts constraints on individual intelligence, which is limited by its context within that environment. Most of human intelligence is located in the broader, self-improving civilizational intellect in which we live and which feeds our individual brains. The progress of science by this civilizational intellect is an example of a recursively self-improving intelligence expansion system that is already experiencing a linear rate of progress, for the reasons mentioned above.[ix]
In the essay The Seven Deadly Sins of Predicting the Future of AI, Rodney Brooks, the co-founder of iRobot and Rethink Robotics, first quotes Amara’s law – “we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run” – to argue that the long-term timing for AI is being crudely underestimated.[x] He also quotes Arthur C. Clarke’s third law, that “any sufficiently advanced technology is indistinguishable from magic”, to make the point that arguments for a magical future AI are faith-based: when claims about AI are far enough removed from what we use and understand today that, for practical purposes, they cross the magic line, those claims cannot be falsified. Just as it is intuitive for us to generalize from observed performance on a particular task to competence in related areas, it is natural and easy for us to apply the same human-style generalizations to current AI systems that operate in extremely narrow application areas, and so to overestimate their true competence. Similarly, people can easily misinterpret suitcase words applied to AI systems to mean more than what is actually there. Rodney also argues that because exponentials are typically part of an S-curve in which hyper-growth eventually flattens out, one should in general be careful with exponential arguments, as they can easily collapse when a physical limit is hit or when there is not sufficient economic value to persist. The same holds for AI: deep learning’s success, which can also be seen as an isolated event achieved on top of at least thirty years of machine learning research and applications, does not necessarily guarantee similar breakthroughs on a regular basis.
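Brooks’ S-curve caveat is easy to see numerically. The sketch below is my own illustration (the parameter values are arbitrary): a logistic curve with ceiling L is nearly indistinguishable from a pure exponential while far below L, then flattens out as the limit is approached – so extrapolating from the early growth factor badly overshoots.

```python
import math

def logistic(t, L=1000.0, k=0.5, t0=20.0):
    """Logistic growth: behaves like L * exp(k * (t - t0)) early on,
    then saturates at the carrying capacity L."""
    return L / (1 + math.exp(-k * (t - t0)))

# Year-over-year growth factor early vs. late in the curve.
early_ratio = logistic(6) / logistic(5)    # close to e**0.5 ~ 1.65: looks exponential
late_ratio = logistic(36) / logistic(35)   # close to 1.0: growth has flattened out

print(early_ratio, late_ratio)
```

An observer who measures only the early ratio and extrapolates would predict unbounded 65%-per-year growth, while the curve in fact never exceeds L – which is Brooks’ point about physical and economic limits.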
Not only is the future reality of AI likely to be significantly different from what is portrayed in Hollywood science fiction movies, it is also likely to feature a variety of advanced intelligent systems that evolve technologically over time in a world that adapts to them. The final error made when predicting the future of AI is underestimating how long it takes to deploy new ideas and applications in robotics and AI, especially when hardware is involved, as with self-driving cars or the many factories around the world that still run decades-old equipment along with old automation and operating system software.[xi] On the self-driving front, both Tesla and Google’s Waymo have improved self-driving technology significantly, with Waymo achieving “feature complete” status in 2015, but only in geo-fenced areas, whereas Tesla was at almost zero interventions between home and work (with an upcoming software release promising to be a “quantum leap”) in 2020.[xii] The reality, however, is that Tesla’s full self-driving Autopilot software has progressed much more slowly than Elon Musk predicted over the years, and Chris Urmson, the former leader of Google’s self-driving project and CEO of self-driving startup Aurora, reckons that driverless cars will be integrated slowly over the next 30 to 50 years.[xiii]
Piero Scaruffi, a freelance software consultant and writer, is even more of a techno-skeptic. In Intelligence is not Artificial – Why the Singularity is not coming any time soon and other Meditations on the Post-Human Condition and the Future of Intelligence, he estimates that super intelligence that can be a “substitute for humans in virtually all cognitive tasks, including those requiring scientific creativity, common sense, and social skills” is approximately 200,000 years away – the time scale on which natural evolution produces a new species at least as intelligent as us.[xiv] He does not think we will get to strong AI systems with our current incremental approach, and believes the current brute-force AI approach is actually slowing down research into higher-level intelligence. He guesses that an AI breakthrough will likely have to do with real memory that has “recursive mechanisms for endlessly remodeling internal states”. Piero disagrees with Ray Kurzweil’s “Law of Accelerating Returns” and points out that the diagram titled “Exponential Growth in Computing” is like comparing the power of a windmill to the power of a horse and concluding that windmills will keep improving forever. The diagram also makes no differentiation between progress in hardware and progress in software and algorithms. Even though there has been significant progress in computers in terms of their speed, size, and cost-effectiveness, that does not necessarily imply that we will get to human-level intelligence and then super intelligence by assembling millions of superfast GPUs. A diagram showing “Exponential Growth in Computational Math” would be more relevant, and would show that there has been no significant improvement in the development of abstract algorithms that improve automatic learning techniques.
He is much more impressed with the significant progress in genetics since the discovery of the double-helix structure of DNA in 1953 and is more optimistic that we will get to superhuman intelligence through synthetic biology.[xv]
A survey taken by the Future of Life Institute suggests we will get strong AI around 2050, whereas one conducted by SingularityNET and GoodAI at the 2018 Joint Multi-Conference on Human-Level AI showed that 37% of respondents believe human-like AI will be achieved within five to ten years, 28% expect strong AI to emerge within the next two decades, and only 2% believe humans will never develop strong AI.[xvi] Ben Goertzel, SingularityNET’s CEO and developer of the software behind the social humanoid robot Sophia, said at the time that “it’s no secret that machines are advancing exponentially and will eventually surpass human intelligence” and that “as these survey results suggest, an increasing number of experts believe this ‘Singularity’ point may occur much sooner than is commonly thought… It could very well become a reality within the next decade.”[xvii] Lex Fridman, AI researcher at MIT and YouTube podcast host, thinks that we are already living through a singularity now and that super intelligence will arise from our human collective intelligence rather than from strong AI systems.[xviii] George Hotz, a programmer, hacker, and the founder of Comma.ai, also thinks we are in a singularity now if we consider the escalating bandwidth between people across the globe through highly interconnected networks with increasing speed of information flow.[xix] Jürgen Schmidhuber, AI researcher and Scientific Director at the Swiss AI Lab IDSIA, is also very bullish, believing that we should soon have cost-effective devices with the raw computational power of the human brain, and decades after that, the computational power of 10 billion human brains combined.[xx] He also thinks that we already know how to implement curiosity and creativity in self-motivated AI systems that pursue their own goals at scale.
According to Jürgen, super intelligent AI systems would likely be more interested in exploring and transforming space and the universe than in being restricted to Earth. AI Impacts maintains an AI Timeline Surveys web page that documents a number of surveys in which the median estimates for a 50% chance of human-level AI vary from 2056 to at least 2106, depending on the question framing and the different interpretations of human-level AI; two others had median estimates in the 2050s and at 2085.[xxi] Rodney Brooks has declared that artificial general intelligence has been “delayed” to 2099 as an average estimate, in a May 2019 post that references a survey done by Martin Ford via his book Architects of Intelligence, in which he interviewed 23 of the leading researchers, practitioners, and others involved in the AI field.[xxii] It is not surprising to see Ray Kurzweil and Rodney Brooks at opposite ends of the timeline predictions, with Ray at 2029 and Rodney at 2200. Whereas Ray is a strong advocate of accelerating returns and believes that a hierarchical, connectionist-based approach that incorporates adequate real-world knowledge and multi-chain reasoning in language understanding might be enough to achieve strong AI, Rodney thinks that not everything is exponential and that we need many more breakthroughs and new algorithms (in addition to the back-propagation used in deep learning) to approximate anything close to what biological systems are doing, especially given that we cannot currently even replicate the learning capabilities, adaptability, or mechanics of insects. Rodney reckons that some of the major obstacles to overcome include dexterity, experiential memory, understanding the world from a day-to-day perspective, and comprehending what goals are and what it means to make progress towards them.
Ray’s opinion is that techno-sceptics are thinking linearly, suffering from engineer’s pessimism, and failing to see the exponential progress in software advances and the cross-fertilization of ideas. He believes that we will see strong AI emerge through exponential progress in a soft take-off in about 25 years.”
[ii] Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control.
[iv] Hans Moravec, Mind Children: The Future of Robot and Human Intelligence; Ray Kurzweil, The Age of Spiritual Machines; Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology.
[v] Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology.
[xvi] https://futureoflife.org/superintelligence-survey/?cn-reloaded=1; https://bigthink.com/surprising-science/computers-smart-as-humans-5-years?rebelltitem=1#rebelltitem1
Democratizing Artificial Intelligence to Benefit Everyone: Shaping a Better Future in the Smart Technology Era takes us on a holistic sense-making journey and lays a foundation for synthesizing a more balanced view and better understanding of AI, its applications, its benefits, its risks, its limitations, its progress, and its likely future paths. Specific solutions are also shared for addressing AI’s potential negative impacts, designing AI for social good and beneficial outcomes, building human-compatible AI that is ethical and trustworthy, addressing bias and discrimination, and developing the skills and competencies needed for a human-centric, AI-driven workplace. The book aims to help drive the democratization of AI and its applications to maximize beneficial outcomes for humanity, arguing specifically for a more decentralized, beneficial, human-centric future in which AI and its benefits can be democratized to as many people as possible. It also examines what it means to be human and to live meaningfully in the 21st century, and shares ideas for reshaping our civilization for beneficial outcomes, as well as various potential outcomes for the future of civilization.
See also the Democratizing AI Newsletter: