From Michelangelo to Machine Learning: Creation of (Super)Human Intelligence
What The Fact - Special Issue on AI and the Digital Transformation
There seems to be little doubt that AI will be the driving force of the next technology revolution. New business and personal applications are proliferating at a dizzying pace, with companies scrambling to capture new profit opportunities across a wide range of sectors and functions. Finance, health care, and technology companies will be among the early beneficiaries, and functions such as customer operations, marketing and sales, and software engineering will be improved by AI. Yet these applications are all based on purpose-built AI, including generative AI such as ChatGPT and DALL-E. Far more powerful – and dangerous – is the potential development of artificial general intelligence (AGI): the ability to accomplish any cognitive task at least as well as humans.
Indeed, a rapidly growing number of AI luminaries – including the CEO of OpenAI and the CTO of Microsoft – view AGI not just as capable of creating hitherto unimaginable new opportunities (interstellar colonization?) but also as a possible existential threat to humanity itself. So, what are the ultimate capabilities of AGI, and when can we expect AI to match, and potentially vastly exceed, human intelligence? And what does all this mean for senior business executives today?
It is, of course, impossible to provide definitive answers to these questions, but three recent books provide fascinating insights: A Thousand Brains,¹ Life 3.0,² and Superintelligence.³ Significantly, all three authors believe AGI is possible – even likely – but none views it as imminent.
In Life 3.0, Tegmark likens AI to water rising on a “landscape of human competence”. He points to numerous areas in which AI is already at or above human proficiency and expects the “water” to rise inexorably until human-level AGI is reached. He believes this could be an utterly transformational outcome – Life 3.0 – since AGI would be able to design its own “software” (as humans do through learning) and its own “hardware” (which humans cannot). Most of his book is devoted to asking whether such an outcome would be beneficial for humanity and how to reduce the risks.
In Superintelligence, Bostrom reaches a similar conclusion and outlines various paths to “superintelligence”, including “whole-brain emulation” and brain-computer interfaces. Like Tegmark, he views the problems of retaining human control over an AGI with superhuman intelligence and of ensuring that its goals are beneficial to humanity – and thus avoiding a scenario like the current Mission Impossible plot – as exceptionally complex and difficult.
In A Thousand Brains, Hawkins says, “there is no I in [current] AI.” Computers can beat humans in chess and Go, but do not know they are playing a game. Hawkins argues that achieving AGI will require a fundamentally different approach that corresponds to how the human brain works.
A Thousand Brains describes a remarkable new theory, grounded in brain science, of how the brain thinks. The basic circuit of the neocortex – most of the brain, and responsible for our intelligence – is called a “cortical column,” which is divided into several hundred “minicolumns,” each with about a hundred individual neurons. Hawkins believes that the basic function of the cortical column is to make constant predictions about the world as we move through it, sending alerts about the need to update the neocortex’s models when a prediction is wrong. The book’s title comes from Hawkins’s conclusion that cortical columns operate in parallel, making separate predictions, and that the brain’s perception is based on “voting” by the columns.
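To make the voting idea concrete, here is a toy sketch in Python (our own illustration, not Hawkins’s actual model): each simulated “column” is a noisy, often-wrong predictor, yet pooling many independent votes yields a reliable consensus.

```python
import random

TRUE_OBJECT = "coffee cup"
CANDIDATES = ["coffee cup", "stapler", "phone"]

def column_vote(accuracy=0.6):
    """One toy 'cortical column': right 60% of the time, else a random guess."""
    if random.random() < accuracy:
        return TRUE_OBJECT
    return random.choice(CANDIDATES)

def perceive(n_columns=1000):
    """Pool the independent votes of many columns and return the consensus."""
    votes = [column_vote() for _ in range(n_columns)]
    return max(set(votes), key=votes.count)

print(perceive())  # almost always "coffee cup", despite noisy individual columns
```

No single column needs to be reliable; the robustness comes from the aggregation, which is the intuition behind the book’s title.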
Hawkins believes that truly intelligent machines will need to follow a similar approach – building map-like frames of reference, testing the resulting predictions against data, and updating the frames when the predictions fail. Generative AI seems to be getting close to this, since the foundation models that underlie the applications assimilate an extraordinary amount of unstructured data, make statistical predictions (for example, predicting the next word), learn from prediction errors on the source data, and improve from generation to generation of the application. However, generative AI, including large language models, is not truly general.
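The predict-test-update loop itself is easy to sketch. Below is a deliberately tiny illustration (a toy bigram word model of our own devising, not a real foundation model): it predicts the next word, checks the prediction against the data, and updates its counts either way.

```python
from collections import Counter, defaultdict

model = defaultdict(Counter)  # model[word] counts the words seen after it

def predict(word):
    """Return the statistically most likely next word, or None if unknown."""
    following = model[word]
    return following.most_common(1)[0][0] if following else None

def learn(text):
    """Run the predict-test-update loop over a stream of words."""
    errors = 0
    words = text.split()
    for prev, actual in zip(words, words[1:]):
        if predict(prev) != actual:   # test the prediction against the data
            errors += 1
        model[prev][actual] += 1      # update the model either way
    return errors

print(learn("the cat sat on the mat and the cat slept"))  # many errors at first
print(learn("the cat sat on the mat and the cat slept"))  # fewer on a second pass
```

On a second pass over the same text the error count drops – the model has “learned” – but, like a real language model, it has only captured statistics, not understanding.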
Once human-level AGI is attained, Tegmark and Bostrom see a substantial risk of a “fast takeoff” – perhaps measured in hours – to a superintelligence vastly superior to humans. This “singularity” occurs, they believe, as intelligent machines design even more intelligent machines by improving their own software. Hawkins is not concerned about such a possibility, believing that learning and capabilities will be constrained by the need for sensory input and by the complexity and dynamic nature of the world. However, all three authors agree on the importance of safety measures to reduce risk and on the critical need to carefully design the goals and motivations of the AI. Bostrom and Tegmark make clear that ensuring safety and defining the goals and motivations are extremely challenging from both a philosophical (what goals do we want?) and a practical standpoint.
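The arithmetic behind a fast takeoff is simple, which is part of what makes the scenario unsettling. Here is a back-of-the-envelope sketch with illustrative numbers of our own (not from the books): if each generation of AI is 20% more capable and, being smarter, builds its successor 20% faster, capability grows without bound while total elapsed time stays finite.

```python
# Toy recursive self-improvement: capability compounds while the time
# between generations shrinks geometrically.
capability, step_time, elapsed = 1.0, 30.0, 0.0  # start: 30 days per generation

for generation in range(1, 51):
    elapsed += step_time
    capability *= 1.2      # each generation is 20% more capable...
    step_time /= 1.2       # ...and designs its successor 20% faster
    if generation % 10 == 0:
        print(f"gen {generation}: capability x{capability:,.0f}, "
              f"{elapsed:.0f} days elapsed")

# Elapsed time can never exceed 30 / (1 - 1/1.2) = 180 days, yet capability
# grows without bound -- the geometric series behind a "fast takeoff".
```

Hawkins’s counterargument, put in these terms, is that the step time cannot keep shrinking: each generation must still gather real-world sensory input, which takes real-world time.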
We did not come away from these three books feeling comfortable about the long-term future of AI. The current debate about rules and policies is well worth having. We recommend becoming informed yourself so that you can assess the risks and benefits; these books are a good place to start. But – given the speed at which AI is improving – no business can afford to wait out this uncertainty about policy and technology. There are likely to be early-mover advantages, for example in process improvement and customer relationship management. Robust risk management, including human oversight and guardrails, will help reduce the near-term risks.
_________________________________________
¹ Jeff Hawkins, A Thousand Brains: A New Theory of Intelligence, New York: Basic Books, 2021.
² Max Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence, New York: Alfred A. Knopf, 2017.
³ Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford: Oxford University Press, 2016.