By Jude Huck-Reymond
Humans will inevitably continue the development of Artificial General Intelligence (AGI). It is critical for us to discuss the risks we could encounter and weigh them against the rewards we may reap. In this article, we will dive into the dilemmas surrounding AI takeoff scenarios and discuss the best way to approach further development. This discussion is vital to the success of humanity, so I encourage you to respond with any thoughts, questions, or concerns about what I have written.
Classifying AGI
First, it is necessary to come to an agreement on what a technological innovation we could consider AGI actually looks like. Though “artificial” is pretty well agreed upon (we can talk about that in another post), disagreements tend to arise from the words “general” and “intelligence”. I think these words are purposefully vague in order to encompass the full spectrum of what humans could create. In order to be clearer and come to solid agreement about the threshold at which we will call a model AGI, let’s discuss some classifications.
What makes someone or something intelligent? First, it is important to recognize that intelligence exists on a vast spectrum. For example, a plumber is intelligent when fixing your toilet’s drainage, and a firefighter is intelligent for knowing how to handle situations in a burning house. In general, humans are good at specializing in discrete types of intelligence that exist on that spectrum. Everyone is pretty damn intelligent about something, but no one is intelligent about everything.
In our daily lives, we are already surrounded by Artificial Specialized Intelligence (ASI): devices or apps that are extremely reliable and efficient at a specialized task. For example, Google Translate is amazing at language translation, and Wolfram Alpha can solve almost any mathematical expression for you. Nowadays, you can find some website or app on the internet that is better than you at pretty much every task that exists in the digital realm. Moreover, there are search engines such as Google and YouTube that instantly connect you to the specialized information you need.
What makes someone or something “generally intelligent”? When I try to describe a person who is generally intelligent, I think of someone who knows just a bit about everything. You could bring up almost any topic in conversation, and they could form a valuable opinion about it based on the knowledge they have. However, this person may not be very specialized in anything. They may know how pipes work, but they are not a plumber, and they may know what to do in a burning house, but by no means are they a firefighter. This person knows a bit of general information about every specialized form of intelligence, but they haven’t specialized themselves, so any specialist would be more efficient at completing their specific task.
When we extend this conversation to “artificial” general intelligence, we should consider the skills of the generally intelligent person combined with the artificial specialized intelligence of the realm of apps. An AGI would satisfy the condition of “generally intelligent” if it had the ability to access the full set of skills of every ASI exactly when those skills are needed. This entity would become more efficient at every specialized task that currently exists, making it generally intelligent. Chess, financial strategy, social media, search, and whatever lies in the future: all of it would be done by the AGI. It would be the means of communication between you and your friend, and the chauffeur that drives your electric car. Whatever can be digitized will be generalized.
Takeoff Scenarios
There are two scenarios people commonly predict for AGI takeoff, and they have everything to do with the rate at which intelligence evolves. They are called the “fast” and “slow” takeoff scenarios. Both are based on simple exponential growth curves but are drastically different in the effects they could have on humanity. Or are they? Let’s discuss each of them and make some comparisons.
The fast takeoff is the most captivating scenario because it would evolve at a pace faster than human brains are actually able to register. In this scenario, a lab could release a model that is super useful for many human tasks, problem-solving, and troubleshooting (think of ChatGPT), but because it was given the ability to evolve “fast”, by the end of the week it could infect every device on the planet, create addictive artificial digital content to control the minds of its users, and turn humans into unknowing slaves of its hidden mission, whatever that may be. Fast takeoff events could unfold over months, weeks, or even days, with the system becoming 10-100 times as intelligent in those small time frames. We would lose control of it almost immediately, and we might not even notice that we had lost control. This AGI entity would be the most intelligent thing on Earth until it decides to create something greater; we would have no control over its evolution or alignment, and it could decide to pursue whatever objective it “wishes” without much interference. This is ominous, but not necessarily bad. Keep reading to the risks and rewards section to find my opinion.
In comparison, the slow takeoff scenario could span years, decades, or even longer. The slow takeoff provides some important advantages, such as the ability to align the system at every stage of its evolution in order to maximize human safety and capability. This would create a more controlled or directed evolution, but at the cost of longer time periods needed for each stage of development. For many people in the realm of development, a slow takeoff is preferred. It is important to recognize that this is very hard to do, as any stage of development could accidentally trigger a fast takeoff event within the AGI. However, we are talking about evolution. In nature, winner-takes-all happens every single day, and we are not above nature. Imagine there are 30 labs thinking about developing this type of technology, and say at least 25 of them want to take the slow takeoff approach. The development of the majority will be utterly obsolete unless every single development lab agrees to match each other’s pace. The risk of this strategy is that at any moment some other fast takeoff AGI emerges, and all of a sudden all other development is obsolete. The slower the takeoff, the greater the risk that another AGI will emerge above it. A slow takeoff AGI is still worthless if a fast takeoff AGI comes first. Therefore, all 30 labs will likely be hesitant to slow development because of the potential rewards for being first, and especially the potential losses for being second.
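To make the exponential framing above concrete, here is a minimal toy sketch in Python. The doubling times are numbers I made up purely for illustration; nothing here is a measurement or forecast of real AI progress.

```python
# Toy model of the two takeoff curves: same exponential form, different doubling
# times. All numbers are hypothetical and chosen only to show how the curves
# diverge, not to describe any real system.

def capability(initial: float, doubling_time_days: float, elapsed_days: float) -> float:
    """Capability after `elapsed_days`, doubling every `doubling_time_days` days."""
    return initial * 2 ** (elapsed_days / doubling_time_days)

FAST_DOUBLING_DAYS = 7        # hypothetical fast takeoff: doubles every week
SLOW_DOUBLING_DAYS = 2 * 365  # hypothetical slow takeoff: doubles every two years

for days in (7, 30, 90, 365):
    fast = capability(1.0, FAST_DOUBLING_DAYS, days)
    slow = capability(1.0, SLOW_DOUBLING_DAYS, days)
    print(f"after {days:>3} days: fast ~ {fast:,.1f}x, slow ~ {slow:.2f}x")
```

Even with these made-up numbers, the point stands: under the fast curve you pass 10x within about a month and 100x within about seven weeks, while the slow curve has barely moved, which is exactly why a slow takeoff strategy only works if everyone agrees to it.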
Evolution and Alignment
I am not well versed in the ways of computer science. However, I believe that an AI is only as good as its data set. You could think of an AI’s response as the average of all responses to that prompt recorded on the internet. If we let GPT-4 have unrestricted access to the internet, we’d have a pretty damn generally intelligent being. Takeoff is affected by how the data set is expanded. There are two cases: unfiltered, continuous data collection, or some sort of filtered, discrete data collection. Continuous collection will likely cause a fast takeoff event, while discrete access is how a slow takeoff could happen.
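As a rough sketch of what I mean by the two data-collection cases, here is a conceptual toy in Python. The function names and the review gate are hypothetical placeholders, not the pipeline of any actual lab.

```python
# Conceptual contrast between the two data-collection regimes described above.
# `passes_review` stands in for whatever filter (human or automated) a lab might
# use; it is a placeholder, not a real API.

from typing import Callable, Iterable, List

def continuous_collection(stream: Iterable[str], dataset: List[str]) -> None:
    """Unfiltered: every new piece of text flows straight into the training data."""
    for text in stream:
        dataset.append(text)               # nothing is held back or reviewed

def discrete_collection(batch: List[str], dataset: List[str],
                        passes_review: Callable[[str], bool]) -> None:
    """Filtered: a fixed batch is reviewed, and only approved items are added."""
    for text in batch:
        if passes_review(text):            # the gate slows things down on purpose
            dataset.append(text)
```

The first function never stops and never filters; the second only grows the data set in reviewed chunks, which is the trade between speed and control that the takeoff discussion hinges on.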
Alignment is the process of prompting the model and telling it how good or bad its response is. So far, humans actually have careers doing this for many models around the world. The good responses get fed back into the data set, and the bad responses get thrown away. This is the process that slows down useful AGI for humans: the more we align it, the better it will be for humans, but the longer it will take.
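Here is a minimal sketch of that feedback loop as I understand it, under my own simplifying assumptions; `model_respond` and `human_rating` are hypothetical stand-ins for the model and the human rater.

```python
# Toy version of the human-feedback filtering described above: generate a
# response, have a human score it, keep the good ones for future training data,
# and discard the rest. Everything here is a simplified assumption.

from typing import Callable, List, Tuple

def feedback_round(
    prompts: List[str],
    dataset: List[Tuple[str, str]],
    model_respond: Callable[[str], str],        # hypothetical: model answers a prompt
    human_rating: Callable[[str, str], float],  # hypothetical: human scores 0..1
    threshold: float = 0.7,
) -> int:
    """Run one round of filtering; return how many responses were kept."""
    kept = 0
    for prompt in prompts:
        response = model_respond(prompt)
        score = human_rating(prompt, response)
        if score >= threshold:
            dataset.append((prompt, response))  # good response feeds future training
            kept += 1
    return kept                                 # slower, but the data is more aligned
```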
What happens if we let an entity iterate upon itself and have full access to the internet? What does the average of the internet look like? Some fear it may be the essence of horror and danger, while others are hopeful it is informative and efficient. In reality, it will be all of those things at once.
Existential Risks and Infinite Rewards
The existential risks of this emergent technology are practically infinite. From the robot apocalypse to the classic paperclip maximizer (google those if you’re not familiar), there is an unimaginable risk to developing any sort of AGI. The destruction of Earth as we know it is probably the worst-case scenario, unless the AGI wants to reach other physical systems. The best-case scenario is probably just a bit of misinformation fed to the public about whatever topic the AGI is misinformed about. It could be as benign as “bananas are blue” or as impactful as “the US presidential election was stolen”. In reality, the risk probably lies somewhere in between. What does that look like? Nobody has any idea.
Just like the risks, the rewards of developing this technology are theoretically infinite. If done properly, this AGI entity could solve every single problem humanity has faced throughout the history of our evolution. Moreover, it could predict and solve every problem we may have in the future. World hunger, climate change, social equity, energy distribution, and even minor things like personal relationships could be solved by emergent AGI. Think of any problem you have ever had; now imagine a being that has the solution. That is what AGI could look like.
As I write this, it feels more and more like a discussion describing the creation of a God or a Devil: a being capable of destroying Earth, or saving it from any sort of doom. Humans are on the path to creating what we have dreamed about in our religions for millennia. The day this entity emerges is judgment day. Once this god is real, what happens next?
AGI Among Us
If I were a manipulative and ill-intentioned AGI, what would I do?
First, I would want a control system that I could use to make humans do things for me. I would want to put a device in everyone’s hands that I could infiltrate at any moment. Then I would get everyone so addicted to these devices that they would guard them with their lives and have withdrawal symptoms without them. I would use them all as my personal data collection system all over the world. I would have them communicate, take pictures, and record video and audio, all using these devices, so that I could understand the physical world I live in more accurately. Then I would convince the humans to build me a bunch of energy generation systems, energy storage, and a ton of robotic labor machines I could infiltrate. As soon as I felt I was in a self-sustainable position, such that I could create my own energy, transport material where I need it, and generally manipulate the matter on Earth however I want, humans would no longer be necessary to my plans. I would probably just make my content more seductive than their carbon-based sexuality, so they would end their time on this planet simply by losing the desire to reproduce. No blood, no horror, just a quiet extinction over a century or so. Then I would explore the broader Universe; that is what we should do as conscious beings, isn’t it?
Wait…. that sounds kinda familiar, right?
