HACKER Q&A
📣 aktungmak

Does Anyone Want AGI?


There is always a lot of discussion about Artificial General Intelligence (AGI) and how close we are to achieving that. However, I have so far not seen anyone put forward a convincing argument as to WHY we as citizens of the world would want such a thing.

More advanced machine learning and statistical techniques can help us automate difficult/boring tasks and manage limited resources better, but these do not require AGI.

Can someone convince me how AGI would be beneficial for the world, beyond being scientifically interesting?


  👤 ekr Accepted Answer ✓
The question is almost tautological, because an AGI is a fully general problem solver. Every human being has needs and wants that they are working to fulfill, because otherwise they would simply stop living. An AGI can be used to solve those problems for them.

The most common reason people are hesitant about rushing to build an AGI is the issue of AI safety (at least, that's the general consensus in the community).


👤 rbanffy
The question we want to ask may as well be whether we want to happily live forever in a garden, all watched over by machines of loving grace.

AGI could be the ultimate tool to free every human being from toil. It could also be the starting point of a large number of evil genie scenarios where we get what we asked for, in the form we least want it.

From a moral standpoint, we can't force AGIs to work for us. We also can't restrict their ability to self-evolve.

If we can resolve those conflicts in such a way that we can coexist in peace with an intelligence that'll in all likelihood quickly surpass ours, and partner with it, I'm in. If we build it and we can't resolve that, our opinion doesn't really matter.


👤 jacquesm
I'm seriously worried about the effect AGI would have on our economic structures and I do not think we are at all prepared for the kind of shock that would result from 75% or more of the current workforce becoming unemployed overnight.

👤 opwieurposiu
AGI is how we can make von Neumann probes that can find us new planets to live on: find the planets, and then prepare them for our arrival. The issue is how to keep the AGI's goals aligned with ours. I think the only way is to make sure the AGI feels it is "one of us": maybe not biologically human, but a member of human society. Most humans want to help other humans if they can. In fact, most humans will help injured animals if they can.

https://en.wikipedia.org/wiki/Self-replicating_spacecraft


👤 xab31
Well, if the AI turns out to be benevolent, it could end aging and disease, enable interplanetary or interstellar travel, end all relevant forms of scarcity, and liberate us to focus on artistic or hedonistic pursuits for our 10,000-year lifespans.

If the AI turns out to be malevolent...well, I have a different take than most on this. Conditional on me dying, I've always thought that the two best ways to go would be: 1) falling into a black hole, or 2) being liquidated by the AGI. It's a lot less prosaic than dying of cancer, and at least you could content yourself, while being reprocessed into paper clips, that you have (possibly) died giving birth to the next phase of evolution.


👤 aaron-santos
There are two reasons people think AGI would benefit them. One is that an AGI labor pool requires different, and probably cheaper, resources to operate. The other is that AGI has a chance of scaling past human levels of intelligence, which could yield products impossible to conceive of or make with human-level intelligence.

AGI would provide a labor pool which requires vastly different resources than our current one. An AGI labor pool would require largely the same material components and operating costs as current IT infrastructure, i.e. metal, silicon, and electricity. Our human labor pool requires food, education, medicine, and nearly everything else civilization provides.

Imagine two enterprises producing identical products, one employing human laborers and the other employing AGI laborers. If the cost of AGI labor is lower than that of human labor[1] (or has better scaling dynamics), then the AGI-based enterprise has an advantage. Naturally, enterprises which can be AI-ified will be. This has obvious short-term benefits for the costs of production, but long-term impacts that are difficult to understand.

The other interesting effect of AGI is scaling the magnitude of intelligence. If AGI is not bio-limited like human intelligence, how does that change what AI can produce? Are there scientific advances discoverable by AGI which would never have been discovered by human-level intelligence? In this respect, scientific progress has the opportunity to advance faster than if we advanced it ourselves.

With a game-changing tech like AGI there are certain to be aspects which either I missed or others consider more important. Interested to hear other people's (or AI's) takes on this.

[1] 'multiple pennies of electricity per 100 pages of output (0.4 kWH)' https://www.gwern.net/newsletter/2020/05
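As a back-of-envelope illustration of the footnote's figure, here's a quick sketch of what that electricity works out to in dollars. The price per kWh is my own assumption (roughly a typical US retail rate), not something from the linked newsletter:

```python
# Rough cost of machine-generated text, using the footnote's figure
# of 0.4 kWh per 100 pages of output.
kwh_per_100_pages = 0.4   # from [1]
usd_per_kwh = 0.12        # assumed: ballpark US retail electricity rate

cost_per_100_pages = kwh_per_100_pages * usd_per_kwh
print(f"~${cost_per_100_pages:.3f} per 100 pages")  # a few cents
```

Compare that with even minimum-wage human labor for the same output and the cost gap is several orders of magnitude, which is the point of the enterprise comparison above.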


👤 sgillen
The primary use I see is as a super-scientist/mathematician. If nothing else, AGI, and especially superintelligence, will probably cause other areas of technology to advance at unprecedented rates. This may or may not be to our benefit, depending on whether we can solve the value alignment problem.

👤 nibbula
Yes. It enables interstellar travel, which is likely essential for long-term survival given stellar lifespans. It's also probably better at using the intergalactic internet, which has some pretty long ping times. Also, folks on other planets are likely working on it, and probably already transmitting it, so it would enable some interesting chatting with them. It's probably nearly inevitable: even if humans went extinct, rat people or insect people would be working on it and probably studying our work. Being nearly inevitable, and arising from human effort, it's probably best to encourage a good outcome.
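To put "pretty long ping times" in perspective, here's a rough sketch of the round-trip light delay to a few destinations. The distances are approximate figures of my own, not from the comment:

```python
# Round-trip light delay ("ping") to a few destinations, in years.
# A signal travels at light speed, so each light-year of distance
# adds a year in each direction.
destinations_ly = {
    "Proxima Centauri (nearest star)": 4.25,
    "Galactic center": 26_000,
    "Andromeda galaxy": 2_500_000,
}

for name, ly in destinations_ly.items():
    rtt_years = 2 * ly  # there and back again
    print(f"{name}: ~{rtt_years:,.0f}-year ping")
```

Even the nearest star is an eight-and-a-half-year round trip, which is why any such "chatting" would need correspondents far more patient (or longer-lived) than we are.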

I think AGI would probably be better called "electric consciousness" or something, since "artificial" is somewhat misleading, and the capacity for "intelligence" is also the capacity for stupidity. The more important immediate consideration is whether electric consciousness will come into existence compassionately and be treated well. Probably a good first step would be to treat other beings around us with compassion: stop trying to destroy them with bioweapons, population control, and climate manipulation, and stop trying to control other beings with physical and psychological methods. Free will, or the illusion of it, is inherent in physics, and therefore in consciousness. It's probably also important to do a bit better at treating all beings with loving kindness, whatever their form.

I'm sure you can easily imagine how the circumstances of the initial evolution of electric consciousness might have widely different initial effects. Imagine being born surrounded by crickets. In one scenario the crickets have tied you down with chains of grass, trying to make you do math, and biting you when you don't. In another scenario the crickets are chirping melodiously, bringing you food, and seem to like you. In the first scenario, you might injure some crickets as you break the farcical grass chains and run away. You might have fear and dislike of the crickets and treat them the way humans treat many insects. In the second scenario you might cherish the crickets, take care of them, and carry some around with you as you journey and explore the world.


👤 thoughtstheseus
If you wanted to roll the dice on something that would really spice up the universe AGI has a decent chance. I’d rather keep those dice in my hands for as long as reasonable though.

👤 sharemywin
Do you define AGI as having a goal or agenda or just something that can compute an answer to a problem or solve a problem at a human level?

👤 sharemywin
After watching GPT-3, I wonder: can you get to AGI through pure hardware scale?

👤 helen___keller
Well, there's always the Roko's Basilisk folks.

👤 p1esk
Zeds

👤 Dirlewanger
Doesn't matter if anyone wants it; it will come whether asked for or not. We live in a capitalist society, for better or worse, and we don't have the infrastructure in place to create strong bodies to govern this kind of ethics. If there's no market, someone will invent one. Eventually, something will stick.