Posts: 184
Threads: 91
Joined: May 2022
What are your thoughts on the Singularity Hypothesis - the idea that AI will soon reach the level where it can improve its own abilities, quickly producing an AI 2.0 that recursively does the same, and so on, until the hyper-intelligent result produces more technological advances in the 21st century than in the twenty centuries prior?
Posts: 2,281
Threads: 114
Joined: Jul 2017
10-16-2023, 01:54 AM
(This post was last modified: 10-16-2023, 01:55 AM by ProxCenBound.)
As a real-life hypothesis I don't think AI advancement will work that way; the difficulties of navigating increasing complexity, and of making discoveries that are ever less 'low-hanging fruit', seem to act as a counter to accelerating/compounding advancement. I've seen this called the 'complexity brake'. This isn't to say there couldn't be even incredible advancements this century, just no single identifiable singularity - though even that is by no means a given.
I don't think in any circumstance that there would be a single AI that takes over the world, whether for 'good' or for 'evil' (from a human perspective). There have never been all-powerful singletons anywhere in natural or human history, no matter what biological or technological advancements have appeared. In this respect I've always liked how OA throughout its timeline has many diverse competing agents and groups, which gives it a certain verisimilitude.
Posts: 16,241
Threads: 738
Joined: Sep 2012
Actually, the Singularity Hypothesis - as originally presented by Vernor Vinge - doesn't posit any of those things.
Vinge hypothesized that the creation of greater-than-human intelligence - whether via AI, augmentation of human intelligence, or genetic engineering of beings with greater-than-human intelligence - would rapidly lead to a future world that is largely unpredictable and incomprehensible to baseline humans (to use the OA term) such as ourselves.
Neither Vinge nor his hypothesis makes any specific predictions about technological advancement, AI taking over the world or being hostile to humans, or anything else of the sort. That all came later, either from people generating theories/reaching conclusions about what a singularity would look like (which - somewhat by definition - are just WAGs, since the hypothesis explicitly states that human-level minds won't be able to predict or comprehend a world generated by greater-than-human intelligence) or from people trying to appeal to a mass audience for marketing purposes.
Todd
Introverts of the World - Unite! Separately....In our own homes.