The Orion's Arm Universe Project Forums
The Singularity Hypothesis - Printable Version

+- The Orion's Arm Universe Project Forums (https://www.orionsarm.com/forum)
+-- Forum: Offtopics and Extras; Other Cool Stuff (https://www.orionsarm.com/forum/forumdisplay.php?fid=2)
+--- Forum: Real Life But OA Relevant (https://www.orionsarm.com/forum/forumdisplay.php?fid=7)
+--- Thread: The Singularity Hypothesis (/showthread.php?tid=6044)



The Singularity Hypothesis - Tom Mazanec - 10-15-2023

What are your thoughts on the Singularity Hypothesis: that AI will soon reach the level where it can improve its own abilities, quickly producing an AI 2.0 that will recursively do the same, and so on, until this hyperintelligent result produces more technological advances in the 21st century than in the twenty centuries prior?


RE: The Singularity Hypothesis - ProxCenBound - 10-16-2023

As a real-life hypothesis, I don't think AI advancement will work that way. The difficulties of navigating increasing complexity, and of making discoveries that are ever less 'low-hanging fruit', may act as a counter to accelerating or compounding advancements. I've seen this called the 'complexity brake'. This isn't to say there couldn't be incredible advancements this century, just no single identifiable singularity. Although even that is by no means a given.

I don't think under any circumstances that there would be a single AI that takes over the world, for 'good' or for 'evil' (from a human perspective). There have never been all-powerful singletons in all of natural history or human history, no matter what biological or technological advancements have appeared. In this respect I've always liked how OA, throughout its timeline, has many diverse competing agents and groups, which gives it a verisimilitude.


RE: The Singularity Hypothesis - Drashner1 - 10-16-2023

Actually, the Singularity Hypothesis - as originally presented by its originator, Vernor Vinge - doesn't posit any of those things.

Vinge hypothesized that the creation of greater-than-human intelligence - whether via AI, augmentation of human intelligence, or genetic engineering of beings with greater-than-human intelligence - would rapidly lead to a future world that was largely unpredictable and incomprehensible to baseline humans (to use the OA term) such as ourselves.

Vinge - and his hypothesis - makes no specific predictions about technological advancement, AI taking over the world or being hostile to humans, or any other such things. That all came later, either from people generating theories or reaching conclusions about what a singularity would look like (which - somewhat by definition - are just WAGs, since the hypothesis explicitly states that human-level minds won't be able to predict or comprehend a world generated by greater-than-human intelligence) or from attempts to appeal to a mass audience for marketing purposes.

Todd