Posts: 7,322
Threads: 296
Joined: Jan 2013
11-13-2016, 04:04 AM
(This post was last modified: 11-13-2016, 08:19 AM by Rynn.)
(11-13-2016, 01:46 AM)Drashner1 Wrote: Having worked with doctors in a couple of past portions of my career, I'm not really of a mind that they are all that much smarter (or any smarter at all really) than anyone else. At least if we define 'smarter' as 'general problem solving ability'.
A doctor may be able to perform complex surgeries spectacularly well - and be totally helpless when it comes to changing the oil on their car. Or bad at relationships. Or barely able to balance their checkbook.
The same often applies for pretty much any other profession that we tend to culturally associate with 'superior intelligence'. Humans are often especially good in one or more areas but not so good in others.
There is also the impact of things like education, experience, and general interest or personality, all of which can impact how good someone is at something (and how bad they are at other things).
(Note: left laptop at work this weekend so laboriously typing on iPad)
Broadly agreed. As someone who works in a field stereotyped for intelligence (nanomedical PhD) I think the term as a general trait is often not appropriate. Plenty of intelligent activities are very formulaic and could be performed well by anyone with the background knowledge and dedication. This isn't always true: some people are better at spotting patterns and at experiencing useful incubation (i.e. an answer popping into one's head), but presumably that is the result of biology and learned behaviour; it's just harder to study and replicate as it's unconscious.
Jumping off from that, we have a clear target for intelligence amplification. Any system, be it a smartphone or a neural implant, that can provide instant answers and protocols resembles high intelligence:
Early setting example
Alice frowned at the blank screen on the agribot's flank. The machine was supposed to have weeded the field that morning but had frozen only an hour into the job. Alice's mother was the farm's engineer and could probably fix the bot in no time, but she was out of town at a conference all week. Not wanting to call customer services and deal with their overpriced and overly-friendly chatbots, Alice resolved to fix the agribot herself. Slipping on her iShades, she logged on to HowToDo.com and uploaded the agribot's specifications. A few minutes later a protocol file landed in her inbox. Opening it caused several new icons to appear in Alice's visual field; some hovered in front of her whilst the rest clustered around the agribot. The largest was the first node in a complex flowchart helping her to diagnose the problem. As the hours passed Alice toiled over the bot, info glyphs explaining the esoteric components of the machine and showing animations of how to check and fix them. Working through the protocol tree, Alice eventually smiled in triumph as the agribot burst to life. As the glyphs faded around her she stood and watched the bot trundle along the field, fully operational and back on track thanks to her.
Late setting example
As he walked through the grounds, near bored to tears, a patch of rusty brown amongst the flowers caught Bob's attention. Curious, he approached until he was looking down upon the aberration. Long, spindly threads of some earthy material had grown all over the blue roses. He searched his mind, trying to recall if he'd ever seen anything like it during his long sojourn on the estate. He drew a blank, quite literally: his natural thoughts flickered through irrelevant connections and his exoself memoir-sense was silent. Drawing closer he began to mutter to himself: "What's this then... weird shape, striated? Yes, striated. Funny angles where the threads meet, maybe fifty or sixty degrees, all of them. Ah no, fifty-seven point two precisely. Not a fungus... no, not biological at all, but technological." If Bob had cared enough about the source of his conclusions his exoself would have induced a synthetic feeling alongside the answers it was feeding his subconscious. As it was he didn't, and so was blissfully ignorant that his thought process was being nudged towards a rational optimum, as well as being supplemented with micro-knowledge downloads. Rapt now, Bob was convinced the growth was a mutant strain of soil nanomycelium. There was some self-awareness that he had never studied nanoengineering or horticulture, but Bob was now fascinated by both topics. As a bench extruded under him he spent the rest of the day thinking and learning; half-formed questions in his mind were nipped in the bud by didactic snippets merging with his concept map. Scenes of molecules danced in his Cartesian theatre as he contemplated metabolic pathways of diamondoid-based replicators. As evening drew in, Bob finalised an antibot to deal with the mutant strain (a simple fix for the simple sabotage committed by the groundskeepers). Returning to the house, Bob smiled; his schedule was going to have to be rearranged around his new hobby, and he looked forward to a long period of doing little besides remembering knowledge for the first time.
*
Argh, that all took far too long to type. The takeaway message from the examples is that at the low end IA can be like following a recipe for problem solving: a recipe that is presented in a way that is easy to follow, customised for the user, and interactive (e.g. the protocol could say "Check the right Flanginator", Alice could respond "Where is that, what does it do, how does it work?", and the protocol will teach her). At the high end the individual feels like they are solving the problem using their own knowledge, even if they have no prior skills or experience. What's really happening is that problem-solving and teaching programs in the exoself are monitoring the effort, and when they predict or detect a sub-optimal/erroneous train of thought or a gap in knowledge they shunt the correction into the user's mind.
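To make the low-end "recipe" idea concrete, here's a minimal Python sketch of how an interactive protocol tree like Alice's might be structured. Everything in it (ProtocolNode, the fuse and sensor steps, the teach-on-demand convention) is invented for illustration, not anything canon:

```python
from dataclasses import dataclass, field

@dataclass
class ProtocolNode:
    """One step in a diagnostic flowchart, like the glyphs Alice follows."""
    instruction: str
    teach: str = ""                               # lesson shown if the user asks "how?"
    children: dict = field(default_factory=dict)  # user answer -> next ProtocolNode

def run_protocol(node: ProtocolNode) -> None:
    """Walk the tree interactively; a node with no children is the final fix."""
    while node.children:
        print(node.instruction)
        answer = input("> ").strip().lower()
        if answer in ("how?", "what is that?") and node.teach:
            print(node.teach)                     # the protocol teaches on demand
            continue
        node = node.children.get(answer, node)    # unrecognised input: ask again
    print(node.instruction)

# A toy fragment of Alice's agribot repair protocol:
fix = ProtocolNode("Replace the drive fuse, then reboot the agribot.")
ok = ProtocolNode("Power is nominal; inspect the weed-sensor bus next.")
root = ProtocolNode(
    "Is the status screen on the flank blank? (yes/no)",
    teach="The status screen is the panel on the left flank; "
          "blank usually means a power fault.",
    children={"yes": fix, "no": ok},
)
run_protocol(root)
```

The point of the teach field is that the protocol degrades gracefully into a tutor whenever the user asks "how?", which is all Alice's glyph flowchart really is.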
Innate IA like genemods likely works by a) making artificial aids unnecessary and b) making the use of even better aids safer and easier. A Homo superior doesn't need a user-friendly, simplified how-to: their brains were optimised for speed-learning in communication formats too complex for baselines. They also don't need "low"-level IA-scripts to manipulate their thoughts; their high neural plasticity means their brain will grow a perfectly suited neural network to solve the types of problems they are facing. Exoself IA for superiors is going to be more quantitative than qualitative, adding raw processing power and a software hierarchy that their brilliant minds can direct from an executive position.
OA Wish list:
- DNI
- Internal medical system
- A dormbot, because domestic chores suck!
Posts: 7,322
Threads: 296
Joined: Jan 2013
11-13-2016, 05:56 AM
(This post was last modified: 11-13-2016, 06:46 AM by Rynn.)
Just wanted to add something to the last point re: IA-scripts running in an exoself. I imagine that a common implementation of this technology, from a user perspective, is for a sophont to download an IA package and first activate its "generalist" function. This feature is a jack-of-all-trades intelligent program; over time it monitors the user's thought patterns and knowledge levels and begins to intervene. From the user's perspective this would be an odd sensation: over minutes to months they'd notice how much easier they are finding it to come up with solutions to problems. Complex answers seem to easily unravel in their mind, and when they draw conclusions from those, further solutions also come easily. This is of course the program guiding their thoughts, pushing them down the right routes in the phase-space of solutions and away from bad ones.
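As a toy model of that guidance (all names and numbers invented for illustration), the "nudge" could be pictured as a re-weighting of which candidate train of thought the user ends up pursuing:

```python
import random

def next_thought(candidates, quality, nudge=0.3):
    """Pick the user's next train of thought.

    candidates: thoughts that occur to the user
    quality: the module's quality estimate for each thought (0..1)
    nudge: 0 = no intervention, 1 = the module effectively chooses
    """
    # Unaided, every candidate is roughly equally likely to be pursued;
    # the module blends in its own ratings in proportion to nudge.
    weights = [(1 - nudge) * 1.0 + nudge * quality[t] for t in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

thoughts = ["it's caterpillars", "it's a fungus", "it's nanotech sabotage"]
ratings = {"it's caterpillars": 0.05, "it's a fungus": 0.2,
           "it's nanotech sabotage": 0.9}
print(next_thought(thoughts, ratings, nudge=0.0))  # natural mind: anything goes
print(next_thought(thoughts, ratings, nudge=0.9))  # nudged mind: almost always right
```

With nudge near zero the user is on their own; crank it toward one and you get Bob, whose "own" conclusions are mostly the module's.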
Beyond the generalist function, specific add-ons could be downloaded for certain problems. These might be as broad as "social interaction" or as narrow and esoteric as "Old Nipponese semantic compiler for recent-experience-inspired haiku creation". In any case, collecting more add-ons will let a user more easily discern patterns and work through solutions in a given application. There may be trade-offs in using some add-on modules over others, but also beneficial synergistic effects from combining otherwise unrelated add-ons.
However, there is a downside to this: there may be conflicts between patterns or between different "brands" of add-on. In the current era, in civilised spaces, the risks will be well characterised and "Best Practice Protocols" so advanced and baked in that users are protected. But earlier in the timeline, in more hazardous areas, or in individual situations where sophonts ignore warnings (perhaps willing to accept some side effects in return for an intelligence boon), IA could be quite dangerous. The dangers could be quite varied: full-on mental collapse, irrational interpretations of reality (e.g. applying the intentional stance to the weather and then drafting complex legal claims against storms for noise-level violations), and even all the way up to perversions and blights.
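A crude sketch of how a Best Practice Protocol check might gate add-on activation; the conflict table, the 0-1 risk scale and the module names are all made up for illustration:

```python
# Invented conflict table: unordered add-on pairs -> documented risk,
# on a made-up scale from 0 (safe) to 1 (blight-grade hazard).
KNOWN_CONFLICTS = {
    frozenset({"social-interaction", "haiku-semantic-compiler"}): 0.1,
    frozenset({"lateral-thinking", "rigorous-formalism"}): 0.6,
}

def activation_risk(active_modules: set, new_module: str) -> float:
    """Worst documented conflict risk of running new_module alongside the rest."""
    return max(
        (KNOWN_CONFLICTS.get(frozenset({m, new_module}), 0.0)
         for m in active_modules),
        default=0.0,
    )

def try_activate(active_modules: set, new_module: str,
                 risk_tolerance: float = 0.2) -> str:
    risk = activation_risk(active_modules, new_module)
    if risk > risk_tolerance:
        # In civilised space this refusal is baked in; a frontier sophont
        # chasing an intelligence boon might simply raise their tolerance.
        return f"blocked: conflict risk {risk:.1f} exceeds tolerance"
    active_modules.add(new_module)
    return "activated"

print(try_activate({"lateral-thinking"}, "rigorous-formalism"))
```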
OA Wish list:
- DNI
- Internal medical system
- A dormbot, because domestic chores suck!
Posts: 7,322
Threads: 296
Joined: Jan 2013
11-13-2016, 08:50 AM
(This post was last modified: 11-13-2016, 08:51 AM by Rynn.)
Last post on this from me in a row, I promise! As I said earlier, I've got a lot to say on this topic on account of it being a subject of huge personal fascination. Regarding how to classify different types of intelligence augmentation, I think two extremes of the IA spectrum can be identified.
Top-down approach: Extended Phenotype via exocortex.
Technology is used in synergy with the mind to create a system more intelligent than the sophont alone. Low-end examples include AR interfaces, net-connected devices and workflow optimisation. High-end tech mostly centres on merging software with the mind via a DNI to gain general and specialised boosts to pattern recognition, problem solving, memory recall and initiating/maintaining productive states of mind.
Bottom-up approach: Enhanced phenotype via bodymods.
Fundamental augmentation of the body to enhance the innate capabilities of the brain. The low end simply ensures a genome with a good chance of producing a healthy, efficient mind under a range of environmental conditions (primarily during development). The high end radically alters neural architecture and behaviour for a wealth of new and better capabilities; qualitatively similar to high-end top-down, but without the lag due to sophont/exoself interaction. The disadvantage may be undesirable changes to toposophy, psychology and personality.
OA Wish list:
- DNI
- Internal medical system
- A dormbot, because domestic chores suck!
Posts: 16,079
Threads: 732
Joined: Sep 2012
(11-13-2016, 04:04 AM)Rynn Wrote: Late setting example
As he walked through the grounds, near bored to tears, a patch of rusty brown amongst the flowers caught Bob's attention. [...]
Hm. This is a fun example, but I can't help feeling that it is either from pretty early in the setting or that it represents a fairly 'basic' level of IA, possibly because Bob has his system settings configured to make teaching him things the default or preferred option.
Consider the scenario above, but now add in things like the municipal computronium described here, skill modules, and the ability to temporarily 'upgrade' and multiplex one's consciousness/perceptions/intelligence as described in this story, as well as the impact of advanced communication tech that would allow Bob (or Bob + exoself) to:
a) use his DNI to order one of the home nanoforges to whip up some synsect size sampling and analysis bots in a matter of seconds to minutes.
b) have them fly to his location or have one of the house bots deliver them.
c) have the results of their analysis wirelessly transmitted to either a lab onsite or a municipal lab/nanoengineering space or spaces (those 6000 vots could each use one in principle) that rapidly simulate and synthesize a countermeasure and test it against a copy of the infestation, synthesized onsite from the data gathered by the sampler bots
d) transmit an encrypted copy of the countermeasure design to Bob's home nanoforge where it is synthesized and then applied by other home bots or delivered to Bob for him to apply.
Bob could, in principle, see the infestation and have a fix completed and in use within minutes, while using the tech mentioned above (skill mods, exoself, mental augments) to experience and 'supervise' the entire process - perhaps directly up to a point, perhaps with his exoself doing most of it and Bob mainly feeling like he was having a very vivid daydream or fantasy of sorts. Once the task is done, Bob reverts to his 'normal' self and may not fully remember or understand what he did unless he re-engages his exoself and augmentations.
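Stringing a) through d) together, the exoself's orchestration might look something like the sketch below, where every function is a stand-in stub for the nanoforge, house bots and municipal lab services described above; none of this is a canon interface, just an illustration of the flow:

```python
def nanoforge_fabricate(spec: str, count: int = 1) -> list:
    """Stub: a home nanoforge turning a design spec into finished units."""
    return [f"{spec}-{i}" for i in range(count)]

def deploy(units: list, target: str) -> None:
    """Stub: units fly themselves over, or a house bot delivers them."""
    print(f"deployed {len(units)} unit(s) to {target}")

def sample_and_analyse(units: list, target: str) -> dict:
    """Stub: samplers characterise the infestation and report wirelessly."""
    return {"site": target, "agent": "mutant soil nanomycelium"}

def municipal_lab_countermeasure(report: dict) -> str:
    """Stub: a municipal lab simulates a fix and trials it against a
    synthesized copy of the infestation before signing off on the design."""
    return "antibot-vs-" + report["agent"].replace(" ", "-")

def handle_infestation(site: str) -> None:
    samplers = nanoforge_fabricate("synsect-sampler", count=12)  # step a
    deploy(samplers, site)                                       # step b
    report = sample_and_analyse(samplers, site)                  # step c
    design = municipal_lab_countermeasure(report)                # step c
    antibots = nanoforge_fabricate(design, count=3)              # step d
    deploy(antibots, site)                                       # step d

handle_infestation("blue rose bed")
```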
Just some thoughts,
Todd
Posts: 7,322
Threads: 296
Joined: Jan 2013
11-13-2016, 09:12 AM
(This post was last modified: 11-13-2016, 09:43 AM by Rynn.)
Very true, there are a bunch of ways Bob could have got the issue solved faster, but this scenario is focused on him learning the relevant knowledge and integrating the relevant skills. Skill modules have two modes (which I could probably do with naming better in the article): one in which they puppet the person to get the job done, and one where they slowly merge with the person so that they innately acquire the skill. The second is what Bob was doing.
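A minimal sketch of those two modes (the class, the names, and the integration model are all invented for illustration): puppeting leaves the user unchanged, while merging transfers a little of the skill on every use, which is what Bob was doing in the garden:

```python
class SkillModule:
    """One skill module with the two usage modes described above."""

    def __init__(self, skill: str):
        self.skill = skill
        self.integration = 0.0  # 0 = none of the skill is the user's own yet

    def puppet(self, task: str) -> str:
        """Mode 1: the module drives the body; user competence never changes."""
        return f"{self.skill} module performs '{task}' (user along for the ride)"

    def merge_step(self, task: str, rate: float = 0.08) -> str:
        """Mode 2: the module scaffolds the attempt and transfers a little
        of the skill each time - the user ends up owning it."""
        self.integration = min(1.0, self.integration + rate)
        lead = "user" if self.integration > 0.5 else "module"
        return f"{lead}-led attempt at '{task}' ({self.integration:.0%} integrated)"

nano = SkillModule("nanoengineering")
print(nano.puppet("design antibot"))
for _ in range(10):
    print(nano.merge_step("design antibot"))
```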
Re: labs and stuff, that is totally correct, but the hinted-at background to the situation is that the estate groundskeepers (who in my mind are a specialised submind of the local angelnet) had sabotaged the envirotech deliberately, in collaboration with a submind for Sophont Fulfilment; the latter having noticed Bob's boredom and contrived an adequate experience to stimulate him.
Also, the key feature that makes it a later setting example is how Bob is experiencing IA. It's seamlessly merged with his conscious mind. Unless he asks, or is predicted to want to know, Bob doesn't feel any difference between natural memories of his life experience and artificial ones drawn from an encyclopaedia file. The observations he makes could be a product of his brain, or his natural thoughts could have been suppressed before they fully formed (because they were wrong/suboptimal, e.g. "Perhaps it's a bunch of caterpillars") and replaced with the observations of the vastly more intelligent exoself programs. Contrast this with Alice, who is consciously having to put effort in to follow instructions and learn from her intelligent aide. What she can do is fantastic - she can go from knowing nothing to picking up basic agribot repair skills in no time (all the while getting the job done) - but it's functionally the same as if she were always accompanied by a skilled teacher. She still has to work for it. I hope that distinction makes sense; one is definitely less sophisticated and technologically simpler than the other.
EDIT: great story btw, I don't think I've read it before. It prompts a thought that "intelligence amplification" arguably overlaps heavily, but not completely, with "productivity amplification". The latter is any technology that boosts productivity; this may coincide with intelligence (such as a lateral-thinking exoself module making one faster at solving riddles) but in many cases has no relation to it (like the mental link Shen has to her gardening bots). In the case of some of the tech we've discussed in this thread, the distinction may be as subtle as one's personal settings, e.g. do you solve a problem by having your exoself do it for you, or do you merge with the relevant skill modules and "do it yourself"? From an external, black-box perspective there may be no difference, but from a personal-experience and personality-shaping perspective the choice could be significant.
OA Wish list:
- DNI
- Internal medical system
- A dormbot, because domestic chores suck!
Posts: 16,079
Threads: 732
Joined: Sep 2012
(11-13-2016, 09:12 AM)Rynn Wrote: Very true, there are a bunch of ways Bob could have got the issue solved faster, but this scenario is focused on him learning the relevant knowledge and integrating the relevant skills. [...]
Fair enough. I thought that might be the case (as mentioned in my earlier post), but wasn't sure. Just wanted to note that this wasn't the 'only' option available to Bob and that, as with so much of the later setting, it comes down to a matter of choice.
(11-13-2016, 09:12 AM)Rynn Wrote: Also, the key feature that makes it a later setting example is how Bob is experiencing IA.
I wasn't actually questioning whether Bob was a later setting example - just that it felt incomplete to me and did not fully encompass all the options available to Bob (and that might not be known about by all readers of the thread, who might conclude that what was described was the only option for Bob). However, since part of the scenario is that Bob is deliberately limiting his options due to personal preference and similar reasons, that issue goes away.
(11-13-2016, 09:12 AM)Rynn Wrote: EDIT: great story btw, I don't think I've read it before. It prompts a thought that "intelligence amplification" arguably overlaps heavily, but not completely, with "productivity amplification". [...]
Thanks!
Agreed on all points here. I would suggest that whether one has the exoself do it or 'learns' in the process may be impacted by such factors as personality and personal belief, how often one does a particular task, whether or not one enjoys doing the task at the time, and various limits that may apply in terms of available processing power and other hardware, side effects and personality changes (ranging from annoying to dangerous), cultural custom and law, and so forth.
The other side of this coin (and just to complicate matters) is that OA tech also allows a sophont to literally 'change their mind' if they want to and are willing to accept potential personality changes and such. In other words, they can make themselves enjoy doing something if they want to.
It occurs to me that a major part of the weirdness of OA from a RL perspective is tied up in the sheer amount of morphological freedom available to its inhabitants and all that it implies.
Hmm.
Todd
Posts: 1,437
Threads: 46
Joined: Sep 2016
I think the micro-stories that Rynn wrote for this thread would make great "everyday life" examples to write into some already-written EG articles.
"Alice frowned at the blank screen on the agribot's flank. "
"As he walked through the grounds, near bored to tears, a patch of rusty brown amongst the flowers caught Bob's attention."
They could also work as flash stories or short stories without much context, solely as examples of everyday life. I can't find the forum thread where something related to "short stories" was suggested.
Posts: 7,322
Threads: 296
Joined: Jan 2013
(11-14-2016, 09:48 PM)Avengium Wrote: I think the micro-stories that Rynn wrote for this thread would make great "everyday life" examples to write into some already-written EG articles.
[...]
I can't find the forum thread where something related to "short stories" was suggested.
Thank you! And is this the thread you're looking for?
http://www.orionsarm.com/forum/showthread.php?tid=2461
OA Wish list:
- DNI
- Internal medical system
- A dormbot, because domestic chores suck!
Posts: 522
Threads: 90
Joined: Mar 2016
If people who are Homo superior become numerous enough, it would be awkward at first. It would be like the X-Men. They would probably live together in communities of their own - communities for Superiors.
Posts: 522
Threads: 90
Joined: Mar 2016
I doubt they would have much of a social life with baselines and nearbaselines. It would be hard to relate to people who can't understand their own minds. They are both modos, but relating would be like kindergartners vs. PhDs. They would have nothing to say to each other.
It is no surprise Superiors rule society. No one else can think like they do. With all of the robots around, I'm surprised the Superiors even bother with employing baselines. After two centuries, the robots would probably take up most jobs on Earth. I would think the cislunar volume would be locked up too, since most humans except the vacuum-adapted are pricey to put in space. Even in primitive 2016, robots rule space.
I definitely think people are gonna get their kids upgraded to compete in the job market, but they are going to have a hard time in the job market with robots and vecs who work for next to nothing. I would think that they would upgrade at the same rate, in which case vecs have the edge in development time. Also, AIs have a big advantage over nearbaselines, at least from what I have seen in the timeline.
Thoughts?