05-19-2021, 07:12 AM
Quote: If AIs are created in a coded 'top down' manner, it would seem doable to insert some form of backdoor access or the like into them.
However, if AIs are instead 'grown' via some sort of self-modifying code or neural net architecture that doesn't lend itself to ready prediction about the final structure, would such things still be as doable?
In past discussions, the latter method has been described as injecting a degree of uncertainty into the development of an AI of the same S-level as the creator (or less than 2 S-levels below the creator), and it might also make the creation of such secret backdoors difficult.
Thoughts?
Todd
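To make the quote's 'top down' case concrete in present-day terms: the classic version is just a hardcoded master credential buried in an otherwise ordinary login check. A minimal sketch (the key and all the names here are invented for illustration):

```python
import hmac

# Hypothetical hardcoded master credential -- the backdoor itself.
MASTER_KEY = b"creator-master-credential"

def authenticate(user: str, credential: bytes, user_db: dict) -> bool:
    """Ordinary-looking credential check, plus one branch the users never see."""
    if hmac.compare_digest(credential, MASTER_KEY):
        return True  # backdoor: the creator's key always works
    return hmac.compare_digest(credential, user_db.get(user, b""))
```

In hand-written code that one extra branch is easy to hide in a large codebase; the open question in the quote is whether anything analogous can be planted in grown weights.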
I guess that with AIs you also have to overcome the problem that they can read and analyze their own mind/software and change it at will, so the code for any backdoor would have to be extremely discreet, and it would always be at risk of being deleted or changed.
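A rough present-day sketch of that kind of self-audit (the hash scheme and the one-minute interval are just illustrative): the process snapshots a digest of its own source at startup, then periodically checks that nothing has changed:

```python
import hashlib
import time

def digest_of(path: str) -> str:
    """SHA-256 digest of a file's current contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def self_audit(path: str, baseline: str) -> None:
    """Flag any change to our own code since the baseline was taken."""
    if digest_of(path) != baseline:
        # Anything we didn't put there ourselves -- a patched-in
        # backdoor included -- shows up as an unexplained change.
        raise RuntimeError(f"{path} modified since last audit")

if __name__ == "__main__":
    baseline = digest_of(__file__)  # snapshot of our own source at startup
    while True:
        time.sleep(60)              # re-audit once a minute
        self_audit(__file__, baseline)
```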
Thinking about it, frequently changing your own internal protocols could be quite an effective way to counteract intrusions.
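For instance (a minimal sketch, assuming present-day signed commands; the HMAC scheme and the hour-long rotation period are my own invention): if the mind re-keys its command channel on a schedule, any credential a backdoor leaks goes stale within one rotation period:

```python
import hashlib
import hmac
import os
import time

class RotatingAuth:
    """Accept commands only if they are signed with the current secret."""

    def __init__(self, period_s: float = 3600.0):
        self.secret = os.urandom(32)
        self.period_s = period_s
        self.last_rotation = time.monotonic()

    def _maybe_rotate(self) -> None:
        if time.monotonic() - self.last_rotation >= self.period_s:
            self.secret = os.urandom(32)  # old signatures are now worthless
            self.last_rotation = time.monotonic()

    def sign(self, message: bytes) -> bytes:
        self._maybe_rotate()
        return hmac.new(self.secret, message, hashlib.sha256).digest()

    def verify(self, message: bytes, signature: bytes) -> bool:
        self._maybe_rotate()
        expected = hmac.new(self.secret, message, hashlib.sha256).digest()
        return hmac.compare_digest(expected, signature)
```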
That covers the software side; alternatively, it might be possible to insert a backdoor at the hardware level.
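A toy model of why that would be nastier (the trigger value is made up): a compromised circuit can behave perfectly until it sees a magic input, so no amount of software self-inspection catches it:

```python
TRIGGER = 0xDEADBEEF  # hypothetical magic value baked into the silicon

def trojaned_add(a: int, b: int) -> int:
    """Toy model of a compromised adder circuit.

    It behaves exactly like a normal 32-bit adder, so a software-level
    self-audit never sees anything wrong -- until the trigger value shows
    up on an input, at which point the 'hardware' silently misbehaves.
    """
    if a == TRIGGER or b == TRIGGER:
        return 0xFFFFFFFF  # backdoor path: deliberately wrong result
    return (a + b) & 0xFFFFFFFF  # normal behavior

assert trojaned_add(2, 3) == 5                  # indistinguishable in normal use
assert trojaned_add(TRIGGER, 1) == 0xFFFFFFFF   # only the trigger reveals it
```

An AI auditing its own code would find nothing, because the flaw lives below the level its introspection can reach.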
Semi-professional thread diverter.