05-16-2018, 03:08 PM
(05-16-2018, 01:09 PM)Drashner1 Wrote: a) Magmatter can pack an enormous number of 'atoms' into a very tiny volume and by its nature 'views' any environment below the level of a neutron star as being near absolute zero and almost utterly inert. S3 or higher archai might therefore create magmatter based quantum computers or masses of entangled particles that could retain their 'functionality' up to and including conditions that would reduce any conventional matter to plasma.
An excellent point. Magmatter quantum computronium would be extremely compact, extremely fast (though extremely heavy per quantum gate), and (at least up to the temperatures where room temperature superconductor magnetic shielding gives out) trivially easy to shield. And it lets you make very small S3s and S4s (assuming the latter don't just collapse into black holes). On the other hand, per qubit of storage, magmatter quantum memory is going to be far heavier than normal matter quantum memory (for magmatter processing, the speed advantage probably balances out the weight disadvantage, but for memory it doesn't).
(05-16-2018, 01:09 PM)Drashner Wrote: b) S4 plasma processor based computronium can pack an S3 level mind into about a kilogram of computing substrate (not counting the magmatter support structure). As such, archailects could potentially create 'quantum modules' or sub-selves such as you describe up to that level. Maybe more if they combined this with swarms of nested void bubbles.
Really? Wow, I don't recall seeing the math on that on EG -- is there a thread about it? An S3 in a kilogram (not including magmatter -- that sounds like a heavy proviso) yet a godstar is only an S5? That seems like a weird ratio...
I'm having difficulty believing in hot-plasma-based quantum computronium -- heat (thermal phonons, collisions, or worse still thermal photons) really is the biggest enemy of quantum computing. Single atoms or ions held in position in magnetic or optical traps are a promising way to do quantum computing -- as in there are labs currently working on them -- but you really don't want them hot enough that their outer electrons keep randomly jumping between energy levels and leaking or absorbing thermal photons, or so loosely confined that they keep colliding. So it would have to be a cold plasma, not a hot one, with the positions of the atoms rigidly maintained or marshalled by optical or magnetic traps generated by the magmatter, and with ionisations and electron transitions happening only as the processing requires, never at random due to heat.

You'd need a lot of magmatter to provide that structure, and I suspect that with that much magmatter (quite apart from the issue of your computronium collapsing into a black hole) it would be more mass-efficient to just build compact magmatter computronium. I guess if you really are building magmatter computronium, but trying to space it out far enough that it doesn't turn into a black hole as soon as you assemble much of it, ordinary-matter atoms might be useful as spacers. Can you make an odd sort of crystal in which the normal-matter atoms or ions and the little magmatter circuit elements alternate and hold each other in place? If so, you might then also use the ordinary-matter atoms as relatively lightweight quantum memory (say, using nuclear magnetic moment spin flips) -- slow, but still less mass per qubit stored than magmatter, so good for a mass reduction.
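Just to put numbers on why temperature is the killer here, a quick Python back-of-envelope for the mean number of thermal photons sitting in the mode of an optical qubit transition at various temperatures (the 500 nm transition wavelength and the temperature values are just illustrative picks):

[code]
# Back-of-envelope: mean thermal photon occupation at the frequency of a
# (hypothetical) optical qubit transition, as a function of temperature.
# n_bar = 1 / (exp(h*nu / (k*T)) - 1)   (Bose-Einstein occupation)
import math

h = 6.626e-34      # Planck constant, J*s
k = 1.381e-23      # Boltzmann constant, J/K
c = 2.998e8        # speed of light, m/s

wavelength = 500e-9          # assume a ~500 nm optical transition
nu = c / wavelength          # transition frequency, Hz

for T in [4, 300, 1e4, 1e6]:   # cryostat, room temp, "cold" plasma, hot plasma
    x = h * nu / (k * T)
    n_bar = 1.0 / math.expm1(x) if x < 700 else math.exp(-x)
    print(f"T = {T:>9.0f} K  ->  mean thermal photons at transition ~ {n_bar:.3e}")
[/code]

At room temperature there are effectively zero thermal photons at an optical transition; at anything like hot-plasma temperatures the transition is bathed in dozens of them, and your qubits decohere before you can blink.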
I'm thinking that as well as the existing taxonomy of classical ultimate computronium, we need a corresponding one for quantum computronium. Fortunately diamond is quite promising for both, so the Diamond Belt works. But I think there are going to be two distinct sorts of ultimate chip, with the ultimate q-chip having different temperature, power, and shielding requirements -- different enough that intermixing them closely becomes a nontrivial engineering problem (at least for modosophonts) -- and I suspect the taxonomy of quantum computronium above that level is going to get even more interesting. I suspect an S3 fully-quantum moonbrain, if it isn't inside a void mote, is more likely to be out among the backgrounders, where its ultimate q-chips can be kept nice and cold.
Incidentally, one of the nice things about quantum computronium is that there are only two steps that ever produce any heat: quantum error correction produces heat when it fixes an error, and deleting your working memory once you're done would also produce heat. However, since most designs for quantum computronium are fully reversible, rather than deleting all your intermediate results, you can normally just copy out the classical answer you got, then run the entire algorithm backwards to reset your working memory to its original state, without emitting any heat. Admittedly this doubles the running time (though you get the result halfway through, so there's no slowdown from your point of view), but that way you emit no heat at all other than from whatever error correction you need to do. (Yes, I know that sounds like impossibly perfect efficiency -- perfection is what you need to be able to run quantum computing without error correction, and to the extent that you're still incurring, finding and fixing errors, you're still emitting heat. This is another way of looking at why making quantum computronium is hard: you have to get your computation to within a tiny fraction of thermodynamic perfection, since every quantum of heat energy emitted is an error, and if your error rate is too high your error correction won't be able to keep up.)
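For anyone who hasn't seen the compute/copy/uncompute trick before, here's a minimal classical sketch of it (toy reversible gates in Python, all names purely illustrative) -- the quantum version works the same way, just with unitaries instead of bit flips:

[code]
# Toy illustration of the compute / copy / uncompute pattern, using classical
# simulations of reversible gates.

def toffoli(a, b, t):
    """Reversible AND: flips target t iff both controls a and b are 1."""
    return a, b, t ^ (a & b)

def cnot(c, t):
    """Reversible copy onto a zeroed target: flips t iff control c is 1."""
    return c, t ^ c

def and_without_garbage(a, b):
    ancilla, output = 0, 0
    # 1. Compute: ancilla now holds a AND b.
    a, b, ancilla = toffoli(a, b, ancilla)
    # 2. Copy the classical answer out.
    ancilla, output = cnot(ancilla, output)
    # 3. Uncompute: the Toffoli is its own inverse, so this resets the ancilla to 0.
    a, b, ancilla = toffoli(a, b, ancilla)
    assert ancilla == 0   # no garbage left to erase, so no deletion heat here
    return output

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", and_without_garbage(a, b))
[/code]

The ancilla comes back to zero just by running the same gate a second time, so nothing ever has to be erased and (error correction aside) nothing has to be paid for in heat.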
Another neat concept I haven't seen in OA is reversible computronium. This can be classical, but it has the same property described above: it's so close to 100% efficient that, again, the only major cause of heat being emitted is deleting data. (In a normal classical computer, intermediate results are deleted as soon as you no longer need them; reversible computronium stores them instead, so that it can reverse the computation -- rather than emitting unneeded intermediate results as heat, you fill up memory with entropy.) So again you can run it, copy the answer, un-run it back to the original state, and since you never actually deleted anything you produce no heat (other than whatever corresponds to how far short of 100% efficiency you fall). Thus it needs practically no power, which is always helpful. In fact, for any really efficient classical processor -- so presumably for ultimate chips -- the main cause of heat is deleting data you no longer want: you're essentially emitting the entropy of that data as heat in order to delete it. I assume ultimate chips will be potentially reversible computronium, and it will be the programmer's choice whether to delete intermediate data now or reverse the computation later -- you can trade heat emission off against memory usage. (The Negentropists probably have a law about this: all civilian non-organic computronium must be fully reversible, and except in emergencies or with special dispensation must be used in reversible mode.)
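To put a number on that trade-off, here's the Landauer floor on the heat you pay if you do decide to delete your intermediate data rather than store it and uncompute later (the petabyte figure and the 300 K operating temperature are just assumptions for illustration):

[code]
# Minimum (Landauer) heat released by choosing "delete the intermediate data now"
# instead of "store it and reverse the computation later". Figures are illustrative.
import math

k = 1.381e-23                      # Boltzmann constant, J/K
T = 300.0                          # assume the chip runs near room temperature
bits_deleted = 8e15                # assume ~1 petabyte of intermediate results

heat_per_bit = k * T * math.log(2)             # J per bit erased, ideal case
total_heat = heat_per_bit * bits_deleted

print(f"Landauer cost per bit at {T} K: {heat_per_bit:.2e} J")
print(f"Erasing {bits_deleted:.0e} bits releases at least {total_heat:.2e} J")
# A fully reversible run stores those bits instead, paying in memory rather than heat.
[/code]

The ideal-case number is tiny, which is exactly the point: today's chips emit many orders of magnitude more than this per bit, and ultimate chips are the ones that get close to the floor, so for them deletion really is the dominant heat source.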
This also suggests a nifty technique, which I think we should call "entropy piping". If you're not running reversible computing, so you do have data to delete (that you don't have the space or inclination to store until you can reverse your computation), then rather than deleting it in situ, send that data along a bus to right next to the heatsink, and delete it there. That way you can control where in your 2000 km-wide moonbrain the heat gets emitted: at the surface, right next to the base of a radiator fin, not deep inside the center. So a moonbrain will have huge fractal trees of databusses, one rooted at the base of each radiator fin with twigs all through its volume, piping no-longer-needed data to the base of the fin for deletion so that the heat is emitted there -- which should be a lot more controllable than trying to achieve the same flow rate with heatpipes. Basically, keep your unwanted information digital so you can control moving it around, and turn it into entropy only once you've moved it to where you want to emit it. Hence the name "entropy piping". I don't recall seeing this mentioned in the literature, but it's the obvious consequence of the form of the efficiency limits of classical computing.
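A quick sketch of the bookkeeping, assuming ideal Landauer-limited deletion (the bus bandwidth and fin temperature are made-up example figures):

[code]
# How much heat "entropy piping" deposits at a radiator fin: each bit shipped
# to the fin base and deleted there releases at least k*T*ln(2) of heat at that
# spot. Bandwidth and temperature figures below are purely illustrative.
import math

k = 1.381e-23                       # Boltzmann constant, J/K

def fin_heat_watts(bus_bandwidth_bits_per_s, fin_temp_K):
    """Ideal (Landauer-limited) heat released at the fin base."""
    return bus_bandwidth_bits_per_s * k * fin_temp_K * math.log(2)

# Example: a fin fed by a 1e24 bit/s trunk of the data-bus tree, base at 500 K.
print(f"{fin_heat_watts(1e24, 500.0):.2f} W emitted at the fin base")
# Inverting it: to move 1 MW out of the core this way at 500 K you need roughly
# 1e6 / (k * 500 * ln 2) ~ 2e26 bits/s of deletion traffic arriving at the fin.
[/code]

The bandwidths needed to carry a serious heat load this way are enormous, but they're just data, and shuffling data around is exactly what the thing is built to do.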
One other little gem from those efficiency limits: for all almost-ultimate-efficiency computronium designs, the cooler the chip runs, the less power it needs and the less heat it generates per bit of data deleted (basically because, for a bit to be distinguishable from thermal noise, the hotter the chip is the more energy each bit has to carry, and thus the more it releases when deleted). You might have thought that running computronium hot was helpful, since it lets you have bigger temperature differentials and thus drive bigger heat flows through your heat-dissipation mechanism, but that isn't the case: the cooler you run, the less heat you need to dissipate in the first place (unless you're in space and your heat is going to a black-body thermal radiator -- those do work a lot better at higher temperatures). So on a planet, you should run somewhat-but-not-hugely hotter than your environment. And if your archai is short on energy (as opposed to on carbon), they shouldn't be using plasma processing, they should be using cold ultimate chips -- less energy used per unit of computation. Admittedly, if you're an S-brain, shortage of energy probably isn't your first concern; the rate of black-body heat dissipation from your surface is. But it does represent an advantage for matrioshka nodes, or cold J-brains: they're a lot more energy efficient.
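Here are the two scalings fighting each other, in numbers (temperatures are illustrative):

[code]
# The tension described above, quantified: the Landauer cost per erased bit
# scales linearly with T, while an ideal black-body radiator's output scales as T^4.
import math

k = 1.381e-23          # Boltzmann constant, J/K
sigma = 5.670e-8       # Stefan-Boltzmann constant, W/(m^2 K^4)

print(f"{'T (K)':>8}  {'J per bit erased':>18}  {'radiator W/m^2':>15}")
for T in [3, 30, 300, 3000]:
    erase_cost = k * T * math.log(2)
    radiated = sigma * T**4
    print(f"{T:>8}  {erase_cost:>18.2e}  {radiated:>15.2e}")
[/code]

Erasure cost only grows linearly with temperature while radiator output grows as the fourth power, which is why the hot-radiator advantage matters in space but the cold-chip advantage wins whenever you're paying for every joule.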