The Orion's Arm Universe Project Forums





clearing bits far away from computation
#1
Irreversible computing can be done as reversible computing plus bit clearing. Swapping bits is reversible. You don't have to clear bits at the same place where the computation consumes zeros. You could clear bits someplace where it's safe to generate heat, toss a batch of cleared bits over to the computation, swap those zeros for the garbage bits the computation wants to discard, then toss the garbage back to be cleared again.

Does that help with making computation compact and low temperature?
#2
To be honest, I'm not entirely clear on what you're describing here.

I'm somewhat familiar with the concept of reversible computing, but not with 'bit clearing' or why you would need to perform irreversible computing if you were doing reversible computing. Can you provide some references that describe this in more detail, and/or an example of what this might look like with actual hardware, please? I think that would help everyone get a better grasp of what is being described/proposed, so that we can consider it properly and offer a more cogent response.

Thanks in advance. :)

Todd
#3
(06-06-2015, 11:08 AM)Drashner1 Wrote: To be honest, I'm not entirely clear on what you're describing here.


http://en.wikipedia.org/wiki/Reversible_computing is spot-on. I'm talking about physical reversible computing. The laws of nature are reversible. Many processes can be reversed, like a ball bouncing off the floor, and other processes return to their original state on their own, like a planet spinning or an ion looping in a magnetic field. You try to minimize friction and uncertainty. Reversible physical computing is just using reversible physical processes to compute some function. Like atoms bouncing around in a gas, a completely reversible computer could in principle run forever with no additional energy, but your uncertainty about its state tends to compound over time.

Physically reversible computing can only perform logically reversible operations, also known as permutations, bijections, or one-to-one mappings. Logically reversible computations are easy to reason about today. They always have an equal number of input and output bits. For example, given the variables (x,y), you can reversibly do (x,y) -> (x+y, y). It's reversible because you could then do (x+y, y) -> (x+y-y, y) = (x,y), recovering (x,y). And (x) -> (-x) is reversible because applying it twice gives (x) back: (x) -> (-x) -> (--x) = (x). You could swap bits reversibly given only reversible operations for addition, subtraction, and negation: (x,y) -> (x+y, y) -> (x+y, x+y-y) = (x+y, x) -> (x+y-x, x) = (y,x).
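
To make those examples concrete, here's a minimal Python sketch (my own illustration, not any particular hardware model): each operation maps a pair of registers to a new pair and has an exact inverse, and the swap is built purely from the reversible add/subtract steps above.

```python
def add_first(a, b):      # (a, b) -> (a + b, b)
    return a + b, b

def sub_first(a, b):      # inverse of add_first: (a + b, b) -> (a, b)
    return a - b, b

def sub_second(a, b):     # (a, b) -> (a, a - b); applying it twice restores (a, b)
    return a, a - b

def swap(a, b):
    # (x, y) -> (x+y, y) -> (x+y, x) -> (y, x), using only reversible steps
    a, b = add_first(a, b)
    a, b = sub_second(a, b)
    a, b = sub_first(a, b)
    return a, b

assert swap(3, 5) == (5, 3)
assert sub_first(*add_first(3, 5)) == (3, 5)   # every step can be undone
```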

Most computations we do today are irreversible. For example, (x,y) -> (x+y) isn't reversible because it doesn't allow you to recover both x and y. You can turn any deterministic irreversible computation into a logically reversible one by saving all the inputs ... (x,y,z,0) -> (x,y,z,f(x,y,z)). If you view f(x,y,z) as 0 xor f(x,y,z), then that operation is its own reverse, (x,y,z,0) -> (x,y,z,0 xor f(x,y,z)) -> (x,y,z,0 xor f(x,y,z) xor f(x,y,z)) = (x,y,z,0). Often reversible computations would require temporary working memory, which would start 0 and end 0 but be something else in the middle.
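
Here's a small sketch of that construction (mine, just to illustrate the paragraph above): XORing f(x,y,z) into a target register is its own inverse, so any deterministic function f can be wrapped in a logically reversible operation.

```python
def apply_f_reversibly(x, y, z, t, f):
    # (x, y, z, t) -> (x, y, z, t xor f(x, y, z)); self-inverse
    return x, y, z, t ^ f(x, y, z)

def f(x, y, z):
    # some arbitrary deterministic (irreversible) function
    return (x + y) * z & 0xFF

state = apply_f_reversibly(3, 5, 7, 0, f)   # target register starts at 0
undone = apply_f_reversibly(*state, f)      # applying it again undoes it
assert state == (3, 5, 7, f(3, 5, 7))
assert undone == (3, 5, 7, 0)
```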

Quantum computing is a strange thing I haven't understood well, but it's related to reversible computing in that all quantum computations are required to be reversible. There's uncertainty about the initial state, the computation goes unobserved while the wave function explores all possible paths, and interference among those paths makes some outputs more likely than others. Reversible computing doesn't have to be quantum computing.

Being a child of my age, I'm thinking of a physically reversible computer as a non-quantum digital computer: it knows how to do a set of logically reversible operations, its input is the program plus the input data, and those inputs are completely known beforehand. At the end of the computation you add the output to some zeros, then reverse everything except that final add, leaving you with the input plus the output instead of the input plus zeros. Then you copy the output somewhere useful, replace it in the computer with zeros, and swap the input program and data for the next program and data to work on.
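
Here's a toy Python sketch (mine) of that compute / copy-out / uncompute pattern: scratch memory picks up garbage on the way forward, the answer is XOR-copied onto zeros, and running the forward steps in reverse returns the scratch to zero, so nothing inside the machine needs erasing.

```python
def compute(x, scratch, out):
    # forward pass: scratch picks up an intermediate while building f(x) = x*x + 1
    scratch ^= x * x           # scratch was 0, now holds garbage
    out ^= scratch + 1         # XOR-copy the final answer onto zeros
    return x, scratch, out

def uncompute(x, scratch, out):
    # reverse of the forward pass, skipping the final copy onto `out`
    scratch ^= x * x           # undoes the intermediate; scratch back to 0
    return x, scratch, out

state = compute(7, 0, 0)       # -> (7, 49, 50)
state = uncompute(*state)      # -> (7, 0, 50): input, clean scratch, the answer
print(state)
```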

The only thing that has to cost energy is taking bits you no longer want and setting them to zeros regardless of their original values (http://en.wikipedia.org/wiki/Landauer's_principle). That's bit erasure. Everything else can be reversible.
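
For scale, here's a quick back-of-envelope sketch (standard Landauer-bound arithmetic, nothing specific to this thread): erasing a bit costs at least k_B * T * ln(2), so the colder the place where erasure happens, the cheaper it is.

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

def min_erase_cost_joules(bits, temperature_kelvin):
    return bits * k_B * temperature_kelvin * math.log(2)

print(min_erase_cost_joules(1, 300))   # ~2.9e-21 J per bit at room temperature
print(min_erase_cost_joules(1, 3))     # ~2.9e-23 J per bit near the cosmic background
```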

What I was posting about was that, with physically reversible computing, setting garbage bits to zero doesn't have to be done near the computation. You can keep the computation cold by doing bit erasure elsewhere.
#4
Ok, so to make sure I'm understanding you correctly, what you're asking/proposing is the application of reversible computing to computing in OA.

Reversible computing is based on the idea that it is possible to design/create computer systems that perform their operations (or as many operations as can be performed) in a fashion that allows you to get an output and then essentially run the computational process backwards. The end result of this is to minimize the amount of waste heat being produced by the computational process. In a 'perfect' system, there would be no irreversible computational processes taking place and no waste heat produced. Although a truly 'perfect' system is not achievable, it is possible to get arbitrarily close to this.

There has been some work in the real world on developing reversible computing hardware and the programs to run them, but the technology/technique is still in its relative infancy.

Based on some stuff I've read on this (not a lot), a reversible computing device could use minimal energy for computation and produce minimal waste heat, but would be slower than an irreversible system, since it would have to reverse all its computations and therefore take twice as long to get X amount of computation done. Although at the speed of modern or future computers, this is unlikely to matter much from an end user perspective in many/most cases.

Does this correctly sum up the concept so far?

Continuing on, and where I'm having a harder time following you...

You seem to be suggesting that it would be possible to have a reversible computer sitting in a Room A, and that it could do its thing and produce an output.

Before moving on to the next set of computational operations, the computer must do two things. First, it must copy the output to storage somewhere. Second, it must erase the results of the computation (setting everything to zeros) so that it can get the new set of inputs for the next computation. This is an irreversible process and would normally produce heat.

However, in your scenario the computer would instead transmit the bits making up the results to another Room B (possibly located at quite some distance from Room A), where they would be erased, generating heat in Room B rather than in Room A, and thereby helping to keep the computing hardware in Room A even cooler than it would otherwise be as a result of performing reversible computations.

You are also suggesting that the hardware in Room A could be more compact than it might otherwise be, because it would not need either space or hardware for passive or active cooling (even the much smaller amount that the use of reversible computing would allow), but would only need a means of transmitting the bits making up the result to Room B.

Am I describing this correctly, or am I missing something?

Assuming I am (and I realize I may be missing something vital or totally off base here), the question that comes to mind is how you could literally transport the bits you want to erase from the computer in Room A to the computer in Room B without erasing them from the system in Room A.

My (possibly incorrect) understanding is that in computing, when you 'move' bits you are actually copying and transmitting them to a new location while erasing the originals/setting them to zero (thereby generating heat). You can choose not to erase the bits, in which case you have an original and a copy. But you aren't physically moving anything, really, so it's not like you can suck all the bits out of the computer and have nothing left without actually having erased something.

So, it would seem you would end up with a copy of the bits you wanted to get rid of in Room B, while the original is still in Room A, still needing to be erased. Which would generate heat.

I can certainly see the utility of reversible computation in many situations as a way of producing very efficient computers. Where I'm having a hard time is seeing how you're going to get the bits in the computer to a remote location for erasure/setting to zero without leaving them all in the original computer, still needing to be erased (and generating waste heat in the process).

So, what am I missing and how are you proposing that this would be handled?

Thanks!

Todd :)
#5
Yes, you've got it exactly right so far.

Put a really big reversible computer in the place you'd expect a star, and a bunch of bit erasers spread out in a peripheral sphere 1 AU away in all directions. Pack zeros into conveniently sized rocks and toss them from the periphery to the computer. When they reach the computer, unpack them into grains of sand and position those very close to the computation. Reversibly swap the zeros for results to consume/erase. Collect the grains of sand back into a rock, and throw it back out. Throwing rocks back and forth is reversible. The periphery can use the results for something before zeroing out the bits and repeating the process. The latency of throwing rocks back and forth isn't terribly important to the computer; zeros are all the same, and you can make up for greater latency with more rocks. The periphery would care about the latency of rock throwing, because that's how long it takes to get usable results.
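
To make the "swap, don't copy" step concrete, here's a toy sketch (my illustration): a reversible swap exchanges the computer's garbage for the courier's zeros without ever copying-and-then-erasing anything, which is why no erasure has to happen at the computer itself.

```python
def reversible_swap(a, b):
    # each XOR is its own inverse; three of them exchange the two registers
    a ^= b
    b ^= a
    a ^= b
    return a, b

garbage_at_computer = 0b10110111   # results the computer wants to discard
zeros_from_courier  = 0b00000000   # freshly cleared bits shipped in from the periphery

garbage_at_computer, zeros_from_courier = reversible_swap(
    garbage_at_computer, zeros_from_courier)

print(bin(garbage_at_computer))    # 0b0: the computer now holds clean zeros
print(bin(zeros_from_courier))     # 0b10110111: the garbage rides back out to be erased
```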

Keep the central computer cold, and as compact as you can have matter without gravity being a bother. Maybe if it's cold enough we can use hydrogen and helium to construct the computer, which would be fortunate, because those are the only raw materials that tend to be on hand. It could be computing arbitrarily fast; cold just means low entropy. The periphery can be a small fraction of the weight of the central computer. The temperature and surface area of the periphery determine how quickly bits can be erased (allowing new results to be reported). I suspect clearing bits would be the bottleneck rather than computing resources.
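
A rough sketch (my own blackbody-plus-Landauer estimate, with made-up numbers) of how the periphery's temperature and area bound the erasure rate: the radiator can dump about sigma * A * T^4 watts to cold space, and each erased bit must dissipate at least k_B * T * ln(2).

```python
import math

sigma = 5.670374419e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
k_B   = 1.380649e-23     # Boltzmann constant, J/K

def max_bit_erasures_per_second(area_m2, temperature_k):
    radiated_watts = sigma * area_m2 * temperature_k ** 4
    joules_per_bit = k_B * temperature_k * math.log(2)
    return radiated_watts / joules_per_bit

# e.g. one square kilometre of the periphery held at 30 K (illustrative numbers only):
print(f"{max_bit_erasures_per_second(1e6, 30):.2e} bits/s")
```
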
#6
I can't say I'm following this as well as Todd, so to come at this from a different direction: what would be the advantage of this? The setup you mention, of having computers spread over an AU with material travelling around swapping bits, sounds either extremely slow or extremely high energy (in terms of transport costs). What's the USP?
#7
(06-07-2015, 02:14 PM)Bob Jenkins Wrote: Yes, you've got it exactly right so far.

Ok, since I'm still wrapping my brain around some aspects of the second part of this, I'm going to take this back to the two-room scenario I described above. So (for the sake of example) are you suggesting that the results of the computation in Room A would be stored in a portable device, such as a thumb drive? And that someone would walk into the room with another thumb drive containing zeros and swap them out? The computer would have reversed its computations after storing the results in the thumb drive, so the only thing it contains is the initial inputs?

So the person walks in, swaps out the drives and walks into Room B, where the results on the drive are copied out to elsewhere and then the drive is erased, generating heat in Room B in the process.

The computer in Room A is still going to need to wipe the initial inputs, which will generate some heat, but this should be less than that produced by erasing both the input and the output.

That all said, as Rynn points out this process (whether taking place over interplanetary distances or between rooms) seems like it would be vastly slower than using a straightforward irreversible process, or just using a reversible process for all the computation (thereby minimizing heat production in the place it seems most likely to occur in the largest amounts) before transmitting the results out of the room and erasing them (generating a small amount of heat, but presumably less than would be produced by the full irreversible form of the computational process).

There is also the issue that the process of physically moving the input/output devices in and out of position is going to produce some amount of waste heat somewhere. In the case of the two rooms, the opening and closing of the door will let heat into Room A and the person carrying the thumb drives will generate heat (and consume energy to keep their body going, which also generates more heat, if we're looking at the big picture). Active cooling or other methods can be used to compensate for this to keep Room A/the Room A computer cool, but that's going to consume energy and is really just putting the heat somewhere else if we look at things globally.

Considering the 'star computer' (I realize it wouldn't be the size of a star necessarily) - the amount of matter that you could use before heating via gravitational compression became an issue would be comparatively small. This also means that the bodies traveling to and from the central unit would experience much less gravitational acceleration, making their orbits that much slower and slowing down the overall process that much more. There would also be the need to maneuver the input/output units around (expending energy, producing heat) and to break them up and reconstitute them during the I/O process (also likely to produce heat). A significant amount of this might be recovered or avoided by using reversible processes, but the larger these systems are, the harder it may become to manage them to a high level of efficiency. And the time frame would seem likely to become ever longer.

Likely there would be a point of diminishing returns where going for maximal possible reversibility would be offset by the sheer amount of hoop-jumping that has to be gone through to achieve it as well as the long wait times needed to get the answer out of the computer due to the low speed of the I/O process. Or where the collective total of all the small amounts of waste heat produced by less than perfect efficiency at each of the many many steps needed to make everything reversible eventually outweighs the advantages gained by such a system.

At least that's how it seems it would be to me at this point, and assuming I'm understanding what you're describing correctly. Overall, it seems that reversible computing would certainly have some utility in at least some (possibly many) applications, but there would also be limits on the process that would make using it for all types of computing impractical, or at least not worth the bother.

Of course, if I'm missing something, please let me know. :)

Todd
#8
The entropy radiator could be a flat ring, extending outwards from the 1 AU line; basically a heat radiator. This shape would mostly prevent the radiator from radiating heat back onto the processor core (though not completely, of course). Sure, you could keep the core very cold, but the cooling process itself would emit heat, so the overall temperature of the core plus radiators (when considered as a whole) would increase.
#9
If the implementation of the reversible operations generates more heat than the bit clearing, yah, it reduces to normal irreversible computing, which has been explored before. So I'm assuming reversible operations generate negligible heat. The bit clearing takes place at a distance and doesn't have to be compact, so it can be done at a lower temperature than if it were compact. The energy cost of clearing bits is linear with the temperature at which it is done. It's easy to radiate heat only outward; you use a reflector, like a piece of foil.
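
To put numbers on that linear-in-temperature cost (my own sketch; the target erasure rate is made up): a colder radiator pays less energy per cleared bit but needs more radiating area, which is exactly why the bit clearing wants to be spread out far from the compact core.

```python
import math

sigma = 5.670374419e-8   # W / (m^2 K^4)
k_B   = 1.380649e-23     # J/K

def erasure_budget(bits_per_second, temperature_k):
    power_watts = bits_per_second * k_B * temperature_k * math.log(2)
    area_m2 = power_watts / (sigma * temperature_k ** 4)
    return power_watts, area_m2

for T in (300, 30, 3):   # candidate radiator temperatures
    p, a = erasure_budget(1e30, T)
    print(f"T = {T:>3} K: {p:.2e} W dissipated over {a:.2e} m^2")
```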

The computing would all be at the center. It has temporary results (which it reverses), and long term storage. Most computing uses inputs of a little new data mixed with a lot of old results, or purely old results. Latency for fetching old results is small because the center is compact and the results are already all there in long term storage. Communication between computing elements is also fast because the center is compact. Every day it writes a little more long term data and throws away an equal amount of old data. The data being thrown away is continuously streamed out to the heat radiators by tossing rocks. Cleared bits, to write new data on, continuously stream in. Input from the outside universe can trickle in too. The tossing is even reversible. There's nothing at the center that in principle needs to consume energy or generate heat.
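
For a sense of how much discard traffic the rock stream could carry, here's a back-of-envelope sketch; the rock size, launch rate, and one-bit-per-atom storage density are all my own made-up assumptions.

```python
AVOGADRO = 6.02214076e23

def bits_per_rock(rock_mass_kg, grams_per_mole=28.0):
    # assume roughly silicon-like material and one stored bit per atom
    return rock_mass_kg * 1000 / grams_per_mole * AVOGADRO

rock_mass_kg = 1000.0     # a one-tonne rock (illustrative)
rocks_per_second = 10.0   # illustrative launch rate from the core

bandwidth = bits_per_rock(rock_mass_kg) * rocks_per_second
print(f"{bandwidth:.2e} bits/s of garbage shipped out")   # ~2e29 bits/s
```
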
#10
If gravity is the main problem with packing matter into the center, a stationary sphere isn't the best approach. For example, a ring rotating about its center of mass could hold more total mass while feeling the same maximum gravitational acceleration, in exchange for longer communication paths between opposite points on the ring.
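
For context on why the sphere runs into trouble, here's a quick sketch (mine) of how surface gravity scales at fixed density: g = 4*pi*G*rho*R/3, so it grows linearly with radius. The density used below is just a placeholder.

```python
import math

G = 6.674e-11     # gravitational constant, m^3 / (kg s^2)
rho = 1000.0      # placeholder average density, kg/m^3 (roughly ice-like)

def surface_gravity(radius_m, density=rho):
    return 4 * math.pi * G * density * radius_m / 3

for radius_km in (1e3, 1e4, 1e5):
    g = surface_gravity(radius_km * 1000)
    print(f"R = {radius_km:>8.0f} km -> g = {g:.2f} m/s^2")
```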

