Erasmus00 Posted December 21, 2007 I was going to post this on Craig's thread of a similar title, but realized that I could drag the thread off of its purpose. The purpose of my thread is to demonstrate that information entropy and thermodynamic entropy aren't just analogous but are actually the same thing (i.e. information carries thermodynamic entropy). This line of reasoning owes everything to Charlie Bennett. Note: I always work in units where temperature is measured in energy units, which makes Boltzmann's constant 1. This means my entropy is dimensionless. I apologize if it's confusing, but it's what I'm used to. Let's examine the following simple device (courtesy of Leo Szilard): A molecule is trapped in a chamber with a shutter in the middle, and movable pistons at each end. Now, imagine that we quickly close the shutter (the particle is now either on the left or the right side). We now have a machine that measures which side the particle is on (left or right), and pushes the piston on the empty side all the way in. Pushing this piston closed takes no work, as there is no particle there to offer resistance. Next, we open the shutter, allowing the particle to push the piston back out isothermally. This lets us extract some useful work, as follows (p is pressure, V is volume): [math] dW = p dV [/math][math] W = \int_{V_i}^{V_f} \frac{NT}{V}dV[/math][math] W = NT \ln \frac{V_f}{V_i}[/math] In the second line, we used the ideal gas law PV = NT to express the pressure in terms of the volume so we could do the integral. The expansion is isothermal, so temperature is constant. In our case the volume doubles and N = 1, so [imath] W = T\ln 2 [/imath]. Now, we can close the shutter and start all over again. However, since we have extracted useful work from our engine, its entropy must have changed. Using the thermodynamic formula [imath] \Delta S = \frac{\Delta Q}{T}[/imath] we see that our entropy has actually dropped by [imath]\ln 2[/imath], but nothing else has changed!
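The work integral above is easy to check numerically. A minimal sketch in SI units (rather than Will's k = 1 convention); the 300 K room temperature is an illustrative assumption, not something fixed by the post:

```python
import math

def isothermal_work(n_particles, temperature, v_initial, v_final,
                    k_B=1.380649e-23):
    """Work extracted by isothermal expansion of an ideal gas:
    W = N * k_B * T * ln(V_f / V_i)."""
    return n_particles * k_B * temperature * math.log(v_final / v_initial)

# Single molecule at room temperature; opening the shutter doubles its volume:
T = 300.0  # kelvin (illustrative)
W = isothermal_work(1, T, v_initial=1.0, v_final=2.0)
# W = k_B * T * ln 2, a few zeptojoules per cycle
```

In Will's units (k = 1) the same result is just W = T ln 2, matching the derivation above.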
How can we save the second law, which tells us total entropy must always increase? The answer lies in the following: we measured whether the particle was on the left or right side, and we had to store this information somewhere. At minimum, this information requires 1 bit of storage, which Shannon tells us carries information entropy [imath] \ln 2[/imath]. So the second law is saved because information entropy equals thermodynamic entropy! In any physical situation, we have to store bits in a storage medium, which is finite. So we can't extract useful work forever; eventually we have to wipe the hard drive, and this takes an input of energy! At bare minimum, it will take the energy we just harvested from our engine. edit: minor edits, for (hopefully) clarity -Will
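The bookkeeping can be made concrete with a short sketch: the Shannon entropy of the one stored bit, in nats, comes out to exactly the ln 2 by which the engine's thermodynamic entropy dropped:

```python
import math

def shannon_entropy_nats(probs):
    """Shannon entropy in nats: H = -sum p ln p."""
    return -sum(p * math.log(p) for p in probs if p > 0)

# One bit recording "left" or "right" with equal probability:
H = shannon_entropy_nats([0.5, 0.5])
# H = ln 2: in Will's k = 1 units, exactly the entropy the gas lost,
# and T * H is the minimum energy needed to erase the bit later.
```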
modest Posted December 21, 2007 Using the thermodynamic formula [imath]\Delta S = \frac{\Delta Q}{T}[/imath] we see that our entropy has actually dropped by [imath]\ln 2[/imath], but nothing else has changed! How can we save the second law, which tells us total entropy always must increase? I think you must consider the environment outside the cylinder as the source of the particle's energy. This setup would not work in space, as the particle needs to feed off the temperature of the room it's in. If you consider the entropy of the environment and allow for a bit of friction, maybe total entropy has increased w/o invoking an increase in informational entropy. - modest PS how do I quote latex when it's stripping the \'s...?
Erasmus00 Posted December 21, 2007 Author I think you must consider the environment outside the cylinder as the source of the particle's energy. This setup would not work in space, as the particle needs to feed off the temperature of the room it's in. If you consider the entropy of the environment and allow for a bit of friction, maybe total entropy has increased w/o invoking an increase in informational entropy. The environment is, indeed, what we need to consider (nothing changes in the engine). What we are doing with this engine is taking heat from the environment and creating useful work with it WITHOUT a cold reservoir! If you could run this device forever, you could run any system down to absolute zero, having converted all that heat into work. This would be a huge violation of the second law! It's by considering the information that we get around this. Also, the history of Szilard's engine is quite interesting (Szilard himself believed that the act of measuring was what caused the change in entropy, which Bennett showed can't always be true). I recommend Bennett's 1987 article "Demons, Engines, and the Second Law." Maxwell's demon type problems have a very interesting history. edit: I think I misunderstood the question - fixed my answer. -Will
snoopy Posted December 21, 2007 I think you must consider the environment outside the cylinder as the source of the particle's energy. This setup would not work in space, as the particle needs to feed off the temperature of the room it's in. If you consider the entropy of the environment and allow for a bit of friction, maybe total entropy has increased w/o invoking an increase in informational entropy. [math] \Delta S = \frac{\Delta Q}{T} [/math] - modest PS how do I quote latex when it's stripping the \'s...? enclose in latex /latex in [] sorry, that didn't work - don't know either
modest Posted December 21, 2007 Report Posted December 21, 2007 Ok,After reading your reply I reread your original post and see now the point the system demonstrates. It would seem to be a violation of the second law taken as a whole if it worked except for (as you say) the act of detecting the particle, acting on that info via the trap door, and resetting that system or info. I don't see why this couldn't be considered a bit of information with an associated entropy attached. I think it could also be considered in purely mechanical or thermodynamic terms. You could say that the energy lost in the detection/reset sequence must be larger than the energy gained by one cycle of work. I think this would have to be true. You could also maybe consider the detection sequence as the missing cold reservoir - actually, if the assumption that this is where the entropy is going is true, it would maybe have to be the cold reservoir. In any case, this sounds like a fascinating way to explore the limits of entropy. It would be interesting to explore something other than a trap door like a one-way filter (if such a thing exists), or using the partial pressure of 2 gases. If you have a link to the paper you are talking about online could you please post it - I couldn't find it. - modest Quote
Erasmus00 Posted December 21, 2007 Author In any case, this sounds like a fascinating way to explore the limits of entropy. It would be interesting to explore something other than a trap door, like a one-way filter (if such a thing exists), or using the partial pressures of 2 gases. If you have a link to the paper you are talking about online, could you please post it - I couldn't find it. For information on all sorts of attempted violations, start here: Maxwell's Demon. In the references section, you'll find the complete reference for the Bennett paper as well. I don't see why this couldn't be considered a bit of information with an associated entropy attached. I think it could also be considered in purely mechanical or thermodynamic terms. You could say that the energy lost in the detection/reset sequence must be larger than the energy gained by one cycle of work. I think this would have to be true. You could also maybe consider the detection sequence as the missing cold reservoir - actually, if the assumption that this is where the entropy is going is true, it would maybe have to be the cold reservoir. In a sense this is the point - the information itself must be physical. It has to carry entropy (in your words, the information is, more or less, the cold reservoir of the engine). In order to act on this information, it has to be stored, at least temporarily, somewhere. And if that information has not been erased, the engine never really got back to where it started (it's in a different, higher-entropy state). If we want to erase that memory and start again, we'll have to input more energy than we have harvested. Bennett showed that the detection sequence itself could be made reversible (and hence cannot be what changes the entropy), and there are all sorts of schemes that allow you to bring the energy cost of measurement down to an arbitrarily small value, so the trick isn't in the measurement (though Szilard thought it was). -Will
CraigD Posted December 22, 2007 :) I was hoping the recent spate of threads on information and thermodynamic entropy would lead to Maxwell’s demon, Szilard’s engine, Landauer and Bennett. Will’s explanation is right on, I think: informational entropy isn’t analogous to thermodynamic entropy – it’s precisely equivalent to it. Also, information can’t be considered “metaphysical” or “pure abstraction”, but must be represented in some physical medium. Arguably all sound science supports this view. Rather than choosing units with a Boltzmann constant ([math]k[/math]) of 1, as Will does, to allow more intuitive examples, I find it helpful to use SI units. For example, erasing this post at room temperature requires a minimum of about [math]3.1 \,\mbox{bits/English character} \cdot 7200 \,\mbox{characters} \cdot 300 \,\mbox{K} \cdot k \ln 2 \approx 10^{-17} \,\mbox{Joules}[/math]. :) This tiny quantity is entirely washed out by the many Joules/second (watts) drawn by any commonplace computing hardware, but it is there as a theoretical limit to computer power efficiency. There’s a lot of history around Maxwell’s demon, early resolutions of the paradox it presented, Szilard’s resolution of it, and later refinements and modernizations leading up to the defining work of Bennett in the 1980s. The salient feature of this history, IMHO, is that it’s a series of compelling and widely accepted, but disagreeing, explanations of the paradoxical nature of information and physical reality. To clearly understand and communicate this history, a clear understanding of the basic design, similarities, and differences of the two engines, Maxwell’s and Szilard’s, is crucial:
- Both involve a box with a shutter that can be opened and closed. A key assumption is that the shutter can be made to require arbitrarily little work to move as quickly as necessary (well balanced, friction-free, etc.), so that it doesn’t require more energy than the engine produces.
- In both cases, at least one particle (usually described as a gas molecule) is inside the box.
- In Maxwell’s version, a daemon “sees” each particle, calculates its velocity (speed and direction), and opens then closes the shutter as needed. In his description, many particles are in the box, with a typical (Boltzmann) assortment of speeds – though the engine would still work with a single particle. The daemon “sorts” the hotter/faster particles from the slower/cooler ones, creating a temperature difference that is used to power the engine. The key feature of Maxwell’s engine is that the daemon detects a particle before moving the shutter.
- In Szilard’s version, the shutter is opened and closed without attempting to see the particle(s). In his description, a single particle is in the box – though the engine would still work with many particles. The “daemon” – now intuitively imaginable as a simple mechanism – determines which half of the divided box the particle is in, and “hooks up” the engine’s “transmission” as needed. The key feature of Szilard’s engine is that the daemon detects the particle after moving the shutter.
Prior to Szilard’s 1929 explanation, the best-accepted explanations of Maxwell’s 1867 paradox involved considering what abilities were in principle possible for a “daemon”. The best-accepted conclusion was that, in a system at thermal equilibrium with no unpermitted influx of energy, such as illuminating light, there was no way any possible daemon could see a particle, because the emitted glow of the particle would be indistinguishable from the glow of the walls of the box. Looking into a very hot oven (kiln) illustrates this effect – until the walls and contents have cooled slightly (and at slightly different rates), the contents (eg: baked stoneware) are “lost” in the overall glow. An emerging formal understanding of atoms and radiation allowed this explanation to be made well and formally.
In 1912, Marian Smoluchowski attempted to come up with a design that didn’t need light-based “seeing”, resulting in a “valve” consisting of a spring-loaded swinging shutter. He then explained why it couldn’t work – a mechanically efficient spring would eventually allow the door to swing at random, doing nothing useful, while any damped spring would (like the shock absorber on a car) produce heat – never less than the work it extracted for the engine. Szilard’s explanation also didn’t require any light or “seeing” at all – its after-the-action measurement could be done from outside the box with a sensitive scale. Unlike Smoluchowski’s attempt, it works, in principle. All the previous radiation-theory explanations were thrown out. Szilard, largely intuitive and practical engineer that he was, fumbled the explanation, not realizing the significance of “resetting the bit” at the heart of his “transmission shifting” mechanism. (In his drawings, this bit is usually a little sign with “L” or “R” on it.) He appears to have assumed that, like the other parts of the mechanism, this bit could be made to require arbitrarily little energy, and thus ignored it. Not until 1961 would Rolf Landauer write the formula we’ve been using above ([math]E_{\mbox{to erase a bit}} = k T \ln 2[/math]). He didn’t much stress it or its connection to the Maxwell and Szilard engines, so it wasn’t ’til Bennett’s 1982 paper that science-literate folk began using his explanation in place of the old “can’t see it” ones. I first encountered it in a magazine article – a 1986-1988 Scientific American, I recall, sitting in a restaurant on my way to the subway on the way home from work, if memory serves me correctly – one of those wonderful reading experiences common among those of us who read science literature. :) An important point, I think, is that Szilard’s engine requires no unexplained “daemon” mechanism. A purely mechanical, “unintelligent” design is possible.
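As a sanity check on the erase-this-post estimate quoted earlier in the thread, here is a quick sketch in SI units combining that estimate with Landauer’s per-bit formula; the 3.1 bits/character and 7200-character figures are the post’s own assumptions:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K
bits_per_char = 3.1  # rough entropy of English text (assumption from the post)
n_chars = 7200       # length of the post (assumption from the post)

# Landauer bound: erasing one bit dissipates at least k_B * T * ln 2.
E = bits_per_char * n_chars * k_B * T * math.log(2)
# Comes out on the order of 10^-17 joules, as estimated in the thread.
```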
The few realistic mechanical drawings I’ve seen feature a lot of balance beams, pivots, and latches, and are based on the idea that you can determine which half of the shutter-divided box the molecule is in by weighing it. Though to the best of my knowledge no such engine has actually been built, and if one were it would likely be many orders of magnitude less thermodynamically efficient than the ideal given in Landauer’s formula, it is in principle physically possible – in short, filling some sort of memory with random bits can do physical work in a system at thermal equilibrium. Most discussions like this thread’s, including Maxwell’s, Szilard’s, and Bennett’s papers, have nice illustrations. Though I’ve searched at some length over the last couple of years for good weblinks to such illustrations, I’ve yet to find one, or draw my own. Here are two good ones, with discussions and histories similar to the above and earlier in this thread: http://www.aueb.gr/pympe/hercma/proceedings2005/H05-FULL-PAPERS-1/MOUE-MASAVETAS-KARAYANNI-1.pdf; Laplace's Demon and Maxwell's Demon, the Demons of Classical Physics - Numericana.
jartsa Posted September 30, 2010 There is a box of clocks; the clocks are running and synchronized. In the box, time runs at a certain rate. Now we shake the box a bit. Now there is a little bit of disorder in the readings of the clocks, because of time dilation. The disorder, before it entered the clocks, obviously was in time. Therefore we deduce that shaking a box causes the temperature of time to increase.