Science Forums

Everything posted by bigsam1965

  1. The posts that I made on dilation of the Stefan-Boltzmann constant and on the Sandage test for perfect Tolman surface brightness keep me in the game. Without the inferred difference between the test and Tolman surface brightness being attributed to lookback luminosity evolution, the test appears to support conservation of photon energy. The test should be repeated for a standard candle such as SNe Ia to determine whether photon energy is conserved or not conserved. I need to study the galaxy lookback luminosity evolution model used in the research. General relativity is based on what we see, and what we see is a source image that is dilated, and we see that image in dilated time. Relative to its dilated size, the image is moving more slowly than its original motion. I can demonstrate, based on the dilated image of the source and time dilation, that the Stefan-Boltzmann constant is dilated by [math](1+z)[/math], which cancels one of the negative powers of [math](1+z)^{-4}[/math] in the [math]T^4[/math] term of the Stefan-Boltzmann equation. Through the dilated image, time dilation, and the Schrödinger equation, this can also be demonstrated for sources that are not blackbodies. I can describe these two concepts in words; to post the full derivation would be tedious, but a sketch of the bookkeeping follows. I could possibly attach the derivation. I still have not received an explanation of where the radiant energy loss goes if the local static-space equation [math]E=h\nu[/math] actually means an energy loss in expanding space.
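     In shorthand (this is only the bookkeeping, not the full derivation): a blackbody's observed temperature scales as [math]T=(1+z)^{-1}T_0[/math], so the flux term scales as [math]\sigma T^4=(1+z)^{-4}\sigma T_0^4[/math]. If the constant itself is dilated, [math]\sigma\rightarrow(1+z)\sigma[/math], the observed flux scales as [math](1+z)\sigma\,(1+z)^{-4}T_0^4=(1+z)^{-3}\sigma T_0^4[/math], leaving a net factor of [math](1+z)^{-3}[/math] rather than [math](1+z)^{-4}[/math].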
  2. I should have remained silent on the issue. I have not looked at those equations in years; before I spoke, I should have done my homework. My careless statement has led the conversation in a direction that I had not intended.
  3. I have no problem with the photon temperature dropping like one over the scale factor. I use it in my model.
  4. A test of Tolman surface brightness (Lubin & Sandage 2001) has been conducted. Perfect Tolman surface brightness uses [math](1+z)^4[/math], which includes non-conservation of photon energy. Galaxies in three clusters were tested. The test concluded that the exponent was 2.59 for the R band and 3.37 for the I band with [math]q_0=1/2[/math]. The sensitivity to [math]q_0[/math] was shown to be less than 23% between [math]q_0=0[/math] and [math]q_0=1[/math]. Without lookback luminosity evolution, this result supports surface brightness using [math](1+z)^3[/math] and conservation of photon energy. The competing scalings are summarized below. The authors of the paper use a theoretical lookback luminosity evolution model to explain away the difference between the test and perfect Tolman surface brightness. The authors may be right; there are many lookback luminosity evolution models for galaxies.
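     Putting the three cases side by side (with [math]\langle SB\rangle_0[/math] the intrinsic surface brightness and [math]n[/math] the fitted exponent): perfect Tolman dimming predicts [math]\langle SB\rangle=(1+z)^{-4}\langle SB\rangle_0[/math]; with photon energy conserved, [math]\langle SB\rangle=(1+z)^{-3}\langle SB\rangle_0[/math]; the fits to the data give [math]\langle SB\rangle=(1+z)^{-n}\langle SB\rangle_0[/math] with [math]n=2.59[/math] (R band) and [math]n=3.37[/math] (I band).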
  5. Will, I missed your post earlier. I have been through tensor analysis, differential geometry, general relativity, the Schwarzschild solution, and the derivation of the Friedmann-Lemaître metric. I must admit that I have not studied the details of the theory in the last few years. I am using the matter-dominated part of the Friedmann-Lemaître metric. Most of my work over the last four years has been concentrated on a solution of the FL metric. I do not use the Robertson-Walker shell; space-time is modeled as a perfect fluid. modest was referring to light as an adiabatic process. I was familiar with the equation about the ratio of specific heats, and it has always struck me as curious that one of the specific heats is not real. I don't disagree with the equation that he presented. What I do maintain is that the question of whether photon energy is conserved or not conserved is an open question. Sandage conducted experiments on Tolman surface brightness and inferred that the substantial difference between his results and Tolman surface brightness is due to luminosity evolution of the source, which may be true. I have shown elsewhere, from the dilated-source image and dilated time due to expansion of space, that the Stefan-Boltzmann constant is not invariant in the relativistic transformation and is dilated by the stretch factor (1+z). This has an effect of (1+z) on the observed energy flux of a blackbody source at dilated distance from an observer.
  6. This is interesting since there is no coefficient of specific heat at constant pressure for a photon gas.
  7. CC, you use a shotgun approach. Keep your powder dry and reload. I will answer these two questions first and get back to the others later. The Friedmann-Lemaître metric in its expanded form has both dilated distance and dilated time. It is difficult to integrate over dilated time. Proper lookback time is the normal flow of time from the present toward the beginning of the expansion; as redshift approaches infinity, proper lookback time approaches Hubble time, or the age of the Universe. Proper distance in my model is the distance between observer and source when the photons we observe were first emitted from the source. So I use a simple trick of calculus called the chain rule to transform dilated time to proper time, so that I can integrate with a linear independent variable, which is proper time; a sketch of the substitution follows. I think the one hydrogen atom per cubic meter is a crude estimate of the density of space between galaxies. The density I use is the mean density of space, which includes all matter in space.
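     Schematically, the kind of substitution meant here (a sketch only, not the full derivation): if an interval of dilated time relates to proper time through the stretch factor [math]a=1+z[/math] as [math]dt=a\,dt_0[/math], then by the chain rule [math]\frac{d}{dt}=\frac{dt_0}{dt}\frac{d}{dt_0}=\frac{1}{a}\frac{d}{dt_0}[/math], so a derivative with respect to dilated time in the metric can be rewritten as a derivative with respect to proper time, and the integration carried out over [math]t_0[/math].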
  8. modest, I wish you well; get well soon. I am not attacking you, so stay calm. We are having some fun here; let's keep it that way. You said that the radiant energy of expanding space is conserved even though photon energy is reduced as space expands. I showed above that total radiant energy is not conserved under your explanation. Please demonstrate where the lost photon energy is, to conserve radiant energy.
  9. That has been demonstrated only in local-space laboratories, not in the expansion of space, where we are seeing the past Universe in dilated time. The wavelengths are dilated by the expansion of space. This is not the same as observing the spectrum of a hot object as it cools, with the wavelength of the maximum intensity of the spectrum moving toward the red end of the spectrum.
  10. Let's take your adiabatic analogy and look at it. First of all, a normal gas has coefficients of specific heats; it puzzles me how anyone can obtain coefficients of specific heats for a photon. Also, particles expanding with space is not the same as particles expanding through space in an adiabatic process. If photon energy is conserved, then as space expands, radiant energy density goes down by [math](1+z)^{-3}[/math] and the total radiant energy of expanding space is conserved. However, if we assume that the photoelectric effect applies in expanding space as it does in the static space of a laboratory, then an additional reduction of energy density by the scale factor [math](1+z)^{-1}[/math] occurs, and the total radiant energy of the system becomes [math]E=(1+z)^{-1}E_0[/math], where [math]E_0[/math] is the radiant energy emitted from a source. This means that the system has lost a total of [math]E_{lost}=z(1+z)^{-1}E_0[/math] (the arithmetic is expanded below). I would like to know where this lost energy went. I propose that the energy is still there because it is stretched with the photon wavelength. In your adiabatic process with photons losing energy, you claim that the total energy is still there; show me where it is.
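     Writing out the accounting (this only expands the arithmetic stated above): the volume grows as [math](1+z)^3[/math], so if the energy density falls as [math](1+z)^{-4}[/math], the total radiant energy is [math]E=(1+z)^{-4}(1+z)^{3}E_0=(1+z)^{-1}E_0[/math], and the deficit is [math]E_{lost}=E_0-E=\left(1-\frac{1}{1+z}\right)E_0=\frac{z}{1+z}E_0=z(1+z)^{-1}E_0[/math].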
  11. It seems to me that you are treating a photon as if it were a hot billet, similar to the tired-light theory. Your analogy of an adiabatic gas is a very weak one. It's lecture time; see you later.
  12. I don't know all the difficulties involved, but a simple experiment might settle the issue. Choose a source of known absolute magnitude. Collect photon counts over a given time period. Also, measure the total brightness (energy flux) over the same time period. Sandage has done such an experiment and obtained a result that is about halfway between photon energy conservation and photon energy nonconservation. More work needs to be done in this area.
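     Sketching why such a comparison would be discriminating, using the scalings from post 24 below: the photon count rate at the observer falls as [math](1+z)^{-1}[/math] in either case, while the energy flux falls by an additional [math](1+z)^{-1}[/math] only if photon energy is not conserved; so the ratio of measured energy flux to photon count rate, tracked against redshift, would separate conservation from nonconservation.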
  13. The problem that you have with this Google book source is: where does the photon energy go, and what is the mechanism that causes the energy to be lost? What changes the momentum of the light? Are we bringing back the tired-light theory in disguise, or some sort of unknown matter that is causing the light to scatter, similar to Compton scattering? Since spatial expansion is stretching the wavelength of the photon, it seems reasonable that the photon energy is also being stretched with the wavelength. This makes more sense to me than just assuming that the photon energy is lost with no mechanism to explain the loss.
  14. An empirical way to obtain the dilated distance for a coasting universe follows. [math]D_0[/math] is the proper distance between source and observer. [math]D=(1+z)D_0[/math] is the dilated distance between source and observer for a coasting universe. [math]{\Delta}t_0=c^{-1}D_0[/math] is the proper time that photons travel from source to observer while the Universe is coasting. [math]v_H=(D-D_0)/{\Delta}t_0[/math] is the Hubble flow velocity for a coasting universe. Substituting the above definitions into the Hubble-flow-velocity equation yields [math]v_H=cz[/math]. The Hubble law is [math]v_H=H_0D[/math]. Equating the two equations yields [math]D=c{H_0}^{-1}z[/math]. (This is the same dilated-distance equation that was obtained from the FLS solution of the Friedmann-Lemaître metric for a flat, coasting universe.) Substituting this equation into Equation (4) of the original post yields an empirical match of the SNe Ia Hubble diagram for a Hubble constant equal to 56.96 km/s per Mpc, without using general relativity. The only assumptions were that the Universe is coasting and photon energy is conserved. A small numerical sketch follows.
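     Equation (4) of the original post is not reproduced in this thread view, so the sketch below assumes it reduces to the conserved-photon-energy relation [math]D_L=(1+z)^{1/2}D[/math] stated in post 24; the function names and sample redshifts are mine, for illustration only.
[code]
# Minimal numerical sketch of the coasting-universe distances above (Python).
# Assumption (mine): "Equation (4)" is taken to be the conserved-photon-energy
# relation D_L = (1+z)^(1/2) * D from post 24.
import math

C_KM_S = 299792.458   # speed of light, km/s
H0 = 56.96            # Hubble constant, km/s per Mpc

def dilated_distance_mpc(z):
    # D = c z / H0 for a coasting universe, in Mpc
    return C_KM_S * z / H0

def luminosity_distance_mpc(z):
    # D_L = (1+z)^(1/2) D, the conserved-energy case of post 24
    return math.sqrt(1.0 + z) * dilated_distance_mpc(z)

def distance_modulus(z):
    # mu = 5 log10(D_L / 10 pc), with D_L converted from Mpc to pc
    return 5.0 * math.log10(luminosity_distance_mpc(z) * 1.0e6 / 10.0)

for z in (0.01, 0.1, 0.5, 1.0):
    print(f"z={z:5.2f}  D={dilated_distance_mpc(z):8.1f} Mpc  mu={distance_modulus(z):6.2f}")
[/code]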
  15. My solution does not have the 2/3 value of the Einstein-de Sitter solution in it, because I transform the dilated time derivative into a proper time derivative using the chain rule.
  16. I do not use the Einstein-de Sitter model, so my parameters are not the same as the Einstein-de Sitter model's. I specifically said this in an earlier post, and I described how I solve the Friedmann-Lemaître metric. I obtain an age of 17.16 Gyr for the expansion.
  17. modest, lambda is zero and the Hubble constant is 56.96 km/s per Mpc, which translates to a density of 3.65 equivalent proton masses per cubic meter; a quick check of that conversion follows. To obtain the past density at a particular redshift, multiply the current density by [math](1+z)^3[/math]. The Hubble flow velocity is [math]cz[/math], which means that the redshift of a source due to expansion will remain constant as the Universe expands. The model is flat and expanding, and there is no curvature.
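     The 3.65 figure can be checked against [math]\rho=3H_0^2/(8\pi G)[/math]; the choice of this formula for the check is mine, as the post does not state which was used.
[code]
# Density implied by H0 = 56.96 km/s/Mpc via rho = 3 H0^2 / (8 pi G)  (Python).
# The choice of this formula for the check is mine; the post does not state it.
import math

H0 = 56.96 * 1000.0 / 3.0857e22   # km/s per Mpc converted to 1/s
G = 6.674e-11                     # gravitational constant, m^3 kg^-1 s^-2
M_PROTON = 1.6726e-27             # proton mass, kg

rho = 3.0 * H0**2 / (8.0 * math.pi * G)  # mean density, kg/m^3
print(rho / M_PROTON)                    # prints roughly 3.65
[/code]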
  18. modest, thanks for the paper. It is not my model. I find the discussion of nucleosynthesis very interesting, and I will spend some time studying it in detail. My model is a coasting model in the general class of Friedmann-Lemaître solutions, which means that FLS falls within the theory of general relativity.
  19. That equation of the FLS model does not use a Taylor series, yet the FLS model matches the SNe Ia Hubble diagram, the galaxy counts of the Durham group, the Hubble constant of the Sandage Consortium, and the flatness of the CMB. Go figure! For a coasting universe, the deceleration distance as defined in the paper you are using is equal to the proper distance of my model. I call it proper distance because it equals the speed of light times proper time. The proper distance that you referred to is my dilated distance, because dilated distance is what we see when we observe sources in space. As it turns out, for a coasting universe, the dilated distance that we see is also the actual distance to the source, but we can only infer this from the model of a coasting universe. By the way, there are two competing teams for the Hubble constant: the Sandage Consortium (mean Hubble constant equal to 55-57, with a long distance scale) and the HST Key Project (mean Hubble constant equal to 71-73, with a short distance scale). Direct methods (Bonanos et al. 2006, and others) support the Sandage Consortium. The FLS model supports the Sandage Consortium.
  20. It is interesting that you are using a paper that has problems with the Taylor series expansion of the LCDM model for [math]D_L[/math] and is proposing changes to Hubble's law. The FLS model does not use a Taylor series. The solution is an exact solution of the metric for a coasting universe and the model does not propose changes to Hubble's law. I defined proper distance in my last post and that is the way I use it in my model.
  21. I never said the proponents of LCDM explicitly assumed a static space-time; however, I am saying that backing out those distances after you have solved for [math]D_L[/math] in the metric is the wrong way to solve for distance. By doing so, you end up with the wrong values for the Hubble constant and the cosmological constant. By the way, the deceleration distance above is proper distance, where proper distance is the distance between observer and source when the photons being observed were first emitted from the source. We are seeing the proper distance dilated by (1+z).
  22. SNe Ia appear farther away than expected by the proponents of LCDM because one component of the distance modulus [math]\mu[/math] is due to the reduction of effective luminosity relative to the source's intrinsic luminosity. This component is not a real distance, and it can lead some people to the conclusion that SNe Ia are farther away than expected for a flat, coasting universe.
  23. The pre-1998 critical model has a number of unattractive features; introducing critical density into the model made it unstable. The model had the Universe at critical density, and any perturbation could cause the Universe to have either runaway expansion or contraction. The thought back then was that something caused the Universe to start expanding and that, as space expanded, gravity would slow the expansion. I scrapped the critical-density concept and returned to the earlier metric, where [math]k[/math] on the [math]kc^2[/math] term of the metric was either -1, 0, or 1. I do not use the Einstein-de Sitter model with k=0 to obtain a coasting model. I approached the solution of the Friedmann-Lemaître metric as a symmetry between gravity and antigravity (in string theory, a graviton-antigraviton symmetry). With this in mind, I set [math]k=1[/math] and [math]\Lambda=0[/math]. Then I split the resulting metric into four equations: two equations for past and future spatial contraction and gravity, and two equations for past and future spatial expansion and antigravity. Then I picked the appropriate expansion equation for determining [math]D[/math]. Both time and distance are dilated in the metric, and integrating over dilated time is not a fruitful direction. I therefore transformed the dilated time derivative into a proper time derivative, using the chain rule, and the dilated distance into proper distance. I obtained proper distance as a function of the stretch factor [math]a=1+z[/math]. Then I converted proper distance into dilated distance to obtain the coasting-universe solution. In the FLS model, antigravity provides the solution to the flatness problem and the horizon problem at the CMB.
  24. You may not assume that I added an additional [math](1+z)[/math]. Because of the spreading out of photons as space expands, the effective luminosity [math]L[/math] equals [math](1+z)^{-1}[/math] times the intrinsic luminosity [math]L_0[/math]; this is because fewer photons per second are crossing the source-centered spherical boundary at an observer than the original number of photons per second emitted from the source. If photon energy is stretched and conserved, no additional scaling is required; thus [math]L=(1+z)^{-1}L_0[/math]. Therefore, for conservation of photon energy, [math]D_L=(1+z)^{1/2}D[/math], as I presented in the original post. If photon energy is not conserved, then an additional [math](1+z)^{-1}[/math] is multiplied times [math]L_0[/math], resulting in [math]L=(1+z)^{-2}L_0[/math]. Therefore, for nonconservation of photon energy, [math]D_L=(1+z)D[/math], as I presented in the original post. (The step from the luminosity scalings to the [math]D_L[/math] relations is written out below.) The point that I have been trying to make is that [math]D_L[/math] is not the distance to solve for in the FL metric, because [math]D_L[/math] has a component that is due to the reduction in effective luminosity relative to intrinsic luminosity, and this component is not a real distance. It has been interpreted as a real distance by some. The LCDM standard model solves for [math]D_L[/math] in the FL metric, and thus has let the horse and fox escape before the fox hunt. To be continued.
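     Writing out that step (this only expands what is stated above): the observed energy flux is [math]F=L/(4\pi D^2)[/math], while the luminosity distance is defined through the intrinsic luminosity by [math]F=L_0/(4\pi D_L^2)[/math]. With [math]L=(1+z)^{-1}L_0[/math], equating the two gives [math]D_L^2=(1+z)D^2[/math], i.e. [math]D_L=(1+z)^{1/2}D[/math]; with [math]L=(1+z)^{-2}L_0[/math], the same equation gives [math]D_L=(1+z)D[/math].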
  25. General relativity will not fail; however, a more comprehensive theory that includes general relativity will replace it. IMHO, the LCDM standard model will not survive. My reasons for the previous statement are posted in the "Astronomy and Cosmology" section of this forum.