Science Forums


Posted

The "Fine Structure Constant" is the dimensionless (unitless) "pure number":

 

[math]\alpha\approx 137.035999084^{-1}[/math] .

 

It is so absolutely basic, fundamental and important to physics,

that its symbol is the very first letter of the Greek alphabet..."alpha",

yet it is so poorly understood, that in almost a century,

all we have managed to determine are those measly twelve digits,

at a cost of perhaps thirty million dollars per digit.

 

Richard Feynman, one of the three greatest physicists of the twentieth century,

(the other two being Albert Einstein and Stephen Hawking) once wrote about it.

 

Quoting Richard Feynman:

It has been a mystery ever since it was discovered,

and all good theoretical physicists put this number up on their wall and worry about it.

Is it related to [math]\pi[/math], or perhaps to the base of natural logarithms?

Nobody knows. It's one of the greatest damn mysteries of physics...

a magic number that comes to us with no understanding by man.

You might say that the "hand of God" wrote that number,

and "we don't know how he pushed his pencil".

We know what kind of dance to do to measure this number...

but we don't know what kind of dance to do on the computer

to make this number come out, without putting it in secretly !

 

But what if it were suddenly discovered (perhaps right here at Hypography)

that this number could be determined not just by physical measurements,

but by logic as well?

 

What if this number were merely an unavoidable consequence

of some lower or upper bound on some simple mathematical function?

 

I have my own views on what this "mysterious" number really is,

and I would like to share them with you.

 

Don.

Posted
The "Fine Structure Constant" is the dimensionless (unitless) "pure number":

 

[math]\alpha\approx 137.035999084[/math] .

...

What if this number was merely an unavoidable consequence

of some lower or upper bound on some simple mathematical function?

 

I have my own views on what this "mysterious" number really is,

and I would like to share them with you.

 

Don.

 

back when i first ran into this discussion, it was still argued that it was an integer. :hyper: anyway, technically you gave the reciprocal; it should read [math]\alpha\approx\frac{1}{137.035999084}[/math]

 

you wouldn't be the first to try & find an algebraic/logical derivation, as this article relates, but the latest stick in that bucket is that the constant may not be constant. :doh: still, very fine work how you earlier applied it to non-figurate numbers. :turtle:

 

Fine-structure constant - Wikipedia, the free encyclopedia

 

...More recently, improved technology has made it possible to probe the value of α at much larger distances and to a much greater accuracy. In 1999, a team led by John K. Webb of the University of New South Wales claimed the first detection of a variation in α.[16][17][18][19] Using the Keck telescopes and a data set of 128 quasars at redshifts 0.5 < z < 3, Webb et al. found that their spectra were consistent with a slight increase in α over the last 10–12 billion years.

...

Arthur Eddington argued that the value could be "obtained by pure deduction" and he related it to the Eddington number, his estimate of the number of protons in the Universe[33]. This led him in 1929 to conjecture that its reciprocal was precisely the integer 137. Other physicists neither adopted this conjecture nor accepted his arguments but by the 1940s experimental values for 1/α deviated sufficiently from 137 to refute Eddington's argument.[34] Attempts to find a mathematical basis for this dimensionless constant have continued up to the present time. For example, the mathematician James Gilson suggested (earliest archive.org entry dated December 2006 [1]), that the fine-structure constant has the value:

Posted

To: Turtle,

 

Quoting Turtle:

technically you gave the reciprocal; it should read:

 

[math]\alpha\approx\frac{1}{ 137.035999084}[/math]

 

Thanks Turtle, symple typoe, I fiksed it.

 

Quoting Turtle:

you wouldn't be the first to try & find an algebraic/logical derivation, as this article relates.

 

I would rather be the last, and get it right.

 

James Gilson's "result" is definitely not the answer. For one thing,

it is off by about seven or eight standard deviations from the experimental data.

That's quite a lot!

 

For another thing, it does absolutely nothing to explain just why

the numbers 137 and 29 should be so very, very, very, very special,

as indeed they would be if his result were true.

 

And for yet another thing, those kinds of results are really just "a dime a dozen".

For instance, I can "claim" that the simple fraction:

 

[math]\frac{34259}{2*5^3}[/math]

 

is the "real" value of the fine structure constant on the grounds that it's "close"

(about as close as his) to the experimental value,

and on the grounds that all the numbers involved are primes...

the three in the denominator being the "three smallest primes in the universe"!

 

But you see, to me, that kind of reasoning borders on numerology!

(...borders nothing...for God's sake... it is numerology!)

 

On the other hand, my function for estimating the number of

polygonal numbers of rank greater than [math]2[/math] under [math]x[/math], which is:

 

[math]x-x*\left(A*\pi*e+e\right)^{-1}-\left(x-x*\left(A*\pi*e+e\right)^{-1}\right)^{\frac{1}{2}}*\frac{1}{2}[/math]

 

actually requires that the adjustable constant [math]A[/math] be adjusted to almost precisely

the present value of the fine structure constant [math]\alpha[/math] in order for it to be an "upper bound",

so that no matter how many polygonal numbers of rank greater than [math]2[/math]

are actually counted by computer, the approximation given by this function

would always slightly exceed that count.

 

Thus, for the first time ever, we have established that a value very,

very close to the present value of the fine structure constant

is absolutely necessary for the proper operation of actual functions and formulas.
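For anyone who wants to experiment, here is a minimal Python sketch of the function above. One assumption is mine: that the adjustable constant [math]A[/math] is set to exactly the CODATA value of [math]\alpha\approx 137.035999084^{-1}[/math], which is consistent with the counts reported later in this thread.

```python
import math

# Assumption (mine): A is taken to be the CODATA fine-structure constant,
# alpha ~ 1/137.035999084, per the discussion above.
ALPHA = 0.0072973525693

def B(x, A=ALPHA):
    """Estimate of how many polygonal numbers of rank > 2 lie under x:
    B(x) = x - x/(A*pi*e + e) - sqrt(x - x/(A*pi*e + e)) / 2
    """
    core = x - x / (A * math.pi * math.e + math.e)
    return core - math.sqrt(core) / 2

print(round(B(100)))  # -> 60, slightly above the actual count of 57
```

As claimed, with this value of A the estimate stays slightly above the computed counts at every power of ten reported in the thread.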

 

Quoting Turtle:

the latest stick in that bucket is that the constant may not be constant.

 

That's why I am so excited about this.

My function for approximating the number of prime numbers was already accurate,

but that accuracy improved dramatically when I realized that it too had to be

adjusted, so that its roots gave the value of the fine structure constant.

 

You see, establishing effective upper and lower bounds

is the key to all such "counting functions",

and unlike other functions for approximating the number of prime numbers under x,

my function has an effective lower bound that is

unbelievably close to the actual number of primes under x.

 

Moreover, the "error term" in my function can be adjusted

so that the function crosses the actual value of [math]\pi(x)[/math]

no less than six times before [math]x=10^{23}[/math].

 

Accurate indeed!

 

Most importantly, it can be made to cross [math]Li(x)[/math] in pretty much the same manner,

so that it can be said to have a very subtle and interesting fluctuation,

which may or may not put a mathematical limit on how

accurately, in principle, the fine structure constant can be measured.

 

We simply don't know yet, but we will find out.
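Since [math]Li(x)[/math] keeps coming up as the benchmark here, a quick numerical sketch may help readers check values themselves. This is my own code for the standard offset logarithmic integral, not Don's function:

```python
import math

def li(x, steps=10**6):
    """Offset logarithmic integral Li(x) = integral from 2 to x of dt/ln(t),
    approximated with the composite trapezoidal rule."""
    a, b = 2.0, float(x)
    h = (b - a) / steps
    total = 0.5 * (1 / math.log(a) + 1 / math.log(b))
    for i in range(1, steps):
        total += 1 / math.log(a + i * h)
    return total * h

print(li(10**6))  # ~78626.5, while pi(10^6) = 78498
```

The overshoot of Li(x) above the true prime count π(x) at ordinary magnitudes is exactly the kind of "upper bound" behavior being discussed.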

 

Quoting Turtle:

still, very fine work how you earlier applied it to non-figurate numbers.

 

Thanks Turtle, but in my view, it was a "team effort".

 

What really surprised the heck out of me was that in all this time...

from the ancient Greeks (who thought that polygonal numbers were

"sacred", "atomistic" and "foundational") to the present "computer age",

no one had ever developed or discovered a truly accurate

"approximation formula" or "counting function" for them.

 

We are the first ever to do so! :)

 

You are truly the wandering, wondering Turtle, with a penchant for

wandering into places where no Turtle has gone before.

 

Thanks for your wonderful thread, and for giving me the opportunity

to test the effectiveness of my new and radical calculus based on "cohesive terms".

 

Don.

  • 3 weeks later...
Posted

I wrote a one page summary of my

"special polygonal number counting function involving the fine structure constant"

and put it up on my website:

 

An Introduction to Recently Discovered Cohesive Terms

 

Here at Hypography, we now have a golden opportunity to make

a meaningful contribution to both physics and mathematics.

 

If someone who is really good at programming computers (like Donk)

was to dedicate a spare, unused computer to the calculation of [math]\varpi(x)[/math],

and simply let it run continuously

for however long it takes to calculate it to about [math]x=10^{26}[/math],

then the value of [math]\alpha[/math] would, at that point,

require some "fine tuning" in order for the function [math]B(x)[/math]

to remain an upper bound with an ever decreasing percentage of error.

 

That "fine tuned" value of [math]\alpha[/math] would then be either

a superior approximation of the actual fine structure constant, or...

a new and important mathematical constant whose value

happens to be very close to that of the fine structure constant.

 

Either way, the result would be very exciting

because my "special polygonal number counting function"

is every bit as simple and "mainstream" as [math]Li(x)[/math].

 

Since I work at a high school, I will also suggest to several of our teachers

that they consider an exciting, ongoing project such as this

to help induce and foster more interest in both math and physics.

 

Don.

Posted

Quoting Richard Feynman:

It has been a mystery ever since it was discovered,

and all good theoretical physicists put this number up on their wall

and worry about it.

Is it related to [math]\pi[/math], or perhaps to the base of natural logarithms?

 

How prophetic was that?!?! The now "standard" function for approximating

how many polygonal numbers of rank greater than 2

there are under a given number [math]x[/math] involves both [math]\pi[/math] and [math]e[/math].

 

An Introduction to Recently Discovered Cohesive Terms

 

Quoting Richard Feynman:

You might say that the "hand of God" wrote that number,

and "we don't know how He pushed His pencil".

 

He pushed His pencil in order to best approximate

how many polygonal numbers of rank greater than 2

there are under a given number [math]x[/math].

 

Don.

  • 2 weeks later...
Posted
Here at Hypography, we now have a golden opportunity to make a meaningful contribution to both physics and mathematics.

 

If someone who is really good at programming computers (like Donk) was to dedicate a spare, unused computer to the calculation of [math]\varpi(x)[/math], and simply let it run continuously for however long it takes to calculate it to about [math]x=10^{26}[/math], then the value of [math]\alpha[/math] would, at that point, require some "fine tuning" in order for the function [math]B(x)[/math] to remain an upper bound with an ever decreasing percentage of error.

 

That "fine tuned" value of [math]\alpha[/math] would then be either a superior approximation of the actual fine structure constant, or... a new and important mathematical constant whose value happens to be very close to that of the fine structure constant.

 

Either way, the result would be very exciting because my "special polygonal number counting function" is every bit as simple and "mainstream" as [math]Li(x)[/math].

 

Since I work at a high school, I will also suggest to several of our teachers that they consider an exciting, ongoing project such as this to help induce and foster more interest in both math and physics.

 

Don.

Thanks for pointing me in this direction, Don. If I remember correctly, my modified, optimised sieve took a day or so to reach a gigabyte. That gave me a 1-gigabyte file made up of spaces (representing figurates) and zeros (nonfigurates). The rest of the effort went into reading that file byte-by-byte, factorising each nonfig and saving as a file, one line per nonfig, showing number and list of factors (or prime).

 

This sieve looks similar, but with no factoring or analysis. Just a simple count of the numbers. Easy... but you want it to go a little further. 10^26, forsooth! A mere 17 orders of magnitude greater than my previous effort! If a billion takes a day, how long for 10^17 billion? I make it a couple of trillion years, give or take a week or two. :hihi:

 

Those same timings imply that we can add three orders of magnitude to your table in a matter of months, which looks to be worth the effort. Since I don't have a terabyte of free disc space for the array, I'd have to work out an algorithm that allowed me to deal with the problem in slices.

 

I have a few ideas about distributing the workload around a network, but I'll have to think about that one as well. I'll get back to y'all :) Meanwhile, if there are any other coders out there who want to brainstorm it, I'm in!

Posted

To: Donk,

 

Thank you and... :bow:

 

Quoting Donk:

Those same timings imply that we can add three orders of magnitude to your table in a matter of months, which looks to be worth the effort.

 

Yer dern tootin it's wurth the effert.

Please excuse the strong language,

but I'm passionate about this.:hyper:

 

The function [math]B(x)[/math] is as important to polygonal numbers

as the function [math]Li(x)[/math] is to prime numbers,

so regardless of whether or not the constant [math]\alpha[/math]

in that function turns out to be the "actual" fine structure constant,

it still needs to be determined to as many decimal places as possible,

simply because this function is both mainstream and fundamental.

 

Don

Posted

What about the definition:

 

[math]

\alpha = \frac{e^2}{4\pi\epsilon_0 \hbar c}

[/math]

 

That's pretty neat, and if you work in other units I think you can get it down to e squared.

 

But really, what's more special about this constant than Newton's constant of gravity?

 

We have laws that describe the nature of how forces work in this universe, but if we want to make a measurement we have to invent a system of units; however you slice it, there is going to be some constant to fix up the differences.
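For concreteness, the definition quoted above can be evaluated directly from the SI values of the constants. This is my own sketch using 2018 CODATA figures; the variable names are mine:

```python
import math

e_charge = 1.602176634e-19   # elementary charge, C (exact in SI)
eps0     = 8.8541878128e-12  # vacuum permittivity, F/m
hbar     = 1.054571817e-34   # reduced Planck constant, J*s
c        = 299792458.0       # speed of light, m/s (exact in SI)

# alpha = e^2 / (4 * pi * eps0 * hbar * c)
alpha = e_charge**2 / (4 * math.pi * eps0 * hbar * c)
print(1 / alpha)  # ~137.036
```

All the units cancel, which is the point being made in the next post: unlike Newton's G, the same pure number comes out no matter what unit system you start from.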

Posted
That's pretty neat, and if you work in other units I think you can get it down to e squared.

 

I think, at least according to wiki, it's the square of the ratio of e to the Planck charge qP, which would make it unitless. That's, I think, the big difference between [math]\alpha[/math] and, for example, Newton's G. The first is dimensionless, so that it's not only a physical constant, but a dimensionless physical constant.

 

~modest

Posted

To Donk,

 

Quoting Donk:

Meanwhile, if there are any other coders out there who want to brainstorm it, I'm in!

 

I have a similar thread in a non-science forum (Marilyn Vos Savant's)

where a couple of coders who go by the names "Robert 46" and "Kemosabe"

are posting their codes.

 

They managed to determine [math]\varpi(x)[/math] for [math]x=10^{10}[/math], obtaining [math]\varpi(10^{10})=6403587409[/math],

 

but did not have enough RAM to determine [math]\varpi(x)[/math] for [math]x=10^{11}[/math].

 

I told them that they should contact you.

 

Now, I don't know if they did,

but I do know that they read some of your posts

in the "Non-Figurate Numbers" thread.

 

Anyway, I'm really looking forward to working with you again

and I'm totally jazzed as to how this thing is progressing.

 

Also, it would be really great if all you coders were to verify

each other's results.

 

You can find their posts here:

 

www.marilynvossavant.com :: View topic - Calculating The Fine Structure Constant

 

Don.

  • 2 weeks later...
Posted

Here is the latest development on how counting

polygonal numbers of rank greater than 2 may involve

another dimensionless physical constant besides [math]\alpha[/math].

 

If [math]M_e/M_p\approx 1836.152672478^{-1}[/math] is the electron-proton mass ratio, then:

 

[math]B(x)-B(x)*\alpha*1836.152672478^{-1}[/math] results in:

 

 x        varpi(x)          B(x)-B(x)*alpha*1836.15^-1...   Difference   %Error
 10^1     3                 5                               2            .666666667
 10^2     57                60                              3            .052631579
 10^3     622               628                             6            .009646302
 10^4     6,357             6,364                           7            .001101148
 10^5     63,889            63,910                          21           .00032869508
 10^6     639,946           639,963                         17           .00002656474
 10^7     6,402,325         6,402,362                       37           .00000577915
 10^8     64,032,121        64,032,274                      153          .00000238943
 10^9     640,349,979       640,350,098                     119          .00000018584
 10^10    6,403,587,409     6,403,587,495                   86           .00000001343
 10^11    64,036,148,166    64,036,148,539                  373          .000000005825
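The middle column of the table can be reproduced in a few lines of Python. My reading (an assumption, but it matches the tabulated values): the constant [math]A[/math] inside [math]B(x)[/math] is the CODATA [math]\alpha[/math], and the correction factor uses the mass ratio quoted above.

```python
import math

ALPHA = 0.0072973525693      # CODATA alpha, ~1/137.035999084
ME_MP = 1 / 1836.152672478   # electron-proton mass ratio, as quoted above

def B(x, A=ALPHA):
    core = x - x / (A * math.pi * math.e + math.e)
    return core - math.sqrt(core) / 2

def estimate(x):
    """B(x) - B(x)*alpha*(Me/Mp), the middle column of the table."""
    b = B(x)
    return b - b * ALPHA * ME_MP

for k in range(1, 12):
    print(f"10^{k:<2} {round(estimate(10**k)):>15,}")
```

With those two constants, the printed values agree with the table (e.g. 60 at 10^2 and 640,350,098 at 10^9).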

 

Almost perfect!!! Simply AMAZING!!!

 

This is by far the most accurate "counting function" for a

well known "unpredictable" sequence of numbers that I have

ever seen, and it involves the two most important

dimensionless physical constants [math]\alpha[/math] and Me/Mp.

 

The % of error is quickly approaching 0, and may just stay there!

 

I sincerely hope that all coders are "on this like white on rice".

 

Counting polygonal numbers of rank greater than 2 may soon give us

better determinations of both [math]\alpha[/math] and Me/Mp than we ever dreamed of !

 

Don.

Posted

The story so far:

 

My first attempt at counting polygonal numbers took about 14 hours to run up to a billion. All the numbers matched Don's table.

 

I tried again, using QBasic64. It's a much more powerful implementation than QB45: in particular it handles byte array sizes to over 1,000,000,000, and long integers up to 19 digits.

 

My routine for nonfigurate numbers involved setting up a 1-GB random-access file to use as an array. It worked, but all that disk access meant that it wasn't fast. QB64, working in memory only, reached a billion in about 100 seconds.

 

Then I had to figure out how to get to higher values. There’s the simple-minded approach:

 

1) set a pointer (P) to zero

2) calculate a swathe of values from 1 to P + 1 billion. Ignore any values below P

3) count the hits

4) clear the array

5) increment P by 1 billion

6) loop back to (2)

 

You'll see the problem. In order to get the triangular numbers between 1 and 2 billion, we have to recalculate all the triangular numbers over again from 1. And do it again for 4-gonal, 5-gonal... And then do it again for 2 billion to 3 billion, and so on.

 

As I say, it took around 1:40 for the first pass. Then 3:00 for the next. Then 3:51, 4:33, 5:10, 5:50... 10^10 took around 44 minutes.

 

The logical approach is to calculate the first s-gonal number above P for each new value of S:

 

If S is the number of sides in a polygon, the formula for the nth S-gonal number is [math]\left(\frac{S}{2}-1\right)n^2-\left(\frac{S}{2}-2\right)n[/math].

 

So for a given S, it isn't hard to work out the smallest value of n to get past the pointer value. That saves all the wasted time recalculating values just to get to the new start point. Unfortunately, it didn't seem to work out like that. My new routine gave the following timings for each billion: 1:59, 7:53, 11:08, 14:25, 17:47, 20:59..., taking over three hours just to get to 10^10.
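The start-point calculation Donk describes amounts to inverting that quadratic for n. A sketch of the idea in Python (the function names are mine, not Donk's QB64 code):

```python
import math

def sgonal(s, n):
    """The n-th s-gonal number: (s/2 - 1)*n^2 - (s/2 - 2)*n."""
    return ((s - 2) * n * n - (s - 4) * n) // 2

def first_n_above(s, p):
    """Smallest n with sgonal(s, n) > p, found from the quadratic formula."""
    a = (s - 2) / 2.0
    b = -(s - 4) / 2.0
    # positive real root of a*n^2 + b*n = p
    n = int((-b + math.sqrt(b * b + 4 * a * p)) / (2 * a))
    while sgonal(s, n) <= p:  # guard against floating-point round-off
        n += 1
    return n

print(first_n_above(3, 10**9))  # -> 44721, the first triangular index past a billion
```

This matches Donk's remark that there are over 44,000 triangular numbers below a billion.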

 

I didn't understand it. The first version has to count through over 44,000 triangular numbers below a billion to get to the ones it needs. On the next pass, for 2-3 billion, it has to go through over 63,000. Even for 100-gonal numbers it counts around 4500 under a billion and 6400 under 2 billion. And those numbers go on increasing. So why is it so much quicker to do it that way than solving a quadratic to get to the required figure directly? While I was figuring out the problem, I set my original routine running. It took about 62 hours to get to 10^11:

 

            10                 3
           100                57
         1 000               622
        10 000             6 357
       100 000            63 889
     1 000 000           639 946
    10 000 000         6 402 325
   100 000 000        64 032 121
 1 000 000 000       640 349 979
10 000 000 000     6 403 587 409
100 000 000 000    64 036 148 166

 

Then came that slap-on-the-head moment. The bigger S gets, the fewer S-gonal numbers there are below a certain value. And mostly the routine will be dealing with very big values of S, where the machine overhead in starting from zero and adding a few times to get to the start point is going to be much less than calculating a quadratic equation. I’ve revised the routine yet again, so that smaller numbers will have the start point calculated but larger ones will count from 1. It seems to be faster, but not dramatically faster - 10^12 is still out of my reach unless I can come up with something new.
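For readers following along, the whole counting task can be sketched compactly in Python. This is my sieve-style sketch, not Donk's QB64 code, and it bakes in two assumptions the thread's tables imply: "under x" means up to and including x, and "rank greater than 2" means index n ≥ 3 for each polygon size s ≥ 3.

```python
def varpi(x):
    """Count distinct polygonal numbers of rank > 2 up to and including x."""
    hit = bytearray(x + 1)       # one flag per integer, sieve-style
    s = 3
    while 3 * (s - 1) <= x:      # smallest counted s-gonal number is 3(s-1), at n = 3
        n = 3
        while True:
            v = ((s - 2) * n * n - (s - 4) * n) // 2
            if v > x:
                break
            hit[v] = 1           # numbers polygonal in several ways count once
            n += 1
        s += 1
    return sum(hit)

print(varpi(10**6))  # compare with the table earlier in the thread
```

Memory is the limiting factor, exactly as Donk found: the flag array grows linearly with x, which is why 10^12 needs slicing or a distributed approach.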

Posted

To: Donk,

 

Simply breathtaking! I'm stunned!

 

Considering the difficulties involved, you have made a most precious calculation indeed!

 

I will now edit my last post to include that calculation.

 

Notice that it is still an upper bound and near perfect!

 

Thus, we have even more compelling evidence that both the fine structure constant

and the electron-proton mass ratio are required in this function.

 

Not only that, we now also have a very accurate, easy-to-use counting function

for polygonal numbers of rank >2, which now seems all the more necessary

considering how hard it really is to count these numbers by computer!

 

Most importantly, we now know that at least in principle,

this function will yield better approximations of both

the fine structure constant and the electron-proton mass ratio!

 

Don.

Posted

I’m trying to calculate the count with which this thread concerns itself,

“the number of polygonal numbers of rank greater than 2 under x”,

and appear to be misunderstanding it.

 

For example, when I count the third and greater triangular, square, through 34-gonal numbers less than 100, rather than the 57 that everyone else gets, I get 75. Here’s the list of polygonal numbers I count, with the running counts:

S  S-gonal number (count) ...
3  1 (0) 3 (0) 6 (1) 10 (2) 15 (3) 21 (4) 28 (5) 36 (6) 45 (7) 55 (8) 66 (9) 78 (10) 91 (11) 105 (11)
4  1 (11) 4 (11) 9 (12) 16 (13) 25 (14) 36 (15) 49 (16) 64 (17) 81 (18) 100 (18)
5  1 (18) 5 (18) 12 (19) 22 (20) 35 (21) 51 (22) 70 (23) 92 (24) 117 (24)
6  1 (24) 6 (24) 15 (25) 28 (26) 45 (27) 66 (28) 91 (29) 120 (29)
7  1 (29) 7 (29) 18 (30) 34 (31) 55 (32) 81 (33) 112 (33)
8  1 (33) 8 (33) 21 (34) 40 (35) 65 (36) 96 (37) 133 (37)
9  1 (37) 9 (37) 24 (38) 46 (39) 75 (40) 111 (40)
10  1 (40) 10 (40) 27 (41) 52 (42) 85 (43) 126 (43)
11  1 (43) 11 (43) 30 (44) 58 (45) 95 (46) 141 (46)
12  1 (46) 12 (46) 33 (47) 64 (48) 105 (48)
13  1 (48) 13 (48) 36 (49) 70 (50) 115 (50)
14  1 (50) 14 (50) 39 (51) 76 (52) 125 (52)
15  1 (52) 15 (52) 42 (53) 82 (54) 135 (54)
16  1 (54) 16 (54) 45 (55) 88 (56) 145 (56)
17  1 (56) 17 (56) 48 (57) 94 (58) 155 (58)
18  1 (58) 18 (58) 51 (59) 100 (59)
19  1 (59) 19 (59) 54 (60) 106 (60)
20  1 (60) 20 (60) 57 (61) 112 (61)
21  1 (61) 21 (61) 60 (62) 118 (62)
22  1 (62) 22 (62) 63 (63) 124 (63)
23  1 (63) 23 (63) 66 (64) 130 (64)
24  1 (64) 24 (64) 69 (65) 136 (65)
25  1 (65) 25 (65) 72 (66) 142 (66)
26  1 (66) 26 (66) 75 (67) 148 (67)
27  1 (67) 27 (67) 78 (68) 154 (68)
28  1 (68) 28 (68) 81 (69) 160 (69)
29  1 (69) 29 (69) 84 (70) 166 (70)
30  1 (70) 30 (70) 87 (71) 172 (71)
31  1 (71) 31 (71) 90 (72) 178 (72)
32  1 (72) 32 (72) 93 (73) 184 (73)
33  1 (73) 33 (73) 96 (74) 190 (74)
34  1 (74) 34 (74) 99 (75) 196 (75)
35  1 (75) 35 (75) 102 (75)

I’m clearly doing something wrong, but can’t figure out what. :coffee_n_pc: Can someone please show me?

Posted
I’m trying to calculate the count with which this thread concerns itself,

“the number of polygonal numbers of rank greater than 2 under x”,

and appear to be misunderstanding it.

 

For example, when I count the third and greater triangular, square, through 34-gonal numbers less than 100, rather than the 57 that everyone else gets, I get 75. Here’s the list of polygonal numbers I count, with the running counts:

S  S-gonal number (count) ...
3  1 (0) 3 (0) 6 (1) 10 (2) 15 (3) 21 (4) 28 (5) 36 (6) 45 (7) 55 (8) 66 (9) 78 (10) 91 (11) 105 (11)
4  1 (11) 4 (11) 9 (12) 16 (13) 25 (14) 36 (15) 49 (16) 64 (17) 81 (18) 100 (18)
5  1 (18) 5 (18) 12 (19) 22 (20) 35 (21) 51 (22) 70 (23) 92 (24) 117 (24)
6  1 (24) 6 (24) 15 (25) 28 (26) 45 (27) 66 (28) 91 (29) 120 (29)
...

I’m clearly doing something wrong, but can’t figure out what. :coffee_n_pc: Can someone please show me?

 

Your output/formula looks correct, but the same number is counted multiple times. 28, for example, is counted twice: at running count 5 (as a triangular number) and again at 26 (as a hexagonal number).

 

~modest

 

modest is correct on the "what", however the "why" is interesting. in simplest terms, some numbers belong to multiple figurate subsets. for example, every other 3-sided number is a 6-sided number, or all 6-sided numbers are 3-sided, if you prefer. (28 is interesting because it is perfect, and all (known) perfect numbers are 6-sided.) 36 is interesting because it is both 3-sided and 4-sided.

 

i'd say the reason this didn't come up as problematic for calculating the cardinality of the figurate set in this thread earlier, is that Donk is using a sieve method rather than a generating expression. that is, he makes an array of size X, fills it with the ordered integers, & then starts testing them one-by-one to see if they are in the figurate set or not. the test is an algebraic modification of the generalized expression for figurate numbers, and running it once on a figurate number will only give one instance of its figurativeness. :turtle: more than you may want to know on it is in this thread: :phone: >> Non-Figurate Numbers
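The point modest and Turtle are making, that a generating loop counts 28 once as triangular and again as hexagonal while a sieve marks it only once, can be made concrete in a few lines (my sketch; the helper names are mine):

```python
def polygonal(s, n):
    """The n-th s-gonal number."""
    return ((s - 2) * n * n - (s - 4) * n) // 2

def counts_up_to(x):
    """Return (raw, distinct) for s-gonal numbers with s >= 3, n >= 3, value <= x.
    raw counts every (s, n) hit; distinct counts each value once, as a sieve does."""
    raw, seen = 0, set()
    s = 3
    while polygonal(s, 3) <= x:
        n = 3
        while polygonal(s, n) <= x:
            raw += 1
            seen.add(polygonal(s, n))
            n += 1
        s += 1
    return raw, len(seen)

print(counts_up_to(100))  # raw exceeds distinct because of overlaps like 28 and 36
```

The gap between the two numbers is exactly the over-count in the running totals above.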
