Science Forums

The Holy Grail Of Mathematics.


Don Blazys


To: Freeztar,

 

Thanks for the information. It's important that at least one mathematician/computer whiz here at Hypography be able to determine the root of the expression:

 

sin(x^(1/2))-ln(ln(x))

 

which is approximately x = 6.2207156287788...

 

to as many decimal places as possible, because a similar calculation is necessary in my formula. The better the approximation, the more primes will be generated in order of magnitude! It's really amazing!

 

Don.



You can probably get what you need by using Maxima. I've only tinkered with it, but found it unintuitive and haven't pursued it since.

 

It's free, so it's worth a shot if you have some time on your hands and don't mind learning how to use it.


To: Freeztar,

 

Thanks again. I will try to get one of the computer teachers at my school to help get me started, but in order for other Hypographers to duplicate and verify my results, they will also have to acquire the capability to determine such roots to as many decimal places as possible.

 

Don.


Do your calculations in the programming language Perl. It comes with a math package that performs all calculations to 60 (that's sixty) decimal places.

 

A book that might help with this: Mastering Algorithms with Perl

 

A relevant preview:

 

A root of a function [math]y=f(x)[/math] is the x value at which y is zero. In this section, we'll look at how to find roots of functions, via both closed-form solutions that generate exact answers for polynomials and iterative methods that creep up on the roots of any function.

 

The first step in solving an equation is determining what type of equation you have. If you have only a single polynomial (for instance, you want to find where [math]-5x^2+3x+7[/math] is equal to 9), you can express that as [math]-5x^2+3x-2=0[/math] and use the technique in Section 16.2.1 later in this chapter to find the value of x for which this is true, as long as the polynomial has no exponent higher than 3.

 

If you have a higher-degree polynomial, or a nonlinear equation, use the Newton method described in Section 16.2.2.

 

If you have multiple linear equations, use Gaussian elimination, described in Section 7.12 in Chapter 7. There are many optimizations that you can make if your equations fit certain criteria, but that's beyond the scope of this book. Consult any of the sources in Section A.4 in Appendix A for more detail.

 

If you have multiple nonlinear equations, use the multidimensional Newton method described in Section 16.2.3 later in this chapter.

 

-source
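For the single-quadratic case the excerpt mentions, the closed-form solution is just the quadratic formula. Here is a minimal sketch in Python (rather than Perl, purely for illustration; all names here are mine) applied to the excerpt's own example, which happens to have complex roots:

```python
import cmath

# Coefficients of the excerpt's example, -5x^2 + 3x - 2 = 0
a, b, c = -5, 3, -2

# Quadratic formula; the discriminant b^2 - 4ac is -31 here,
# so cmath gives the two complex conjugate roots.
disc = cmath.sqrt(b * b - 4 * a * c)
r1 = (-b + disc) / (2 * a)
r2 = (-b - disc) / (2 * a)
print(r1, r2)
```

Substituting either root back into the polynomial should return (numerically) zero.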

 

~modest


Actually, Don's case involves only one variable, so neither Gauss–Seidel nor the multidimensional Newton–Raphson method is necessary. I think the best bet is plain Newton's method; the derivative isn't all that wicked. Unless I've made one of my terrible blunders, it's:

 

[math]f(x)=\sin x^{\frac12}-\ln\ln x[/math]

 

[math]f'(x)=\frac{\cos x^{\frac12}}{2x^{\frac12}}-\frac{1}{x\ln x}[/math]

 

Now, reaching 60 decimal places starting from the value already estimated probably takes quite a lot of iterations, and since consecutive values will certainly be within the convergence basin, it might be worthwhile using an alteration of Newton's method which I have tried in the past, but I'm not sure which would be computationally less intensive. It depends on the weight of one extra ln and subtraction against an extra multiplication and three extra divisions; if the natural logarithm is lightweight enough in Perl, the trick could be faster.
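As a quick cross-check of the two formulas above, a Newton iteration in ordinary double precision (~16 digits) already pins down the root. This is a Python sketch, not a high-precision implementation; the variable names are mine:

```python
import math

# f(x) = sin(x^(1/2)) - ln(ln(x))
def f(x):
    return math.sin(math.sqrt(x)) - math.log(math.log(x))

# f'(x) = cos(x^(1/2)) / (2 x^(1/2)) - 1 / (x ln x)
def fprime(x):
    return math.cos(math.sqrt(x)) / (2 * math.sqrt(x)) - 1 / (x * math.log(x))

x = 6.22  # starting guess, as used in this thread
for _ in range(6):
    x -= f(x) / fprime(x)  # Newton step

print(x)  # ~6.220715628778645
```

Newton's method roughly doubles the number of correct digits per step, so six steps are more than enough at this precision.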


If you really find it important to improve precision on that computation, you could always construct an ad hoc numeric type or use a language which handles higher precision; perhaps Craig's favourite language would suit the purpose.

All of my favorite hand-made calculators are exact-precision, integer- and rational-number-based, but I could cobble together trig and logarithm approximating functions to some defined precision pretty quickly, and solve

 

[math]\sin x^{\frac12}-\ln\ln x = 0[/math]

 

using a simple binary search. An answer to a couple of thousand decimal digits of precision shouldn’t be too hard.
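A binary search of this kind can be sketched in a few lines. The following is a double-precision Python illustration only (the exact-rational calculators mentioned above would carry far more digits, and the bracket [6.0, 6.5] is my choice):

```python
import math

def f(x):
    return math.sin(math.sqrt(x)) - math.log(math.log(x))

# Bracket the root: f(6.0) > 0 and f(6.5) < 0, so a sign
# change (and hence a root) lies between them.
lo, hi = 6.0, 6.5
for _ in range(60):            # each step halves the bracket
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:    # sign change in [lo, mid]
        hi = mid
    else:
        lo = mid

print((lo + hi) / 2)
```

Bisection gains one binary digit per step, so reaching thousands of decimal digits this way needs a high-precision number type and roughly 3.3 iterations per decimal digit.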

 

I’ve got lots of work today, and play plans for tonight, but hopefully I can post a result late tonight or tomorrow. As a teaser, here’s an approximation of [math]\sin 2[/math] using a common infinite series,

 

[math]\sin x = x - \frac{x^3}{3!} +\frac{x^5}{5!} - \frac{x^7}{7!} + \dots [/math]

 

ZL HPM
s X=2
s A=0,(B,C)=X,(D,E)=1 F CT=1:1 D RD(.I,C,D),RA(.A,A,I),RM(.C,C,B),RM(.C,C,B),RA(.E,E,1),RM(.D,D,E),RA(.E,E,1),RM(.D,D,"-"_E) W CT,". ",A," =~",@A,! R R
1. 2/1 =~2
2. 2/3 =~.6666666666666666667
3. 14/15 =~.933333333333333333
4. 286/315 =~.9079365079365079365
5. 2578/2835 =~.9093474426807760141
6. 141782/155925 =~.9092961359628026295
7. 5529506/6081075 =~.9092974515196737419
8. 580598114/638512875 =~.9092974264614476255
...
31. 53824986296478094273294897582567285743149004618784793429340024238/59194037845657181864228121483295877940276107097847234991455078125 =~.9092974268256816954

31 iterations gives about 65 digits of precision.
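The same exact-rational partial sums can be reproduced with Python's fractions module (a sketch of the series above rather than the calculator language used here; the variable names are mine):

```python
from fractions import Fraction

# Partial sums of sin x = x - x^3/3! + x^5/5! - ..., kept exact.
x = Fraction(2)
term = x                  # current term x^(2k+1)/(2k+1)!, sign included
total = Fraction(0)
sums = []
for k in range(8):
    total += term
    sums.append(total)
    # next term: multiply by -x^2 / ((2k+2)(2k+3))
    term = term * (-x * x) / ((2 * k + 2) * (2 * k + 3))

print(sums[1], float(sums[1]))   # → 2/3 0.6666666666666666
print(sums[-1], float(sums[-1]))
```

The second and fourth partial sums come out to exactly 2/3 and 286/315, matching iterations 2 and 4 of the run above.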


This is exciting!

I agree :hyper:

 

Using Qfwfq's derivative and Newton's method (starting at 6.22) with Perl at 100-digit accuracy, I get:

 

6.220715628778645210593969670313416058685026190653406984465260697578731589801844981308042757550472361

 

I'm out of my depth with this, so this result should be verified. I'm also concerned and very confused that this converged after only 7 iterations; I wasn't expecting that at all. :confused: I used:

[math]x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}\,\![/math]

Algorithm:

use strict;
use warnings;
use Math::BigFloat;

# Carry all operations to 100 significant digits.
Math::BigFloat->accuracy(100);

my $x    = Math::BigFloat->new('6.22');   # starting guess near the root
my $half = Math::BigFloat->new('0.5');
my $one  = Math::BigFloat->new('1');

foreach my $n (1 .. 9) {
    my $xtohalf = $x->copy()->bpow($half);   # x^(1/2)
    my $logx    = $x->copy()->blog();        # ln x
    my $loglogx = $logx->copy()->blog();     # ln ln x

    # f(x) = sin(x^(1/2)) - ln ln x
    my $fx = $xtohalf->copy()->bsin()->bsub($loglogx);

    # f'(x) = cos(x^(1/2)) / (2 x^(1/2)) - 1 / (x ln x)
    my $a   = $xtohalf->copy()->bcos()->bdiv($xtohalf->copy()->bmul(2));
    my $b   = $one->copy()->bdiv($x->copy()->bmul($logx));
    my $fpx = $a->bsub($b);

    # Newton step: x <- x - f(x)/f'(x)
    $x->bsub($fx->bdiv($fpx));

    print "iteration number $n: $x \n";
}

 

Output:

iteration number 1: 6.220715617947177411786712222885804245907571826974437266283623662266398210344786538184650666024140324
iteration number 2: 6.220715628778645208112606601565292724026178678870863773950749260445300617582198175750501665456398001
iteration number 3: 6.220715628778645210593969670313416058554800605780674721261223028733807513931628148855900138989610367
iteration number 4: 6.220715628778645210593969670313416058685026190653406984465260697578731589801486300349586510071804013
iteration number 5: 6.220715628778645210593969670313416058685026190653406984465260697578731589801844981308042757550472360
iteration number 6: 6.220715628778645210593969670313416058685026190653406984465260697578731589801844981308042757550472363
iteration number 7: 6.220715628778645210593969670313416058685026190653406984465260697578731589801844981308042757550472361
iteration number 8: 6.220715628778645210593969670313416058685026190653406984465260697578731589801844981308042757550472361
iteration number 9: 6.220715628778645210593969670313416058685026190653406984465260697578731589801844981308042757550472361

 

~modest


To: Modest,

 

Hopefully, CraigD will be able to verify your calculation. Even if it's good to only 20 or 30 decimal places, that would still be a significant improvement over my own ability to make these types of calculations. Your participation would therefore help greatly toward presenting empirical evidence that my formula will indeed generate the entire (endless) sequence of primes, in order of magnitude, using only the constants pi and e!

 

Don.


Hopefully, CraigD will be able to verify your calculation.

 

Yes, CraigD is much better at this kind of thing than I.

 

I started over at 250-digit accuracy and got this:

 

6.2207156287786452105939696703134160586850261906534069844652606975787315898018449813080427575504723608998655366652621121222825821527641025041853014458768821758023982143107611879667495337386334521289162863868066917190 0823407981

 

Greater accuracy than that (I tried 500 digits) ran into very significant calculation times.

 

~modest


To: Modest, CraigD, Qfwfq, and anyone else who might be working on determining the root:

 

sin(x^(1/2)) - ln(ln(x)) = 0.

 

Determining the root of the above equation is the same as determining the "intersection" of:

 

sin(x^(1/2))

 

and

 

ln(ln(x)).

 

Would that make a difference in how the computer performs the calculation, and if it does, can the "intersection" result be used to verify the "Newton's method" result?

 

Don.

