Science Forums


Posted
I wonder if it's possible to encapsulate the process of science in a general algorithm? Something along the lines of:

problem->operation->solution

I think so.

 

On a very high level, the “orthodox scientific method” described by Popper and others looks something like this:

  1. Create explanation/model/theory/hypothesis X
  2. Using X, generate prediction P
  3. Test P experimentally
  4. If P is dis-verified, modify X
  5. Repeat from step 2

The process is fraught with points of failure. For example, in step 2, we might make an incorrect prediction due to a failure to “express” X correctly. In step 3, we might make an error in designing or performing the experiment, and so falsely verify or dis-verify P, then incorrectly modify or fail to modify X. We might make predictions that can’t be experimentally tested (eg: they require bigger machines than can ever be built), causing the process to stall. Only steps 1 and 4 (and 5, which isn’t really a step, but a process-flow statement) – the steps affecting X – are “foolproof”: in principle, we can create and modify X into all manner of “badly wrong” states without breaking the process. We can even theorize at random – provided we’re not concerned with getting a useful theory X in a timely manner.
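The five steps above can be sketched as an actual loop. This is only a toy illustration, not anyone's real framework: the "theory" is a single guessed constant, the "experiment" measures a hidden true value, and the revision rule (nudge the constant toward the measurement) is an assumption chosen so the example converges.

```python
# Toy rendition of the five-step loop. All names and the revision rule
# are illustrative placeholders, not part of any real methodology.

TRUE_VALUE = 9.8  # the "reality" the experiments probe


def predict(theory):
    # Step 2: use theory X to generate a prediction P.
    return theory["constant"]


def experiment(prediction):
    # Step 3: test P experimentally; True means P survived the test.
    return abs(prediction - TRUE_VALUE) < 0.05


def revise(theory):
    # Step 4: P was dis-verified, so modify X -- here, move the
    # theory's constant halfway toward the measured value.
    theory["constant"] += 0.5 * (TRUE_VALUE - theory["constant"])
    return theory


def scientific_method(theory, max_rounds=50):
    # Steps 2-5: predict, test, revise, repeat until X survives a test.
    for rounds in range(1, max_rounds + 1):
        if experiment(predict(theory)):
            return theory, rounds
        theory = revise(theory)
    return theory, max_rounds


# Step 1: create an initial (badly wrong) theory X, then run the loop.
theory, rounds = scientific_method({"constant": 1.0})
```

Note that the "points of failure" above live in `predict` and `experiment`: if either is buggy (mis-expressing X, or mis-measuring reality), the loop still runs but converges on a wrong X, which is exactly the fragility described.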

Posted

Thanks for the reply.

A lot is made of falsification, but it seems to me that potential falsification is all that's required and that any relevant model will include potential falsification. Prediction and relevance seem to me to capture the essentials.

Posted
A lot is made of falsification, but it seems to me that potential falsification is all that's required and that any relevant model will include potential falsification.
I think the actual making of falsifying predictions – statements of the form “Do A, and if R is not the result, theory X is false” - is essential to the scientific method.
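That form – “Do A, and if R is not the result, theory X is false” – can be written down directly as a predicate. A minimal sketch; `do_a` and the expected result are hypothetical stand-ins for a real procedure and its predicted outcome:

```python
# "Do A, and if R is not the result, theory X is false",
# expressed as a small helper. `do_a` is any callable procedure.

def falsifies(do_a, expected_r):
    """Run procedure A; return True if the result refutes the theory."""
    return do_a() != expected_r


# Example: a theory predicting that 2 + 2 yields 4 survives this test.
refuted = falsifies(lambda: 2 + 2, 4)
```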

 

The view that “falsifiability” (AKA “refutation”) is more an ideal than an actual practice in science as it is actually done is widely held by people who think about such things, and is one of the central themes of works like Kuhn’s “The Structure of Scientific Revolutions”. A very colloquial paraphrasing of this view is that “real science” (or, in Kuhn’s terminology, “normal science”) may make falsifying predictions, and test them, but tends to ignore refuting results until they either go away (with enthusiastic, if not necessarily conscious, “tweaking” by theorists and experimentalists) or are so compellingly bad that they’re finally accepted, with revolutionary consequences.

 

I think Kuhn’s and others’ views are valuable and insightful, but they must be carefully applied, with a detailed understanding of the underlying science. For this thread’s purposes, I believe we can ignore such criticism, and consider “ideal science” only.

Prediction and relevance seem to me to capture the essentials.
I agree that prediction is an essential part of science.

 

I don’t think relevance is essential – though I may misunderstand the context in which you’re using it, Ugh. Scientific approaches can, I think, be applied to things with little relevance to anyone but those applying them, and still be, in form, scientific.

 

Though I’m likely in a fringe opinion group, I believe scientific approaches can actually, in form, be applied to completely unreal things, such as computer simulations. It is, I think, as useful an approach for exploring the unreal as for exploring the real.

Posted
I don’t think relevance is essential – though I may misunderstand the context in which you’re using it, Ugh.

 

I can't speak for Ugh, but the way I interpreted what he meant was that a prediction must be falsifiable in a manner that is specific to the original question/theory. The prediction, whether it turns out true or false in the end, must have relevance to the original question/theory. It seems obvious upon first consideration, but if not dutifully practiced, it can lead to bad results.

 

As a very simplified (and completely atrocious) example, suppose I have a theory that grasshoppers are able to jump so high because they eat so much chlorophyll. I decide to test my theory and make a prediction that grasshopper legs have a higher concentration of chlorophyll than the legs of a jumping spider. Obviously (hopefully it's obvious), the prediction has no "relevance" to proving/disproving my original theory/question.

 

I'm not sure I did a good job of explaining my interpretation of Ugh's comments, so I look forward to his personal explanation. :)

 

The modern scientific method could be called the "encapsulating algorithm of science", but there is no single algorithm, and any attempt to merge the specifics of each variant into a coherent general algorithm befuddles me a bit at the moment. In a very basic way, Craig seems to have done a good job of this in post #2.

Perhaps it would be interesting to recount the history of the scientific method and comment on the changes it has undergone and the different algorithms applied? By understanding the evolution and the specific, individual uses of these algorithms, perhaps we can better understand how to encapsulate them all into a general algorithm.

