
View Full Version : Puzzling Problems with Popper



Higgins
07-28-2008, 09:07 PM
I'm puzzled by the popularity of the notion of "falsification"...

It was Karl Popper's idea, and both he and it seem to be largely discredited:

For example:

http://www.stephenjaygould.org/ctrl/gardner_popper.html

And a counter-argument:

http://www.friesian.com/gardner.htm

but note all the rules referred to in this:

http://ndpr.nd.edu/review.cfm?id=4201

Ruv Draba
07-29-2008, 04:26 AM
I'm no expert on Popper, and don't pretend to be a philosopher, but I hung out with philosophers for a large part of my postdoctoral research (which was in formal logic), so I'll kick in some informal opinion here.

It's long been recognised that induction (http://en.wikipedia.org/wiki/Inductive_reasoning) is an axiom of mathematical thought. There's no way to prove or disprove it but it's an awfully convenient axiom for dealing with well-ordered structures. Induction lets us generalise on the behaviour of well-ordered systems by considering a few examples and then showing how it can hold for the rest of them.

Higgins, please correct me on this if appropriate, but Popper's problem with induction seems to revolve around it not being falsifiable - and therefore presumably being an article of faith in the orderliness of the world.

In practice, our life strategies often rely on a world that's very orderly - at least where it counts. If we ever drive a car on an undivided road, we are using something like inductive reasoning to assure ourselves that oncoming cars won't collide with us (the first hundred don't, and everyone has presumably been trained the same way, so...) If we accept pay in currency rather than millet we are using inductive reasoning to assure ourselves that the bills we get paid today will be worth much what they were worth yesterday (prices have been stable for the last month, and nothing much has changed, so...)

Inductive reasoning fails at times. It's not absolutely true that all vehicles will stay on their side of the road, or that our currency will hold its value from day to day. We are sometimes catastrophically shocked when these propositions fail. But even knowing that this occurs we seldom abandon inductive reasoning. Instead we'll track a list of special exceptions and then stay alert for them. Why? I believe it's because inductive reasoning is too convenient and we don't have anything comparably useful to replace it.

In practice, the truth-values of our propositions seldom need to hold forever or in all cases. They just need to hold for the scope and duration of the decisions we make about them. Since we need to make a lot of decisions about very complex systems (like traffic and economies), we need our reasoning to operate within tolerances, rather than perfectly.

Often we simply don't have time to check for the falsification of our propositions before we act on them. It's normally only in really risky questions (will the aircraft stay in the air; did this person kill the murder victim) that we check for falsifiability. Under those circumstances we are wise to reject propositions that don't falsify, but under many 'ordinary' circumstances it may not matter.

In my experience, philosophers are sometimes driven more by aesthetics than practicality. Elegant aesthetics can get abstract thinkers excited, but then the pragmatism of living with them can make them disenchanted. Of course, some idealists will still want to try and aspire to ideals - even when they're not practical.

That's my speculative account for why Popper might have seen such acclaim followed by such discreditation, and perhaps why he still has defenders.

From a purely personal perspective, I do look for falsifiability on important matters; I often won't accept advice on risky matters if it comes without some means of disproving it. Specific examples include financial investment advice (how do I know that the shares won't tank tomorrow?), research proposals (under what circumstances could this research fail), automation of critical systems (how could this blow up), and large creative projects (under what circumstances could this take twice as long and get only half as far?) On the other hand, there are also propositions I'm willing to accept without falsifiability tests. An example is when someone tells me 'I like you' or 'I enjoyed my dinner'. But I also know people who'll invest in shares just because they did well yesterday, and people who'll try and prove that you don't like them before they'll believe that you do. :)

Higgins
07-29-2008, 03:56 PM
falsifiable - and therefore presumably being an article of faith in the orderliness of the world.



I'm not sure of the exact nature of Popper's problems with induction. It seems to some that they are the same as Hume's and the answer would be the same as Kant's. I'm not really sure.
One thing that is clear is that Popper's work is not used in the current history of science as an academic discipline for the good reason that falsification has never been a real methodological part of the sciences.
What I find interesting is that it remains a major piece of pop rhetoric and it can be used for just about anything from discrediting creationism (I would say the problem is there is no particular set of geological mechanisms given by the creationists that have not already been gone over and found implausible by generations of creationist geologists in the 18th century...see
http://www.h-net.org/reviews/showrev.cgi?path=169881160495510 )
to suggesting that one should be an agnostic because god is not a falsifiable concept. Which means that what cannot be falsified, cannot be rejected? An interesting pop twist on Popper's rather strange ideas.

Ruv Draba
07-30-2008, 07:48 AM
One thing that is clear is that Popper's work is not used in the current history of science as an academic discipline for the good reason that falsification has never been a real methodological part of the sciences.

Not quite true. Applied science hinges on falsification. Engineering, for instance, won't consider a design satisfactory until a 'test until failure' step has been applied.

With pure science falsification isn't compulsory, but still appears.

For instance, there's a whole suite of reasoning methods in formal logic called refutation procedures [Sorry - couldn't find a good link]. Quine, who's cited in one of the articles you mentioned, did quite a lot of research in this field.

The idea of a refutation procedure is that you try to model your assumptions and the negation of your conclusion. If there is no possible valid model that satisfies both, then your conclusion is either intrinsically valid, or necessitated by your assumptions.
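As a concrete toy illustration (my own sketch in Python, not anything from Quine or the thread's sources), a refutation procedure over propositional logic can be brute-forced: enumerate every truth assignment and look for a model that satisfies all the assumptions together with the negated conclusion. Finding none means the conclusion is entailed.

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b is false only when a is true and b is false.
    return (not a) or b

def refute(assumptions, negated_conclusion, num_vars):
    """Refutation procedure: search every truth assignment (model) for one
    satisfying all assumptions AND the negation of the conclusion.
    If no such counter-model exists, the conclusion follows."""
    for values in product([False, True], repeat=num_vars):
        if all(f(*values) for f in assumptions) and negated_conclusion(*values):
            return False  # counter-model found: conclusion not entailed
    return True  # no counter-model: conclusion entailed by the assumptions

# Modus ponens: from (p -> q) and p, conclude q.
assumptions = [lambda p, q: implies(p, q), lambda p, q: p]
entailed = refute(assumptions, lambda p, q: not q, num_vars=2)
print(entailed)  # True: no model makes both premises true and q false
```

Real refutation provers (resolution, tableaux) are far cleverer than this exhaustive search, but the shape of the argument is the same: try to build a world where the premises hold and the conclusion fails, and report defeat.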

Within reasonable formal logics (certainly, everything that Bertrand Russell ever touched, say), every provable proposition has a refutation proof and every refutation proof entails a non-refutation proof. Of the two, non-refutation proofs are often more illuminating because they frequently show why a proposition is true. On the other hand, refutation proofs are sometimes easier to find.

Out in the social sciences though, things get murkier. Some theories are falsifiable (e.g. by showing that there's not the expected correlation between A and B), but often that just begs the theory to make a weaker claim. Some theories aren't falsifiable - they're just ideologies.

It's this very fact that underpins the veiled contempt some hard scientists have for the social scientists - but also underpins the concerns that social commentators have had with positivism (http://en.wikipedia.org/wiki/Positivist).

For myself, I trust claims that can be falsified far more than claims that can't - simply because the level of surety is way higher. If someone says 'I liked the meal you cooked', that could be a factual statement or a political one - I can't necessarily tell. On the other hand if they say 'the meat was undercooked and here it is', I'll know exactly what's what.

Higgins
07-30-2008, 09:41 PM
Not quite true. Applied science hinges on falsification. Engineering, for instance, won't consider a design satisfactory until a 'test until failure' step has been applied.

With pure science falsification isn't compulsory, but still appears.

For instance, there's a whole suite of reasoning methods in formal logic called refutation procedures [Sorry - couldn't find a good link]. Quine, who's cited in one of the articles you mentioned, did quite a lot of research in this field.

The idea of a refutation procedure is that you try to model your assumptions and the negation of your conclusion. If there is no possible valid model that satisfies both, then your conclusion is either intrinsically valid, or necessitated by your assumptions.

Within reasonable formal logics (certainly, everything that Bertrand Russell ever touched, say), every provable proposition has a refutation proof and every refutation proof entails a non-refutation proof. Of the two, non-refutation proofs are often more illuminating because they frequently show why a proposition is true. On the other hand, refutation proofs are sometimes easier to find.

Out in the social sciences though, things get murkier. Some theories are falsifiable (e.g. by showing that there's not the expected correlation between A and B), but often that just begs the theory to make a weaker claim. Some theories aren't falsifiable - they're just ideologies.

It's this very fact that underpins the veiled contempt some hard scientists have for the social scientists - but also underpins the concerns that social commentators have had with positivism (http://en.wikipedia.org/wiki/Positivist).

For myself, I trust claims that can be falsified far more than claims that can't - simply because the level of surety is way higher. If someone says 'I liked the meal you cooked', that could be a factual statement or a political one - I can't necessarily tell. On the other hand if they say 'the meat was undercooked and here it is', I'll know exactly what's what.

The problem is that the methods that allow you to get to an instance where "falsification" works out pretty much answer all the actual questions about what you are working on before you get to the falsification, and there are plenty of obvious and simple questions where falsification is completely irrelevant.
For example, suppose we were looking for a positive particle with the same mass as an electron. If you get a cloud chamber photograph that shows a particle behaving as a positive electron...turning the finding into a "falsification" (i.e., it is false that there is no positron) is a mere verbal game with no methodological implications at all.

Ruv Draba
07-31-2008, 08:24 AM
The problem is that the methods that allow you to get to an instance where "falsification" works out pretty much answer all the actual questions about what you are working on before you get to the falsification and there are plenty of obvious and simple questions where falsification is completely irrelevant.

Agree, and hence the joke about Popper betting on a horse-race and saying 'Great! My nag failed to lose!' :)

Perhaps more useful (well, it is to me), is the use of constructivity and relevance in arguments.

Constructivity: you don't claim that white crows exist until you can produce a white crow or show how to find one.

Relevance: you ensure that whatever you're talking about relates constructively to what you're trying to prove. So if I'm hunting white crows and I start talking about savannah cats, I have to show that (say) savannah cats eat white birds and cough up the feathers - and that these birds might be crows.

Constructivity gives you falsification for free, and relevance helps keep us from going in silly circles or running off chasing butterflies.

Neither of these prevent us from speculating - it's just that when we speculate we agree to create a fictional world in which to do so. We state (constructively) which things we assume to exist there and which we're going to derive. We agree not to bring the results back into our own world as truths unless our assumptions can be constructively and relevantly validated here.

In such a scheme, science is largely a quarantine mechanism for eliminating unviable fictional worlds, and for deciding what we're allowed to import from the viable ones.

Higgins
07-31-2008, 04:54 PM
Agree, and hence the joke about Popper betting on a horse-race and saying 'Great! My nag failed to lose!' :)

Perhaps more useful (well, it is to me), is the use of constructivity and relevance in arguments.

Constructivity: you don't claim that white crows exist until you can produce a white crow or show how to find one.

Relevance: you ensure that whatever you're talking about relates constructively to what you're trying to prove. So if I'm hunting white crows and I start talking about savannah cats, I have to show that (say) savannah cats eat white birds and cough up the feathers - and that these birds might be crows.

Constructivity gives you falsification for free, and relevance helps keep us from going in silly circles or running off chasing butterflies.

Neither of these prevent us from speculating - it's just that when we speculate we agree to create a fictional world in which to do so. We state (constructively) which things we assume to exist there and which we're going to derive. We agree not to bring the results back into our own world as truths unless our assumptions can be constructively and relevantly validated here.

In such a scheme, science is largely a quarantine mechanism for eliminating unviable fictional worlds, and for deciding what we're allowed to import from the viable ones.

Cool! Fiction does seem to be a useful way of thinking about scientific approaches.
In my youth, of course, I was entranced with Frege, Russell and Carnap...but witnessing the "revolution" in Archaeology (roughly the 1970s) and Carl Hempel and Lewis Binford made me doubt the relevance of certain types of argument. Now, after reading a lot of Peter Galison and similar historians of science and working with medical scientists, I can see that a fictive element is very useful in thinking about actual scientific activity. So scientists often use "hand-waving arguments" (non-rigorous and somewhat rhetorical) and "toy models"...and often if you ask for an explanation of X you essentially get a very cleaned up story of X and you have to ask the right questions to find out what you actually need to know.

Pup
07-31-2008, 07:22 PM
I think where falsification is useful, as a practical thing, is to counter-act the human tendency toward excuses.

Somebody thinks, "I predicted X would happen, and it did. I'm psychic." So they try predicting something like coin flips or zener cards, and sometimes they can do it, and sometimes they can't. They think, it must be because they weren't concentrating hard enough. Or it only works for photos but not zener cards. But only some photos. Or they can do it a few times but then they need to take a break.

Finally one just wants to say, "What would make you believe you're not psychic?"

If the answer is, "Oh, I know I'm psychic. I just need more practice to figure out how it works," then we're no longer in the range of science, we're out into the range of faith.

Higgins
07-31-2008, 08:44 PM
I think where falsification is useful, as a practical thing, is to counter-act the human tendency toward excuses.

Somebody thinks, "I predicted X would happen, and it did. I'm psychic." So they try predicting something like coin flips or zener cards, and sometimes they can do it, and sometimes they can't. They think, it must be because they weren't concentrating hard enough. Or it only works for photos but not zener cards. But only some photos. Or they can do it a few times but then they need to take a break.

Finally one just wants to say, "What would make you believe you're not psychic?"

If the answer is, "Oh, I know I'm psychic. I just need more practice to figure out how it works," then we're no longer in the range of science, we're out into the range of faith.

Well...suppose your psychic said, "I don't know if I am psychic, but I get a statistically significant greater amount of blind guesses right than wrong."

How do you falsify a statistical argument?
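One standard answer, sketched here in Python with invented numbers, is to pre-register a rejection rule before the trials. The claim "I guess better than chance" then has a definite falsification condition: over the agreed number of blind trials, the hit count stays inside what chance alone would produce.

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of scoring
    at least k hits in n trials by pure guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Pre-registered protocol (all figures hypothetical): 100 blind guesses,
# 50/50 chance per guess, reject "pure chance" only if the tail
# probability falls below an agreed threshold.
n = 100
threshold = 0.05
hits = 59  # observed correct guesses
p_value = binom_tail(n, hits)
print(p_value < threshold)  # the pre-agreed rule decides, not post-hoc excuses
```

The decisive part is not the arithmetic but the agreement in advance: fixing n, the threshold, and the scoring rule before the test is what converts a statistical claim into a falsifiable one.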

I think the real problem with using some possibly useful small aspect of what is supposedly a scientific method (but rarely is) to rhetorically undermine other people's claims is that it invents yet another kind of pop-cultural mode of explanation that is ultimately just confusing. There are plenty of other utterly confusing pop concepts (such as the idea of "PC" or anything with the word "quantum" or "uncertainty" or "post-modern" or "deconstruction") that could be used just as well and often with less confusion. Maybe "psychic" is just a post-modern PC concept for Quantum uncertainty? Maybe that formulation could use a little deconstructing.

As in the example I keep using where it is suggested that since the idea of God cannot be falsified, it must somehow remain plausible. I.e., apparently if something is not falsifiable, it cannot be rejected as implausible.

Pup
07-31-2008, 09:59 PM
Well...suppose your psychic said, "I don't know if I am psychic, but I get a statistically significant greater amount of blind guesses right than wrong."

But then we're out of the realm of excuses.

Like the fellow says in the joke, "I can predict whether a coin will land heads or tails. Unfortunately, my power only works half the time." The first step is showing there is a phenomenon there, that operates differently from what we already understand about probability, and therefore needs explaining.

If the psychic truly can get a statistically greater outcome than chance, in a reproducible way, he's on the road to scientifically investigating a proven phenomenon.

But if he gets a positive outcome this Tuesday after eating lasagna, and says his powers must only work in those situations, and then tries next Tuesday under the same conditions, fails, and says, well, it must be because they only work on Tuesday after eating lasagna while wearing my red shirt...

The hypotheses and the tests could be endless. The simplest explanation is that his occasional successes are due to chance, as would be expected. And while yes, continuing to test different variables resembles the way real discoveries are made, there's a fine line between tenacity and delusion.

Society likes to point to people who were treated as crackpots in their day who went on to be proven right, but that doesn't change the fact that there are plenty of crackpots who were, well, crackpots, and in hindsight were never coming close to being right.

As a practical matter, expecting a claim to be falsifiable helps cut through the delusion. But also as a practical matter, it's not useful in situations where tenacity could win out: something will kill cancerous cells if only we can find the right substances applied in the right way.

You can't falsify that statement, but on the other hand, you can streamline the search by falsifying dead ends: If the tumors don't shrink in this experiment with aspirin, we'll stop testing aspirin in this particular way.


As in the example I keep using where it is suggested that since the idea of God cannot be falsified, it must somehow remain plausible. ie. apparently if something is not falsifiable, it cannot be rejected as implausible.

"Impossible" I could agree with; "implausible," no.

Higgins
07-31-2008, 11:28 PM
The hypotheses and the tests could be endless. The simplest explanation is that his occasional successes are due to chance, as would be expected. And while yes, continuing to test different variables resembles the way real discoveries are made, there's a fine line between tenacity and delusion.

Society likes to point to people who were treated as crackpots in their day who went on to be proven right, but that doesn't change the fact that there are plenty of crackpots who were, well, crackpots, and in hindsight were never coming close to being right.

As a practical matter, expecting a claim to be falsifiable helps cut through the delusion. But also as a practical matter, it's not useful in situations where tenacity could win out: something will kill cancerous cells if only we can find the right substances applied in the right way.

You can't falsify that statement, but on the other hand, you can streamline the search by falsifying dead ends: If the tumors don't shrink in this experiment with aspirin, we'll stop testing aspirin in this particular way.



In terms of scientific practice, there are a lot of things that come up when testing claims of efficacy (as in "psychic method A can predict X-type events" or "Drug Y extends survival time with disease B"). One is stating the hypothesis or "endpoint" characteristics before the test. Another is defining the population (and one psychic would not be enough to show anything). Another is reproducibility (a single psychic fails there too) and another is mechanism. It helps a lot to have a story to describe how the "efficacy" actually happens. Now an exact description of a mechanism is not always possible...but even a psychic should try if he wants to gain plausibility. The thing about "mechanisms" is that they are stated in terms of other plausible entities and with psychics there's nothing to go on for that.
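The point about populations and pre-stated endpoints can be shown with a quick simulation (Python; all numbers invented): give many non-psychics the same guessing test and a predictable fraction will cross any fixed "significance" bar by luck alone, which is why one self-selected psychic demonstrates nothing.

```python
import random

random.seed(0)  # reproducible run
n_trials, n_people = 100, 1000
cutoff = 60  # 60+ hits out of 100 has roughly a 3% chance under pure guessing

# Nobody in this population is psychic: every answer is a fair coin flip.
def hits():
    return sum(random.random() < 0.5 for _ in range(n_trials))

lucky = sum(1 for _ in range(n_people) if hits() >= cutoff)
print(lucky)  # typically a few dozen "significant" pure guessers
```

If you only ever hear from the lucky few (because the unlucky ones don't volunteer), the "effect" looks real. Defining the population and the endpoint in advance is what keeps that selection from masquerading as evidence.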

sunandshadow
08-01-2008, 12:09 AM
Fascinating, I had never read about Popper before. I couldn't say whether Falsification has any current validity in science, but I can only imagine that it does have validity in psychology if you are studying the way people invent new beliefs, and from my own study of myth, Falsification seems to have been a method of thought used all the time both by characters in myths and by creators of myths.

Higgins
08-01-2008, 05:38 PM
Fascinating, I had never read about Popper before. I couldn't say whether Falsification has any current validity in science, but I can only imagine that it does have validity in psychology if you are studying the way people invent new beliefs, and from my own study of myth Falsification seems to have been a method of thought used all the time both by characters in myths and by creators of myths.

What is myth Falsification?

sunandshadow
08-02-2008, 12:23 AM
What is myth Falsification?

Oops, a comma got lost there. That should read: "From my own study of myth, Falsification seems..."

Ruv Draba
08-02-2008, 01:49 PM
The hypotheses and the tests could be endless. The simplest explanation is that his occasional successes are due to chance, as would be expected. And while yes, continuing to test different variables resembles the way real discoveries are made, there's a fine line between tenacity and delusion.

Yep. This is why relevance is important. The hypotheses must follow a model that takes into account what we already know. An hypothesis without a model is just a guessing game; the number of possible hypotheses is infinite and history shows that if we don't have a model we nearly always guess wrong.

Ockham's razor relates to this: it advises us to work with the models we know unless none of them works.


If the answer is, "Oh, I know I'm psychic. I just need more practice to figure out how it works," then we're no longer in the range of science, we're out into the range of faith.
This seems to be somewhat more than faith to me. Faith of itself doesn't prohibit the chance of falsification. For instance, I have faith that my dentist is competent, but she could easily demonstrate to me that she's not. My faith in her competence is demonstrated though, when I let her put hardware in my mouth.

Faith needs to be a strong enough belief for us to commit action to it (otherwise it's merely opinion), but it needn't be so strong as to deny falsifiability. That seems to be some stronger form but I don't know what an appropriate name is. Anyway, scientists often exhibit faith, but when they start ignoring opportunities to check falsification then they stop doing science.

Dawnstorm
08-02-2008, 11:38 PM
This seems to be somewhat more than faith to me. Faith of itself doesn't prohibit the chance of falsification. For instance, I have faith that my dentist is competent, but she could easily demonstrate to me that she's not. My faith in her competence is demonstrated though, when I let her put hardware in my mouth.

Faith needs to be a strong enough belief for us to commit action to it (otherwise it's merely opinion), but it needn't be so strong as to deny falsifiability. That seems to be some stronger form but I don't know what an appropriate name is.

Actually, faith, to me, does deny falsifiability. Usually, shaking someone's faith has adverse effects on identity and - possibly - self-esteem. What you're talking about here, I'd call trust (and the stronger form you can't name I'd call faith), which merely has adverse effects on the relation between truster and trustee.

If you have faith in something and it lets you down, you'll make excuses, because admitting defeat is hurting yourself.


Anyway, scientists often exhibit faith, but when they start ignoring opportunities to check falsification then they stop doing science.

It is possible to set up theories in a way that the opportunities to check falsification don't arrive in the first place. This is what Popper was addressing, mostly, with his claim that falsifiability be a necessary precondition for science. He said such theories employ "immunisation tactics"; he levied such charges especially against Freudian and Marxist theories (see another recent thread).

Basically, without falsifiability, there's no peer review - a key concept for science.

***

Here's a bit of context for Popper's falsification statements:

Popper has a hierarchy of theories:

The wider the application of a theory, the better it is.

"John Smith from across the street has blue eyes," is a very limited theory.

"All people called John Smith have blue eyes," is a better theory, as it applies to more people, and is - thus - a better theory.

Compare this to:

Hyp1: All crows are black.

Hyp2: Some crows are not black.

Popper would say that science should strive towards Hyp1 (which, I think, shows his positivist bias quite well).

Now, let's assume both hypotheses are false. Look at how to falsify these hypotheses:

Hyp1: Find a crow that is not black. As soon as you find one, Hyp1 is false.

Hyp2: Look at all crows and show that they are all black. Notice how verification is a lot easier, here?
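The asymmetry can be put in a few lines of Python (my own toy illustration; the crow population is invented):

```python
# A hypothetical population of crows, almost all black.
crows = ["black"] * 10_000 + ["white"]

# Falsifying Hyp1 ("All crows are black") needs only ONE counterexample;
# `any` can stop as soon as it meets a non-black crow.
hyp1_falsified = any(c != "black" for c in crows)

# Falsifying Hyp2 ("Some crows are not black") means showing EVERY crow
# is black; in the worst case `all` must inspect the entire population.
# Here it halts at the white crow, which instead verifies Hyp2.
hyp2_falsified = all(c == "black" for c in crows)

print(hyp1_falsified, hyp2_falsified)  # True False
```

One observation settles the universal claim negatively; only an exhaustive census could settle the existential claim negatively. That search-cost asymmetry is the practical content of Popper's preference for bold universal hypotheses.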

There's a problem with countering Popper by citing type two examples: you're building a straw man. Popper never said this. To Popper "There is water on Mars," would not have been a hypothesis; it would be something to check to verify/falsify a hypothesis.

If your hypothesis is based on an induction that suggests there is water on Mars and you do find there's water on Mars, this does not yet validate your induction. It merely states there's water on Mars, as you have expected. Your reasons for said expectations might be mistaken (e.g. because of hidden variables).

What Popper considers a good hypothesis in the scientific sense is not arbitrary. "This horse will win the race," is not a scientific hypothesis; it's a specific prediction. There's a definite moment of truth that just doesn't exist for a statement such as "All crows are black." It's a continuum. "There's water on Mars," is somewhere in between "All crows are black," and "This horse will win the race." There's an inverse relationship between generality and verifiability. Highly specific claims are easy to verify, but - scientifically - they're not very useful. Highly generalised claims are very hard - if not impossible - to verify, but scientifically they're quite useful. Falsifiability gives you a handle on the problem: it generally produces relevant facts that are easier to check for.

I disagree with Popper mostly on his positivism (from a perspective of social-epistemological relativism, I think, but I haven't come to a clear conclusion on that point), but once I accept that, his demand for "falsification" is - I think - valid. You need to open yourself up to the risk of "getting it wrong". If you don't, you get stuck in a non-scientific, narcissistic loop.

***

As for "not being able to falsify God" = "God is a viable theory": that's not true. You fail to falsify God because there is no condition under which you could say that God doesn't exist (in an empirical context). God would have to be introduced into science as an axiomatic assumption, not as a hypothesis. (IMO, God is too unspecific to even make for a good axiomatic assumption, but this is besides the point, here.)

***

So what about "many worlds theory" vs. "Copenhagen Interpretation" in quantum theory? Does it even make a difference which one you choose? I'm honestly curious; my impression is that the discussions around this involve more faith (about how to make sense of strange science) than science. I notice "many worlds" is more popular in quantum computing - which may be a result of the mindset of the programmers.

I don't really understand quantum theory, so if anyone does: are there empirically specifiable differences? Or, differently put, what empirical circumstances could be used to falsify either hypothesis?

Ruv Draba
08-03-2008, 12:54 AM
If you have faith in something and it lets you down, you'll make excuses, because admitting defeat is hurting yourself.

But that's how I'd feel if my dentist proposed to extract teeth that I thought were healthy! I'd give her a chance to prove that her knowledge is greater than mine and then I'd make the call. She could easily convince me with X-rays and physical evidence... and she'd have to work much harder to convince me based on model-theoretic argument.

How I deal with repeated betrayals and disappointments might depend on my disposition and the circumstance. Some folks make excuses; others still believe in the object of faith, but limit the opportunity to be betrayed/disappointed; others drop the faith or put it elsewhere. All are valid responses to faith betrayed I think.

It is possible to set up theories in a way that the opportunities to check falsification don't arrive in the first place. This is what Popper was addressing, mostly, with his claim that falsifiability be a necessary precondition for science.

Thanks for this, Dawn. I don't agree with him, because I think that speculation and model-theoretic arguments (many of which can't be falsified) are critical for the development of ideas. However (please see Colorado Guy's recent questions on history and postmodernism (http://absolutewrite.com/forums/showthread.php?t=111490)), that doesn't give them the same legitimacy as those propositions which could be falsified, but haven't been.



Basically, without falsifiability, there's no peer review - a key concept for science.

Falsifiability is important, but I don't agree that its lack prevents some accountable peer-review. You can peer-review on model-theoretic arguments for example - you just need a different frame which you agree in advance (which models will we accept for the purpose of review). You can also peer-review speculation, but you use different criteria again (e.g. is this likely to be productive). We do this when we review doctoral thesis proposals for instance. (Or when we critique fiction.)


The wider the application of a theory, the better it is.

[8<------Snip------8<]

Popper would say that science should strive towards Hyp1 (which, I think, shows his positivist bias quite well).

Thanks again for this. I think that science has shown plenty of success in working with both the general and the specific. This is really just the pure/applied science distinction revisited, and we know that they feed each other. In medicine for instance, it's sometimes the individual patient that teaches the field.


As for "not being able to falsify God" = "God is a viable theory":

I've been skirting that one for this discussion (it comes up enough in the spiritual genre subforums) - and I'll continue to do so here. ;)


So what about "many worlds theory" vs. "Copenhagen Interpretation" in quantum theory? Does it even make a difference which one you choose? I'm honestly curious; my impression is that the discussions around this involve more faith (about how to make sense of strange science) than science. I notice "many worlds" is more popular in quantum computing - which may be a result of the mindset of the programmers.

I'm not a quantum theorist, but did some postdoctoral research in modal (http://en.wikipedia.org/wiki/Modal_logic) and temporal (http://en.wikipedia.org/wiki/Temporal_logic) logics. To my mind, they're handy things to have when you don't have a better model to work with. Duplicating your knowledge-base just to deal with different contingencies has to be about as inefficient a way of managing information as we're ever likely to see. :)


I don't really understand quantum theory, so if anyone does: are there empirically specifiable differences? Or, differently put, what empirical circumstances could be used to falsify either hypothesis?

Intuitively I suspect that what you're looking at is a failure in sophistication of the model. (Of course, someone could start taking time-trips and prove me wrong. :))

RAMHALite
08-03-2008, 08:06 AM
Anyone following this thread has a clear idea of the enormous complexity of the falsifiability issue. Now, did that complexity deter the United States Supreme Court from making it a matter of law? Not in the least.

In Daubert v. Merrell Dow Pharmaceuticals, the Court ruled that novel scientific expert evidence has to pass a test of falsifiability in order to be admissible in court. The only dissenting comment on that issue was written by Justice Rehnquist, who confessed that he did not understand the concept well enough to mandate a falsifiability test for admission of novel scientific evidence/testimony. Perhaps he had some intuitive insight beyond that of his colleagues into the difficulties that the concept of falsifiability presents. BTW, the Court's decision in this case left it up to the trial judge to determine whether the falsifiability standard had been met. Now, I have faith in our judiciary, but really...

How's them apples?

--RAMHALite

Dawnstorm
08-03-2008, 01:35 PM
But that's how I'd feel if my dentist proposed to extract teeth that I thought were healthy! I'd give her a chance to prove that her knowledge is greater than mine and then I'd make the call. She could easily convince me with X-rays and physical evidence... and she'd have to work much harder to convince me based on model-theoretic argument.

How I deal with repeated betrayals and disappointments might depend on my disposition and the circumstance. Some folks make excuses; others still believe in the object of faith, but limit the opportunity to be betrayed/disappointed; others drop the faith or put it elsewhere. All are valid responses to faith betrayed I think.

Terminological nitpick, but how do you differentiate between trust and faith, then?


Thanks for this, Dawn. I don't agree with him, because I think that speculation and model-theoretic arguments (many of which can't be falsified) are critical for the development of ideas. However (please see Colorado Guy's recent questions on history and postmodernism (http://absolutewrite.com/forums/showthread.php?t=111490)), that doesn't give them the same legitimacy as those propositions which could be falsified, but haven't been.

Well, I haven't yet figured out whether I agree or not. I do think that interpretative, non-statistical theories in, say, sociology (ethnomethodology, symbolic interactionism, phenomenological sociology) are interesting and a worthwhile pursuit.


Falsifiability is important, but I don't agree that its lack prevents some accountable peer-review. You can peer-review on model-theoretic arguments for example - you just need a different frame which you agree in advance (which models will we accept for the purpose of review). You can also peer-review speculation, but you use different criteria again (e.g. is this likely to be productive). We do this when we review doctoral thesis proposals for instance. (Or when we critique fiction.)

True. I was being hasty again, I suppose.


Thanks again for this. I think that science has shown plenty of success in working with both the general and the specific. This is really just the pure/applied science distinction revisited, and we know that they feed each other. In medicine for instance, it's sometimes the individual patient that teaches the field.

That is true.

The main relevance of the abstract/specific distinction here is that, if you're talking about Popper's falsification concept, countering him with "people are looking for water on Mars, not for signs that there isn't any" is a red herring: if you're treating a specific case, verification is possible on account of the very low statistical population (one planet, for now).


I've been skirting that one for this discussion (it comes up enough in the spiritual genre subforums) - and I'll continue to do so here. ;)

Never worry, I'm not much interested in that either. It's just that you can't lay that sort of argument on Popper, when it's exactly that sort of thing he's meant to exorcise.

It is important to at least try and tell the difference between "we didn't manage to falsify" and "we don't know how to falsify".


I'm not a quantum theorist, but did some postdoctoral research in modal (http://en.wikipedia.org/wiki/Modal_logic) and temporal (http://en.wikipedia.org/wiki/Temporal_logic) logics. To my mind, they're handy things to have when you don't have a better model to work with. Duplicating your knowledge-base just to deal with different contingencies has to be about as inefficient a way of managing information as we're ever likely to see. :)

Thanks for the links. I'll have to think about the differences for some time. It's a new path to go down.


Intuitively I suspect that what you're looking at is a failure in sophistication of the model. (Of course, someone could start taking time-trips and prove me wrong. :))

I honestly don't see a practical difference between "multiple worlds of which I'll only ever see one" and "uncertainties that forever remain outside of my perceptive horizon". My intuition is that they're practically the same model, and the quibbling among theorists points towards psychological differences and/or semantics rather than the world. (But, again, I don't trust my judgement on quantum mechanics at all.)

***

RAMHALite: Thanks for that info. Very interesting.

Ruv Draba
08-04-2008, 08:52 AM
Terminological nitpick, but how do you differentiate between trust and faith, then?

Etymologically there's not much difference. Faith comes from the Latin fides meaning "trust or belief". Trust comes from the Gothic trausti meaning "agreement" or "alliance".

In terms of usage, I see people use 'trust' most commonly in connection with people or occupations, institutions or products from which there is some expected obligation - e.g. trust the police, trust the doctor, a trust fund, a trusted painkiller. Faith is sometimes used in place of 'trust' but also used in terms of us meeting the trust of another (e.g. trading in good faith, a faithful dog), and for abstracts like the economy, or collections like humanity. ('I have faith in the free market', or 'I have faith in humanity')

Ask someone else though and they might swap these senses -- I'm not a linguist and my impressions aren't authoritative.

I'm not aware of anything in either etymology though that requires trust or faith to be unshakeable - although the concept of 'faithful service' (in the sense of loyalty) carries something of that connotation at times.



The main relevance of the distinction between abstract/specific, here is that, if you're talking about Popper's falsification concept, countering him with "people are looking for water on Mars, not for signs that there isn't one," is a red herring, because if you're treating a specific case verification is possible on account of the very low statistical population (1 planet, now).

I agree. Constructive proofs automatically come with falsification properties for free.


It is important to at least try and tell the difference between "we didn't manage to falsify" and "we don't know how to falsify".

Hugely important!

In business investment decisions, falsification is critical. If someone says 'invest $1M and the following benefits will accrue', a business manager will say: who will see the benefits? how will the benefits be recognised? how much benefit should we expect to see and when will this recognition be confirmed?

This is in stark contrast to the benefits arguments I've seen in academic research grant applications. They're more like 'this research can lead to the following benefits and if we don't seek these benefits then here's how the world may end'. The issue here isn't that academics don't know how to falsify their benefits cases - they often simply don't care to, but instead prefer to argue from principle.



I honestly don't see a practical difference between "multiple worlds of which I'll only ever see one" and "uncertainties that forever remain outside of my perceptive horizon". My intuition is that they're practically the same model,

That may depend on how we choose to view chance. If events are instant, irreversible and unambiguous as we often conceive them to be then 'possible worlds' may end up being logically (if not computationally) interchangeable with stochastic descriptions. However if events don't work like that then they could be completely different things.

Either way, from a computational perspective, modelling possible worlds can be very painful. (I know, because I used to have to do it.) It's bad enough keeping track of accumulated information in each possible world, but then you have to work out which possible worlds can access which, and when they can do so. There oughta be a smarter way. :)
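For anyone curious what the bookkeeping Ruv describes looks like in practice, here's a minimal toy sketch (my own illustration, not from any real theorem prover): each possible world carries its own duplicated fact base, and an accessibility relation records which worlds can "see" which others - exactly the overhead he's complaining about.

```python
from dataclasses import dataclass, field

@dataclass
class World:
    name: str
    facts: set = field(default_factory=set)  # each world duplicates its own knowledge base

class KripkeFrame:
    """A toy possible-worlds (Kripke) structure."""
    def __init__(self):
        self.worlds = {}   # name -> World
        self.access = {}   # name -> set of names of accessible worlds

    def add_world(self, name, facts):
        self.worlds[name] = World(name, set(facts))
        self.access.setdefault(name, set())

    def relate(self, src, dst):
        self.access[src].add(dst)

    def possibly(self, world, fact):
        """<>fact: holds if SOME accessible world contains the fact."""
        return any(fact in self.worlds[w].facts for w in self.access[world])

    def necessarily(self, world, fact):
        """[]fact: holds if EVERY accessible world contains the fact."""
        return all(fact in self.worlds[w].facts for w in self.access[world])

frame = KripkeFrame()
frame.add_world("w1", {"rain"})
frame.add_world("w2", {"rain", "wind"})
frame.add_world("w3", {"sun"})
frame.relate("w1", "w2")
frame.relate("w1", "w3")

print(frame.possibly("w1", "wind"))      # True: w2 is accessible and has "wind"
print(frame.necessarily("w1", "rain"))   # False: w3 is accessible but lacks "rain"
```

Even in this tiny example you can see the cost: three worlds means three fact sets plus a relation over them, and every query walks the accessibility structure.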

Ruv Draba
08-04-2008, 08:57 AM
In Daubert v. Merrell-Dow Pharmaceuticals, the Court ruled that novel scientific expert evidence has to pass a test of falsifiability in order to be admissible in court.

I'm not a jurist but that seems smart to me. Law requires reasoning on balance of probability and reasonable doubt - neither of which you can evaluate when there's no falsifiability (unless you're a politician in which case you look to opinion polls ;))

BTW, the Court's decision in this case left it up to the trial judge to determine whether the falsifiability standard had been met. Now, I have faith in our judiciary, but really...

Actually, a decent businessman can do it so I'd hope that any of our pre-senescent judges might have a chance. :)

Dawnstorm
08-04-2008, 11:15 AM
Etymologically there's not much difference. Faith comes from the Latin fides meaning "trust or belief". Trust comes from the Gothic trausti meaning "agreement" or "alliance". In terms of usage, I see people use 'trust' most commonly in connection with people or occupations, institutions or products from which there is some expected obligation - e.g. trust the police, trust the doctor, a trust fund, a trusted painkiller. Faith is sometimes used in place of 'trust' but also used in terms of us meeting the trust of another (e.g. trading in good faith, a faithful dog), and for abstracts like the economy, or collections like humanity. ('I have faith in the free market', or 'I have faith in humanity')

Ask someone else though and they might swap these senses -- I'm not a linguist and my impressions aren't authoritative.

Well, I have a minor linguistic education, but my degree is in sociology. Even if I were a linguist, my impressions wouldn't be any more authoritative than yours. I didn't study the semantics of the term. ;)

The "faithful dog" is interesting, though. Similar usage to "faith" between lovers, I think. Interestingly, I think I have evidence that my cat trusted me. Faithful? Hah!


I'm not aware of anything in either etymology though that requires trust or faith to be unshakeable - although the concept of 'faithful service' (in the sense of loyalty) carries something of that connotation at times.

No, and I never suggested that either is unshakable. I merely proposed that with "faith" more is at stake than with "trust". We're more willing to put trust in something, than faith. "Trust" doesn't exclude a backup plan, should you be wrong. "Faith" does. A shaken faith changes your world in a way trust doesn't. [At least, that's how I differentiate between the words.]

As I said, it's a terminological nitpick. But the concept I have is this:

We're allowed to place trust in assumptions we make, if we don't have the resources to check. We'll choose assumptions we trust more (i.e. educated guessing, not wild guessing). But we'll have to accept that others may want to check up on this; we'll have to encourage them to do it, by telling them we didn't check ourselves.

We're not allowed to have faith in any assumption we make. We must allow others to check up on it. Always.

You may not agree with my terminology here, but to me - if you use faith for mere trust (again my terminology) - you obscure the above difference, even if you don't mean to. I keep using the terms "faith" and "trust", here, not because I'm in love with my usage, but because I can't think of terms that do a better job here. I'm open to suggestions.

Provided I have a point in the first place. Not always clear, with me.


That may depend on how we choose to view chance. If events are instant, irreversible and unambiguous as we often conceive them to be then 'possible worlds' may end up being logically (if not computationally) interchangeable with stochastic descriptions. However if events don't work like that then they could be completely different things.

Hehe. Yes, they could be. But how would we know? "Stochastic descriptions" are based on incomplete information, and now quantum mechanics suggests that there may not actually be a stable ground. I'm wondering, what sort of information would we need that we could tell a difference?

Some people may well be more comfortable with ever-branching universes. Some people may well be more comfortable with a pulse of consolidation from many possible states. I'm not saying we should abandon the difference. (I've heard others, harder to understand, such as "world as information" etc.) What I'm wondering is: is the difference expressible in diverging empirical expectations?

If not, the difference is just something that keeps people going in the face of the unfathomable maths. Until we have the difference in empirical terms (and we may well have it, with me not understanding it), I'm more comfortable with a black-box model, an honest "We don't know."

Ruv Draba
08-05-2008, 03:34 AM
with "faith" more is at stake than with "trust". We're more willing to put trust in something, than faith. "Trust" doesn't exclude a backup plan, should you be wrong. "Faith" does.

Hmm. I'm thinking of this from a risk management perspective now. Risk is calculated as the likelihood of adversity multiplied by its impact. Rational risk managers have backup plans when the potential impact of adversity is medium or high, and where the likelihood is significant. If a rational person lacks a backup plan then it's either because they consider the likelihood to be insignificant, or the impact to be low [or in some extreme cases (e.g. the sun winking out tomorrow) there's no backup plan].

So if faith entails some degree of ill-preparedness for adversity, then that would mean either a reflection of low importance, high certainty of safety or poor understanding of risk. In practice though, people often exercise faith in areas of moderate uncertainty and high importance... so that would make people of high faith just lousy risk managers...
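As an aside, the risk calculus above can be sketched as a toy computation (the numbers and thresholds here are arbitrary, purely for illustration - this isn't a standard method):

```python
def risk_score(likelihood, impact):
    """Expected loss: probability of adversity (0-1) times its impact."""
    return likelihood * impact

def needs_backup_plan(likelihood, impact, min_likelihood=0.05, min_impact=3):
    """A rational manager plans only for risks that are both likely enough
    and costly enough to matter (thresholds here are made up)."""
    return likelihood >= min_likelihood and impact >= min_impact

# Driving on an undivided road: tiny likelihood, huge impact.
print(risk_score(0.0001, 10))          # a very small expected loss
print(needs_backup_plan(0.0001, 10))   # False: likelihood is below threshold
```

On this toy model, the faithful driver who makes no contingency plan is behaving "rationally" only because the likelihood term is judged negligible - which is exactly the question at issue.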

In practice, I don't think it works that way. We sign employment contracts in good faith, but we still do some contingency planning on our careers. I don't think that some backup planning reduces faith, or disqualifies our claims of fidelity. On the other hand, if we take a permanent job as a brief stepping-stone to another job then that's not acting in good faith... I think that the difference is whether the plan is a backup...

Perhaps faith then, has something to do with acting in integrity with our beliefs - semper fi. If I accept a job offer in good faith, then I'll stop looking for a better job. Trust certainly doesn't hold that connotation. But this also captures the 'faith costs' idea you mentioned - with which I agree.


We're allowed to place trust in assumptions we make, if we don't have the resources to check. We'll choose assumptions we trust more (i.e. educated guessing, not wild guessing). But we'll have to accept that others may want to check up on this; we'll have to encourage them to do it, by telling them we didn't.

We're not allowed to have faith in any assumption we make.

This looks like a testability distinction. But in practice, don't we choose just the assumptions in which we have faith? We might have no special reason for trust (we may never have tested the assumption), but we're reasonably optimistic, so we extend faith into the assumption. We commit some effort - some fidelity - to what we believe is likely to be true.

Higgins
08-05-2008, 06:16 PM
hypotheses:


So what about "many worlds theory" vs. "Copenhagen Interpretation" in quantum theory? Does it even make a difference which one you choose? I'm honestly curious; my impression is that the discussions around this involve more faith (about how to make sense of strange science) than science. I notice "many worlds" is more popular in quantum computing - which may be a result of the mindset of the programmers.

I don't really understand quantum theory, so if anyone does: are there empirically specifiable differences? Or, differently put, what empirical circumstances could be used to falsify either hypothesis?

By Copenhagen Theory I assume you mean the old "collapse of the wave packet" plus the classical limit thing/correspondence rule. Like most pop-cultural versions of science (and I think that covers Popper and falsification), that all belongs to the last wave of comprehensive popularization, where the last advanced theories to get into the pop pipeline all date from about 1930.

By, say, 1950, the whole of what is now thought of in the pop world as "quantum mechanics" was effectively replaced in the world of real science (as opposed to pop or Popperian science) by better ways of looking at microphysical or high-energy or cosmological events...

For example, Feynman's "sum over histories" basically puts many worlds and the correspondence principle in the same bag, and the results (quantum electrodynamics) are the most precisely verified mathematical models ever put forward.

See for example:

http://en.wikipedia.org/wiki/Feynman_diagram

http://en.wikipedia.org/wiki/Lamb_shift

Dawnstorm
08-05-2008, 09:26 PM
But in practice, don't we choose just the assumptions in which we have faith?

Why do discussions with me always end up in semantic tangles?

I'm looking at the above sentence, and I'm losing track of what I'm thinking. To me, using faith like you do makes sense, but for my purposes in this thread it's not discerning enough. This leaves me wondering whether I'm imagining differences.

I'm tempted to agree with the above sentence, applying what I think is your idea of "faith", but I'm not sure what I'm actually agreeing to, and this in itself is confusing.

The elements I agree to are:

1. We choose the assumptions that seem plausible to us.

2. We want to be right about our assumptions.

But as scientists we should also choose assumptions that are testable. That's because making assumptions is a flaw, even though it might be methodologically justified.


By Copenhagen Theory I assume you mean the old "collapse of the wave packet" plus the classical limit thing/correspondence rule.

The first one, yes. I'm not sure what "classical limit thing/correspondence" means. That one, too, maybe.

Thanks for the info.

Ruv Draba
08-06-2008, 04:43 AM
Why do discussions with me always end up in semantic tangles?

Because we both like to zoom in on the semantically tangular? :D



I'm looking at the above sentence, and I'm losing track of what I'm thinking.

I encountered the same when reading your comments to the effect that 'Realisation of the falsity of our faith should cost us'. Something feels right about that but I still can't work out what. :)


The elements I agree to are:

1. We choose the assumptions that seem plausible to us.

2. We want to be right about our assumptions.

But as scientists we should also choose assumptions that are testable. That's because making assumptions is a flaw, even though it might be methodologically justified.

We commit something of ourselves when we put forward a theory. In my parlance, we've extended faith. The risk is that we waste a lot of time if the theory proves wrong, and we're embarrassed and humiliated and people lose their trust in our judgement. (Trust used there in the sense that professionals are trusted to do good in their roles.)

Assumptions aren't necessarily a flaw; they're sometimes an asset because they can simplify the problem and lead us to insight. Even falsifying assumptions can result in a simpler final problem.

But yes, given a choice between a falsifiable assumption and an unfalsifiable one, rigorous reasoners should choose the former. If they're forced to the latter for some reason, then they need to label it clearly - or risk breaching trust.

More on trust and faith... I've had cause to think about this from a creative writing viewpoint. In the parlance I use, every large creative work is an act of faith on the part of its author. It represents a substantial commitment of time and energy in the face of great uncertainty. It costs us greatly if we fail.

As authors we often have backup plans that mitigate but don't entirely eliminate the cost of failure. We're aware of the potential cost of failure throughout our effort - there's no real denying or ignoring it. If they remain in the background those backup plans don't undermine our faith. But if we start toying with them too much they become the project itself, and then they do.

Our publishers, however, are in more of a position to extend trust than exercise faith. Emblematic of that trust is whatever advance they may choose to give us, or whatever pre-promotion they may do. They may tell us that they have faith in us, but in reality mostly it just costs them money if we fail - it doesn't stop them from investing in some other author.

As authors though, a conspicuous stinker of a project can damage our commitment to future projects even if our backup plan is successful. That's perhaps emblematic of the cost of failed faith: it takes time to recover from, while with failed trust you can often just move on.

robeiae
08-06-2008, 05:46 AM
Popper? Whack him over the head with a fireplace poker. Problem solved.