# How to use inductive / Bayesian reasoning (Simplified)

• ## Intro to Bayesian Epistemology / Inference

THREE KINDS OF REASONING
Abductive reasoning is inference to the best explanation. It takes a pool of competing explanations for some data and, in order to adjudicate between them, asks about their relative simplicity, plausibility, explanatory power, explanatory scope, and so forth.

Deductive reasoning is the process of inferring a conclusion from premises. Here is an example of a deductive argument:

1. If something begins to exist, it has a cause.
2. The Universe began to exist.
3. Therefore, the Universe has a cause.

The form of this argument, using variables to stand in for the propositions, looks like this:

1. If p then q
2. p
3. therefore q

And we can use truth tables and truth trees to prove this argument form is formally “valid,” meaning that if you accept the premises, you are logically required to accept the conclusion. For more complex arguments, we can use “rules of inference” to prove it even more efficiently. Learning and using these rules to form valid proofs is what students learn in classes on logic.

Inductive reasoning, by contrast, only gets us probabilities. It is “confirmation” or evidence-focused reasoning.

Bayesian Inference is the standard formalized way to use inductive reasoning.

• Stanford Encyclopedia of Philosophy: “In the past decade, Bayesian confirmation theory has firmly established itself as the dominant view on confirmation.”2

Instead of just using rules of inference as in deductive logic, it asks you to assign specific probabilities to claims representing your confidence level, i.e. your “credence values.” For example, you may think the proposition < God exists > has a .99 likelihood of being true, and if so Bayesianism says you are required to think < God does not exist > has a .01 likelihood of being true. That's because the probability of a claim and its negation have to add up to 1. In ways like this, Bayesianism takes your credences and leverages probability theory to make sure “they dance in accordance with the probability calculus,” especially as you acquire new evidence and update your credences in response to that evidence. If you violate the calculus, then you will fall prey to so-called “Dutch Book” betting arguments, which are pragmatic self-defeat tests that demonstrate your irrationality.1
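The mechanics of a single update can be sketched in a few lines. This is a minimal illustration (the probabilities below are made up for the example), showing both rules at work: a credence and the credence in its negation summing to 1, and Bayes' theorem moving the credence in response to an observation.

```python
def bayes_update(prior_h, p_obs_given_h, p_obs_given_not_h):
    """Return P(H | O), the updated credence in H after observing O."""
    prior_not_h = 1.0 - prior_h  # credences in H and not-H must sum to 1
    # Total probability of the observation, across both hypotheses:
    p_obs = p_obs_given_h * prior_h + p_obs_given_not_h * prior_not_h
    # Bayes' theorem:
    return (p_obs_given_h * prior_h) / p_obs

# Start 50/50 on some hypothesis; observe something three times more
# expected if the hypothesis is true (0.6) than if it is false (0.2).
print(round(bayes_update(0.5, 0.6, 0.2), 2))  # 0.75
```

Note that if the observation is equally expected either way, the credence does not move at all; only observations that are *more* expected on one hypothesis than the other count as evidence.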

Thankfully, the core lessons from Bayesianism are easy to encapsulate and so this introduction doesn't need to be long. These lessons end up being largely identical to the theoretical virtues in abductive reasoning, listed above.

TWO ILLUSTRATIONS OF BAYESIAN INFERENCE

For a very simple controlled example, suppose you have two jars with 100 balls in each:

• Jar #1 has 99 white balls and 1 black ball.
• Jar #2 has 99 black balls and 1 white ball.

With the jars sealed and looking identical on the surface, you may start off agnostic about which jar is which. But if you are allowed to blindly draw a ball from one of them and the ball you pull out happens to be black, that may not prove you drew from Jar #2, but it certainly fits better on the hypothesis that you did. (It was the one with 99 black balls, after all.) That means your drawing a black ball is evidence that you drew from Jar #2. But now suppose Jar #2 actually had 50 white balls and 50 black balls. Is drawing a black ball still evidence that you drew from Jar #2? Of course it is, because 50 black balls is still far better than Jar #1's single black ball. Even if Jar #2 only had 2 black balls, you are still more likely to have drawn a black ball from Jar #2 than from Jar #1. In fact, you would be twice as likely to have drawn from Jar #2. This would still be slight evidence in favor of the hypothesis that you just drew from Jar #2 rather than Jar #1.
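The jar scenarios above can be put in numbers. Assuming you start agnostic (a 0.5 prior for each jar), the function below computes your updated credence that a black draw came from Jar #2, for each variant of the example:

```python
def p_jar2_given_black(black_in_jar1, black_in_jar2, total=100):
    """Posterior probability the black ball came from Jar #2,
    assuming a 50/50 prior over the two jars."""
    p_black_j1 = black_in_jar1 / total
    p_black_j2 = black_in_jar2 / total
    prior = 0.5
    return (p_black_j2 * prior) / (p_black_j2 * prior + p_black_j1 * prior)

print(round(p_jar2_given_black(1, 99), 2))   # 0.99  (original: 99 black balls)
print(round(p_jar2_given_black(1, 50), 3))   # 0.98  (variant: 50 black balls)
print(round(p_jar2_given_black(1, 2), 3))    # 0.667 (variant: only 2 black balls)
```

Even in the weakest variant, where Jar #2 has just 2 black balls, the credence still moves from 0.5 to about 0.667, matching the claim that drawing black there is "slight evidence" for Jar #2.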

So let's generalize this to get our definition of evidence:

• Evidence for hypothesis X = An observation that rationally increases the likelihood of hypothesis X being true, even if it only increases it by a little bit.

How can an observation do this? We just saw. An observation does this when it is more rationally expected (epistemically probable) given the hypothesis is true than given it is false. It is more rationally expected that you would draw a black ball on the assumption that you drew from Jar #2, so that’s why drawing a black ball constitutes evidence for the “I drew from Jar #2” hypothesis.

A messier, real-world illustration may help solidify understanding:

Suppose that in a murder trial, the murder weapon was brought forward and proven to have fingerprints on it that ostensibly match the suspect's. His name is John. Obviously, this is evidence (not proof) of John's guilt. But why? Is there a formal way to explain this? Yes! Here is why: because this particular observation O is more expected on the hypothesis H1 that < John is guilty > than on the hypothesis H2 that < John is innocent >. And we can also discern from the mathematics of Bayes that the more expected the observation O is on H1 than on H2, the stronger the evidence it provides for H1 over H2. Would this automatically mean that H1 is true? Of course not; evidence is not proof. Even strong evidence can be outweighed or contextualized away, but your starting confidence in John's guilt should presumably shift when you find out that his fingerprints are on the murder weapon.
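The "how much more expected" ratio has a standard name: the likelihood ratio, or Bayes factor, P(O | H1) / P(O | H2). Here is a sketch of how it drives the update, using odds form. The specific numbers are illustrative assumptions, not real forensic statistics:

```python
def update_with_bayes_factor(prior, bayes_factor):
    """Update P(H1) given evidence whose likelihood ratio
    P(O | H1) / P(O | H2) equals bayes_factor."""
    prior_odds = prior / (1 - prior)          # convert probability to odds
    posterior_odds = prior_odds * bayes_factor  # odds form of Bayes' theorem
    return posterior_odds / (1 + posterior_odds)  # convert back to probability

# Suppose (hypothetically) matching fingerprints are 100x more expected
# if John is guilty than if he is innocent, and the jury starts with a
# modest 10% credence in guilt.
print(round(update_with_bayes_factor(0.10, 100), 3))  # 0.917
```

This also shows why evidence is not proof: even a factor-of-100 observation leaves the posterior short of certainty, and a later observation with a Bayes factor below 1 would pull the credence back down.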

PRIOR PROBABILITY

No discussion of Bayes would be complete without a discussion of "prior probability." This is the starting credence that you update as evidence comes in. A quick way to think of it is this: if you currently think the likelihood of God existing is .99999, then it may take a lot of evidence to move you to agnosticism at .5, let alone below .5 (i.e. atheism). Likewise, if you start off thinking the likelihood of God's existence is .000001, then it may take a lot of evidence to move you to theism. You can find rational people with both kinds of prior probability, but that doesn't mean everyone is rational. Maybe they got to their prior probability after having updated in irrational ways, for example.
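It is easy to quantify how "sticky" an extreme prior is. The sketch below (with made-up numbers) starts at a .99999 credence and repeatedly applies counter-evidence that is 10 times more expected if the hypothesis is false, counting how many such observations it takes just to reach agnosticism:

```python
def update(prior, likelihood_ratio):
    """Odds-form Bayesian update; likelihood_ratio = P(O | H) / P(O | not-H)."""
    odds = prior / (1 - prior)
    odds *= likelihood_ratio
    return odds / (1 + odds)

p = 0.99999
steps = 0
while p > 0.5:
    p = update(p, 0.1)  # each observation favors not-H by a factor of 10
    steps += 1
print(steps)  # 5
```

Five independent pieces of 10-to-1 counter-evidence are needed before this believer even becomes agnostic, which is the sense in which a lot of evidence may be required to move someone with a confident prior.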

Where does your prior probability come from? Usually it comes from the last time you considered the issue (e.g. of God's existence) and updated your credence in response to new relevant data. For some people, the prior probability of God is very high based on their evidence (or their being irrational), and for others it is quite low. Some people are better than others at making sure their credences dance in accordance with the probability calculus.3

CONCLUSION

BeliefMap is an evidence and argument database, and it frames its arguments as “Bayesian” face-offs between green's view and red's view on some controversial question framed in the title. The arguments introduce new data, and we ultimately want to know whether the data pointed to is more expected on one view than on the other, and by how much. The assumption is that we should update our confidence level in green's or red's position on the claim based on how well their hypothesis fares against new data. Again, there are mathematically formal ways of doing all this, drawing on basic probability theory, but you don't need to get into the mathematics in order to get the gist of how Bayesian reasoning works.

The take-away here is that certain observations should cause your confidence in propositions to shift up or down, and it should shift in accordance with how much more likely the observation O is on H1 vs H2. If you keep this in mind, you can quickly cut past many of the bad methods and criteria people invent for evaluating questions like “Does God exist?” or “Did Jesus rise from the dead?”4