Showing posts with label Morality. Show all posts

Tuesday, July 17, 2012

Morality from Evolution


- Prisoner's dilemma, tit for tat, and cheating.

Today I want to look at how the (Darwinian) Theory of Evolution can model morality from purely egoistic assumptions.

Let us quickly recap the assumptions of Darwin:
  1. There are more offspring than can survive (there would be exponential growth if everyone survived) (survive means: have offspring, not survive forever).
  2. There are differences between offspring.
  3. These differences are heritable (if your father is taller than average, we expect you to be too; this only needs to be statistically true).

The conclusion then follows:
Those traits we see after many generations are the traits that are well suited to surviving in the environment (survive still means: have offspring). Note also that you cannot develop a complicated trait unless several stages of beneficial traits lead up to it.

In morality we have a principle known as "tit for tat". Tit for tat is: "Do unto another as another has done unto you". If Sam helps you, then the next time you help Sam. If Sam steals from you, then you steal from Sam.

We will discuss how this principle (tit for tat) can follow from the Theory of Evolution and purely egoistic assumptions. First let us define our environment/setup/problem, what is known as the iterated prisoner's dilemma.

The prisoner's dilemma has its name for a reason, but for clarity of exposition I will present it differently. It goes as follows. You and Joe find some food in the jungle (think of our ancestral environment). You and Joe each have two options: Fight or Cooperate. You may choose different actions. The results are as follows:
  1. Both cooperate: Both get 3 (units of) food
  2. Both fight: Both get 1 food
  3. A fights while B cooperates: A gets 4 food, B gets 0 food.
Note how this is supposed to model how food is ruined (the prey escapes, etc.) when someone fights: in case 2 we lose 4 food, in case 3 we lose 2 food. We assume that the choices you and Joe make are independent (you cannot wait and see what he chooses).

How is the prisoner's dilemma solved by two egoistic entities? If you and Joe are egoistic, you will always choose to fight. Does this make any sense when both of you could have more food by cooperating? Yes, because the choices are independent. No matter which choice Joe picks, it is always better for you to fight than to cooperate. In this way outcome 2 is recognized as a stable Nash equilibrium (anyone seen the film "A Beautiful Mind"?).
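The dominance argument can be checked in a few lines. This is a minimal sketch in Python; the payoff numbers come straight from the list above, while the function and variable names are my own illustrative choices:

```python
# Payoffs for the single-shot prisoner's dilemma from the text.
# "C" = cooperate, "F" = fight; values are (my food, Joe's food).
PAYOFF = {
    ("C", "C"): (3, 3),
    ("F", "F"): (1, 1),
    ("F", "C"): (4, 0),
    ("C", "F"): (0, 4),
}

def my_payoff(mine, joes):
    """Food I receive given my choice and Joe's choice."""
    return PAYOFF[(mine, joes)][0]

# No matter what Joe does, fighting pays strictly more than cooperating:
for joes_choice in ("C", "F"):
    assert my_payoff("F", joes_choice) > my_payoff("C", joes_choice)

print("Fight dominates, so (Fight, Fight) is the stable outcome.")
```

Since fight is strictly better against either choice by Joe, two egoists land on outcome 2 even though both would prefer outcome 1.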

Before we go on to iterate this problem, I want to take a small digression. How can we model the solution of a single prisoner's dilemma? Well, we could adopt utilitarianism (everyone's welfare is important) instead of egoism, but that was not our assumption. One common solution is "kin selection". If Joe is your brother, then his survival will bring your genes (statistically about half of them) down the line. So his survival is half as important as yours: him getting 4 food has value 2 to your genes, while both of you getting 1 food each has value 1.5. This makes cooperating always better for your genes. (We just assumed that the probability of survival is linearly dependent on the amount of food.) But what about your friend, who is not of your blood: can you still have kin selection? Before modern transportation it was highly likely that your children would some day have kids with the children of one of your friends. This might (with a slightly different setting, and some more assumptions) make cooperating with your friends a good strategy. From now on we assume that the two 'prisoners' are complete strangers with nothing in common.
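The kin-selection arithmetic can be made explicit. A minimal sketch, assuming (as in the text) that the value of an outcome to your genes is your food plus half your brother's food:

```python
def gene_value(my_food, his_food, r=0.5):
    """Value of an outcome to my genes; r = 0.5 is relatedness to a brother."""
    return my_food + r * his_food

# If Joe fights: cooperating gives (0, 4), value 2.0; fighting gives (1, 1), value 1.5.
assert gene_value(0, 4) > gene_value(1, 1)

# If Joe cooperates: cooperating gives (3, 3), value 4.5; fighting gives (4, 0), value 4.0.
assert gene_value(3, 3) > gene_value(4, 0)

print("Against a brother, cooperating is better for your genes either way.")
```

So with relatedness 0.5, cooperation dominates from the genes' point of view, even though fighting dominates from the individual's.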

What is the iterated prisoner's dilemma? Well, you go hunting with Joe every week for a few years. Every week you make a new choice, independently of Joe, whether to fight or cooperate. But now you are allowed to remember everything that has happened up to this point, so you can choose "Because Joe did X last time, I will do X now", which is the tit for tat strategy. NB: you always start by cooperating.

What is special about this strategy? It is essentially unbeatable (let us disregard small variations like tit for tat with forgiveness or extra randomness). What does unbeatable mean? Take a population of 100 people, each with their own strategy: some using tit for tat, some using completely different strategies. If everyone plays the iterated prisoner's dilemma against everyone else and you are using tit for tat, no one will end up with more food than you. (This also relies on the assumption that the world is not dominated by a large number of tit for tat haters whose strategy is to discover the tit for tat users and kill them.)

Now the Theory of Evolution concludes that after many generations, everyone will have something close to the tit for tat strategy. Yes, this is dependent on even more assumptions. Maybe we should model this on the computer one of these days?
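Taking up that suggestion, here is a minimal sketch of such a model. The particular rival strategies (always fight, always cooperate), the population mix, and the round count are my own illustrative choices, not prescribed above; the payoffs are the ones from the food example:

```python
# Iterated prisoner's dilemma: tit for tat in a small round-robin population.
PAYOFF = {("C", "C"): (3, 3), ("F", "F"): (1, 1),
          ("F", "C"): (4, 0), ("C", "F"): (0, 4)}

def tit_for_tat(my_history, their_history):
    # Start by cooperating, then copy the opponent's last move.
    return their_history[-1] if their_history else "C"

def always_fight(my_history, their_history):
    return "F"

def always_cooperate(my_history, their_history):
    return "C"

def play(strat_a, strat_b, rounds=200):
    """Play two strategies against each other; return their total food."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

strategies = {"tit_for_tat": tit_for_tat,
              "always_fight": always_fight,
              "always_cooperate": always_cooperate}

# A population where tit for tat players can meet each other.
population = ["tit_for_tat"] * 3 + ["always_fight", "always_cooperate"]
scores = [0] * len(population)
for i in range(len(population)):
    for j in range(i + 1, len(population)):
        a, b = play(strategies[population[i]], strategies[population[j]])
        scores[i] += a
        scores[j] += b

winner = population[scores.index(max(scores))]
print(f"Winner: {winner}")  # tit for tat collects the most food here
```

Note the caveat from above: tit for tat never beats its direct opponent head-to-head (against always fight it loses by one round's worth of food), but once a few tit for tat players can cooperate with each other, it comes out on top of the whole tournament.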



Monday, June 25, 2012

Positive Morality


Are we fundamentally cooperative or egoistic?

It would be easy to argue that everything that everyone does is based on pure egoism. I disagree with that viewpoint, but let me present it.

If Eve does something that benefits her and nobody else (or that has a negative impact on others), we call the action egoistic. If Eve is kind to a friend in a seemingly selfless way, it is because she wants something in return: the notion of reciprocity. If she needs help at some later time, this friend will help her out (she assumes). It can actually be shown that reciprocity (tit for tat) is the best strategy in a 'game' supposed to model real life. Let us come up with a situation where no reciprocity is expected.

If you find a drunk man lying in the gutter and help him get a taxi home, would you expect reciprocity? Let us assume no. But your genes do. Through millennia of evolution your genes have found the perfect way for you to behave: group orientation, giving, reciprocity, caring, emotions, family values, etc. But in the end all your behaviours are designed for one purpose only: you (or your genes, at any rate).

Let us return to my viewpoint.

Survival of the fittest. According to Darwin's theory of evolution, the strongest (those most fit to survive the world and produce offspring) survive (or their genes do, anyway). This is easy to believe (note how a lot of people who are pro-Darwinistic consider the theory of gravity to be just a theory, but the Theory of Evolution to be given as an Axiom of The World).

Few people would deny that kindness and selflessness are important concepts that we use to model our world. So in everyday life these models are true in some sense. Some may argue that the more fundamental model of Evolution is more true (since it is in a sense more fine-grained; this is the typical Reductionist view), but I'd say we have another way to choose what to consider 'the most true'.

What you believe changes who you are. If you disagree, for example if you do not believe in free will and purpose, reading this has no value anyway (you're just doing it as a consequence of random or deterministic happenstance). The question is: Do you want to live in an egoistic world, where you can always just interpret everything as egoism? Or do you want to live in a kind world, and learn time and again that the world is not so kind? Or do you want to live somewhere in between, thinking the best of people, that is, the best that your experience permits?

A question for anyone: If I give you a present (and conscious reciprocity is not in the picture), is there any experiment that would give a different result depending on whether it was fundamentally an egoistic or a selfless act?

Sometimes the purpose can be one thing, and the important sub-goal something else. This is how I view free will vs. Darwinism. Free will is the goal, but survivability is an important by-product.

The reason I blogged about this today was essentially

To sum up:
Return favours: Conscious egoism,
Empathy: Unconscious egoism, or something else?
Is giving away something with absolutely no future gain a "bad" Darwinian side effect of the powers of empathy, or is it a sign of our "true" nature?
Is this really a question for science or a question of some other kind?