Are we fundamentally cooperative or egoistic?
It would be easy to argue that
everything that everyone does is based on pure egoism. I disagree
with that viewpoint, but let me present it.
If Eve does something that benefits her and nobody else (or that has a negative impact on others), we call the action egoistic. If Eve is kind to a friend in a seemingly selfless way, it is because she wants something in return: the notion of reciprocity. If she needs help at some later time, this friend will (she assumes) help her out. Reciprocity (tit for tat) has in fact proven to be a winning strategy in tournaments of a 'game' supposed to model real life, the iterated prisoner's dilemma. Let us come up with a situation where no reciprocity is expected.
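To make the tit-for-tat idea concrete, here is a minimal Python sketch of an iterated prisoner's dilemma (the 'game' referred to above). The payoff values, round count, and strategy functions are illustrative assumptions on my part, not taken from any particular tournament.

PAYOFF = {  # (my move, their move) -> my score; "C" = cooperate, "D" = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    # Cooperate on the first round, then copy the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    # Play repeated rounds and return the total score of each strategy.
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each strategy only sees the opponent's past moves
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (600, 600)
print(play(tit_for_tat, always_defect))  # tit for tat loses only the first round, then retaliates

Against an unconditional defector, tit for tat gives up one round and then defects for the rest; against a fellow cooperator it never defects at all, which is roughly why reciprocity does so well in such tournaments.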
If you find a drunk man lying in the gutter and help him get a taxi home, would you expect reciprocity? Let us assume not. But your genes do. Through millennia of evolution your genes have found the perfect way for you to behave: group orientation, giving, reciprocity, caring, emotions, family values, and so on. In the end, though, all your behaviours are designed for one purpose only: you (or your genes, at any rate).
Let us return to my viewpoint.
Survival of the fittest. According to Darwin's theory of evolution, the strongest (those most fit to survive the world and produce offspring) survive, or their genes do anyway. This is easy to believe (note how many people who are pro-Darwinist consider the theory of gravity to be just a theory, but take the theory of evolution as an axiom of the world).
Few people would argue against the claim that kindness and selflessness are important concepts that we use to model our world. So in everyday life these models are true in some sense. Some may argue that the more fundamental model of evolution is more true (since it is in a sense more fine-grained; this is the typical reductionist view), but I'd say we have another way to choose what to consider 'the most true'.
What you believe changes who you are.
If you disagree, for example if you do not believe in free will and
purpose, reading this has no value anyway (you're just doing it as a
consequence of random or deterministic happenstance). The question
is: Do you want to live in an egoistic world, where you can always
just interpret everything as egoism? Or do you want to live in a kind
world, and learn time and again that the world is not so kind? Or do
you want to live somewhere in between, thinking the best of people,
that is, the best that your experience permits?
Question anyone: if I give you a present (and conscious reciprocity is not in the picture), is there any experiment that would give a different result depending on whether it was fundamentally an egoistic or a selfless act?
Sometimes the purpose can be one thing, and the important subgoal something else. This is how I view free will vs. Darwinism: free will is the goal, but survivability is an important by-product.
The reason I blogged about this today was essentially the set of questions below. To sum up:
Returning favours: conscious egoism.
Empathy: unconscious egoism, or something else?
Is giving away something with
absolutely no future gain a "bad" Darwinian side effect of
the powers of empathy, or is it a sign of our "true"
nature?
Is this really a question for science
or a question of some other kind?
You are entirely ignoring kin selection here; are you trying to keep things uncomplicated, or do you not find it relevant? Myself, I find it critical when dealing with these questions, particularly when it illuminates the ubiquitous conflicts of interest playing second fiddle in relationships assumed to be selfless (husband-wife, mother-child).
At some point in the post, the flavor changes from 'eventual confusion is just a matter of semantics and definitions' to something sort of metaphysical. In particular, I either strongly disagree with or completely misunderstand you in the "Question anyone: ...", "Sometimes the purpose...", "Is giving away something..." and "Is this really a question for science..." parts.
PS:
I think 'few people would argue' means the exact opposite of what you intended. I think 'argue' implies 'present arguments in favor of, defend (the) position' in English, but maybe I'm mistaken.
PPS:
Less Wrong is an endless source of misunderstood non-paradoxes. I explain this one away in mostly the exact same way as usual: http://lesswrong.com/lw/my/the_allais_paradox/
For the first point, I agree that kin selection is extremely important, but I was trying to keep things uncomplicated.
For the second part, I think this is a mix of two things. The latter half of the blog post is a stream-of-consciousness kind of writing, in addition to being on the edge of what counts as epistemologically good questions. Perhaps it's also a bit of 'what I want to believe' rather than 'stuff I can actually give strong arguments for'.
PS: Agreed, fixed.