
The Party System

Are political parties a good thing?

Do they formalise groups that will naturally emerge, and thus make politicians more accountable? Or do they entrench group identities, when individuals forming groups would be more pragmatic?

Do they mitigate the populist effects of everyone being able to vote for individuals, and thus avoid the style of democracy which exists in Argentina today, or that which existed in Ancient Rome? Or do they entrench the status quo, resist change, and keep the elites in power?

Do they homogenize society into opposing groups, when in reality everyone has so many different identities and issues of importance that the divisions are grossly exaggerated? Or do they bring competition into the political realm, and ensure that there is always (assuming we’re not talking about a one party or dominant party system) a strong opposition group to hold the government to account?

Do they blur issues and encourage a lack of transparency? Or do they ensure that no matter how little voters know about their electoral candidates they can at least know their ideological positions based on the party they stand for?

Do they provide stability by avoiding the obvious difficulties of maintaining a government composed of independents coming from different positions? Or do they disenfranchise the minorities?

The idea of a party system is surprisingly modern. It comes largely from the work of nineteenth-century Europeans such as Ostrogorsky and Bryce. In discussing whether such a system was a good thing, Ostrogorsky said:

“As soon as a party, even if created for the noblest object, perpetuates itself, it tends to degeneration.”

I find this incredibly insightful, for it is how many people think today, and it points to one of the biggest weaknesses of the party system. Bernard Shaw, in ‘The Intelligent Woman’s Guide to Socialism and Capitalism’, suggested that parties always seek primarily to get themselves into power. They do this, he reasoned, because they believe that their party will help the people more than the opposition party would. Noble reasons, then. But Shaw argues that it is precisely for this reason that many people end up voting contrary to what they actually believe. He gives the example of a Conservative Member of Parliament presented with a bill proposed by his own party, which he finds distasteful. The MP ends up voting for it in order to avoid the perception that his government no longer commands a majority, which he rather exaggeratedly reasons might let the opposition into government. Meanwhile, Shaw gives another example of an opposition Labour MP, who supports the bill but votes against it, for precisely the opposite reasoning. And thus the belief that being in power will help them effect positive change leads politicians to take decisions which lead to negative changes.

In practice it is usually these ‘realpolitik’ methods which move the hand of governance. The UK party system didn’t emerge for ideological reasons so much as from the necessities of war. King William III was fighting a war against the French King Louis XIV, and the House of Commons was refusing him supplies, limiting the fighting potential of his forces. Robert Spencer therefore advised that if the King always chose his ministers from the strongest party in the House of Commons, then that party would have to back him through the war. And because it worked, it stayed.

For Bernard Shaw it is the conventions that emerged as a result of this system which so weaken the party system in practice. He says that the party system in place at the local level is in fact effective, for it is committee-led, and doesn’t require that anybody resign after a failed vote or motion.

“The rigidity of the party system, as we have seen, depends on the convention that whenever the Government is defeated on a division in the House, it must ‘appeal to the country’: that is, the Cabinet Ministers must resign their offices, and the King dissolve the Parliament and have a new one elected.”

Of course today a single defeat does not result in the dissolution of government. But the resignations often do occur. And the trend is towards, rather than away from, these practices. For instance, the cross-party consensus at present is to enact a right of recall, so that members of the public can recall the MP they elected should he or she fail to please them during the elected term. This trend enhances democracy. And yet, as Shaw argued, it also encourages fear-led decision-making, and populism rather than difficult decisions on matters about which the public does not have as much information.

Despite the fact that reference to the party system was first found in published print as recently as 1888, I will leave you with the fact that after only a couple of decades this system was already seen as outdated. In fact in 1920 two famous scholars of political science, Sidney and Beatrice Webb, published a socialist constitution for the UK. In this constitution they discarded the notion of maintaining the party system within two Houses of Parliament as completely impracticable. They described its existence then, 96 years ago, as a condition of “creeping paralysis”. They proposed in this constitution that we should have one political Parliament, like the present Cabinet-style system, and a second, industrial Parliament with a municipal system.

If you had the choice, what system would you propose? Is the party system fit for purpose?

“Never believe that a few caring people can’t change the world. For, indeed, that’s all who ever have.” Is she right?

Margaret Mead was a twentieth-century anthropologist whose work greatly influenced those campaigning for equal rights in the sixties and seventies. The above quote is perhaps her most famous, and in recent years this message has appeared all over popular media, and throughout much of twenty-first-century culture.

The 2006 music video for “If Everyone Cared” by Nickelback ends with her quote. It’s used in the TV series The West Wing. And it was essentially the central philosophy of Barack Obama’s presidential campaign: “Yes we can. Change we can believe in. Change will not come if we wait for some other person or some other time. We are the ones we’ve been waiting for. We are the change that we seek.”

Yet when we’re thinking about these quotes, we’re not thinking about the sorts of changes that President Obama has managed to realise (don’t misunderstand me here; I’m a huge Obama fan). We’re thinking about pivotal changes in human history; the sort that historians are likely to refer back to. In this modern world, can such momentous changes still be realised by a “few caring people”?

As an example, Liberal Interventionism has been one of the hottest topics in the media throughout this century. In 1999 the UK Prime Minister Tony Blair, in his Chicago speech, outlined his doctrine of Liberal Interventionism. A year later the UK’s intervention in the Sierra Leone Civil War was seen as a great success. Furthermore, the orders for intervention in Sierra Leone did not come from a huge collective government, but from a renegade Brigadier, David Richards, who saw the chance to intervene and took it without permission. So you could even argue that a few people, or even one person, really did change the world here. Subsequent interventions have also been justified on moral grounds, e.g. Afghanistan, Iraq and Libya, with much less consensus as to their success. But more to the point, there has been a common thread throughout each of these interventions. And that thread of logic echoes the thought of the American pragmatists, of Japanese leaders during WW2, of Napoleon during the Napoleonic Wars, and even right back to the works of Thucydides, the ancient Greek historian who wrote the History of the Peloponnesian War and is also cited as an intellectual forebear of ‘realpolitik’. That thread of thought is, quite simply, the importance, and dominance, of power.

Hobbes’ method of reasoning provides a good example of this realist motivation for intervention. He started his argument, in his famous work ‘Leviathan’, with a kind of Cartesian thinking. Just as Descartes started with the base assumption that thought proves existence, Hobbes said that however little we can be sure of, we can at least be sure that humans are attracted to pleasure and repelled by pain. As we can be sure of this much, said Hobbes, it stands to reason that what we all seek, and will always continue to seek, is the power to act on these attractions and repulsions. It is why he reasoned that in a state of nature life would be “nasty, brutish and short”, since without any kind of civilisation we would all be out to increase our own power.

Why do I use these examples? Because the world’s focus on Ukraine is indicative of all the above. The message from Western interveners is that the Russian intervention and referendum in the Crimea were illegitimate and abused Ukrainian sovereignty, i.e. we want to help people, and we believe that we can change the world and make it more peaceful. In reality, however, such intervention is an example both of power politics and, quite frankly, of playground politics. The Russian intervention bears a lot of similarities to recent Western interventions. It is debatably legal in terms of international law. And although the referendum in Crimea should have been organised at a different time, and under the supervision of the UN, I have not heard Westerners suggest this. Instead, they simply reject any sort of referendum, and in a blatantly childish manner assume that what’s needed is a good old-fashioned, gun-slinging approach of anti-appeasement, i.e. if we show we’re the stronger party, we’ll win; life is a competition and we want to be the biggest bully in the playground.

It’s unlikely much of this is blatant, or even realised. The simple fact that the EU managed to achieve unanimity in deciding to impose sanctions on Russia goes to show that Western decision-makers do believe they are in the right, and are acting morally. But our resources, and our ability to act, are finite. And what about the places where we can really help? How many children need to be decapitated in the Central African Republic before we intervene there? The UN says there is a real risk of genocide. But how many rapes are needed? How many mutilations and acts of torture? How many murders are needed before we even start to think in such a way?

There is no power to gain in the Central African Republic. There is in re-igniting old Cold War tensions. So what would it take for us to change this much? What would it take for countries to actually intervene for moral reasons, as opposed to reasons of power? If Margaret Mead is right, then a few caring people can achieve such a change in international relations, and perhaps, depending on whether you agree with Hobbes, even a change in human nature. Do you think she was right? Are these changes really possible?

Is Freedom an Option?

“O sancta simplicitas! In what strange simplification and falsification man lives!” So begins the second chapter of Nietzsche’s “Beyond Good and Evil”. It says that humanity has always contrived to retain its ignorance, so that we might realise “an almost inconceivable freedom”. Indeed, to Nietzsche even our thoughts were suspect. For those who call themselves ‘free spirits’ in a philosophical sense are in fact often “glib tongued and scribe-fingered slaves of the democratic taste and its ‘modern ideas’ […] they are not free”.

Just a quick explanation here; note the use of “almost” in Nietzsche’s work. Contrary to what a lot of people think, Nietzsche did not oppose, or dismiss, freedom. To him it was something that we strive towards, but are simply very unlikely to really obtain.

Of course you might question what freedom is. Arendt explained the difficulty of this question well when she said:

“In its simplest form, the difficulty [of defining freedom] may be summed up as the contradiction between our consciousness and conscience, telling us that we are free and hence responsible, and our everyday experience in the outer world, in which we orient ourselves according to the principle of causality.”

As a philosophical concept, freedom is most often discussed with reference to determinism, as its opposite, and to moral responsibility, as its partner. For if we are not free at all then our actions must be determined; and if we are free, then we must also be morally responsible for our actions. It is from these baselines of the philosophical discussion of freedom that Arendt notices the major difficulties. We all assume that we are morally responsible for our actions, because we at least have the illusion of free will in deciding what to do. And yet analyse it a little deeper, and our thoughts themselves seem to contradict this common sense; for just as when we make excuses, so too can we always say ‘I acted thus because X, Y and Z had happened in the past. Had they not, I would have been forced to act differently.’

So there are contradictions, and freedom is difficult to define in an objective sense. But even if we have different ideas of what it is, we definitely have at least the illusion of freedom, of free will. And if nothing comes from nothing, how can we create an illusion of that which truly doesn’t exist? The illusion of freedom must either be an image of a conceptual reality, or at the very least a modified version of a very similar concept; and if it is the second, it has since caused us to create the conceptual reality of freedom. So freedom does exist. But is it a real option in our lives? Or is it more as John Dewey said, that our activity results from impulses that emerge spontaneously in response to changes in circumstances?

Let’s think. In most of the world, escaping into the wild, where only nature would restrict our actions, simply isn’t an option. There isn’t enough wild to escape into, particularly in Europe. So, if we can’t hunt and forage for ourselves, we’re forced to live within those societies and economies which already exist. Within these societies we are forced to go to school throughout our childhood. And within these economies we are forced either to enter the labour market in order to earn enough to survive, or to live off the benefits provided by those who feel themselves forced into the labour market. Entry into, and exit out of, the labour market is not free. Society presents us with commitments that restrict our geographical location, creates expectations that further limit our options in the job market, and creates the economic demand which dictates what careers are going to pay. Flexible and part-time roles aren’t always offered, and thus to take a job we are usually forced to spend most of our time in it. The competitive economy forces employers to work their employees hard enough that during their free time, many people are too tired to look after themselves properly by exercising or cooking healthy food. In order to stay alive we need a place to live, and we pay bills that often leave us with less than enough cash to freely pursue what we want during the limited free time that we have. We are even imprisoned by society’s desires. Who reading this can’t remember comparing themselves to other classmates at school, and hoping that they would earn more in later life? Those people we call ‘weird’ are in most cases those who for some reason don’t desire what society encourages us to desire, e.g. money.

We could even go so far as to question what freedom is, for usually we talk of freedom within nature. But is not nature the biggest prison of all? Kant argued that space, time and causality are categories used by the human mind to interpret experience, and in this sense physics and biology themselves limit what we are able to think, for they provide a finite, defined number of tools with which we may think. And if you subscribe to the Newtonian view of nature as being like a machine, or Spinoza’s view of freedom as an illusion, then we could say that even the most minute of human actions is determined.

What do you think? Is everything pre-determined? Is the only freedom that we have, as Spinoza argued, the ability to see the world as it is and say yes to it? Does the probabilistic nature of reality mean that because everything is not determined, we are free to choose between some limited, finite options? Do you think that freedom really is an option, and that it can be enhanced by politicians reforming socio-political and economic structures? Or do you think, as Jean-Paul Sartre did, that “man is condemned to be free [because…] he did not create himself and not only is he free to choose, but he must choose”?

Are Humans Labour?

Have you ever heard of an unemployed tiger? Probably not. An unemployed whale? No?

Why are the concepts of labour and employment so universal, and so evidently a part of human life, when they are completely unheard of among all of our Earthling kin? Either labour is a natural concept exclusive to humanity, or it’s a temporary attribute of the dominant economic system today. I assumed that the second was an obvious truth, and set out thinking about how we might move towards a state in which unemployment could be eliminated. But labour, employment and unemployment are huge parts of our modern socio-economies. And trying to solve the problem of unemployment through conventional economic reasoning would, I knew, lead me either to incremental solutions designed to lower certain types of unemployment, i.e. structural and cyclical, or towards Milton Friedman’s conclusion that unemployment can’t be lowered beyond the ‘Non-Accelerating Inflation Rate of Unemployment’ without price and/or wage controls. Furthermore, the concept of unemployment is a very modern problem.

Prior to England’s Poor Law of 1601, and to some extent prior to the Industrial Revolution, the concept of unemployment simply wasn’t recognized. In 16th-century England the jobless were called “sturdy beggars”, a term that included both those with non-socially-accepted employment and those who didn’t want to work. The Poor Law of 1531 simply assumed that there were enough jobs for everyone, and perhaps understandably so, since the first Vagrancy Law was passed in 1349, when the death toll caused by the Bubonic Plague spreading across England was at its peak. Yet throughout the globe, humanity’s population boom only commenced once the Industrial Revolution was under way, and most strongly in the latter part of the twentieth century. Furthermore, the technological advances utilized since the eighteenth century have meant that production today is less labour-intensive than ever before. All of this leads economists to conclude that unemployment is very much a modern concern, and a problem to be addressed within our present economic system. However, I wanted to explore the concept’s roots a little further; not as a sociological investigation into when the word was first used, but as an investigation into where and when the idea of humans as labour came from.

I was immediately surprised to find reference to the word labour in theories dating as far back as Confucius (about a hundred years before Socrates and Plato). But I thought, surely this is a poor translation, right? So next I looked at the etymology of the word labour. I found that it comes from the Latin ‘laborem’/‘laborare’, which seems to mean a great many things, just like our modern word: work, trouble, toil, exertion, hardship, pain, fatigue, and even labour in a fairly modern sense. Going back further proved difficult, with the best guesses I found saying the word comes either from one which means “tottering under a burden”, or from one of the Ancient Greek words lamvano/lavo (to undertake; Gr: λαμβάνω) or laepsiros (one who runs very fast, agile, speedy; la+aepsiros; Gr: λαιψηρός, λα+αιψηρός).

In other words, as far back as we can go the verb labour, i.e. to labour at a task, seems to exist. However, treating humans as labour in the sense of a noun, i.e. labour meaning worker, does indeed seem to be quite modern. For example, when Confucius used the word, as in the quote below, he meant work, and not worker: “Learning without thought is labor lost; thought without learning is perilous.”

My question, therefore, is this: why did we start seeing humans as labour/workers? Is it natural for humans to treat ourselves as such? And if the modern adoption of concepts such as unemployment, and of labour as a noun, is indicative of the modern socio-economic system, and temporary, then might we one day see ourselves not as labour, but rather as thinkers, players, or even something else entirely?

What is Maturity?

We are bombarded with more information today than ever before. And we are enduring a period of shock in numerous ways; societal and economic changes are affecting everyone. One of the results is diversity, and an increasing number of identities per person (take how people assume different identities online for example). Is another result immaturity? In order to answer this question we must first define maturity.

In terms of the science, the brain reaches 90% of its adult size by the age of six, and a second wave of growth takes place in the years before puberty. During this time grey matter (the areas of the brain responsible for processing information and storing memories) increases in size, particularly in the frontal lobe, as a result of an increase in the number of synaptic connections between nerve cells. Around puberty, however, a process begins in which connections that are not used or reinforced begin to wither (hence the “use-it-or-lose-it” hypothesis). This pruning, which begins around age 11 in girls and 12 in boys, continues into the early or mid-20s, particularly in the prefrontal cortex, an area associated with “higher” functions such as planning, reasoning, judgment, and impulse control. As Dr. Jay Giedd of the National Institute of Mental Health has said, the real cognitive advances come with this paring down of synaptic connections. During adolescence the amount of myelin, a fatty, insulating material that coats the axons of nerve cells in the way insulation coats a wire, also increases, improving the nerve cells’ ability to conduct electrical signals and to function efficiently; this too continues into adulthood, and occurs later in “higher” regions of the brain such as the prefrontal cortex.

So this tells us why maturity is associated with puberty and adolescence. But of course we have all met immature adults. I’m not even going to claim to be particularly mature myself; I find the idea of permanent maturity to be closely associated with the word boring. Such a statement, you might think, could reflect the fact that maturity can only be judged subjectively, or relative to the society within which you live. Yet almost every major philosopher has had something to say about it, and in an objective way.

Cephalos, who discussed these ideas with Socrates, argued that decency and temperament are signs of maturity.

Kant argued that “laziness and cowardice are the reasons why so great a proportion of men, long after nature has released them from alien guidance (natura-liter maiorennes), nonetheless gladly remain in lifelong immaturity, and why it is so easy for others to establish themselves as their guardians. It is so easy to be immature. If I have a book to serve as my understanding, a pastor to serve as my conscience, a physician to determine my diet for me, and so on, I need not exert myself at all. I need not think, if only I can pay: others will readily undertake the irksome work for me.”

J.S. Mill argued that socio-economic traumas led to immaturity, which could perhaps explain the growing popularity of extremism in hard times. If a societal trauma led towards a lessened state of maturity, then to Mill the state would be justified in limiting liberties.

Nietzsche argued that “a person’s maturity consists in having found again the seriousness one had as a child, at play.”

Freud argued that one’s maturities could be seen in their actions and fears, and as such said “a fear of weapons is a sign of retarded sexual and emotional maturity.”

Lord Brain argued that in the pursuit of maturity personal experiences far outweigh “any public account which science can give”.

Indeed these quotes are only the tip of the iceberg in terms of the number of different ideas that people have about the concept. Perhaps the most intriguing thing about them, however, is that everyone judges maturity to be desirable. Yet as hinted at by Mill, immaturity could be argued to be a psychological coping mechanism. Indeed, we often associate Multiple Personality Disorder with the word illness. But why do we do this? It’s essentially because MPD makes it more difficult to fit into modern society, where you need the stability to hold a job and maintain family relations. But our ability to suppress overly traumatised parts of our mind, and create new personalities, is one of the most fascinating and amazing capabilities of the human mind. If a person with MPD had several non-mature personalities, would you think this a bad thing?

There really are a thousand questions that I could ask on this topic. How do you define maturity? Is a mature person the one who best works out how to live with the hand that’s given to them? Is the mature person one whose understanding transcends that which they experience in their own personal lives? Is a mature person one who, as Mill said, can learn from discussions with others? Is a mature person a moral person?

I would love to hear your thoughts to any aspect of this subject. But most of all, how would you define maturity? And do you agree with Mill that your right to freedom should rest on your level of maturity?

Do identities stick?

You might be familiar with the concept of wage stickiness: the theory that the pay of employed workers tends to respond slowly to changes in a company’s, or the broader economy’s, performance. But what about you? Is the person you are flexible to the world around you? Let me clarify briefly what I’m talking about: personality is no more than a set of characteristics and traits, and I am not talking about this. Identity is more fundamental, and is about who you are. My hypothesis is that identities are sticky on three levels:

  1. Personal/Social Identity
    1. The personal identity is that which is self-relevant, and the social identity is that which exists with reference to others (it’s important to note that social identity is different to group identity, for where one is a personal identity influenced by the group, the other belongs to the group). If these identities are sticky, then the identity we build for ourselves may be more powerful than analytical/rational thinking. In 2004 Lisa Bolton and Americus Reed published an article in which they argued that past components of a person’s identity have prolonged impacts on judgement. The authors examined judgements on issues that were linked to identity, such as pollution linked to environmentalist identities, or legalising marijuana linked to liberal and parental identities. They tried to weaken these participants’ judgements using a variety of methods, but when the judgements were linked to identities they had little success. Social influence, i.e. peer pressure, was the most influential method, but even this had its limitations. So effectively their message was that identity is important. Not rocket science, of course; but the implications are significant, because if the effects of identity are prolonged, and perhaps sometimes irrational, then they are also open to manipulation. For example, Bolton and Reed concluded that companies should try to build brand loyalty along identity lines.
  2. Group/Collective Identity
    1. A group identity is one which is held in common with a collective. And there are numerous examples of such identities sticking. For example, many have argued that ethnic conflicts arise when an ethnic group identifies itself as marginalised, oppressed and/or weakened by the dominant group. Yet when such groups find themselves involved in a shifting balance of power, their self-identification of vulnerability usually stays. One example is the growing power of the Hutus in Rwanda vis-a-vis the Tutsis prior to the Tutsi genocide. Another is the growing power of Israel in the world, and the clear evidence of that power from military victories, together with the enduring identity of vulnerability stemming from the Holocaust.
  3. 3rd Person Identity (I made this concept up because I couldn’t find a label for it):
      1. One aspect of it is obvious. Does your boss think you unready for a promotion? It may be that they have identified you as young and inexperienced, or it may be that they have built an identity for you based on mistakes you made early in the job. It often takes a lot of persistent evidence that you have grown beyond this to justify your promotion. What you’re really trying to do is not only provide empirical evidence of your competence, but actually change your identity as it exists in your manager’s head. The implications of this are numerous. Should we try to change jobs and locations as often as possible, to ensure that others’ identities of us are always at the latest, most competent stage? Should we focus more of our energy on ‘anchoring’ conversations, i.e. suggesting or implying early on what you want the other person to believe, so as to ensure that their identification of you is as positive as possible? Or should we recognise that there is a trade-off between others’ identification of us and the enjoyment that can be realised from a sense of enduring community?
      2. The second aspect is less obvious, for it involves a feedback loop. It is a part of human psychology that we act on guesses about what other people are thinking about us. But of course our guesses are all based on past data and perceptions, i.e. what the other group/person has done in the past, as opposed to what they’re thinking at the moment. And thus if these identifications stick, then we could not only build very obscure identifications of others, but also end up letting them influence our actions, and thus the reactions of the person we are identifying, and thus their, and again our, identities.

Do you think identities stick? And if so what do you think the implications are?

What is it to be civilised?

In fact this is an old debate from back in 2010. But I thought it could do with reviving. Feel free to check out the old debate here.

After giving a number of anthropological examples to explain what civilisation is not, Clive Bell (art critic and philosopher of art), writing in 1928, said:

“I think we must take it as settled that neither a sense of the rights of property, nor candour, nor cleanliness, nor belief in God, the future life and eternal justice, nor chivalry, nor chastity, nor patriotism even is amongst the distinguishing characteristics of civilisation, which is, nevertheless, a means to good and a potent one.”

It seemed quite easy for Clive to refute the notion that one or two traits might be unique to civilised societies. And yet he found himself agreeing with a soldier who said this to him:

“I can’t tell you what civilisation is, but I can tell you when a state is said to be civilised. People who understand these things assure me that for hundreds of years Japan has had an exquisite art and a considerable literature, but the newspapers never told us that Japan was highly civilised till she had fought and beaten a first-class European power.”

This is not to say, however, that power is a sign of civilisation. As Clive rightly said, few people would describe the eastern tribes and ‘barbarians’ who overran the Roman Empire, or the Tartars who overthrew the Sung Empire, as civilised. Indeed we often think of fairness and civilisation as intrinsically linked. And yet in the era of Social Darwinism it was quite popular to say “leave it to nature”. The Social Darwinists would say that true civilisation would only come when the weak are left to die, and it is formally recognised that might is right.

So what did Clive conclude about what civilisation is? He reached his conclusion by making assumptions about which societies were civilised and which were not (Periclean Athens and 18th century Paris seemed to be ranked number one and two), and then drawing up a list of similarities and peculiarities. He used this assumption — that both civilised and uncivilised societies exist — to argue that civilisation is not natural, but rather a product of education. And he did seem to think that the idea of what it is to be civilised stays constant throughout time. However, he recognised that those who don’t buy into his assumptions might never reach agreement with him.

Do you agree with him? Can we distinguish what is civilised from what is not? And if so how do we do this? What is it to be civilised?

What Freedoms Should We Be Allowed?

In ‘On Liberty’ J.S. Mill asserted that: “the sole end for which mankind are warranted, individually or collectively in interfering with the liberty of action of any of their number, is self-protection.” He used this statement to argue that power can only rightfully be exercised over another, against that person’s will, in order to prevent harm to others. So in other words preservation/protection is the key to liberty.

On the face of it this seems reasonable, and in fact most of Mill’s essay was spent logically and rationally explaining how we judge the difficult border cases, i.e. given that no priestly class can judge the ‘truth’ absolutely, how do we judge where and when the action of one person might harm another?

However, preservation/protection is a questionable principle upon which to base all interventions, even despite the importance that we collectively place on self-preservation. Support for animal welfare in zoos pales in comparison to support for species protection. The right to die of someone who lives in constant agony is disputed on the grounds of the importance of survival. Talk about the plight of the homeless, the downtrodden, the depressed, and those living in extreme poverty, and you will often earn rolled eyes, a joke and a change of subject. But talk about those same people dying, and all of a sudden it’s a tragedy that the state should never have allowed. Which is the most desirable end? Survival? Or positive well-being? Would you rather live a long life with lots of pain, or a short and happy life?

Mill did recognise this difficulty, for he was himself a self-professed Utilitarian. Indeed later on in the essay he tried to amalgamate the concept of happiness into his ideas. For instance he said that so long as there has been “some length of time and amount of experience, after which a moral or prudential truth may be regarded as established, and it is merely desired to prevent generation after generation from falling over the same precipice which has been fatal to their predecessors”, then individuality can be restricted. In other words he used collective Utilitarian tools to measure what protection of others actually involved. Thus the principle of protection upon which his ideas were based is not as clear as might otherwise be imagined.

But the more contentious problem with Mill’s argument was that it was all based on his personal view of truth. Just like Hegel and Marx, Mill saw the history of the world as steadily progressing from lower to higher stages of social evolution. This meant that for Mill a society had to be ready for representative democracy and liberty. And furthermore, individuals too had to be ready. Mill made the right to liberty dependent on our level of maturity (sanity, and the above principle relating to protection, were part of this argument).

To explain further, Mill argued that liberty only applies to those “in the maturity of their faculties”, i.e. excluding children, the insane, and generally those unable to learn and engage productively in a discussion. Ignoring the obvious implications here, by making such an exclusion Mill was in fact simply cutting the biggest weakness out of his argument. Instead of ignoring this most difficult topic, the question should be raised: why do different rules apply to some? As a parent it is not possible always to explain your reasons when you tell your children to do something. It’s something we can try, but my 20-month-old son, for example, simply doesn’t have a big enough grasp of vocabulary yet to understand all explanations – sometimes I’m not even sure he understands when I say “No wires/sockets. Danger Danger!” Am I only limiting Owen’s liberty when his safety depends upon it? Not really, no. But there are still rules. And just because those rules don’t tally with the liberties of those whom Mill defines as able to engage productively in a discussion, it doesn’t mean such cases should be excluded from the analysis.

Furthermore, what does it mean to be mature? Mill describes those who can be excluded:

“We may leave out of consideration those backward states of society in which the race itself may be considered as in its nonage. […] Despotism is a legitimate mode of government in dealing with barbarians, provided the end be their improvement, and the means justified by actually effecting that end. Liberty, as a principle, has no application to any state of things anterior to the time when mankind have become capable of being improved by free and equal discussion. Until then, there is nothing for them but implicit obedience to an Akbar or a Charlemagne, if they are so fortunate as to find one.”

Mill believed in being as objective as possible in his approach. And yet this argument could not be more subjective. For what would happen were we to contrast Mill’s argument with Herbert Marcuse’s ‘Repressive Tolerance’? Marcuse follows all of Mill’s conditions, but has a different opinion about how mature the people of civilised societies are. In fact he claims that the modern system perpetrates a “systemic moronization of children and adults alike… the mature delinquency of a whole civilisation.” Continuing, Marcuse contended that “a false consciousness has become prevalent in national and popular behaviour. [Thus…] In a world in which the human faculties and needs are arrested and perverted, autonomous thinking leads into a ‘perverted world’ […] the pre-empting of the mind vitiates impartiality and objectivity.” Thus according to Marcuse the very freedoms that Mill advocated are at best fraudulent, and at worst an instrument of indoctrination, manipulation and servitude in and of themselves.

What’s your take? What freedoms do you think we should be allowed? Are there any principles such as those discussed above, which explain how much liberty we should be allowed, and under what circumstances?

What do Management Consultants have to learn from Plato?

There are many Platos to be interpreted. There’s the Plato who advocated an elitist class of Guardians who dedicate their lives to the craft of governance.

There’s the Plato who spoke of universals and Forms, arguing that followers need a kind of philosopher king who is able to ‘truly’ see their objective. He argued that every craft has an ultimate goal, and that for governance this is justice. Yet for Plato ideas such as justice were not subjectively held. To Plato ideals, in the shape of Forms, are more real than what we can sense and perceive. And as such the leader who can truly envisage what the ideal objective (Form) of the craft is, would be the ideal leader.

And then there’s the Plato from his post-Republic works, who implied that it may not be possible to find such a philosopher king. In these later works Plato was a little more practical. For example in ‘Laws’, Plato’s longest work, he argued that if it’s not possible to find a philosopher king who truly understands the form of justice, supported by a ruling class of elite guardians, then the next best thing would be to ensure the rule of law, to reason out fair rules, and ensure they apply universally.

These ideas can readily be applied to the management world. Using the concept of “Arete” (excellence) as a Form to be pursued, the American car industry looks like it has been steadily progressing. Prior to WWII engineers always rose to the top of car companies, and as such the engineering and manufacturing of cars gained primary focus. After the war designers began to make it to the top. Cars became flashier and better marketed, but America’s reputation for manufacturing and engineering worsened in comparison with its rivals. Next came the accountants and financiers, who focussed on balancing the books. And only from the 80s did they begin to employ generalists with an overview of all fields. Plato would argue that this last step was significant because the generalist would be far more likely to see the Form of Arete (excellence), and be able to balance the needs of all areas in pursuit of the ideal goal of the craft.

Furthermore, as data from Jim Collins’s research into what makes successful companies confirmed, great leaders really do make a difference. Collins found that out of 1,435 Fortune 500 companies only 11 managed to garner stock returns at least three times the market’s, and these all had a “level five leader” at the top.

Yet the question remains as to whether Collins’s research showed that pursuing Arete in all areas makes a great leader. These 11 leaders had two things in common: humility and a determined will/resolve. That second component (fierce resolve) didn’t mean specialists, like accountants, focussing on one area only. But neither did it mean generalists. It was about a focus on one or a few central ideas, together with a dogged (you could even argue authoritarian, in certain cases) determination not to let any amount of opposition prevent them getting there.

Is this a Platonic pursuit of Arete? Is it better to have generalists at the top? Can specialists perceive the Form of Arete just as easily? Or is it all relative? Perhaps there’s no such Form as Arete, or no such person who is able to perceive that Form. Perhaps specialists are just as able to maximise greater stock returns as are generalists.

What do you think?

Does Nothing Come From Nothing?

The idea that nothing comes from nothing has held interest for our culture for centuries. It was used in Shakespeare’s ‘King Lear’ and the film The Sound of Music, and is still frequently used today in funny comics and signs – often to express the more widely known derivative: “nothing worth having comes easy”.

It originated in the work of a 5th century BCE Greek philosopher named Parmenides (founder of the Eleatic school of philosophy, which rejected the validity of sense experience). Parmenides was questioning reality, and in his one surviving poem, ‘On Nature’, he explored the difference between our objective and subjective views of what exists. He concluded that because nothing comes from nothing, in a ‘true’ sense everything has always existed, and nothing can pop into or out of existence. Changes, therefore, only run as deep as our perceptions, and thus truly exist in the subjective realm, where falsehoods and misinterpretations are widely spread.

Generally speaking, most people have accepted this premise that nothing comes from nothing. Indeed the law of the conservation of energy says that the total energy of an isolated system cannot change, and this is a well-established law of physics. However, some have come to challenge the view, based on a) philosophical reasoning (which I’m not going to address here, as it would make the post too long), and more recently b) the teachings of quantum mechanics. My position is that these challenges rest solely on a misunderstanding of what the concept of ‘nothing’ really is.

The popular, linguistic, and even often philosophical conceptualisations of ‘nothing’ describe it as something which can exist within the dimensions of space and time. If I have no things in my hand, then I have nothing in my hand. No things = Nothing. The popular scientific conceptualisation of nothing goes a little further. For example Lawrence Krauss describes ‘nothing’ as an unstable quantum vacuum with no particles. Hence like the previous description it can exist within space and time. But you couldn’t say that there is nothing in your hand, because there will always be particles there.

Based on such a definition Krauss wrote the book ‘A Universe from Nothing’, in which he argued that something can come from nothing. The example used is a quantum vacuum: seal one off at time one with no measurable particles, Krauss said, and it would be possible to return to it later and find that there are in fact particles!

This argument was used by Krauss as a counter-argument to those who use the apparent contradiction between the idea that nothing can come from nothing, and that the universe began at a finite time, to argue for the necessity of a divine creator. And despite the fact that critics question the validity of his self-proclaimed “proof”, his argument is sound. For in the quantum world space is, as he puts it, a “boiling, bubbling brew of virtual particles that pop into and out of existence on timescales so short, you never see them”.

However, the possibility of particles springing into existence within the quantum vacuum is in fact perfectly plausible and logically consistent, if the concept of nothing is properly understood. And this is best done with maths, where nothing is differentiated from zero.

If you sit at a table with ten other people, and suddenly realise that you have zero dollars, then you can ask one of the others to lend you some. Does that turn nothing into something? No, for your sum total is still zero. You gain ten dollars from a friend, which can be spent. But you also gain a liability of ten dollars, i.e. you owe your friend ten dollars. So you have +10 and −10, which together equal zero. Albeit put very simplistically, this is precisely what Krauss is actually describing. Within a quantum field you have zero particles, but also the potential for new positive particles, so long as new negative particles ensure that the sum total remains the same, and thus accords with the law of the conservation of energy.
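The borrowing analogy can be sketched as a toy ledger in code (purely illustrative, and making no claim to model the actual physics): every positive entry is paired with an equal negative one, so “something” appears while the conserved total never leaves zero.

```python
# Toy ledger for the +10/-10 analogy: every gain is paired with an
# equal liability, so the conserved total always remains zero.
ledger = []

def borrow(amount):
    """Record a loan: the amount in hand, and the equal amount owed."""
    ledger.append(+amount)  # the ten dollars you can now spend
    ledger.append(-amount)  # the ten dollars you now owe your friend

borrow(10)
print(ledger)       # [10, -10] -- "something" has appeared
print(sum(ledger))  # 0 -- yet the total is unchanged
```

In the same spirit, a particle/antiparticle pair appearing in the vacuum adds two entries whose energies cancel, leaving the total of the “isolated system” untouched.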

In other words zero speaks of potential. It is quantifiable, and from it you can add and subtract. Nothing is described mathematically as an empty set. It is not quantifiable, and has no potential to be added to, or subtracted from. Thus where we have dimensions such as space and time, nothing cannot exist. It is therefore my contention that nothing does not exist within this universe.
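The zero/nothing distinction can be made concrete with a loose programming analogy (Python’s None standing in for “nothing” here is my illustration, not a mathematical claim): zero participates in arithmetic, whereas nothing offers no such potential.

```python
# Zero is quantifiable: it can be added to and subtracted from.
balance = 0
balance += 10   # borrow ten dollars
balance -= 10   # incur the matching liability
assert balance == 0

# "Nothing" (crudely modelled as None) cannot enter arithmetic at all.
try:
    None + 10
except TypeError:
    print("nothing has no potential to be added to")
```

The point of the analogy is that the quantum vacuum behaves like the zero balance, not like None: it is a quantifiable state with the potential for paired additions and subtractions.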

What do you think? Can something come from nothing? Does nothing even exist? What is nothing as opposed to something anyway? After all, if as Descartes once said, we know we exist because we think, then thoughts must be real. Yet by that very argument thoughts can’t exist because they don’t think (I don’t actually think this by the way – it would just be one way of interpreting Descartes). And if thoughts don’t exist then how can we? And if we don’t exist then does anything? Maybe there is only nothing as opposed to only everything. What do you think? Can something come from nothing?
