King Alfred Press


A Few Reflections on Adolf Hitler

I have just finished reading Hitler, Ian Kershaw’s brilliant two-volume biography of Adolf Hitler. Over the course of 1,432 pages, Kershaw uncovers why Hitler, a man not all that dissimilar from other tyrants in history, has become synonymous with evil.

Kershaw also reveals the gap between Hitler’s public image and his private personality: the difference between the rabble-rouser capable of captivating the masses by exploiting their fears, prejudices, and desires, and the lacklustre reality. Kershaw shows how Hitler transformed Nazism into a national religion – complete with its own songs, fables, and rituals – and how he transformed himself into its demagogue.

Hitler projected a persona that embodied all the ideals of German nationalism. He presented himself as the archetype of German pride, efficiency, and self-discipline. In Hitler, the German people found the living embodiment of their fears and aspirations.

Furthermore, Hitler presented himself as the saviour of a nation on the brink of ruin. This was not entirely his doing: by the early 1930s, things had grown so dire in Germany that people were willing to throw in their lot with anyone promising to restore law, order, and honour. Hitler promised all that and more. Utilising what we would today recognise as identity politics, Hitler promised to restore national pride and wreak vengeance on Germany’s enemies. He divided the world into victims (the German people), perpetrators (international Jewry and Marxists), and saviours (the Nazis).

It would be far too simplistic, however, to conclude that Hitler brainwashed the German people. Rather, Hitler and the German people became entwined in the same unconscious conspiracy. Hitler may have been the one to espouse the kind of murderous ideas that led to Auschwitz and Stalingrad, but it was the German people who gave those ideas their full, unconscious support. As time marched on, Hitler’s sycophancy was mistaken for political genius.

By telling the German people what they wanted to hear, Hitler was able to present himself as a national saviour. The reality was far different. He was a man with virtually no personality and no connection whatsoever with ordinary people. He never held an ordinary job, never had children, and only married his mistress, Eva Braun, the day before his suicide. Albert Speer, the Nazi architect and one of the few men Hitler counted as a friend, described him as a duplicitous, insecure individual who surrounded himself with shallow and incompetent people, laughed at the misfortunes of others, and retreated into “fantastic mis-readings” of reality.

Furthermore, whilst Hitler presented himself as a hardworking political demagogue of unmatched genius, he was, in reality, a lazy, egotistical man whose rise to power rested on the cynical manipulation of national institutions. Far from being the tireless worker he claimed to be, Hitler proved unable to deal with several major crises during the war because he was still asleep. He saw his role as the creator of Nazi ideology; the actual running of Germany he left to his functionaries.

When Hitler toured Paris following the fall of France in 1940, he made a special visit to the tomb of Napoleon Bonaparte. Saluting the Emperor’s marble tomb, Hitler commented, in typically egotistical style, that, like Napoleon’s, his own tomb would bear only the name “Adolf” because “the German people would know who it was.”

He was not entirely wrong. The name Adolf Hitler is remembered today. However, far from being remembered as the founder of a thousand-year Reich, he is remembered as a genocidal fruitcake whose legacy is as inglorious as his life. Hitler and Napoleon may have been similar in many ways (both were foreigners to the countries they would end up ruling, both reigned for a short period of time, and both significantly altered the course of history), but where Napoleon left a legacy that is still very much with us today, Hitler failed to leave anything of lasting significance.

But perhaps that is precisely what Hitler wanted. Carl Jung had a dictum: if you want to understand someone’s motivations for doing something, look at the outcome and infer the motivation. In his brief twelve years in power, Hitler led the German people into a war that cost fifty million lives, presided over a Holocaust that murdered eleven million people, and oversaw the destruction of the old Europe. If Adolf Hitler could be summarised in a single quote, a line from the ancient Hindu text the Bhagavad Gita would suffice: “Now I am become death, the destroyer of worlds.”

JURIES ARE WORTH KEEPING


The jury system is a cornerstone of justice and liberty. It is also controversial. On the one hand, there are those who see the jury as an integral part of a free and impartial justice system. On the other, there are those who doubt the jury’s ability to deliver fair and honest verdicts.

Let’s start with the obvious fact that juries are far from perfect. They are imperfect because the people who make them up are imperfect. Ignorance is one major problem. Opponents of the jury system argue, with some justification, that it is too dangerous to place the fate of another human being in the hands of people incapable of understanding the complexities of the cases they are judging. Often those tasked with deciding the outcome of a case lack the technical or legal knowledge to adequately interpret the evidence and testimony presented to them. It has been suggested that, in these cases, individual jurors will resort to preconceived beliefs or allow themselves to be influenced by jurors with more knowledge – whether real or perceived – than themselves.

Ignorance, however, is an easily solved problem. Why not select jury members based on their familiarity with the subject matter under discussion? Someone who works in the finance industry – a banker, financial advisor, or accountant, for instance – would be better equipped to judge financial crimes than a layperson.

Then there’s the question of who can sit on a jury. In the United Kingdom, an individual must be aged between eighteen and seventy, have been a resident of the UK for at least five years since the age of thirteen, and be mentally stable in order to serve on a jury. It would be more than reasonable to suggest that the qualifications for jury duty ought to be more stringent: the minimum age could be raised from eighteen to perhaps twenty-five (if not older), and jurors under the age of forty could be required to hold certain intellectual qualifications. This would ensure that those tasked with determining guilt or innocence have the wisdom and/or intelligence to comprehend the grave nature of the responsibility with which they have been burdened.

Those who criticise juries also argue that they are prone to bias and prejudice. In one shocking case, Kasim Davey was jailed for contempt when he boasted: “I wasn’t expecting to be in a jury deciding a paedophile’s fate. I’ve always wanted to fuck up a paedophile and now I’m within the law.” (Seemingly it never occurred to Mr. Davey that the man he was judging may have been innocent). Likewise, it is well known that many African American defendants were condemned by all-white juries in the Jim Crow South.

However, much of this is a red herring. Professor Cheryl Thomas, the director of the Jury Program at University College London, spent ten years analysing every jury verdict in England and Wales, taking into account the race and gender of both defendants and jurors. Professor Thomas concluded that:

“There’s no evidence of systematic bias, for instance, against members of ethnic minorities, or that men are treated differently than women, that if you live in a particular part of the country or you have a certain background that you’re more likely to be convicted than others.”

Besides, those who criticise the jury system forget that juries reflect the values and principles of their society. If juries repeatedly deliver unjust verdicts, it is because there is a sickness in that society. The fact that all-white juries tended to convict African American defendants merely because they were black is a reflection of the virulently racist nature of that society, not of the jury system itself. Today, the legal system is careful to disqualify jurors who may harbour prejudices that would inhibit their ability to judge the facts impartially. Courts are quick to disqualify jurors who know the defendant or alleged victim, those with emotional links to the case (e.g. a rape victim sitting on the jury of a rape trial), and so forth.

Lord Devlin, the second-youngest man to be appointed to the English High Court in the 20th century, once described the jury system as “the lamp that shows that freedom lives.” The principle behind juries is that the individual ought to be judged by his peers according to community standards, not by the political elite. Without juries, our legal system would be dominated by judges and lawyers. What lies at the centre of the debate over juries is the question of whether the whole of society, or just the elite, should be involved in the dispensation of justice.

The Presumption of Innocence is Worth Protecting No Matter What the Cost


Jemma Beale was sentenced to ten years imprisonment after it was found she had made repeated false rape allegations. 

In February 2013, Vassar College student Xialou “Peter” Yu was accused of sexual assault by fellow student Mary Claire Walker. The accusation stemmed from an incident twelve months earlier in which Walker had accompanied Yu back to his dorm room after a party and initiated consensual sex. Walker herself broke off the coitus early, having decided that it was too soon after ending her relationship with her boyfriend to embark on a sexual relationship with another man. She even expressed remorse for having “led Yu on” and insisted that he had done nothing wrong.

Nevertheless, at some point, Walker decided that she had been sexually assaulted, and Yu was made to stand before a college tribunal. At this tribunal, Yu was refused legal representation, his attempts at cross-examining his accuser were repeatedly stymied, and potential eyewitness testimony from both Yu’s and Walker’s roommates was suppressed by the campus gender equality compliance officer, supposedly because it had “nothing useful to offer.” In what can only be described as a gross miscarriage of justice, Yu was found guilty and summarily expelled.

Unfortunately, the kind of show trial that condemned Yu is not uncommon in American colleges and universities (and, like many social diseases, it is starting to infect Australian campuses as well). Such trials are the result of years of unchallenged feminist influence on higher education. These institutions have swallowed, hook, line, and sinker, the feminist lie that every single woman who claims to have been sexually assaulted must be telling the truth.

The problem begins with those who make public policy. The US Department of Education has been seduced by the ludicrous idea that modern, Western societies are a “rape culture.” They have bought into the lie that one in five women are sexually assaulted on college campuses, despite the fact that this statistic (which conveniently comes up with exactly the same ratio no matter where it’s used) comes from an easily disproven web-based survey.

This survey, conducted at two universities in 2006, took only fifteen minutes to complete and drew responses from just 5,466 undergraduate women aged between eighteen and twenty-five. Furthermore, it was poorly formulated: researchers asked women about their experiences and then decided for themselves how many had been victims of sexual misconduct.

Regardless, the survey’s lack of credibility did not stop the US Department of Education’s Office for Civil Rights from laying out guidelines for handling reports of sexual misconduct. Among these recommendations was that reports of sexual misconduct be evaluated on the “preponderance of evidence” standard rather than the more traditional “clear and convincing evidence” standard. This radical lowering of the standard of proof means that an accuser need only show that a sexual assault more likely than not occurred, rather than having to prove it beyond a reasonable doubt.

It would be an understatement to say that college and university rape tribunals – and the policies that inform them – violate every legal principle and tradition of Western law. American colleges and universities have created an environment in which male students can be stigmatised as sexual deviants on the basis of little more than an accusation. These tribunals violate not only standards of proof but the presumption of innocence as well.

That these tribunals have decided to do away with the presumption of innocence should hardly come as a surprise. After all, the very idea of the presumption of innocence is antithetical to human nature. It is natural for human beings to presume that someone is guilty simply because they have been accused. As the Roman jurist Ulpian pointed out, the presumption of innocence flies in the face of the seductive belief that a person’s actions always result in fair and fitting consequences. People like to believe that someone who has been accused of a crime must have done something to deserve it.

The presumption of innocence is the greatest legal protection the individual has against the state. It means that the state cannot convict anyone unless they can prove their guilt beyond any reasonable doubt. We should be willing to pay any price to preserve it. And we certainly shouldn’t allow extra-legal tribunals to do away with it just to satisfy their ideological proclivities.

On Constitutional Monarchy


I would like to begin this essay by quoting a poem by the English Romantic poet William Wordsworth (1770 – 1850):

 

    Milton! thou shouldst be living at this hour:
    England hath need of thee: she is a fen
    Of stagnant waters: altar, sword, and pen,
    Fireside, the heroic wealth of hall and bower,
    Have forfeited their ancient English dower
    Of inward happiness. We are selfish men;
    Oh! raise us up, return to us again;
    And give us manners, virtue, freedom, power.
    Thy soul was like a star, and dwelt apart:
    Thou hadst a voice whose sound was like the sea:
    Pure as the naked heavens, majestic, free,
    So didst thou travel on life’s common way,
    In cheerful godliness; and yet thy heart
    The lowliest duties on herself did lay.

 

The poem, entitled London, 1802, is Wordsworth’s ode to an older, nobler time. In it, he attempts to conjure up the spirit of John Milton (1608 – 1674), the poet and civil servant immortalised as the author of Paradise Lost.

Milton acts as the embodiment of a nobler form of humanity. He symbolises a time when honour and duty played a far greater role in the human soul than they did in Wordsworth’s time, or than they do today. It is these themes of honour, duty, and nobility that provide the spiritual basis for constitutional monarchy.

It is a subject I will return to later in this essay. To begin, however, it would be prudent to examine those aspects of English history that allowed both constitutional monarchy and English liberty to be born.

The English monarchy has existed for over eleven hundred years. In the span stretching from King Alfred the Great in the 9th century to Elizabeth II in the 21st, the English people have seen more than their fair share of heroes and villains, wise kings and despotic tyrants. Through their historical and political evolution, the British have developed, and championed, ideals of liberty, justice, and good governance, and they have gifted these ideals to much of the Western world through the export of their culture to their former colonies.

It is a sad reality that there are many people, particularly left-wing intellectuals, who need to be reminded of the contributions the English have made to world culture. The journalist Peter Hitchens (1951 – ) noted in his book The Abolition of Britain that abhorrence for one’s own country is a unique trait of the English intellectual. Similarly, George Orwell (1903 – 1950) once observed that an English intellectual would sooner be seen stealing from the poor box than standing for “God Save the King.”

However, these intellectuals fail to notice, in their arrogance, that “God Save the King” is actually a celebration of constitutional monarchy, not a gesture of reverence for an archaic and rather powerless royal family. It celebrates the nation as embodied in a single person or family, and the fact that the common man and woman can live in freedom because there are constitutional restraints placed on the monarch’s power.

If one’s understanding of history has come from films like Braveheart, it is easy to believe that all people in all times have yearned to be free. A real understanding of history, one that comes from books, however, reveals that this has not always been the case. For most of history, people lived under the subjugation of one ruler or another. They lived as feudal serfs, subjects of a king or emperor, or in some other such arrangement. They had little reason to expect such arrangements to change and little motivation to try and change them.

At the turn of the 17th century, the monarchs of Europe began establishing absolute rule by undermining the traditional feudal institutions that had been in place for centuries. These monarchs became all-powerful, wielding jurisdiction over all forms of authority: political, social, economic, and so forth.

To justify their mad dash for power, Europe’s monarchs required a philosophical argument that vindicated their actions. They found it in a political doctrine known as ‘the divine right of kings.’ This doctrine, formulated by the Catholic bishop Jacques Bossuet (1627 – 1704) in his book Politics Derived from Sacred Scripture, argued that monarchs were ordained by God and therefore represented His will. It was the duty of the people to obey the monarch without question. As such, no limitations could be placed on a monarch’s power.

What Bossuet was suggesting was hardly new, but it did provide the justification many monarchs needed to centralise power in themselves. King James I (1566 – 1625) of England and Scotland saw monarchs as God’s lieutenants and believed that their actions should be tempered by the fear of God, since they would be called to account at the Last Judgement. On the basis of this belief, King James felt perfectly justified in proclaiming laws without the consent of Parliament and involving himself in cases being tried before the courts.

When King James died in 1625, he was succeeded by his second son, Charles (1600 – 1649). King Charles I assumed the throne during a time of political change. He was an ardent believer in the divine right of kings, a belief that caused friction between the monarch and Parliament, from whom he had to seek approval to raise funds.

In 1629, Charles outraged much of the population, as well as many nobles, when he elected to fund his rule using outdated taxes and fines and stopped calling Parliament altogether. Charles had been frustrated by Parliament’s constant attacks on him and its refusal to furnish him with money. The ensuing period would become known as the Eleven Years’ Tyranny.

By November 1640, Charles had become so bereft of funds that he was forced to recall Parliament. The newly assembled Parliament immediately began clamouring for change. It asserted the need for a regular parliament and sought changes that would make it illegal for the King to dissolve the political body without the consent of its members. In addition, Parliament demanded the execution of the King’s friend and advisor, Thomas Wentworth (1593 – 1641), 1st Earl of Strafford, for treason.

The result was a succession of civil wars that pitted King Charles against the forces of Parliament, led by the country gentleman Oliver Cromwell (1599 – 1658). Hailing from Huntingdon, Cromwell was a kinsman of Henry VIII’s (1491 – 1547) chief minister, Thomas Cromwell (1485 – 1540). In the end, the conflict would decimate the English population and forever alter England’s political character.

The English Civil War began in January 1642 when King Charles marched on Parliament with a force of four hundred men. He withdrew to Oxford after being denied entry. Trouble was brewing. Throughout the summer, people aligned themselves with either the monarchists or the Parliamentarians.

The forces of King Charles and the forces of Parliament met at the Battle of Edgehill in October. What followed was several years of bitter and bloody conflict.

Ultimately, it was Parliament that prevailed. Charles was captured, tried for treason, and beheaded on January 30th, 1649. England was transformed into a republic, or “commonwealth.” The English Civil War had claimed the lives of two hundred thousand people, divided families, and facilitated enormous social and political change. Most importantly, however, it set the precedent that a monarch could not rule without the consent of Parliament.

The powers of Parliament had been steadily increasing since the conclusion of the English Civil War. Total Parliamentary supremacy, however, had proven unpopular: the Commonwealth created in the wake of the war collapsed shortly after Oliver Cromwell’s death, and it was decided to restore the Stuart dynasty.

The exiled Prince Charles returned from exile and was crowned King Charles II (1630 – 1685). Like his father and grandfather, Charles was an ardent believer in the divine right of kings. This view put him at odds with the ideals of the Enlightenment, which challenged the validity of absolute monarchy, questioned traditional authority, and idealised liberty.

By the third quarter of the 17th century, Protestantism had triumphed in both England and Scotland. Ninety percent of the British population was Protestant. The Catholic minority was seen as odd, sinister, and, in extreme cases, outright dangerous. People equated Catholicism with tyranny, linking French-style autocracy with popery.

It should come as no surprise, then, that Catholics became the target of persecution. Parliament banned Catholic forms of worship and barred Catholics from becoming members of Parliament, justices of the peace, or officers in the army, and from holding any other office of state unless they were granted a special dispensation by the King.

It is believed that Charles II may have been a closet Catholic. He was known for pardoning Catholics for their crimes (controversial considering Great Britain was a Protestant country) and for ignoring Parliament.

However, Charles’ brother and successor, James (1633 – 1701), was a Catholic beyond any shadow of a doubt. He had secretly converted in 1669 and was forthright in his faith. After his first wife, Anne Hyde (1637 – 1671), died, James had even married the Italian Catholic Mary of Modena (1658 – 1718), a decision that hardly endeared him to the populace.

The English people became alarmed when it became obvious that Charles II’s wife, Catherine of Braganza (1638 – 1705), would not produce a Protestant heir. It meant that Charles’ Catholic brother, James, was virtually guaranteed to succeed him on the throne. So incensed was Parliament at the prospect of a Catholic on the throne that it attempted to pass the Crown to one of Charles’ Anglican relatives.

Their concern was understandable, too. The English people had suffered the disastrous effects of religious intolerance ever since Henry VIII had broken away from the Catholic Church and established the Church of England. The result had been over a hundred years of religious conflict and persecution. Mary I (1516 – 1558), a devout Catholic, had earned the moniker “Bloody Mary” for burning Protestants at the stake. During the reign of James I, Guy Fawkes (1570 – 1606), along with a group of Catholic terrorists, had attempted to blow up Parliament in the infamous Gunpowder Plot.

Unlike Charles II, James made his faith publicly known. He desired greater tolerance for Catholics and for non-Anglican dissenters like Quakers and Baptists. The official documents he issued, designed to bring an end to religious persecution, were met with considerable objection from both bishops and Europe’s Protestant monarchs.

Following the passage of the Test Act in 1673, James had briefly been forced to surrender his offices. The Act required officers and members of the nobility to take Holy Communion as spelt out by the Church of England, and was designed to prevent Catholics from taking public office.

Now, as King, James attempted to repeal the Test Act and place Catholics in positions of power. His Court featured many Catholics, and he became infamous for approaching hundreds of men – justices, wealthy merchants, and minor landowners – to stand as future MPs and, in a process known as ‘closeting’, attempting to persuade them to support his legal reforms. Most refused.

Nor was that the limit of James’ activities. He issued two Declarations of Indulgence, ordered them to be read from every pulpit on two successive Sundays, and put those who opposed them on trial for seditious libel. Additionally, he imprisoned seven bishops for opposing him, made sweeping changes to the Church of England, and built an army composed mainly of Catholics.

The people permitted James II to rule so long as his daughter, the Protestant Princess Mary (1662 – 1694), remained his heir. All this changed, however, when Mary of Modena produced a Catholic heir: James Francis Edward Stuart (1688 – 1766). When James declared that the infant would be raised Catholic, it immediately became apparent that a Catholic dynasty was about to be established. Riots broke out. Conspiracy theorists posited that the child was a pawn in a Popish plot: the child, the theory went, was not the King’s son but a substitute who had been smuggled into the birthing chamber in a bed-warming pan.

In reality, it was the officers of the Army and Navy who were beginning to plot and scheme in their taverns and drinking clubs. They were annoyed that James had introduced Papist officers into the military. The Irish Army, for example, had seen much of its Protestant officer corps dismissed and replaced with Catholics who had little to no military experience.

James dissolved Parliament in July 1688. Around this time, a bishop and six prominent politicians wrote to Mary and her Dutch husband, William of Orange (1650 – 1702), inviting them to raise an army, invade England, and seize the throne. They accepted.

William landed at Torbay in Devon on Guy Fawkes’ Day, accompanied by an army of fifteen thousand Dutchmen and other Protestant Europeans. He quickly seized Exeter before marching eastward towards London. James II called for troops to confront William.

Things were not looking good for James, however. Large parts of his officer corps were defecting to the enemy, taking their soldiers with them. Without the leadership of their officers, many soldiers simply went home. English magnates began declaring for William. And James’ own daughter, Princess Anne (1665 – 1714), left Whitehall to join the rebels in Yorkshire. James, abandoned by everyone, fled into exile in France. He would die there twelve years later.

On January 22nd, 1689, William called the first ‘convention parliament.’ At this convention, Parliament passed two resolutions. First, it was decided that James’ flight into exile constituted an act of abdication. Second, it was declared contrary to public policy for the throne to be occupied by a Catholic. As such, the throne passed over James Francis Edward Stuart, and William and Mary were invited to take the Crown as co-monarchs.

They would be constrained, however, by the 1689 Bill of Rights and, later, by the 1701 Act of Settlement. The 1689 Bill of Rights made Great Britain a constitutional monarchy as opposed to an absolute one. It established Parliament, not the crown, as the supreme source of law. And it set out the most basic rights of the people.

Likewise, the 1701 Act of Settlement helped to strengthen the Parliamentary system of governance and secured a Protestant line of succession. Not only did it prevent Catholics from assuming the throne, but it also gave Parliament the ability to dictate who could ascend to the throne and who could not.

The Glorious Revolution was one of the most important events in Britain’s political evolution. It made William and Mary, and all monarchs after them, elected monarchs. It established the concept of Parliamentary sovereignty, granting that political body the power to make or unmake any law it chose. The establishment of Parliamentary sovereignty brought with it the ideas of responsible and representative government.

The British philosopher Roger Scruton (1944 – ) described British constitutional monarchy as a “light above politics which shines down [on] the human bustle from a calmer and more exalted sphere.” A constitutional monarchy unites the people of a nation under a monarch who symbolises their shared history, culture, and traditions.

Constitutional monarchy is a compromise between autocracy and democracy. Power is shared between the monarch and the government, both of whom have their powers restricted by a written, or unwritten, constitution. This arrangement separates the theatre of power from the realities of power. The monarch is able to represent the nation whilst the politician is able to represent his constituency (or, more accurately, his party).

In The Need for Roots, the French philosopher Simone Weil (1909 – 1943) wrote that Britain had managed to maintain a “centuries-old tradition of liberty guaranteed by the authorities.” Weil was astounded to find that chief power in the British constitution lay in the hands of a lifelong, unelected monarch. For Weil, it was this arrangement that allowed the British to retain their tradition of liberty when other countries – Russia, France, and Germany, among others – lost theirs upon abolishing their monarchies.


Great Britain’s true legacy is not its once vast and now non-existent empire, but the ideas of liberty and governance it has gifted to most of its former colonies. Even the United States, which separated itself from Britain by means of war, inherited most of its ideas about “life, liberty, and the pursuit of happiness” from its English forebears.

The word “Commonwealth” was adopted at the Sixth Imperial Conference, held between October 19th and November 26th, 1926. The Conference, which brought together the Prime Ministers of the various dominions of the British Empire, led to the formation of the Inter-Imperial Relations Committee. The Committee, headed by former British Prime Minister Arthur Balfour (1848 – 1930), was charged with looking into future constitutional arrangements within the Commonwealth.

Later that year, the committee delivered the Balfour Report. It stated:

“We refer to the group of self-governing communities composed of Great Britain and the Dominions. Their position and mutual relation may be readily defined. They are autonomous Communities within the British Empire, equal in status, in no way subordinate one to another in any aspect of their domestic or external affairs, though united by a common allegiance to the Crown, and freely associated as members of the British Commonwealth of Nations.”

It continued:

“Every self-governing member of the Empire is now the master of its destiny. In fact, if not always in form, it is subject to no compulsion whatsoever.”

Then, in 1931, the Parliament of the United Kingdom passed the Statute of Westminster. It became one of two laws that would secure Australia’s political and legal independence from Great Britain.

The Statute of Westminster gave legal recognition to the de-facto independence of the British dominions. Under the law, Australia, Canada, the Irish Free State, Newfoundland (which would relinquish its dominion status and be absorbed into Canada in 1949), New Zealand and South Africa were granted legal independence.

Furthermore, the law freed the dominions from the Colonial Laws Validity Act 1865, a law which had been enacted with the intention of removing “doubts as to the validity of colonial laws.” According to that act, a colonial law was void when it “is or shall be in any respect repugnant to the provisions of any Act of Parliament extending to the colony to which such law may relate, or repugnant to any order or regulation made under authority of such Act of Parliament, or having in the colony the force and effect of such Act,” and such a law “shall be read subject to such Act, order, or regulation, and shall, to the extent of such repugnancy, but not otherwise, be and remain absolutely void and inoperative.”

The Statute of Westminster was quickly adopted by Canada, South Africa, and the Irish Free State. Australia, on the other hand, did not adopt it until 1942, and New Zealand did not adopt it until 1947.

More than forty years later, the Hawke Labor government passed the Australia Act 1986. This law effectively made the Australian legal system independent of Great Britain. It had three major achievements. First, it ended appeals to the Privy Council, thereby establishing the High Court as the highest court in the land. Second, it ended the influence the British government had over the states of Australia. And third, it allowed Australia to update or repeal those imperial laws that applied to it by ending British legislative restrictions.

What the law did not do, however, was withdraw the Queen’s status as Australia’s Head of State:

“(1) Her Majesty’s representative in each State shall be the Governor.

(2) Subject to subsections (3) and (4) below, all powers and functions of Her Majesty in respect of a State are exercisable only by the Governor of the State.

(3) Subsection (2) above does not apply in relation to the power to appoint, and the power to terminate the appointment of, the Governor of a State.

(4) While Her Majesty is personally present in a State, Her Majesty is not precluded from exercising any of Her powers and functions in respect of the State that are the subject of subsection (2) above.

(5) The advice to Her Majesty in relation to the exercise of the powers and functions of Her Majesty in respect of a State shall be tendered by the Premier of the State.”

These two laws refute an important misconception often exploited by Australian republicans: the myth that Australia lacks legal and political independence because its Head of State is the British monarch. In reality, the passage of the Statute of Westminster in 1931 and the Australia Act in 1986 ended any real political or legal power the British government had over Australia.

In Australia, the monarch (who is our Head of State by law) is represented by the Governor-General. This individual – who, since 1965, has always been an Australian – is required to take an oath of allegiance and an oath of office administered by a Justice (typically the Chief Justice) of the High Court. The Governor-General holds his or her position at the Crown’s pleasure, with appointments typically lasting five years.

The monarch issues letters patent appointing the Governor-General on the advice of Australian ministers. Prior to 1924, Governors-General were appointed on the advice of both the British and Australian governments, because the Governor-General at that time represented both the monarch and the British government. This arrangement changed, however, at the Imperial Conferences of 1926 and 1930. The Balfour Report produced by these conferences stated that the Governor-General should be the representative of the Crown alone.

The Governor-General’s role is almost entirely ceremonial. It has been argued that such an arrangement could work with an elected Head of State. However, an elected Head of State would be politicised, and thereby corrupted. A Presidential candidate in the United States, for example, is required to raise millions of dollars for his campaign and often finds himself beholden to the donors who made his ascent possible. The beauty of an unelected Head of State, aside from the fact that it prevents the government from assuming total power, is that he or she can avoid the snares that trap other political actors.


The 1975 constitutional crisis is a perfect example of the importance of having an independent and impartial Head of State. The crisis stemmed from the Loans Affair, which forced Dr. Jim Cairns (1914 – 2003) – Deputy Prime Minister, Treasurer, and intellectual leader of the political left – and Rex Connor (1907 – 1977) out of the cabinet. As a consequence of the constitutional crisis, Gough Whitlam (1916 – 2014) was dismissed as Prime Minister and the 29th federal parliament was dissolved.

The Loans Affair began when Rex Connor attempted to borrow up to US$4 billion to fund a series of proposed national development projects. Connor deliberately flouted the rules of the Australian Constitution, which required him to take such non-temporary government borrowing to the Loan Council (a ministerial council consisting of both Commonwealth and state representatives that existed to coordinate public-sector borrowing) for approval. Instead, on December 13th, 1974, Gough Whitlam, Attorney-General Lionel Murphy (1922 – 1986), and Dr. Jim Cairns authorised Connor to seek a loan without the Council’s approval.

When news of the Loans Affair was leaked, the Liberal Party, led by Malcolm Fraser (1930 – 2015), began questioning the government. Whitlam attempted to brush the scandal aside by claiming that the loans had merely been “matters of energy” and that the Loan Council would only be advised once a loan had been made. Then, on May 21st, Whitlam informed Fraser that the authority for the plan had been revoked.

Despite this, Connor continued to liaise with the Pakistani financial broker Tirath Khemlani (1920 – 1991). Khemlani was tracked down and interviewed by the Herald journalist Peter Game (1927 – ) in mid-to-late 1975. Khemlani claimed that Connor had asked for a twenty-year loan at an interest rate of 7.7%, with a 2.5% commission for Khemlani. The claim threw serious doubt on Dr. Jim Cairns’ assurance that the government had not offered Khemlani a commission on the loan. Game also revealed that Connor and Khemlani were still in contact, something Connor denied in the Sydney Morning Herald.

Unfortunately, Khemlani had stalled on the loan, most notably when he was asked to go to Zurich with Australian Reserve Bank officials to prove the funds were held in the Union Bank of Switzerland. When it became apparent that Khemlani would never deliver, Whitlam was forced to secure the loan through a major American investment bank. As a condition of that loan, the Australian government was required to cease all other loan activities. Consequently, Connor had his loan-raising authority revoked on May 20th, 1975.

The combination of existing economic difficulties and the political impact of the Loans Affair severely damaged the Whitlam government. At a special one-day sitting of Parliament held on July 9th, Whitlam attempted to defend the actions of his government and tabled evidence concerning the loan. It was an exercise in futility, however. Malcolm Fraser authorised Liberal Party senators – who held the majority in the upper house at the time – to force a general election by blocking supply.

And things were only about to get worse. In October 1975, Khemlani flew to Australia and provided Peter Game with telexes and statutory declarations Connor had sent him, proving that he and Connor had been in frequent contact between December 1974 and May 1975. When a copy of this incriminating evidence found its way to Whitlam, the Prime Minister had no choice but to dismiss Connor and Cairns (though he did briefly make Cairns Minister for the Environment).

By mid-October, every metropolitan newspaper in Australia was calling on the government to resign. Encouraged by this support, the Liberals in the Senate deferred the Whitlam budget on October 16th. Whitlam warned Fraser that the Liberal Party would be “responsible for bills not being paid, for salaries not being paid, for utter financial chaos.” Whitlam was alluding to the fact that blocking supply threatened essential services, Medibank rebates, the budgets of government departments, and the salaries of public servants. Fraser responded by accusing Whitlam of bringing his own government to ruin through “massive illegalities.”

On October 21st, Australia’s longest-serving Prime Minister, Sir Robert Menzies (1894 – 1978), signalled his support for Fraser and the Liberals. The next day, Treasurer Bill Hayden (1933 – ) reintroduced the budget bills and warned that further delay would increase unemployment and deepen a recession that had blighted the Western world since 1973.

The crisis came to a head on Remembrance Day 1975. For weeks, Whitlam had asserted that the Senate could not force him into an election, claiming that the House of Representatives had an independence and an authority separate from the Senate.

Whitlam had decided that he would end the stalemate by seeking a half-Senate election. Little did he know, however, that the Governor-General, Sir John Kerr (1914 – 1991), had been seeking legal advice from the Chief Justice of the High Court on how he could use his constitutional powers to end the deadlock. Kerr had come to the conclusion that, should Whitlam refuse to call a general election, he would have no alternative but to dismiss him.

And this is precisely what happened. With the necessary documents drafted, Whitlam arranged to meet Kerr during the lunch recess. When Whitlam refused to call a general election, Kerr dismissed him and, shortly after, swore in Malcolm Fraser as caretaker Prime Minister. Fraser assured Kerr that he would immediately pass the supply bills and dissolve both houses in preparation for a general election.

Whitlam returned to the Lodge to eat lunch and plan his next move. He informed his advisors that he had been dismissed. It was decided that Whitlam’s best option was to assert Labor’s legitimacy as the largest party in the House of Representatives. However, events were already moving against Whitlam. The Senate had passed the supply bills, and Fraser was drafting the documents that would dissolve the Parliament.

At 2pm, Deputy Prime Minister Frank Crean (1916 – 2008) defended the government against a censure motion moved by the opposition. “What would happen, for argument’s sake, if someone else were to come here today and say he was now the Prime Minister of this country”, Crean asked. In fact, Crean was stalling for time while Whitlam prepared his response.

At 3pm, Whitlam made a last-ditch effort to save his government by addressing the House. Removing references to the Queen, he moved that the “House expresses its want of confidence in the Prime Minister and requests Mr. Speaker forthwith to advise His Excellency the Governor-General to call the member for Wannon to form a government.” Whitlam’s motion was passed with a majority of ten.

The Speaker, Gordon Scholes (1931 – 2018), expressed his intention to “convey the message of the House to His Excellency at the first opportunity.” It was a race Whitlam was destined to lose: Scholes was unable to arrange an appointment until a quarter to five in the afternoon.

Behind the scenes, departmental officials were working to provide Fraser with the paperwork he needed to proclaim a double dissolution. At ten to four, Fraser left for Government House. Ten minutes later, Sir John Kerr signed the proclamation dissolving both Houses of Parliament and set the date of the upcoming election for December 13th, 1975. Shortly after, Kerr’s official secretary, David Smith (1933 – ), drove to Parliament House and, with Whitlam looming behind him, read the Governor-General’s proclamation.

The combination of economic strife, political scandal, and the dismissal sealed the fate of Whitlam’s government. At the 1975 federal election, the Liberal-National coalition won by a landslide, taking ninety-one seats in the House of Representatives and obtaining a popular vote of 4,102,078. In the final analysis, it seems the Australian people agreed with Kerr’s decision and voted to remove Whitlam’s failed government from power once and for all.


Most of the arguments levelled against constitutional monarchies can be described as petty, childish, and ignorant. The biggest faux pas those who oppose constitutional monarchies make is a failure to separate the royal family (who are certainly not above reproach) from the institution of monarchy itself. Dislike for the Windsor family is not a sufficient reason to disagree with constitutional monarchy. It would be as if I decided to argue for the abolition of the office of Prime Minister just because I didn’t like the person who held that office.

One accusation frequently levelled against the monarchy is that it is an undue financial burden on the British taxpaying public. This is a hollow argument, however. It is certainly true that the monarchy costs the British taxpayer £299.4 million every year, and it is certainly true that the German Presidency costs only £26 million every year. But it does not follow that monarchies are necessarily more expensive than presidencies. The Spanish monarchy costs only £8 million per year, less than the Presidencies of Germany, Finland, and Portugal.

Australia has always had a small but vocal republican movement. The national director of the Australian Republican Movement, Michael Cooney, has stated: “no one thinks it ain’t broken, that we should fix it. And no one thinks we have enough say over our future, and so, no matter what people think about in the sense of the immediate of the republic everyone knows that something is not quite working.”

History, however, suggests that the Australian people do not necessarily agree with Cooney’s assessment. The Republican referendum of 1999 was designed to facilitate two constitutional changes: first, the establishment of a republic, and, second, the insertion of a preamble in the Constitution.

The referendum was held on November 6th, 1999. Around 99.14% of the ballots cast – 11,683,811 votes – were formal. Of these, 45.13% (5,273,024) voted yes, while 54.87% (6,410,787) voted no. The Australian people had decided to retain Australia’s constitutional monarchy.

All things considered, it was probably a wise decision. The chaos caused by establishing a republic would pose a greater threat to our liberties than a relatively powerless old lady. Several problems would need to be addressed. How often should elections occur? How would these elections be held? What powers should a President have? Will a President be just the head of state, or will he be the head of the government as well? Australian republicans appear unwilling to answer these questions.

Margaret Tavits of Washington University in St. Louis once observed that “monarchs can truly be above politics. They usually have no party connections and have not been involved in daily politics before assuming the post of Head of State.” It is the job of the monarch to become the human embodiment of the nation. It is the monarch who becomes the centrepiece of pageantry and spectacle. And it is the monarch who symbolises a nation’s history, tradition, and values.

Countries with elected, or even unelected, Presidents can be quite monarchical in style. Americans, for example, often regard their President (who is both the Head of State and the head of the government) with an almost monarchical reverence. A constitutional monarch might be a lifelong, unelected Head of State, but unlike a President, that is generally where his or her power ends. Rather ironically, the Oxford political scientists Petra Schleiter and Edward Morgan-Jones have noted that presidents are more willing than monarchs to allow governments to change without democratic input such as elections. Furthermore, by occupying his or her position as Head of State, the monarch prevents other, less desirable people from doing so.

The second great advantage of constitutional monarchies is that they provide their nations with stability and continuity. They are an effective means of bridging the past and the future. A successful monarchy must evolve with the times whilst simultaneously keeping itself rooted in tradition. All three of my surviving grandparents have lived through the reigns of King George VI and Queen Elizabeth II, and may possibly live to see the coronation of King Charles III. I know that I will live through the reigns of Charles and King William V, and may survive to see the coronation of King George VII (though he will certainly outlive me).

It would be easy to dismiss stability and continuity as manifestations of mere sentimentality, but they also have a positive effect on the economy. In a study entitled Symbolic Unity, Dynastic Continuity, and Countervailing Power: Monarchies, Republics and the Economy, Mauro F. Guillén found that monarchies had a positive impact on economies and living standards over the long term. The study, which examined data from one hundred and thirty-seven countries, including different kinds of republics and dictatorships, found that individuals and businesses felt more confident that the government would not interfere with their property in constitutional monarchies than in republics. As a consequence, they were more willing to invest in their respective economies.

When Wordsworth wrote his ode to Milton, he was mourning the loss of the chivalry he felt had once pervaded English society. Today, the West is once again in serious danger of losing the two things that connect it to the chivalry of the past: a belief in God and a submission to a higher authority.

Western culture is balanced between an adherence to reason and freedom on the one hand and a submission to God and authority on the other. It has been this delicate balance that has allowed the West to become what it is. Without it, we become like Shakespeare’s Hamlet: doomed to a life of moral and philosophical uncertainty.

It is here that the special relationship between freedom and authority that constitutional monarchy implies becomes so important. It satisfies the desire for personal autonomy and the need for submission simultaneously.

The Christian apologist and novelist C.S. Lewis (1898 – 1964) once argued that most people no more deserve a share in governing a hen-roost than they do in governing a nation:

“I am a democrat because I believe in the fall of man. I think most people are democrats for the opposite reason. A great deal of democratic enthusiasm descends from the idea of people like Rousseau who believed in democracy because they thought mankind so wise and good that everyone deserved a share in the government. The danger of defending democracy on those grounds is that they’re not true and whenever their weakness is exposed the people who prefer tyranny make capital out of the exposure.”

The necessity for limited government, much like the necessity for authority, comes from our fallen nature. Democracy did not arise because people are so naturally good (which they are not) that they ought to be given unchecked power over their fellows. Aristotle (384 BC – 322 BC) may have been right when he stated that some people are only fit to be slaves, but unlimited power is wrong because no one is perfect enough to be a master.

Legal and economic equality are necessary bulwarks against corruption and cruelty. (Economic equality, of course, refers to the freedom to engage in lawful economic activity, not to socialist policies of redistributing wealth, which inevitably lead to tyranny.) Legal and economic equality, however, do not provide spiritual sustenance. The ability to vote, buy a mobile phone, or work a job without being discriminated against may increase the joy in your life, but it is not a pathway to genuine meaning.

Equality serves the same purpose that clothing does. We are required to wear clothing because we are no longer innocent. The necessity of clothes, however, does not mean that we never desire the naked body. Likewise, the fact that we adhere to the idea that God made all people equal does not mean that there is not a part of us that wishes for inequality to present itself in certain situations.

Chivalry symbolises the best human beings can be. It helps us realise the best in ourselves by reconciling fealty and command, inferiority and superiority. However, the ideal of chivalry is a paradox. When the veil of innocence has been lifted from our eyes, we are forced to reconcile ourselves to the fact that bullies are not always cowards and heroes are not always modest. Chivalry, then, is not a natural state, but an ideal to be aimed for.

The chivalric ideal marries the virtues of humility and meekness with those of valour, bravery, and firmness. “Thou wert the meekest man who ever ate in hall among ladies”, said Sir Ector to the dead Lancelot. “And thou wert the sternest knight to thy mortal foe that ever put spear in the rest.”

Constitutional monarchy, like chivalry, makes a two-fold demand on the human spirit. Its democratic element, which upholds liberty, demands civil participation from all its citizens. And its monarchical element, which champions tradition and authority, demands that the individual subjugate himself to that tradition.

It has been my aim in this essay to provide a historical, practical, and spiritual justification for constitutional monarchy. I have demonstrated that the British have developed ideals of liberty, justice, and good governance. The two revolutions of the 17th century – the English Civil War and the Glorious Revolution – established Great Britain as a constitutional monarchy. They meant that the monarch could not rule without the consent of Parliament, established Parliament as the supreme source of law, and allowed Parliament to determine the line of succession. I have demonstrated that constitutional monarchs are more likely to uphold democratic principles and that the stability they provide encourages robust economies. And I have demonstrated that monarchies enrich our souls by awakening in us the need for both freedom and obedience.

Our world has become so very vulgar. We have turned our backs on God, truth, beauty, and virtue. Perhaps we, like Wordsworth before us, should seek virtue, manners, freedom, and power. We can begin to do this by retaining the monarchy.

A Man For All Seasons


It is a rare occurrence to see a film so memorable that it implants itself on the human psyche. A film containing so captivating a story, such compelling characters, and such profound themes comes along so rarely that it becomes etched into our collective unconscious. A Man for All Seasons is one of those films.

Set in Tudor England during the reign of King Henry VIII (1491 – 1547), A Man for All Seasons tells the story of Henry’s divorce from Catherine of Aragon (1485 – 1536), the birth of the Church of England, and the man who stood opposed to it.

During the 1530s, King Henry VIII broke away from the Catholic Church, passed the Act of Succession (which declared Princess Mary (1516 – 1558), the King’s daughter with Catherine, illegitimate) and the Act of Supremacy (which gave Henry supreme command over the Church in England), and made himself the Supreme Head of the Church of England.

In A Man for All Seasons, Henry asks Sir Thomas More (1478 – 1535) to disregard his own principles and express approval of the King’s desire to divorce his wife and establish an English Church separate from Rome. Henry believes that More’s support will legitimise his actions because More is a man known for his moral integrity. Initially, Henry uses friendship and dodgy logic to persuade his friend. When that fails, the so-called “defender of the faith” tries using religious arguments to justify his adultery. When these fail too, he merely resorts to threats. Again, More refuses to endorse Henry’s actions.

A Man for All Seasons is really about the relationship between the law (representing the majesty of the state) and the individual conscience. In the film, Sir Thomas More is depicted as a man with an almost religious reverence for the law because he sees it as the only barrier between an ordered society and anarchy. In one scene, when William Roper the Younger (1496 – 1578) tells him he would gladly lay waste to every law in order to get at the devil, More replies that he would “give the devil benefit of law for my own safety’s sake.”

More’s reverence goes far beyond mere man-made law, however; he shows a deep reverence for the laws of God as well. After being sentenced to death, More finally breaks his silence and refers to the Act of Succession, which required people to recognise Henry’s supremacy in the Church and his divorce from Catherine of Aragon, as “directly repugnant to the law of God and His Holy Church, the Supreme Government of which no temporal person may by any law presume to take upon him.” More argues that the authority to enforce the law of God was granted to Saint Peter by Christ himself and remained the prerogative of the Bishop of Rome.

Furthermore, More argues that the Catholic Church had been guaranteed immunity from interference in both the King’s coronation oath and Magna Carta. In his coronation oath, Henry had promised to “preserve to God and Holy Church, and to the people and clergy, entire peace and concord before God.” Similarly, Magna Carta stated that the English people had “granted to God, and by this present charter confirmed for us and our heirs in perpetuity, that the English Church shall be free, and shall have its rights undiminished, and its liberties unimpaired.”

The central problem of the film is that the legal and political system in England is incapable of allowing More to hold a contradictory, private opinion. Even before he is appointed Chancellor, More expresses no desire to get involved with the debate surrounding the King’s marriage. He will not, however, swear an oath accepting the King’s marriage or his position as the head of the Church of England. More believes that it is the Pope who is the head of the Church, not the King, and he is perfectly willing to sacrifice his wealth, family, position, freedom, and, ultimately, his life to retain his integrity.

The relationship between the law and the individual conscience is an important one. What A Man for All Seasons illustrates is just how important that relationship is, and what happens when it is violated. Modern proponents of social justice, identity politics, and political correctness would do well to watch A Man for All Seasons.

TRANSGENDERISM IS NO BASIS FOR PUBLIC POLICY


It has been over fourteen years since David Reimer, the victim of an insane and evil scientific experiment, committed suicide. After his penis had been burnt off in a botched circumcision, David’s parents turned to the infamous sexologist and social constructionist, Dr. John Money, for help. Following Dr. Money’s advice, David’s parents agreed to allow a sex change operation to be performed on their young son and raised him as a girl.

Despite Dr. Money’s boasting that his experiment had been a success, David Reimer did not settle comfortably into his female identity. David tore up his dresses at three, asked if he could have his head shaved like his father, and engaged in all manner of boyish behaviour. David was bullied at school and, upon hitting puberty, decided that he was a homosexual (in reality, of course, he was heterosexual).

Finally, when he was fourteen, David’s parents revealed the truth about his gender identity. David reverted to his masculine identity, broke off contact with Dr. Money, whom he described as an abusive brainwasher, and received a non-functioning penis through phalloplasty. Unable to handle the immense psychological damage that had been inflicted upon him, David Reimer blew his brains out with a shotgun at the age of thirty-eight.

For all of human history, boy has meant boy and girl has meant girl. Traditionally, sex was used to refer to the biological markers of gender. If you were born with a penis and an XY chromosome, you were a man. If you were born with a vagina and an XX chromosome, you were a woman. One’s gender expression was thought to complement one’s biological sex. A biological man would have masculine personality traits and a biological female would have feminine personality traits. These complementary characteristics, among them body shape, dress, mannerisms, and personality, were thought to be produced by a mixture of natural and environmental forces.

Recently, however, gender theorists have begun to question the relationship between biological sex and gender identity. They argue that gender, which they see as distinct from sex, is a social construct. Since gender refers to the expression of masculinity and femininity, gender is something that a person acquires. (Needless to say, this movement is driven by a pernicious post-modern, Neo-Marxist worldview). Under this philosophy, gender expression is the manner in which a person expresses their gender identity. Gender identity is expressed through dress, behaviour, and speech, and nothing else besides.

Neuroplasticity provides the gender theorist with perhaps his greatest argument. If underlying brain processes are theoretically strengthened through repetitive use, it follows that gender identity comes from a narrowing down of potential gender categories through the repetitive use of certain brain processes. However, it also reveals a fatal flaw in the gender theorist’s (and social constructionist’s) philosophy. If the human brain is so malleable that an individual’s gender identity is constructed, then why can’t the brain of a transgender person be adapted out of its transgenderism?

The primary problem with gender theory is that it is just plain wrong. The idea that gender is distinct from sex has absolutely no basis in science whatsoever. As Jordan Peterson, the Canadian psychologist, has stated: “the idea that gender identity is independent of biological sex is insane. It’s wrong. The scientific data is clear beyond dispute. It’s as bad as claiming that the world is flat.” Men and women differ at both the cellular and the temperamental level. Unlike men, for example, women menstruate, they can have babies, and they show a slew of personality characteristics that mark them as different from men. David C. Page, the Director of the Whitehead Institute at the Massachusetts Institute of Technology, has even claimed that genetic differences exist at the cellular level, asserting that “throughout human bodies, the cells of males and females are biochemically different.” These differences even affect how men and women contract and fight diseases.

The philosopher Alain de Benoist has also strongly criticised gender theory. In his work Non à la théorie du genre (No to Gender Theory), de Benoist catalogued its scientific errors and philosophical absurdities.

First, De Benoist pointed out that gender theorists have used the fact that some gender characteristics are socially constructed to argue that all gender characteristics are socially constructed.

Second, De Benoist argued that the “hormonal impregnation of the foetus”, as he puts it, causes the brain to become genderised because it has a “direct effect on the organisation of neural circuits, creating a masculine brain and a feminine brain, which can be distinguished by a variety of anatomical, physiological, and biochemical markers.”

Third, De Benoist argued that biological sex has a profound effect on the way people think, act, and feel. In order to support their theory, gender theorists are forced to deny the natural differences between men and women. De Benoist wrote:

“From the first days of life, boys look primarily at mechanized objects or objects in movement while girls most often search for visual contact with human faces. Only a few hours after birth, a girl responds to the cries of other infants while a boy shows no interest. The tendency to show empathy is stronger in girls than in boys long before any external influence (or “social expectations”) have been able to assert themselves. At all ages and stages of development, girls are more sensitive to their emotional states and to those of others than boys … From a young age, boys resort to physical strategies where girls turn to verbal ones … From the age of two, boys are more aggressive and take more risks than girls.”

Furthermore, gender theory cheapens what it means to be a man or a woman. And, by extension, it denigrates the contributions that each gender has to make to civil society. Gender values give people ideals to strive for and help them determine the rules that govern human interactions. The idea that men and women ought to be treated the same is ludicrous beyond belief. No parent would like to see their son treat a woman the same way he treats his male friends. Men have been taught to be gentlemen and women have been taught to be ladies for a reason.

All of this is not to say, however, that those pushing transgender rights do not have a case. They are right when they claim that the transgender people of the world face discrimination, prejudice, and violence. Some countries treat transgenderism as a crime, and it is certainly true that transgender people are more likely to be victims of violence, including murder. A reasonable transgender rights argument would be that transgender people cannot help their affliction and that society ought to treat them with kindness, tolerance, and compassion.

Unfortunately, that is not the argument that gender activists like to make. Rather than focusing on promoting tolerance, gender activists have sought to do away with gender distinctions altogether (which is, more likely than not, their actual aim). Using a very tiny minority of the population as their moral basis, the gender activists are attempting to force society to sacrifice its traditional classifications of male and female.

Transgenderism is clearly a mental health disorder. In the past, it was referred to as “gender identity disorder”, considered a mental illness, and treated as such. To assert that transgenderism is a mental health disorder is not a denial of an individual’s integral worth as a human being. It is merely the acknowledgement of the existence of an objective reality in which gender is both binary and distinct. Unfortunately, this is not the attitude of those who influence public opinion. Consequently, programs for LGBTQ youth have seen an increase in youth who identify as transgender. The transgender journalist, Libby Down Under, has blamed instances of rapid-onset gender dysphoria on the normalisation of transgenderism in the culture. With a slew of celebrities coming out as transgender (former Olympian Bruce Jenner being a primary example), and with transgender characters being featured on numerous television shows, many teens and tweens have suddenly decided that they are transgender despite having no prior history of gender confusion.

Transgender youth increasingly feel that it is their right to express themselves however they please. And they feel that it is their right to silence all who dare to criticise or disagree with that expression. Cross-living, hormone therapy, and sex reassignment surgery are seen as part of this self-expression. Alarmingly, the mainstream response of psychotherapists to these children and adolescents is the “immediate affirmation of [their] self-diagnosis, which often leads to support for social and even medical transition.”

It is a classic case of political posturing overshadowing the pursuit of truth. Most youth suffering from gender dysphoria grow out of their condition. Dr. James Cantor of the University of Toronto has cited three large-scale studies, along with other smaller studies, to show that transgender children eventually grow out of their gender dysphoria. The Diagnostic and Statistical Manual of Mental Disorders (5th edition) claims that desistance rates for gender dysphoria are seventy to ninety percent in “natal males” and fifty to eighty-eight percent in “natal females.” Similarly, the American Psychological Association’s Handbook of Sexuality and Psychology concludes that the vast majority of gender dysphoria-afflicted children learn to accept their gender by the time they have reached adolescence or adulthood.

It is not a secret that transgenderism lends itself to other mental health problems. Forty-one percent of transgender people have either self-harmed or experienced suicidal ideation (this percentage, of course, does not reveal at what stage of transition suicidal ideation or attempts occur). The postmodern, neo-Marxist answer to this problem is that transgender people are an oppressed minority and that they are driven to mental illness as a result of transphobia, social exclusion, bullying, and discrimination.

It is typical of the left to presume that society is to blame for an individual’s suffering. And to a certain extent, they are right. Transgender people are the victims of discrimination, prejudice, and violence. But it is more than likely that these abuses exacerbate their problems rather than cause them. One in eight transgender people, for example, rely on sex and drug work to survive. Is that the fault of society or the fault of the individual? The National Center for Transgender Equality claims that it is common for transgender people to have their privacy violated, to experience harassment and physical and sexual violence, and to face discrimination when it comes to employment. It claims that a quarter of all transgender people have lost their jobs and three-quarters have faced workplace discrimination because of their transgender status.

In Australia, there has been a move to allow transgender children access to hormone-blocking drugs and sex-change surgeries. Australian gender activists – surprise, surprise – support the idea as a way to reduce the rates of suicide among transgender people. The Medical Journal of Australia has approved the use of hormone therapy on thirteen-year-olds despite the fact that the scientific community remains, as of 2018, undecided on whether or not puberty-blocking drugs are either safe or reversible.

In the United States, a great deal of debate has occurred over transgender rights. In particular, there have been debates over which bathroom transgender people should be allowed to use, how they should be recognised on official documents, and whether they should be allowed to serve in the military. In 2016, then-President Barack Obama ordered state schools to allow transgender students to use whatever bathroom they desire. Similar ordinances have been passed in hundreds of cities and counties across the United States. Seventeen states and the District of Columbia are subject to ‘non-discrimination’ laws which include gender identity and gender expression, covering restrooms, locker rooms, and change rooms.

In March of 2016, North Carolina passed a law which required people in government buildings to use the bathroom appropriate to their biological gender. The US Federal Government decried the decision as bigotry and accused the government of North Carolina of violating the Civil Rights Act. The Federal Government threatened to withhold over US$4 billion in education funding. The government of North Carolina responded by filing suit against the government of the United States. The US government responded by filing suit against North Carolina. North Carolina received support from Mississippi, Tennessee, and Texas whilst Washington received support from most of the northern states.

Pro-transgender bathroom policies are not limited to government, however. Many businesses in the United States have similar bathroom policies. Many large corporations, among them Target, allow transgender people to use the bathroom of their choice. And they are perfectly prepared to enforce these policies. A Macy’s employee in Texas was fired after he refused to allow a man dressed as a woman to use the female change rooms. Similarly, Planet Fitness revoked the membership of a woman who complained that a transgender man was in the female change rooms.

The most alarming trend of the gender theory movement is the attempt to indoctrinate children through changes to the education system. In 2013, France unleashed the ABCD de l’égalité (the ABCs of Equality) on six hundred elementary schools. In their own words, the program was designed to teach students that gender was a social construct:

“Gender is a sociological concept that is based on the fact that relations between men and women are socially and culturally constructed. The theory of gender holds that there is a socially constructed sex based on differentiated social roles and stereotypes in addition to anatomical, biological sex, which is innate.”

The creators of the program are smart enough to include the disclaimer: “biological differences should not be denied, of course, but those differences should not be fate.”

Fortunately, it would seem that many people are not taken in by this race to fantasyland. They are not taken in by the idea that the program merely exists to combat gender stereotypes and teach respect, and they have protested. The French Minister of Education dismissed the protestors by saying that they “have allowed themselves to be fooled by a completely false rumour… at school we are teaching little boys to become little girls. That is absolutely false, and it needs to stop.” In America, The Boston Globe dismissed the protests against the program as being motivated by fear. Judith Butler even went so far as to say that France’s financial instability was the true cause of the protests.

And such a profound misuse of the education system isn’t limited to France, either. In Scotland, teachers are given guidance by LGBT Youth Scotland, children are expected to demonstrate “understanding of diversity in sexuality and gender identity”, and children are allowed to identify as either a girl or boy, or neither. The government of the United Kingdom has mandated that transgender issues be taught as part of the sex and relationships curriculum in primary and secondary school. Justine Greening, the education secretary, said: “it is unacceptable that relationships and sex education guidance has not been updated for almost twenty years, especially given the online risks, such as sexting and cyberbullying, our children and young people face.”

It is in Australia, however, that we find the most shocking case of gender theory indoctrination. A great deal of controversy has been generated over the Safe Schools program. The program, which was established by the Victorian government in 2010, is supposedly designed to provide a safe, supportive, and inclusive environment for LGBTI students. It states that schools have the responsibility to challenge “all forms of homophobia, biphobia, transphobia, intersexism to prevent discrimination and bullying.”

The Safe Schools program promotes itself as an anti-bullying resource supporting “sexual diversity, intersex and gender diversity in schools.” It requires Victorian schools to eliminate discrimination based on gender identity, intersex status, and sexual orientation, and to provide an inclusive school environment.

The program addresses the issues of sleeping and bathroom arrangements and dress code. In terms of dress code, the program states:

“An inflexible dress code policy that requires a person to wear a uniform (or assume characteristics) of the sex that they do not identify with is likely to be in breach of anti-discrimination legislation including under the Equal Opportunity Act (1984) SA”

Likewise, the program states on the issue of bathrooms and change rooms that “transgender and diverse students should have the choice of accessing a toilet/changeroom that matches their gender identity.” In addition, the program states:

“Schools may also have unisex/gender neutral facilities. While this is a helpful strategy for creating an inclusive school environment for gender diverse students broadly, it is not appropriate to insist that any student, including a transgender student, use this toilet if they are not comfortable doing so.”

The idea that a transgender boy or girl should be allowed to sleep, shower, and defecate in the same place as a group of boys or girls ought to ring alarm bells for everyone. It increases the risk of sexual activity, sexual assault, pregnancy, and the transmission of sexually transmitted diseases. There is a reason why schools segregate changerooms, toilets, and dormitories.

The tragedy of David Reimer reveals just how dangerous it is to ignore the truth in favour of a false and malevolent social philosophy. It is one thing to seek tolerance and compassion for those in the community who may be struggling with their identity. It is something else entirely to use the plight of transgender people as a means of coercing society into changing the way it categorises gender. And it is completely insane to allow a false philosophy like gender theory to be used as the basis of public policy. If we don’t want more tragedies like David Reimer’s, we should put gender theory out in the trash where it belongs.

SORRY PRO-CHOICERS, ABORTION IS OBVIOUSLY WRONG


In March of 2015, a Coloradan woman, Michelle Wilkins, was lured to a meet-up arranged on Craigslist and brutally attacked. During the attack, Wilkins, who was seven months pregnant, had her unborn child cut from her body. Wilkins survived the attack but, sadly, her child did not. And, as if to add insult to injury, Wilkins’ unborn child was not recognised as human under Coloradan law.

Legal abortion – which I will define as the state approved murder of an innocent life – is a barbarity no civilised society should tolerate. As the Canadian clinical psychologist and YouTube sensation, Jordan Peterson (1962 – ) commented, “abortion is clearly wrong. You wouldn’t recommend someone you love have one.”

However, this is not to say that abortion isn’t a deeply complex and emotive issue. On the one hand, it is a procedure often used by desperate or easily persuaded women who feel that aborting their unborn child is the only option open to them (which it very rarely is). On the other hand, it is a form of murder cynically exploited by feminist extremists for political purposes.

Pro-choice proponents have several arguments in favour of total and free access to abortion.

The first argument, and the one that carries the greatest degree of credibility, concerns the health of the mother and her ability to safely carry a child to term. The Washington Post, for example, reported a story about an Indian girl who had been repeatedly raped and eventually impregnated by her uncle. An abortion was performed when it was decided she was too young to carry her child to term.

In all honesty, this is a sentiment which I have a great deal of sympathy for. It is very difficult for a woman to be a mother if she is dead, and it would be as wrong to sacrifice the life of the mother for the child as it would be to sacrifice the life of the child for the sake of the mother.

But the argument that abortion is necessary when the health of the mother is in jeopardy does not necessarily translate into the full, absolute, and unquestionable right to abortion. It is merely an argument for the preservation of the life of the mother.

The second argument concerns the health and vitality of the child itself. Often, however, this kind of argument is used as a disguise for a desire to engage in eugenics. Claiming that a child with Down syndrome should be aborted, for example, is the same as saying that people afflicted with certain maladies should not be afforded the same right to life as everybody else.

The third argument concerns instances where pregnancy has been instigated through an act of rape or incest. Whether or not rape should be sufficient grounds for an abortion is a tricky question to grapple with. On the one hand, the mother did not choose to be placed in the situation she has found herself in. And, by extension, birthing, and most probably raising, a child born of rape may prove an insurmountable emotional turmoil for the mother. On the other hand, however, the child did not choose to be conceived through rape, and it is immoral to punish an innocent person for the crimes of another.

In reality, however, the rape justification for abortion is merely a red herring. It is a backdoor method for justifying the total, absolute, and unquestionable access to abortion.

The fourth argument concerns the idea that a woman has the right to abort her unborn child because she has the absolute right to bodily autonomy. In Texas last year, Judge Earl Leroy Yeakel III (1945 – ) overturned Senate Bill Eight, which prevented doctors from performing dilation and evacuation abortions by mandating that a child’s heart must stop beating before the procedure can be performed. Yeakel claimed that the decision to abort a child outside the womb is “solely and exclusively the woman’s decision.”

This is the easiest argument to refute. An abortion does not only affect a woman’s body, it also destroys the life of a separate, innocent human being. Furthermore, the right to choose when to have a family is one shared by all people up to a point. A man has the right to wear a condom, he can have a vasectomy, and so forth. Likewise, a woman has every right to use contraceptive birth control, a diaphragm, a female condom, a cervical cap, an intrauterine device, and more. Couples can even refrain from having sex. But the right to family planning ends the moment a child has been conceived.

The fifth argument, and the one that is the most egregious, is the argument that an unborn child does not count as a human life. Much of this is the result of language. We use Latin words like “foetus” and “embryo” to fool ourselves into believing an unborn child is not a human being.

Therein lies the rub. People have always justified evil and immorality by altering the parameters of their morals to suit themselves. People have always justified murder by claiming that the person they are killing is not human. They may argue, for example, that murder is wrong, but that they are justified in aborting their unborn child because they do not see that child as human.

And the answer to the biological and physiological question of whether the unborn child is a human being is, without any shadow of a doubt, yes.

This is the case right down to the genetic level. Virtually every cell in our bodies contains around twenty thousand different genes spread out on long strands of DNA known as chromosomes. DNA is very special. It is the chemical building block that makes us who we are. It determines whether or not we will go bald, what our eye and hair colour will be, how tall we will be, and much more besides.

If there is anything that DNA is good at, it is replicating itself. This can occur in two ways. At the most basic level, DNA replicates itself through cloning. At the most complex, one set of DNA merges with another through sexual reproduction, and in doing so creates an entirely unique individual.

But how can it do this safely? The answer lies in a process known as meiosis. When the human body makes sex gametes – sperm and ova – it begins by copying each chromosome, with the copy remaining attached to the original at a single point so that the pair condenses into an ‘X’ shape. Matching chromosomes then pair up and exchange some of their genetic material with each other. Finally, the cell splits twice to create sperm or ova, each carrying a unique genetic package.

In other words, every sperm cell and every ovum carries a set of chromosomes that has never existed before and will never exist again.

Human beings have a grand total of forty-six chromosomes, or twenty-three pairs. The moment a child is conceived, a full set of these chromosomes – known as a diploid set – is established: twenty-three chromosomes from its father and twenty-three chromosomes from its mother.
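To give a sense of the scale of that uniqueness, consider a back-of-the-envelope calculation. This is illustrative only, and it deliberately ignores crossing over, which multiplies the possibilities further: with twenty-three chromosome pairs, each parent can produce over eight million genetically distinct gametes, and a single couple could therefore produce around seventy trillion distinct chromosome combinations. A few lines of Python make the arithmetic plain:

    # Illustrative arithmetic only: counting chromosome combinations,
    # ignoring crossing over (which increases the number further).
    PAIRS = 23
    gametes_per_parent = 2 ** PAIRS            # each pair contributes one of two chromosomes
    combinations_per_couple = gametes_per_parent ** 2
    print(f"{gametes_per_parent:,} possible gametes per parent")        # 8,388,608
    print(f"{combinations_per_couple:,} possible combinations per couple")  # ~70.4 trillion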

The average pregnancy lasts between thirty-seven and forty-two weeks. During this time the child growing inside a woman’s body will go through all kinds of wonderful and miraculous changes. At three weeks, its brain, heart, gastrointestinal tract, and spinal cord have begun to form. By the fourth and fifth weeks, the heart is pumping rudimentary blood through the child’s veins with a steady rhythm. By the sixth week, the child’s fingers and toes have begun to form, and the child’s heartbeat can now be detected. By the end of the second month, all the child’s essential organs have begun to form.

And there’s still another seven months to go! By the fourteenth to sixteenth weeks, the child will begin to move around, its liver and pancreas will have begun to secrete fluid, and its fingerprints will begin to form. By the seventeenth to the twentieth week, the mother will be able to feel her child moving around inside her, its heartbeat will be detectable via a stethoscope, and its fingernails, toenails, eyebrows, and eyelashes will have started to grow.

By the twenty-fourth through to the twenty-sixth week, the child’s brain will be rapidly developing, the nervous system will be developed to a sufficient degree to give the child some control, albeit minute, over its own movements, it will have developed a startle reflex, and its sleeping cycles will be perceptible to the mother. A child born at this stage can survive outside the womb with the assistance of modern medical technology. By the thirty-third to thirty-sixth week, the child will shift into the birthing position and will rapidly put on weight. Within weeks, a fully formed human being will be born.

Any discussion about abortion must begin with the scientific truth that an unborn child is a human life. Only after that truth has been acknowledged can factors like the health of the mother, the vitality of the child, cases of rape and incest, and bodily autonomy be considered. The preservation of innocent life is the most important responsibility for every person living in a free society. The way we respond to this issue will define us for decades to come.

WHY I AGREE WITH THE DEATH PENALTY


February 3rd last year marked the fiftieth anniversary of the execution of Ronald Ryan (1925 – 1967), the last man to be hanged in Australia. Since then, the general consensus has been that the death penalty constitutes a cruel and unusual punishment. It is the opinion of this author, however, that the death penalty is not only just, but a key part of any justice system.

There are two main arguments against the death penalty. First, that it is an exceptionally expensive form of punishment. And second, that the death penalty leaves no room for non-posthumous exoneration.

The first argument is one of economics, not of morality or of justice. It does not argue that the death penalty is immoral, only that it is expensive. What this argument suggests is that a price tag can be placed on justice. That the most important factor determining a case is not whether justice is served, but how much money it will cost.

The way a society punishes murder is reflective of the value that society places on a human life. The life of a human being is not something that can have a time-based value placed upon it. It is something that has immeasurable value and purpose. The Norwegian mass-murderer, Anders Breivik, a man responsible for the deaths of seventy-seven people, received a sentence of just twenty-one years for his heinous crimes – roughly one hundred days of imprisonment per life taken. A society that decides that the value of an individual’s life amounts to only one hundred days is one that has no respect for the sanctity of life.

The second argument carries a great deal more weight. It is an undeniable fact that innocent people have been, and continue to be, executed for crimes they did not commit. In the United States, prejudice against African Americans, Jews, Catholics, homosexuals, and others often meant that justice was not as blind as it should have been. Furthermore, in an era before DNA evidence, convictions were based upon less reliable physical evidence and eyewitness testimony. Such evidence naturally carried a higher rate of false convictions.

There are two problems with the innocence argument. First, the advent of DNA along with other advances in forensic science has meant that the possibility of executing an innocent person is very low. DNA may not be foolproof, but when combined with eyewitness testimony and additional physical evidence, it makes a guilty verdict all the more concrete.

Second, the innocence argument is not an argument against the death penalty. Rather, it is an argument against executing an innocent person. It only applies when the condemned man is not actually guilty of the crime he has been convicted of. What it does not address is how a person whose guilt is certain beyond all possible reasonable doubt ought to be treated. When an individual’s guilt is that certain the innocence argument no longer carries any weight.

There are two primary arguments for the death penalty. First, that there are crimes so heinous and criminals so depraved that the only appropriate response is the imposition of the death penalty. And second, that the death penalty is an essential aspect of a just and moral justice system.

That there are crimes so heinous, and criminals so depraved, that they deserve the death penalty is self-evident. Carl Panzram (1892 – 1930), a thief, burglar, arsonist, rapist, sodomite, and murderer, told his executioner: “Hurry it up, you Hoosier bastard, I could kill a dozen men while you’re screwing around.” Peter Kürten (1883 – 1931), also known as the Vampire of Düsseldorf, told his executioner that to hear the sound of his own blood gushing from his neck would be “the pleasure to end all pleasures.” Finally, John Wayne Gacy, Jr. (1942 – 1994) was convicted of forcibly sodomising, torturing, and strangling thirty-three boys and young men. The question, then, is not whether any individual deserves the death penalty; it is whether the state should have the power to execute someone.

The answer to this question is undoubtedly yes. It is frequently forgotten, especially by humanitarians, that the key aspect of a criminal penalty is not rehabilitation or deterrence, but punishment.

In other words, what makes a justice system just is that it can convict a person fairly and impose on them a penalty that is commensurate with the nature and severity of the crime that person has committed. What separates the death penalty from extra-judicial murder is that the condemned person has been afforded all the rights and protections of law, including due process, a fair and speedy trial, the right to trial by jury, the presumption of innocence, and so forth, regardless of their race, religion, sexuality, or gender. When a sentence of death is imposed upon a murderer, it is not a case of an individual or group of individuals taking vengeance, but of a legitimate court of justice imposing a penalty in accordance with the law.

What makes the death penalty an integral part of any justice system is not that it constitutes a form of revenge (which it does not) or that it may deter other individuals from committing similar crimes (which it also does not). What makes it just is that it constitutes a punishment that fits the crime that has been committed.

Legal Fling App Represents All That Is Wrong With Modern Sexual Politics


According to Business Insider, an app announced last Wednesday will allow users to give explicit and formal consent to sex using blockchain technology.

The ‘Legal Fling’ app is the brainchild of Legal Things, a Dutch start-up that boasts over thirty thousand users. According to the Legal Fling website, the app, which was created in response to the #MeToo movement, is based on the philosophy that:

“Sex should be fun and safe, but nowadays a lot of things can go wrong. Think of unwanted videos, withholding about STDs and offensive porn re-enactment. While you’re protected by law, litigating any offences through court is nearly impossible in reality. Legal Fling creates a legally binding agreement, which means any offense is a breach of contract. By using Live Contracts protocol, your private agreement is verifiable using the blockchain and enforceable with a single click.”

The Legal Fling app uses a user-friendly interface to allow couples to enter into legally binding ‘live contracts’ that will “make the do’s and don’ts clear to both parties.” The app then uses blockchain technology (originally used to track Bitcoin transactions and the movement of goods within corporate supply chains) to create a permanent record that is saved in multiple places.
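For readers wondering how a “verifiable” record of this kind works, here is a minimal sketch in Python of a hash-chained log. This is purely illustrative – it is not Legal Fling’s actual code, and the function names and record fields are hypothetical – but it shows the basic idea: each entry is linked to the hash of the one before it, so altering any stored agreement changes its hash and breaks the chain.

    import hashlib
    import json
    import time

    def record_hash(body: dict) -> str:
        # Hash the record's canonical JSON form with SHA-256.
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode("utf-8")).hexdigest()

    def append_agreement(chain: list, terms: dict) -> dict:
        # Link each new agreement to the hash of the previous entry.
        previous = chain[-1]["hash"] if chain else "0" * 64
        body = {"timestamp": time.time(), "terms": terms, "previous_hash": previous}
        record = dict(body, hash=record_hash(body))
        chain.append(record)
        return record

    def chain_is_valid(chain: list) -> bool:
        # Recompute every hash; a tampered entry or broken link fails the check.
        previous = "0" * 64
        for record in chain:
            body = {k: v for k, v in record.items() if k != "hash"}
            if record["hash"] != record_hash(body) or body["previous_hash"] != previous:
                return False
            previous = record["hash"]
        return True

    # Hypothetical usage: the agreed terms are recorded, then verified later.
    chain = []
    append_agreement(chain, {"parties": ["A", "B"], "sti_disclosure": True, "filming": False})
    assert chain_is_valid(chain)

A real blockchain goes further by distributing copies of the chain across many machines, which is what makes the record effectively permanent. Whether such a record should carry legal or moral weight is, of course, the very question at issue.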

An application like Legal Fling is indicative of all that is wrong with modern sexual politics.

What the Legal Fling app represents is a culture that desperately wants to have its cake and eat it, too. By creating a moral standard that proclaims all sex permissible as long as it is consensual, modern hook-up culture has created a confusing and treacherous environment for intimate relationships.

The answer to modern sexual woes is not the creation of an app, but a return to traditional, family-based values. Sexual conduct must be governed by strong moral standards, not an app.

IN DEFENCE OF CHRISTIANITY


In 2017, the online video subscription service, Hulu, began producing an adaptation of Margaret Atwood’s (1939 – ) 1985 novel, The Handmaid’s Tale. The story is set in the fictional, totalitarian state of Gilead: a society run by fundamentalist Christians who overthrew the previous secular state and set up a theocracy in its wake. For years, influential thought leaders and other arbiters of popular opinion have espoused the opinion that broader society would greatly benefit from the abolition of Christianity. It is my belief that such an occurrence would have precisely the opposite effect.

No group has criticised Christianity more than the New Atheists. Frequently deriding it as nothing more than “science for stupid people”, prominent New Atheists have ridiculed Christianity and dismissed its positive effects. Atheists and anti-Christians turn Christianity into a straw man by reducing it to its most basic elements (they are helped, unfortunately, by those fundamentalist Christians who still assert that the earth is literally six thousand years old). They then use this straw man to discredit the idea of faith. The philosopher Sam Harris (1967 – ) argued in his book, The End of Faith, that religious belief constituted a mental illness. More alarmingly, the British scientist Richard Dawkins (1941 – ) took things one step further by claiming that religious instruction constituted a form of child abuse.

The basis for much of Christianity’s negative portrayal finds its roots in the philosophies of the political left. A central tenet of the left-wing worldview is an adherence to secularism, which appears set to replace Christianity as the prevailing cultural belief system. (This is not to be confused with atheism, which denies the existence of a creator). On the one hand, secularism promotes both religious liberty and the separation of church and state (both of which are good things). On the other hand, however, proponents of secularism reject the knowledge and wisdom religious institutions can impart to the world. In a secular society, God can be believed to exist, but not in any sort of a productive way. God is something to be confined to the private home or the sanctuary of one’s local Church. God is something to be worshipped behind closed doors where no one can see you.

Of course, anti-Christian rhetoric has been a facet of popular culture since the 1960s. Today, finding a positively-portrayed devout Christian family is about as likely as finding a virgin in the maternity ward. Christians are routinely depicted as stupid, backwards, hateful, and extreme. By contrast, atheists are routinely depicted as witty, intelligent, and tolerant. In short, atheism is deemed good and Christianity bad. And, of course, this attitude has filled some with a kind of arrogant grandiosity. During an interview in 1966, John Lennon (1940 – 1980) opined: “Christianity will go. It will vanish and shrink. I needn’t argue with that; I’m right and I will be proved right. We’re more popular than Jesus now; I don’t know which will go first, rock and roll or Christianity.”

The mainstream media rarely discusses the persecution of Christians. Indeed, prejudice and discrimination against Christianity is treated with a type of permissiveness that prejudice and discrimination against other religions, Islam being a primary example, is not.

Christians are estimated to be the victims of four out of five discriminatory acts around the world, and face persecution in one hundred and thirty-nine countries. Churches have been firebombed in Nigeria. North Koreans caught with Bibles are summarily shot. In Egypt, Coptic Christians have faced mob violence, forced removals, and, in the wake of the Arab Spring, the abduction of women and girls who are forced to marry Muslim men.

In China, Christian villagers were instructed to remove pictures of Christ, the Crucifix, and Gospel passages by Communist Party officials who wished to “transform believers in religion into believers in the party.” According to the South China Morning Post, the purpose behind the drive was the alleviation of poverty. The Chinese Communist Party believed that it was religious faith that was responsible for poverty in the region and wanted the villagers to look to their political leaders for help, rather than a saviour. (Wouldn’t it be wonderful if the Chinese Communist Party looked at their own evil and ineffective political ideology as the true cause of poverty in their country rather than blaming it on religion?). As a result, around six-hundred people in China’s Yugan county – where about ten percent of the population is Christian – removed Christian symbology from their living rooms.

Popular culture and thought in the West has attempted, with a great deal of success, to paint Christianity as stupid, backwards, dogmatic, and immoral. It is the presence of religion that is to blame for holding the human race back. It is religion that is to blame for racism, sexism, and all manner of social injustices. It is religion that is the cause of all wars. So on and so forth.


I strongly disagree with this argument. Indeed, it is my belief that the abolition of Christianity from public life would have the effect of increasing intolerance and immorality. Its abolition would have precisely this effect because it would do away with those metaphysical doctrines – divine judgement, universal and absolute morality, and the divinity of the human soul – that have made tolerance and morality possible.

Christianity and Western Civilisation are inextricably linked. In the field of philosophy, virtually all Western thinkers have grappled with the concepts of God, faith, morality, and more. As the writer, Dinesh D’Souza (1961 – ) wrote in his book, What’s So Great About Christianity:

“Christianity is responsible for the way our society is organised and for the way we currently live. So extensive is the Christian contribution to our laws, our economics, our politics, our art, our calendar, our holidays, and our moral and cultural priorities that J.M. Roberts writes in The Triumph of the West: ‘We could none of us today be what we are if a handful of Jews nearly two thousand years ago had not believed that they had known a great teacher, seen him crucified, died, and buried, and then rise again’.”

The primary contribution of Christianity to Western civilisation has been to act as a stabilising force, providing society with an overarching metaphysical structure as well as rules and guidelines that act as a moral foundation. This shared metaphysical structure and moral foundation, combined with traditions and cultural customs, has the effect of bringing a country, a township, even a school or parish, together.

When Christianity lost its supremacy in society it was replaced by smaller, less transcendent and more ideological, belief systems. Where people had once been unified by a common belief, they have now become more divided along ideological lines. Religious belief has not been replaced by rationalism or logic, as the New Atheists supposed. Rather, people have found outlets for their need to believe in other places: social activism, political ideologies, and so forth.

The most far-reaching contribution that Christianity has made to the Western world comes under the guise of human rights. Stories like the Parable of the Good Samaritan have had a remarkable influence on their conception. Human rights stem, in part, from the belief that human beings were created in the image of God and hold a divine place in the cosmos. Christianity has played a positive role in ending numerous brutal and archaic practices, including slavery, human sacrifice, polygamy, and infanticide. Furthermore, it has condemned incest, abortion, adultery, and divorce. (Remarkably, there are some secularists who wish to bring back some of these antiquated practices).

Christianity placed an intrinsic value on human life that had not been present in pre-Christian society. As the American pastor, Tim Keller (1950 – ), wrote in The Reason for God: “It was extremely common in the Greco-Roman world to throw out new female infants to die from exposure, because of the low status of women in society.” Roman culture was well known for its brutality and callousness. Practices of regicide, gladiatorial combat, infanticide, and crucifixion were all common. Seneca (4BC – AD65), Nero’s (AD37 – AD68) chief advisor, once stated that it was Roman practice to “drown children who, at birth, are weakly and abnormal.”

Christian morality has had a notable effect on our views on human sexuality and has helped to provide women with far greater rights and protections than they enjoyed under its pagan predecessors. Christianity helped to end the hypocritical pagan practice of allowing men to have extra-marital affairs and keep mistresses. It formulated rules against the cohabitation of couples prior to marriage, adultery, and divorce. Unlike the Ancient Greeks and Ancient Romans, Christians did not force widows to remarry, and even allowed widows to keep their husbands’ estates.

The Christian faith has been instrumental in the enactment and promotion of public works. The instigator of the Protestant Reformation, Martin Luther (1483 – 1546), championed the idea of compulsory education and state-funded schools. Similarly, the Lutheran layman, Johann Sturm (1507 – 1589), pioneered graded education. Christianity has been the source of numerous social services, including health-care, schooling, and charity. Christianity’s positive belief in charity and compassion has led to many orphanages, old-age homes, and groups like the Sisters of Charity and Missionaries of the Poor, the YMCA and YWCA, Teen Challenge, the Red Cross, and numerous hospitals and mental health institutions being founded by the faithful.

One of the frequent criticisms levelled at the Christian faith, particularly the Catholic Church, has been that it has stymied scientific and technological development. In truth, Western science and technology have been able to flourish because of the influence of Christianity, not in spite of it. This is because the Christian belief that God created everything lends itself to the idea that everything is worth contemplating. It is certainly true that the Catholic Church has been hostile to those discoveries that do not conform to its doctrine. Galileo, for example, was forced to retract his claim of heliocentrism because it challenged the Church’s doctrine that the earth was the centre of the universe. For the most part, however, Christianity has been largely supportive of scientific endeavour. Christian scientists have included Gregor Mendel (1822 – 1884), Nicolaus Copernicus (1473 – 1543), Johannes Kepler (1571 – 1630), Galileo Galilei (1564 – 1642), Arthur Eddington (1882 – 1944), Isaac Newton (1643 – 1727), Blaise Pascal (1623 – 1662), Andre Ampere (1775 – 1836), James Joule (1818 – 1889), Lord Kelvin (1824 – 1907), Robert Boyle (1627 – 1691), George Washington Carver (1860s – 1943), Louis Pasteur (1822 – 1895), Joseph Lister (1827 – 1912), Francis Collins (1950 – ), William Phillips (1914 – 1975), and Sir John Houghton (1931 – ), among others.

The forces behind the stratospheric success of Western civilisation have not been its art or music or architecture, but the ideas it has built itself upon. It is notions like the rule of law, property rights, free markets, a preference for reason and logic, and Christian theology that are responsible for making Western society the freest and most prosperous civilisation that has ever existed. It cannot survive with one of its central tenets removed.