King Alfred Press


The Problem with Actors and Actresses


Like many people, I was cynically amused to learn that the Duke and Duchess of Sussex were “leaving” the Royal Family. According to an agreement they reached with the Palace on January 18th, they would be free to pursue business opportunities around the world and would “no longer be working members” of the British royal family, though they would lose the right to be styled His or Her Royal Highness.

It’s hardly surprising. Acting, like many art forms, has always attracted the insecure, the sociopathic, and the just plain crazy. And Meghan Markle is an actress. One psychological study found that actors showed significantly higher rates of disordered personality traits than non-actors. The study, which compared 214 professional actors to a cohort of North American non-actors, found a higher prevalence of anti-social, borderline, narcissistic, schizotypal, and obsessive-compulsive personality traits among actors than among the general population.

People become actors because they like being the centre of attention. They crave the spotlight because it makes them feel validated. The Royal Family, by contrast, performs public service by diverting attention away from themselves and onto the British nation and people (a fact somewhat undermined by a news media that treats the royals as news stories in and of themselves). Poor Meghan Markle has found herself in a situation where she is not the centre of attention, and she doesn’t like it.

So, what does someone like Meghan Markle do when the spotlight is not on her? Well, the answer in Meghan’s case seems to be: leave the royal family. I will not be at all surprised if Meghan announces some kind of return to acting over the coming year. You cannot turn an actress into a princess any more than you can make a leopard change its spots.

The Presumption of Innocence is Worth Protecting No Matter What the Cost


Jemma Beale was sentenced to ten years’ imprisonment after it was found she had made repeated false rape allegations.

In February 2013, Vassar College student Xialou “Peter” Yu was accused of sexual assault by fellow student Mary Claire Walker. The accusation stemmed from an incident twelve months earlier in which Walker had accompanied Yu back to his dorm room after a party and initiated consensual sex. Walker herself broke off the coitus early, having decided that it was too soon after ending her relationship with her boyfriend to embark on a sexual relationship with another man. She even expressed remorse for having “led Yu on” and insisted that he had done nothing wrong.

Nevertheless, at some point, Walker decided that she had been sexually assaulted, and Yu was made to stand before a college tribunal. At this tribunal, Yu was refused legal representation, had his attempts at cross-examining his accuser repeatedly stymied, and saw potential eyewitness testimony from both his and Walker’s roommates suppressed by the campus gender equality compliance officer, supposedly because it had “nothing useful to offer.” In what can only be described as a gross miscarriage of justice, Yu was found guilty and summarily expelled.

Unfortunately, the kind of show trial that condemned Yu is not entirely uncommon in American colleges and universities (and, like many social diseases, it is starting to infect Australian campuses as well). Such trials are the result of years of unchallenged feminist influence on higher education. These institutions have swallowed, hook, line, and sinker, the feminist lie that every single woman who claims to have been sexually assaulted must be telling the truth.

The problem begins with those who make public policy. The US Department of Education has been seduced by the ludicrous idea that modern, Western societies are a “rape culture.” It has bought into the lie that one in five women are sexually assaulted on college campuses, despite the fact that this statistic (which conveniently produces exactly the same ratio no matter where it is used) comes from an easily disproven web-based survey.

This survey, which was conducted at two universities in 2006, took only fifteen minutes to complete and drew responses from just 5,466 undergraduate women aged between eighteen and twenty-five. Furthermore, it was poorly formulated, with researchers asking women about their experiences and then deciding for themselves how many of those women had been victims of sexual misconduct.

Regardless, the survey’s lack of credibility did not stop the US Department of Education’s Office for Civil Rights from laying out guidelines for handling reports of sexual misconduct. Among these recommendations was that reports of sexual misconduct should be evaluated on the “preponderance of evidence” standard rather than the more traditional “clear and convincing evidence” standard. This radical shift in the standard of proof means that an accuser only has to show that a sexual assault more likely than not occurred, rather than having to prove it beyond a reasonable doubt.

It would be an understatement to say the college and university rape tribunals – and the policies that inform them – violate every legal principle and tradition of western law. American colleges and universities have created an environment in which male students can be stigmatised as sexual deviants with little to no evidence aside from an accusation. These tribunals not only violate standards of proof but the presumption of innocence, as well.

That these tribunals have decided to do away with the presumption of innocence should hardly come as a surprise. After all, the very idea of the presumption of innocence is antithetical to human nature. It is natural for human beings to presume that someone is guilty simply because they have been accused of something. As the Roman jurist Ulpian pointed out, the presumption of innocence flies in the face of the seductive belief that a person’s actions always result in fair and fitting consequences. People like to believe that someone who has been accused of a crime must have done something to deserve it.

The presumption of innocence is the greatest legal protection the individual has against the state. It means that the state cannot convict anyone unless it can prove their guilt beyond a reasonable doubt. We should be willing to pay any price to preserve it. And we certainly should not allow extra-legal tribunals to do away with it just to satisfy their ideological proclivities.

On Constitutional Monarchy


I would like to begin this essay by reciting a poem by the English Romantic poet, William Wordsworth (1770 – 1850):

 

    Milton! thou shouldst be living at this hour:
    England hath need of thee: she is a fen
    Of stagnant waters: altar, sword, and pen,
    Fireside, the heroic wealth of hall and bower,
    Have forfeited their ancient English dower
    Of inward happiness. We are selfish men;
    Oh! raise us up, return to us again;
    And give us manners, virtue, freedom, power.
    Thy soul was like a star, and dwelt apart:
    Thou hadst a voice whose sound was like the sea:
    Pure as the naked heavens, majestic, free,
    So didst thou travel on life’s common way,
    In cheerful godliness; and yet thy heart
    The lowliest duties on herself did lay.

 

The poem, entitled London, 1802, is Wordsworth’s ode to an older, nobler time. In it he attempts to conjure up the spirit of John Milton (1608 – 1674), the poet and civil servant immortalised for all time as the author of Paradise Lost.

Milton acts as the embodiment of a nobler form of humanity. He symbolises a time when honour and duty played a far greater role in the human soul than they did in Wordsworth’s day, or do in our own. It is these themes of honour, duty, and nobility that provide the spiritual basis for constitutional monarchy.

It is a subject I will return to much later in this essay. To begin, however, it would be more prudent to examine those aspects of English history that allowed both constitutional monarchy and English liberty to be born.

The English monarchy has existed for over eleven hundred years. In the centuries stretching from King Alfred the Great in the 9th to Elizabeth II in the 21st, the English people have seen more than their fair share of heroes and villains, wise kings and despotic tyrants. Through their historical and political evolution, the British have developed, and championed, ideals of liberty, justice, and good governance. The English have gifted these ideals to most of the Western world through the export of their culture to their former colonies.

It is a sad reality that there are many people, particularly left-wing intellectuals, who need to be reminded of the contributions the English have made to world culture. The journalist Peter Hitchens (1951 – ) noted in his book The Abolition of Britain that abhorrence for one’s own country was a unique trait of the English intellectual. Similarly, George Orwell (1903 – 1950) once observed that an English intellectual would sooner be seen stealing from the poor box than standing for “God Save the King.”

However, these intellectuals fail to notice, in their arrogance, that “God Save the King” is actually a celebration of constitutional monarchy, not symbolic reverence to an archaic and rather powerless royal family. It celebrates the nation as embodied in a single person or family, and the fact that the common man and woman can live in freedom because constitutional restraints are placed on the monarch’s power.

If one’s understanding of history has come from films like Braveheart, it is easy to believe that all people in all times have yearned to be free. A real understanding of history, one that comes from books, reveals that this has not always been the case. For most of history, people lived under the subjugation of one ruler or another. They lived as feudal serfs, as subjects of a king or emperor, or in some other such arrangement. They had little reason to expect such arrangements to change and little motivation to try to change them.

At the turn of the 17th century, the monarchs of Europe began establishing absolute rule by undermining the traditional feudal institutions that had been in place for centuries. These monarchs became all-powerful, wielding jurisdiction over every form of authority: political, social, economic, and so forth.

To justify their mad dash for power, Europe’s monarchs required a philosophical argument that vindicated their actions. They found it in a political doctrine known as ‘the divine right of kings.’ This doctrine, formulated by the Catholic bishop Jacques Bossuet (1627 – 1704) in his book Politics Derived from Sacred Scripture, argued that monarchs were ordained by God and therefore represented His will. It was the duty of the people to obey their monarch without question. As such, no limitations could be placed on a monarch’s power.

What Bossuet was suggesting was hardly new, but it did provide the justification many monarchs needed to centralise power in themselves. King James I (1566 – 1625) of England and Scotland saw monarchs as God’s lieutenants and believed that their actions should be tempered by the fear of God, since they would be called to account at the Last Judgement. On the basis of this belief, King James felt perfectly justified in proclaiming laws without the consent of Parliament and in involving himself in cases being tried before the courts.

When King James died in 1625, he was succeeded by his second-eldest son, Charles (1600 – 1649). King Charles I assumed the throne during a time of political change. He was an ardent believer in the divine right of kings, a belief that caused friction between the monarch and the Parliament from which he had to obtain approval to raise funds.

In 1629, Charles outraged much of the population, as well as many nobles, when he elected to raise funds for his rule using outdated taxes and fines and stopped calling Parliament altogether. Charles had been frustrated by Parliament’s constant attacks on him and its refusal to furnish him with money. The ensuing period would become known as the Eleven Years’ Tyranny.

By November 1640, Charles had become so bereft of funds that he was forced to recall Parliament. The newly assembled Parliament immediately began clamouring for change. It asserted the need for a regular parliament and sought changes that would make it illegal for the King to dissolve the political body without the consent of its members. In addition, Parliament demanded the execution of the King’s friend and advisor, Thomas Wentworth (1593 – 1641), the 1st Earl of Strafford, for treason.

The result was a succession of civil wars that pitted King Charles against the forces of Parliament, led by the country gentleman Oliver Cromwell (1599 – 1658). Hailing from Huntingdon, Cromwell was a descendant of Henry VIII’s (1491 – 1547) chief minister, Thomas Cromwell (1485 – 1550). In the end, the conflict would decimate the English population and forever alter England’s political character.

The English Civil War began in January 1642, when King Charles marched on Parliament with a force of four hundred men. After being denied entry, he withdrew to Oxford. Trouble was brewing. Throughout the summer, people aligned themselves with either the monarchists or the Parliamentarians.

The forces of King Charles and the forces of Parliament met at the Battle of Edgehill in October. What followed was several years of bitter and bloody conflict.

Ultimately, it was Parliament that prevailed. Charles was captured, tried for treason, and beheaded on January 30th, 1649. England was transformed into a republic, or “commonwealth.” The English Civil War had claimed the lives of two hundred thousand people, divided families, and facilitated enormous social and political change. Most importantly, however, it set the precedent that a monarch could not rule without the consent of Parliament.

The powers of Parliament had been steadily increasing since the conclusion of the English Civil War. Total Parliamentary supremacy, however, proved unpopular. The Commonwealth created in the wake of the Civil War collapsed shortly after Oliver Cromwell’s death. When this happened, it was decided to restore the Stuart dynasty.

The exiled Prince Charles returned from France and was crowned King Charles II (1630 – 1685). Like his father and grandfather, Charles was an ardent believer in the divine right of kings. This view put him at odds with the ideas of the Enlightenment, which challenged the validity of absolute monarchy, questioned traditional authority, and idealised liberty.

By the third quarter of the 17th century, Protestantism had triumphed in both England and Scotland. Ninety percent of the British population was Protestant. The Catholic minority was seen as odd, sinister, and, in extreme cases, outright dangerous. People equated Catholicism with tyranny, linking French-style autocracy with popery.

It should come as no surprise, then, that Catholics became the target of persecution. Parliament barred them from holding offices of state and banned Catholic forms of worship. Catholics could not become members of Parliament, justices of the peace, or officers in the army, or hold any other public office, unless they were granted a special dispensation by the King.

It is believed that Charles II may have been a closet Catholic. He was known for pardoning Catholics for crimes (controversial considering Great Britain was a Protestant country) and for ignoring Parliament.

However, Charles’ brother and successor, James (1633 – 1701), was a Catholic beyond any shadow of a doubt. He had secretly converted in 1669 and was forthright in his faith. After his first wife, Anne Hyde (1637 – 1671), died, James had even married the Italian Catholic Mary of Modena (1658 – 1718), a decision that hardly endeared him to the populace.

The English people grew alarmed when it became obvious that Charles II’s wife, Catherine of Braganza (1638 – 1705), would not produce a Protestant heir. It meant that Charles’ Catholic brother, James, was almost certain to succeed him on the throne. So incensed was Parliament at the prospect of a Catholic on the throne that it attempted to pass the Crown to one of Charles’ Anglican relatives.

Their concern was understandable, too. The English people had suffered the disastrous effects of religious intolerance ever since Henry VIII had broken away from the Catholic Church and established the Church of England. The result had been over a hundred years of religious conflict and persecution. Mary I (1516 – 1558), a devout Catholic, had earnt the moniker “Bloody Mary” for burning Protestants at the stake. During the reign of King James, Guy Fawkes (1570 – 1606), along with a group of Catholic terrorists, had attempted to blow up Parliament in the infamous Gunpowder Plot.

Unlike Charles II, James made his faith publicly known. He desired greater tolerance for Catholics and for non-Anglican dissenters like Quakers and Baptists. The official documents he issued, designed to bring an end to religious persecution, were met with considerable objection from both the bishops and Europe’s Protestant monarchs.

Following the passing of the Test Act in 1673, James had been forced to relinquish his offices. The Act required office-holders and members of the nobility to take Holy Communion as spelt out by the Church of England. It was designed to prevent Catholics from taking public office.

Now, as King, James sought to repeal the Test Act and began placing Catholics in positions of power. His Court featured many Catholics, and he became infamous for approaching hundreds of men – justices, wealthy merchants, and minor landowners – to stand as future MPs and, in a process known as ‘closeting’, attempting to persuade them to support his legal reforms. Most refused.

Nor was that the limit of James’ activities. He issued two Declarations of Indulgence, ordered them to be read from every pulpit on two successive Sundays, and put those who opposed them on trial for seditious libel. Additionally, he imprisoned seven bishops for opposing him, made sweeping changes to the Church of England, and built an army composed mainly of Catholics.

The people permitted James II to rule as long as his daughter, the Protestant Princess Mary (1662 – 1694), remained his heir. All this changed, however, when Mary of Modena produced a Catholic heir: James Francis Edward Stuart (1688 – 1766). When James declared that the infant would be raised Catholic, it immediately became apparent that a Catholic dynasty was about to be established. Riots broke out. Conspiracy theorists posited that the child was a pawn in a Popish plot. The child, the theory went, was not the King’s son at all but a substitute who had been smuggled into the birthing chamber in a bed-warming pan.

In reality, it was the officers of the Army and Navy who were beginning to plot and scheme in their taverns and drinking clubs. They were annoyed that James had introduced Papist officers into the military. The Irish Army, for example, had seen much of its Protestant officer corps dismissed and replaced with Catholics who had little to no military experience.

James dissolved Parliament in July 1688. Around this time, a bishop and six prominent politicians wrote to Mary and her Dutch husband, William of Orange (1650 – 1702), inviting them to raise an army, invade London, and seize the throne. They accepted.

William landed in Devon on Guy Fawkes’ Day accompanied by an army of fifteen thousand Dutchmen and other Protestant Europeans. He quickly seized Exeter before marching eastward towards London. James II called for troops to confront William.

Things were not looking good for James, however. Large parts of his officer corps were defecting to the enemy and taking their soldiers with them. Without the leadership of their officers, many soldiers simply went home. English magnates started declaring for William. And James’ own daughter, Princess Anne (1665 – 1714), left Whitehall to join the rebels in Yorkshire. James, abandoned by everyone, fled into exile in France. He would die there in 1701.

On January 22nd, 1689, William called the first ‘Convention Parliament.’ At this convention, Parliament passed two resolutions. First, it was decided that James’ flight into exile constituted an act of abdication. Second, it was declared to be against public policy for the throne to be occupied by a Catholic. As such, the throne passed over James Francis Edward Stuart, and William and Mary were invited to take the Crown as co-monarchs.

They would be constrained, however, by the 1689 Bill of Rights and, later, by the 1701 Act of Settlement. The 1689 Bill of Rights made Great Britain a constitutional monarchy as opposed to an absolute one. It established Parliament, not the crown, as the supreme source of law. And it set out the most basic rights of the people.

Likewise, the 1701 Act of Settlement helped to strengthen the Parliamentary system of governance and secured a Protestant line of succession. Not only did it prevent Catholics from assuming the throne, but it also gave Parliament the ability to dictate who could ascend to the throne and who could not.

The Glorious Revolution was one of the most important events in Britain’s political evolution. It made William and Mary, and all monarchs after them, elected monarchs. It established the concept of Parliamentary sovereignty, granting that political body the power to make or unmake any law it chose. The establishment of Parliamentary sovereignty brought with it the ideas of responsible and representative government.

The British philosopher Roger Scruton (1944 – ) described British constitutional monarchy as a “light above politics which shines down [on] the human bustle from a calmer and more exalted sphere.” A constitutional monarchy unites the people of a nation under a monarch who symbolises their shared history, culture, and traditions.

Constitutional monarchy is a compromise between autocracy and democracy. Power is shared between the monarch and the government, both of whom have their powers restricted by a written, or unwritten, constitution. This arrangement separates the theatre of power from the realities of power. The monarch is able to represent the nation whilst the politician is able to represent his constituency (or, more accurately, his party).

In The Need for Roots, the French philosopher Simone Weil (1909 – 1943) wrote that Britain had managed to maintain a “centuries-old tradition of liberty guaranteed by the authorities.” Weil was astounded to find that chief power in the British constitution lay in the hands of a lifelong, unelected monarch. For Weil, it was this arrangement that allowed Britain to retain its tradition of liberty when other countries – Russia, France, and Germany, among others – lost theirs when they abolished their monarchies.


Great Britain’s great legacy is not its once vast and now non-existent Empire, but the ideas of liberty and governance that it has gifted to most of its former colonies. Even the United States, which separated itself from Britain by means of war, inherited most of its ideas about “life, liberty, and the pursuit of happiness” from its English forebears.

The word “Commonwealth” was adopted at the Sixth Imperial Conference, held between October 19th and November 26th, 1926. The Conference, which brought together the Prime Ministers of the various dominions of the British Empire, led to the formation of the Inter-Imperial Relations Committee. The Committee, headed by former British Prime Minister Arthur Balfour (1848 – 1930), was tasked with looking into future constitutional arrangements within the Commonwealth.

Four years later, at the Seventh Imperial Conference, the committee delivered the Balfour Report. It stated:

“We refer to the group of self-governing communities composed of Great Britain and the Dominions. Their position and mutual relation may be readily defined. They are autonomous Communities within the British Empire, equal in status, in no way subordinate one to another in any aspect of their domestic or external affairs, though united by a common allegiance to the Crown, and freely associated as members of the British Commonwealth of Nations.”

It continued:

“Every self-governing member of the Empire is now the master of its destiny. In fact, if not always in form, it is subject to no compulsion whatsoever.”

Then, in 1931, the Parliament of the United Kingdom passed the Statute of Westminster. It became one of two laws that would secure Australia’s political and legal independence from Great Britain.

The Statute of Westminster gave legal recognition to the de-facto independence of the British dominions. Under the law, Australia, Canada, the Irish Free State, Newfoundland (which would relinquish its dominion status and be absorbed into Canada in 1949), New Zealand and South Africa were granted legal independence.

Furthermore, the law abolished the Colonial Laws Validity Act 1865, a law which had been enacted with the intention of removing “doubts as to the validity of colonial laws.” According to that act, a colonial law was void when it “is or shall be in any respect repugnant to the provisions of any Act of Parliament extending to the colony to which such laws may relate, or repugnant to any order or regulation under authority of such act of Parliament or having in the colony the force and effect of such act, shall be read subject to such act, or regulation, and shall, to the extent of such repugnancy, but not otherwise, be and remain absolutely void and inoperative.”

The Statute of Westminster was quickly adopted by Canada, South Africa, and the Irish Free State. Australia, on the other hand, did not adopt it until 1942, and New Zealand did not adopt it until 1947.

More than forty years later, the Hawke Labor government passed the Australia Act 1986. This law effectively made the Australian legal system independent of Great Britain. It had three major achievements. First, it ended appeals to the Privy Council, thereby establishing the High Court as the highest court in the land. Second, it ended the influence the British government had over the states of Australia. And third, it allowed Australia to update or repeal those imperial laws that applied to it by ending British legislative restrictions.

What the law did not do, however, was withdraw the Queen’s status as Australia’s Head of State:

“Her Majesty’s Representative in each State shall be the Governor.

Subject to subsections (3) and (4) below, all powers and functions of Her Majesty in respect of a State are exercisable only by the Governor of the State.

Subsection (2) above does not apply in relation to the power to appoint, and the power to terminate the appointment of, the Governor of a State.

While her Majesty is personally present in a State, Her Majesty is not precluded from exercising any of Her powers and functions in respect of the State that are the subject of subsection (2) above.

The advice of Her Majesty in relation to the exercise of powers and functions of Her Majesty in respect of a State shall be tendered by the Premier of the State.”

These two laws dispel an important misconception that is often exploited by Australian republicans: the myth that Australia lacks legal and political independence because its Head of State is the British monarch. The passage of the Statute of Westminster in 1931 and the Australia Act in 1986 effectively ended any real political or legal power the British government had over Australia.

In Australia, the monarch (who is our head of state by law) is represented by a Governor General. This individual – who has been an Australian since 1965 – is required to take an oath of allegiance and an oath of office that is administered by a Justice (typically the Chief Justice) of the High Court. The Governor-General holds his or her position at the Crown’s pleasure with appointments typically lasting five years.

The monarch issues letters patent to appoint the Governor-General based on the advice of Australian ministers. Prior to 1924, Governors-General were appointed on the advice of both the British government and the Australian government, because the Governor-General at that time represented both the monarch and the British government. This arrangement changed, however, at the Imperial Conferences of 1926 and 1930. The Balfour Report produced by these conferences stated that the Governor-General should be the representative of the Crown only.

The Governor-General’s role is almost entirely ceremonial. It has been argued that such an arrangement could work with an elected Head of State. However, such an arrangement would have the effect of politicising, and thereby corrupting, the Head of State. A Presidential candidate in the United States, for example, is required to raise millions of dollars for his campaign and often finds himself beholden to the donors who made his ascent possible. The beauty of having an unelected Head of State, aside from the fact that it prevents the government from assuming total power, is that he or she can avoid the snares that trap other political actors.


The 1975 Constitutional Crisis is a perfect example of the importance of having an independent and impartial Head of State. The crisis stemmed from the Loans Affair, which forced Dr. Jim Cairns (1914 – 2003), Deputy Prime Minister, Treasurer, and intellectual leader of the political left, and Rex Connor (1907 – 1977) out of the cabinet. As a consequence of the constitutional crisis, Gough Whitlam (1916 – 2014) was dismissed as Prime Minister and the 24th federal parliament was dissolved.

The Loans Affair began when Rex Connor attempted to borrow money, up to US$4 billion, to fund a series of proposed national development projects. Connor deliberately flouted the rules of the Australian Constitution, which required him to take such non-temporary government borrowing to the Loan Council (a ministerial council consisting of both Commonwealth and state elements which existed to coordinate public sector borrowing) for approval. Instead, on December 13th, 1974, Gough Whitlam, Attorney-General Lionel Murphy (1922 – 1986), and Dr. Jim Cairns authorised Connor to seek a loan without the Council’s approval.

When news of the Loans Affair was leaked, the Liberal Party, led by Malcolm Fraser (1930 – 2015), began questioning the government. Whitlam attempted to brush the scandal aside by claiming that the loans had merely been “matters of energy” and that the Loan Council would only be advised once a loan had been made. Then, on May 21st, Whitlam informed Fraser that the authority for the plan had been revoked.

Despite this, Connor continued to liaise with the Pakistani financial broker Tirath Khemlani (1920 – 1991). Khemlani was tracked down and interviewed by Herald journalist Peter Game (1927 – ) in mid-to-late 1975. Khemlani claimed that Connor had asked for a twenty-year loan with an interest rate of 7.7% and a 2.5% commission for Khemlani. The revelation threw serious doubt on Dr. Jim Cairns’ claim that the government had not offered Khemlani a commission on a loan. Game also revealed that Connor and Khemlani were still in contact, something Connor denied in the Sydney Morning Herald.

Unfortunately, Khemlani stalled on the loan, most notably when he was asked to go to Zurich with Australian Reserve Bank officials to prove the funds were in the Union Bank of Switzerland. When it became apparent that Khemlani would never deliver, Whitlam was forced to secure the loan through a major American investment bank. As a condition of that loan, the Australian government was required to cease all other loan activities. Consequently, Connor had his loan-raising authority revoked on May 20th, 1975.

The combination of existing economic difficulties and the political impact of the Loans Affair severely damaged the Whitlam government. At a special one-day sitting of the Parliament held on July 9th, Whitlam attempted to defend the actions of his government and tabled evidence concerning the loan. It was an exercise in futility, however. Malcolm Fraser authorised Liberal Party senators – who held the majority in the upper house at the time – to force a general election by blocking supply.

And things were only about to get worse. In October 1975, Khemlani flew to Australia and provided Peter Game with telexes and statutory declarations Connor had sent him as proof that he and Connor had been in frequent contact between December 1974 and May 1975. When a copy of this incriminating evidence found its way to Whitlam, the Prime Minister had no other choice but to dismiss Connor and Cairns (though he did briefly make Cairns Minister for the Environment).

By mid-October, every metropolitan newspaper in Australia was calling on the government to resign. Encouraged by this support, the Liberals in the Senate deferred the Whitlam budget on October 16th. Whitlam warned Fraser that the Liberal party would be “responsible for bills not being paid, for salaries not being paid, for utter financial chaos.” Whitlam was alluding to the fact that blocking supply threatened essential services, Medicare rebates, the budgets of government departments and the salaries of public servants. Fraser responded by accusing Whitlam of bringing his own government to ruin by engaging in “massive illegalities.”

On October 21st, Australia’s longest-serving Prime Minister, Sir Robert Menzies (1894 – 1978), signalled his support for Fraser and the Liberals. The next day, Treasurer Bill Hayden (1933 – ) reintroduced the budget bills and warned that further delay would increase unemployment and deepen a recession that had blighted the Western world since 1973.

The crisis would come to a head on Remembrance Day 1975. Whitlam had asserted for weeks that the Senate could not force him into an election, claiming that the House of Representatives had an independence and an authority separate from the Senate.

Whitlam had decided that he would end the stalemate by seeking a half-senate election. Little did he know, however, that the Governor-General, Sir John Kerr (1914 – 1991) had been seeking legal advice from the Chief Justice of the High Court on how he could use his Constitutional Powers to end the deadlock. Kerr had come to the conclusion that should Whitlam refuse to call a general election, he would have no other alternative but to dismiss him.

And this is precisely what happened. With the necessary documents drafted, Whitlam arranged to meet Kerr during the lunch recess. When Whitlam refused to call a general election, Kerr dismissed him and, shortly after, swore in Malcolm Fraser as caretaker Prime Minister. Fraser assured Kerr that he would immediately pass the supply bills and dissolve both houses in preparation for a general election.

Whitlam returned to the Lodge to eat lunch and plan his next move. He informed his advisors that he had been dismissed. It was decided that Whitlam’s best option was to assert Labor’s legitimacy as the largest party in the House of Representatives. However, fate was already moving against Whitlam. The Senate had already passed the supply bills and Fraser was drafting documents that would dissolve the Parliament.

At 2pm, Deputy Prime Minister Frank Crean (1916 – 2008) defended the government against a censure motion moved by the opposition. “What would happen, for argument’s sake, if someone else were to come here today and say he was now the Prime Minister of this country?”, Crean asked. In fact, Crean was stalling for time while Whitlam prepared his response.

At 3pm, Whitlam made a last-ditch effort to save his government by addressing the House. Removing references to the Queen, he moved that the “House expresses its want of confidence in the Prime Minister and requests, Mr. Speaker, forthwith to advise His Excellency, the Governor-General, to call the member for Wannon to form a government.” Whitlam’s motion was passed with a majority of ten.

The Speaker, Gordon Scholes (1931 – 2018), expressed his intention to “convey the message of the House to His Excellency at the first opportunity.” It was a race that Whitlam was not destined to win. Scholes was unable to arrange an appointment until quarter-to-five in the afternoon.

Behind the scenes, departmental officials were working to provide Fraser with the paperwork he needed to proclaim a double dissolution. At ten-to-four, Fraser left for Government House. Ten minutes later, Sir John Kerr signed the proclamation dissolving both Houses of Parliament and set the date of the upcoming election for December 13th, 1975. Shortly after, Kerr’s official secretary, David Smith (1933 – ), drove to Parliament House and, with Whitlam looming behind him, read the Governor-General’s proclamation.

The combination of economic strife, political scandal, and Whitlam’s dismissal signed the death warrant for Whitlam’s government. At the 1975 federal election, the Liberal-National coalition won by a landslide, gaining a majority of ninety-one seats and a popular vote of 4,102,078. In the final analysis, it seems that the Australian people agreed with Kerr’s decision and voted to remove Whitlam’s failed government from power once and for all.


Most of the arguments levelled against constitutional monarchies can be described as petty, childish, and ignorant. The biggest faux pas those who oppose constitutional monarchies make is a failure to separate the royal family (who are certainly not above reproach) from the institution of monarchy itself. Dislike for the Windsor family is not a sufficient reason to disagree with constitutional monarchy. It would be as if I decided to argue for the abolition of the office of Prime Minister just because I didn’t like the person who held that office.

One accusation frequently levelled against the monarchy is that it is an undue financial burden on the British taxpaying public. This is a hollow argument, however. It is certainly true that the monarchy costs the British taxpayer £299.4 million every year, and it is certainly true that the German Presidency costs only £26 million every year. However, it is not true that all monarchies are necessarily more expensive than presidencies. The Spanish monarchy costs only £8 million per year, less than the presidencies of Germany, Finland, and Portugal.

Australia has always had a small but vocal republican movement. The National Director of the Australian Republican Movement, Michael Cooney, has stated: “no one thinks it ain’t broken, that we should fix it. And no one thinks we have enough say over our future, and so, no matter what people think about in the sense of the immediate of the republic everyone knows that something is not quite working.”

History, however, suggests that the Australian people do not necessarily agree with Cooney’s assessment. The Republican referendum of 1999 was designed to facilitate two constitutional changes: first, the establishment of a republic, and, second, the insertion of a preamble in the Constitution.

The referendum was held on November 6th, 1999. Around 99.14% of the Australian voting public – 11,683,811 people – participated. 45.13% (5,273,024 people) voted yes, while 54.87% (6,410,787 people) voted no. The Australian people had decided to maintain Australia’s constitutional monarchy.

All things considered, it was probably a wise decision. The chaos caused by establishing a republic would pose a greater threat to our liberties than a relatively powerless old lady. Several problems would need to be addressed. How often should elections occur? How would these elections be held? What powers should a President have? Will a President be just the head of state, or will he be the head of the government as well? Australian republicans appear unwilling to answer these questions.

Margit Tavits of Washington University in St. Louis once observed that “monarchs can truly be above politics. They usually have no party connections and have not been involved in daily politics before assuming the post of Head of State.” It is the job of the monarch to become the human embodiment of the nation. It is the monarch who becomes the centrepiece of pageantry and spectacle. And it is the monarch who symbolises a nation’s history, tradition, and values.

Countries with elected, or even unelected, Presidents can be quite monarchical in style. Americans, for example, often regard their President (who is both the Head of State and the head of the government) with an almost monarchical reverence. A constitutional monarch might be a lifelong, unelected Head of State, but, unlike a President, that is generally where his or her power ends. It is rather ironic that, as the Oxford political scientists Petra Schleiter and Edward Morgan-Jones have noted, presidents are more willing than monarchs to allow governments to change without democratic input such as elections. Furthermore, by occupying his or her position as Head of State, the monarch is able to prevent other, less desirable people from doing so.

The second great advantage of constitutional monarchies is that they provide their nations with stability and continuity. A monarchy is an effective means of bridging the past and the future. A successful monarchy must evolve with the times whilst simultaneously keeping itself rooted in tradition. All three of my surviving grandparents have lived through the reigns of King George VI and Queen Elizabeth II, and may possibly live to see the coronation of King Charles III. I know that I will live through the reigns of Charles and King William V, and may possibly survive to see the coronation of King George VII (though he will certainly outlive me).

It would be easy to dismiss stability and continuity as manifestations of mere sentimentality, but such things also have a positive effect on the economy. In a study entitled Symbolic Unity, Dynastic Continuity, and Countervailing Power: Monarchies, Republics and the Economy, Mauro F. Guillén found that monarchies had a positive impact on economies and living standards over the long term. The study, which examined data from one hundred and thirty-seven countries, including different kinds of republics and dictatorships, found that individuals and businesses felt more confident that the government was not going to interfere with their property in constitutional monarchies than in republics. As a consequence, they were more willing to invest in their respective economies.

When Wordsworth wrote his ode to Milton, he was mourning the loss of the chivalry he felt had once pervaded English society. Today, the West is once again in serious danger of losing the two things that give it a connection to the chivalry of the past: a belief in God and a submission to a higher authority.

Western culture is balanced between an adherence to reason and freedom on the one hand and a submission to God and authority on the other. It has been this delicate balance that has allowed the West to become what it is. Without it, we become like Shakespeare’s Hamlet: doomed to a life of moral and philosophical uncertainty.

It is here that the special relationship between freedom and authority that constitutional monarchy implies becomes so important. It satisfies the desire for personal autonomy and the need for submission simultaneously.

The Christian apologist and novelist C.S. Lewis (1898 – 1964) once argued that most people no more deserve a share in governing a hen-roost than they do in governing a nation:

“I am a democrat because I believe in the fall of man. I think most people are democrats for the opposite reason. A great deal of democratic enthusiasm descends from the idea of people like Rousseau who believed in democracy because they thought mankind so wise and good that everyone deserved a share in the government. The danger of defending democracy on those grounds is that they’re not true and whenever their weakness is exposed the people who prefer tyranny make capital out of the exposure.”

The necessity for limited government, much like the necessity for authority, comes from our fallen nature. Democracy did not arise because people are so naturally good (which they are not) that they ought to be given unchecked power over their fellows. Aristotle (384BC – 322BC) may have been right when he stated that some people are only fit to be slaves, but unlimited power is wrong because there is no one person who is perfect enough to be a master.

Legal and economic equality are necessary bulwarks against corruption and cruelty. (Economic equality, of course, refers to the freedom to engage in lawful economic activity, not to socialist policies of redistributing wealth that inevitably lead to tyranny). Legal and economic equality, however, do not provide spiritual sustenance. The ability to vote, buy a mobile phone, or work a job without being discriminated against may increase the joy in your life, but it is not a pathway to genuine meaning in life.

Equality serves the same purpose that clothing does. We are required to wear clothing because we are no longer innocent. The necessity of clothes, however, does not mean that we do not sometimes desire the naked body. Likewise, just because we adhere to the idea that God made all people equal does not mean that there is not a part of us that wishes for inequality to present itself in certain situations.

Chivalry symbolises the best human beings can be. It helps us realise the best in ourselves by reconciling fealty and command, inferiority and superiority. However, the ideal of chivalry is a paradox. When the veil of innocence has been lifted from our eyes, we are forced to reconcile ourselves to the fact that bullies are not always cowards and heroes are not always modest. Chivalry, then, is not a natural state, but an ideal to be aimed for.

The chivalric ideal marries the virtues of humility and meekness with those of valour, bravery, and firmness. “Thou wert the meekest man who ever ate in hall among ladies”, said Sir Ector to the dead Lancelot. “And thou wert the sternest knight to thy mortal foe that ever put spear in the rest.”

Constitutional monarchy, like chivalry, makes a two-fold demand on the human spirit. Its democratic element, which upholds liberty, demands civil participation from all its citizens. And its monarchical element, which champions tradition and authority, demands that the individual subjugate himself to that tradition.

It has been my aim in this essay to provide a historical, practical, and spiritual justification for constitutional monarchy. I have demonstrated that the British have developed ideals of liberty, justice, and good governance. The two revolutions of the 17th century – the English Civil War and the Glorious Revolution – established Great Britain as a constitutional monarchy: the monarch could no longer rule without the consent of Parliament, Parliament became the supreme source of law, and Parliament gained the power to determine the line of succession. I have demonstrated that constitutional monarchs are more likely to uphold democratic principles and that the stability they produce encourages robust economies. And I have demonstrated that monarchies enrich our souls because they awaken in us the need for both freedom and obedience.

Our world has become so very vulgar. We have turned our backs on God, truth, beauty, and virtue. Perhaps we, like Wordsworth before us, should seek virtue, manners, freedom, and power. We can begin to do this by retaining the monarchy.

IDENTITY POLITICS IS A DANGEROUS GAME


Ever noticed that the establishment’s reaction to malevolence and suffering has more to do with the victim’s group identity than any other factor?

During Easter, Islamic State detonated bombs in Sri Lanka that were clearly intended to target Christians. The bomb blasts destroyed churches and luxury hotels and left three hundred dead.

The violence was clearly a targeted attack against Christians on the holiest feast of the Christian calendar. However, where they were only too eager to talk about the Muslim identities of those targeted in the Christchurch shootings, those in the establishment were conspicuously silent about the Christian faith of those attacked in Sri Lanka. Neither the former US President, Barack Obama, nor the Democrat candidate for the 2016 election, Hillary Clinton, bothered to use the word “Christian” in their responses to the attacks.

Barack Obama tweeted:

“The attacks on tourists and Easter worshippers in Sri Lanka are an attack on humanity. On a day devoted to love, redemption, and renewal, we pray for the victims and stand with the people of Sri Lanka.”

Likewise, Hillary Clinton tweeted:

“On this holy weekend for many faiths, we stand united against hatred and violence. I’m praying for everyone affected by today’s horrific attack on Easter worshippers and travellers to Sri Lanka.”

Following the New Zealand Mosque shooting, both Obama and Clinton were quick to assert their compassion, support, and solidarity with the “global Muslim community.” However, after the Sri Lankan bombings, they became rather reluctant to signal their support for Sri Lankan Christians or even to identify the victims as such.

Barack Obama referred to the attack as one perpetrated against “humanity” rather than one against Christians. Likewise, Hillary Clinton urged people to stand “united against hatred and violence”, but failed to specify who, in this case, was perpetrating the violence or against whom it was being perpetrated. More disturbing, perhaps, is the use of the term “Easter worshippers” as a euphemism for Christians. Easter isn’t holy for “many faiths”; it is holy for Christians.

Contrast the responses to the Sri Lankan bombings with those to the mosque massacres in Christchurch, New Zealand. After that attack, the Muslim identity of the victims was clearly and repeatedly stated. Marches in the street professed love over hatred and peace over violence. Political leaders like New Zealand’s Prime Minister, Jacinda Ardern, made symbolic, and rather shallow, gestures of solidarity and acceptance towards the Muslim community. And the public was forced to listen, ad nauseam, to left-wing pundits prattling on about the supposed Islamophobia of Western society.

As I pointed out before, the way the establishment responds to hatred and violence depends largely upon who is perpetrating it and who its target is. It is because of identity politics that our cultural standard-bearers ignore attacks on Christians but go out of their way to highlight attacks on Muslims.

Identity politics blinds us to reality. It allows us to feel hatred and resentment towards others by reducing them to their group identity. As a consequence, the violence, prejudice, and discrimination Christians and Jews have faced in many parts of the world have largely gone unnoticed.

Identity politics blinds us to the fact that the Christian population in Africa and the Middle East has declined from twenty percent to four percent in just over a century (with much of the reduction occurring after 2000). It blinds us to the fact that Christians in the Middle East and Africa are the most persecuted minority in the world.

More alarmingly, Middle Eastern and African Christians are not even to be granted help from Western countries. Instead, they are to be sacrificed on the altar of racial and religious diversity. When the Pakistani Christian, Asia Bibi sought asylum in Great Britain, her request was refused because her presence risked “inflaming community tensions.” (Asia Bibi, of course, was imprisoned for several years after she was accused of blasphemy against Islam).

It would seem that no criticism may be levelled against Muslims or any other non-white, non-Christian group. And it would equally seem that it is perfectly acceptable to criticise Christians and white people for their group identities.

It all boils down to Islamophobia: just one out of a whole batch of ultimately meaningless accusations designed to silence critics and stifle debate. The British Labour Party and the Liberal Democrats refer to it as a “type of racism that targets expressions of Muslimness and perceived Muslimness.” Gallup refers to it as a “specific phobia” gripping Western society. Amnesty International refers to Islamophobes as “racists and bigots [who] believe that diverse societies don’t work.”

Most of Australia’s media – save perhaps for a few conservative newspapers and some talkback radio – is left-leaning. News and current affairs shows have left-wing biases, panel discussions are strongly tilted to favour left-wing views, and the ABC, Australia’s national broadcaster, is so resolutely left-wing it almost beggars belief.

Such a configuration naturally creates biases. It is a commonly accepted fact of psychology that when a person associates only with those who agree with them the result is groupthink and confirmation bias. Australian media has become an echo chamber for left-wing beliefs.

As the author and podcast host Andrew Klavan has pointed out, it is a rule of mainstream media to treat events that confirm left-wing biases as representative and events that contradict them as isolated. Therefore, the attacks in Christchurch are presented as indicative of the racism and Islamophobia that have supposedly infected Western society, while a terrorist attack committed by a Muslim is treated as an isolated incident, despite the countless terrorist attacks and attacks on Jews and Christians – both in the West and outside of it – committed by Muslims every year.

The dichotomy between the reactions to the attacks on Muslims in Christchurch and those on Christians in Sri Lanka is telling. Identity politics is a curse upon our society. It divides us by manipulating us into seeing everyone as a member of a social, economic, or racial group rather than as an individual. Such a game can only lead to disaster.

A Man For All Seasons


It is a rare occurrence to see a film that is so memorable that it implants itself on the human psyche. A film that contains such a captivating story, compelling characters, and profound themes occurs so rarely it becomes etched into our collective unconscious. A Man for All Seasons is one of those films.

Set in Tudor England during the reign of King Henry VIII (1491 – 1547), A Man for All Seasons tells the story of Henry’s divorce from Catherine of Aragon (1485 – 1536), the birth of the Church of England, and the man who stood opposed to it.

During the 1530s, King Henry VIII broke away from the Catholic Church, passed the Act of Succession (which declared Princess Mary (1516 – 1558), the King’s daughter with Catherine, illegitimate) and the Act of Supremacy (which gave Henry supreme command over the Church in England), and made himself the Supreme Head of the Church of England.

In A Man for All Seasons, Henry asks Sir Thomas More (1478 – 1535) to disregard his own principles and express his approval of the King’s desire to divorce his wife and establish an English Church separate from Rome. Henry believes that More’s support will legitimise his actions because More is a man known for his moral integrity. Initially, Henry uses friendship and dodgy logic to convince his friend. It fails, and the so-called “defender of the faith” tries using religious arguments to justify his adultery.  When this fails, he merely resorts to threats. Again, More refuses to endorse Henry’s actions.

A Man for All Seasons is really about the relationship between the law (representing the majesty of the state) and individual conscience. In the film, Sir Thomas More is depicted as a man with an almost religious reverence for the law because he sees it as the only barrier between an ordered society and anarchy. In one scene, when William Roper the Younger (1496 – 1578) tells him he would gladly lay waste to every law in order to get at the devil, More replies that he would “give the devil benefit of law for my own safety’s sake.”

More’s reverence goes far beyond mere man-made law, however. He also shows a deep reverence for the laws of God. After being sentenced to death, More finally breaks his silence and refers to the Act of Succession, which required people to recognise Henry’s supremacy in the Church and his divorce from Catherine of Aragon, as “directly repugnant to the law of God and His Holy Church, the Supreme Government of which no temporal person may by any law presume to take upon him.” More argues that the authority to enforce the law of God was granted to Saint Peter by Christ himself and remained the prerogative of the Bishop of Rome.

Furthermore, More argues that the Catholic Church had been guaranteed immunity from interference in both the King’s coronation oath and Magna Carta. In his coronation oath, Henry had promised to “preserve to God and Holy Church, and to the people and clergy, entire peace and concord before God.” Similarly, Magna Carta stated that the English people had “granted to God, and by this present charter confirmed for us and our heirs in perpetuity, that the English Church shall be free, and shall have its rights undiminished, and its liberties unimpaired.”

The central problem of the film is that the legal and political system in England is incapable of allowing More to hold a contradictory, private opinion. Even before he is appointed Chancellor, More expresses no desire to get involved with the debate surrounding the King’s marriage. He will not, however, swear an oath accepting the King’s marriage or his position as the head of the Church of England. More believes that it is the Pope who is the head of the Church, not the King, and he is perfectly willing to sacrifice his wealth, family, position, freedom, and, ultimately, his life to retain his integrity.

The relationship between the law and an individual’s conscience is an important one. What A Man for All Seasons illustrates is just how important it is, and what happens when it is violated. Modern proponents of social justice, identity politics, and political correctness would do well to watch A Man for All Seasons.

OUR OBSESSION OVER FOOD IS RIDICULOUS


Sometimes a civilisation can become so sophisticated that it believes it can overcome truth. We have become one of those civilisations. As a consequence of our arrogance, we have come to believe that we can circumvent some of the most fundamental truths about reality. We blame inequality on the social structure even though most social animals live in hierarchies. We believe that primitive people are noble even though mankind in its primitive state is more violent than at any other stage. And we believe that we can change the way human beings eat despite the fact that doing so is making us unhappy.

It is our modern obsession with diet and exercise that I would like to focus on. This obsession has arisen from a society that is too safe, too free, and too prosperous for its own good. This is not to say that safety, freedom, and prosperity are bad things. Indeed, we should get down on our knees and thank God every day that we live in a country that has these things. However, it is also true that too much safety, freedom, and prosperity breeds passivity and complacency. The hardships our ancestors faced – war, poverty, disease – are no longer problems for us. Therefore, we lack the meaning that such hardships bring to life. As a result, we have come to invent problems. Among these has been a tendency to cast the consumption of certain foods as unhealthy, unethical, or both.

Our modern obsession with food is causing significant personal problems. On the one hand, the ease with which food, especially food laden with sugar, can be obtained is causing a rise in obesity. (Note: I am using the word ‘obesity’ as a blanket term for people who are overweight). It is a uniquely modern problem. Our ancestors never battled weight gain because they were only able to find or afford enough food to keep themselves and their families from starving. Now the quantity, cheapness, and, in many cases, poor quality of food means that the fattest amongst us are often also the poorest. But obesity is less a problem arising from food than one arising from laziness and gluttony. (Naturally, I am excluding health problems and genetic disorders from this conclusion).

On the other hand, however, our obsession with being skinny or muscle-bound is also causing problems. I have seen plenty of people who are clearly overweight. In rare cases, I have even seen people who are so morbidly obese that it can only be described as breathtaking. However, I have also seen women (and it is primarily women, by the way) who can only be described as unnaturally thin. It is as though our society, having realised that being overweight is unhealthy, has decided that its opposite must be good. It isn’t. Just right is just right.

And it’s not just individuals who are subjecting themselves to this kind of self-imposed torture, nor is the problem limited to the here and now. In 1998, The Independent reported that many doctors in the United Kingdom were concerned that well-meaning parents were unintentionally starving their children to death by feeding them low-fat, low-sugar diets. These children were said to be suffering from the effects of “muesli-belt nutrition.” They had become malnourished because either they or their parents had become obsessed with maintaining a low-fat, low-sugar, low-salt diet. The article reported: “Malnutrition, once associated with slums, is said to have become an increasing problem for middle-class families in the past fifteen years. The victims of so-called ‘muesli-belt nutrition’ are at risk of stunted growth, anaemia, learning difficulties, heart disease and diabetes.”

Our obsession over diet is really a sign of how well-off our society is. Our ancestors had neither the time nor the resources to adhere to the kind of crazy-strict diets that modern people, in their infinite stupidity, decide to subject themselves to. It is high time we stopped obsessing over food and got a grip.

The Death of Comedy


In March of this year, the vlogger Mark Meechan was convicted in a Scottish Court of violating the Communications Act 2003 for a video he had uploaded to YouTube in April 2016. The video, which Meechan said had been produced for comedic purposes (he claimed he wanted to annoy his girlfriend), featured a pug dog making Hitler salutes with its paw, responding to the command “gas the Jews” by tilting its head, and watching a Nazi rally at the 1936 Berlin Olympics.

The Scottish Court that convicted Meechan (who is much better known as ‘Count Dankula’) concluded that he had been motivated to produce the video by religious prejudice. Perhaps without realising it, by convicting Meechan, the Scottish legal system has illustrated the importance of free speech and the threat that political correctness poses to it.

Unfortunately, legally and politically incited attacks against both free speech and comedy are not limited to the United Kingdom. In Canada, attempts inspired by political correctness to silence comedians have been written into law. In one alarming case, the Quebec Human Rights Commission awarded Jeremy Gabriel, a disabled former child star, $35,000 in damages after he was ridiculed in a comedy routine by Mike Ward.

It is little wonder, then, that some comedians have seen cause for alarm. Some, like Chris Rock, now refuse to perform on college campuses because of the oversensitivity of some of the students. Others, like legendary Monty Python star John Cleese, have warned that comedians face an “Orwellian nightmare.”

Political correctness is the antithesis of comedy. It is not that comedians have been prevented from practising their craft, but that the pressures political correctness places on them make it difficult to do so. The comedian feels pressured to censor himself because of the way words are categorised by their supposed offensiveness or inoffensiveness. And he finds himself fearful of having his words twisted and misinterpreted to mean something other than what he intended.

Much of the problem arises from a culture that has elevated politics to something approximating religion. And, like all zealots, the fanatics of this new religion have attempted to conform every aspect of society to their new faith. It is the job of the comedian to make me laugh. It is not his job, as some would have you believe, to play the role of political activist.

Unfortunately, that view is not one held by many on the radical left. In an article for the Sydney Morning Herald, Judith Lucy opined that people wanted to “hear people talk about politics or race.” And it seems that there are people who agree with Lucy. Comedy is not to be used to bring joy to people, but as a platform to espouse politics. Comedy has become a form of propaganda. And it is the liberal agenda that determines what is considered funny and what isn’t.

What the politically correct offer instead of genuinely funny comedy is comedy as a form of political activism. Comedy is to be used to spread progressive ideas, and political correctness is to be used to silence whatever opposes those ideas. Take, for example, Tim Allen’s sitcom Last Man Standing, which revolved around a conservative protagonist and was cancelled by the American Broadcasting Company despite its popularity.

And nowhere can this trend of comedy as political activism be seen more readily than in the current incarnations of late-night television. Legendary comics like Johnny Carson and David Letterman established late-night television as a forum for light-hearted entertainment before sending its audience off to bed. It was not afraid of offending people in order to do so, either. Today, however, this willingness to offend seems only to be targeted at those on the right of the political spectrum. It is as though the late-night comedian has decided to use his position to preach progressive politics to his audience rather than using his talent to make insightful and hilarious observations about the world around us. The result is that the late-night host places commenting on political or social matters above entertaining his audience.

It is as though the late-night host has traded humour for indignation. The “jokes” (in reality they are tirades) contain more than a modicum of vitriol and resentment. Samantha Bee referred to Ivanka Trump as a “feckless cunt”, Stephen Colbert accused President Trump of being Vladimir Putin’s “cock holster”, and so on and so forth.

While it may seem alarming, it is precisely what happens when comedians see themselves as activists rather than entertainers. As Danna Young, Associate Professor of Communication at the University of Delaware, commented:

“When comics abandon humour and go with anger instead, they become just another ‘outrage’ host. Now, if that’s cool with them, great. But if they are looking to capitalise on the special sauce of humour, then they’ll need to take their anger and use it to inform their craft, but not have it become their craft.”

Fortunately, there is a litany of comedians who refuse to conform their comedy to the mores of political correctness and progressive politics. Numerous comedians have denigrated political correctness as the “elevation of sensitivity over truth” (Bill Maher) and “America’s newest form of intolerance” (George Carlin). Jerry Seinfeld, a man whose routines are considered among the least offensive in comedy, referred to political correctness as “creepy” on Late Night with Seth Meyers. Bill Burr accused social justice warriors of being bullies. Likewise, Ricky Gervais has tweeted: “if you don’t believe in a person’s right to say things you find ‘grossly offensive’, you don’t believe in free speech.”

None of this is to say that political correctness has destroyed genuinely funny comedy, either. Netflix has spent a great deal of money producing comedy specials that are, in many cases, far from inoffensive. Ricky Gervais’ comedy special Humanity featured jokes about rape, cancer, transgenderism, AIDS, and the Holocaust.

Comedy has been threatened by both progressive politics and political correctness. Mark Meechan may have found himself running afoul of the politically correct left, but as long as there are people who stand committed to free speech and comedians prepared to make offensive jokes, the laughter will continue.

WHY TRUMP WON


Not even Cassandra, cursed to prophesy but never to be believed, could have predicted the tumultuous change that occurred in 2016. In June, just over half of the British public (51.89%) voted to leave the European Union. Then, in November, Donald Trump defeated Hillary Clinton to become the President of the United States.

And not only did Trump defeat Clinton, winning thirty of America’s fifty states (though Clinton did win the popular vote), the Republican Party utterly decimated the Democrats. The Republicans took control of the House of Representatives, won a majority in the Senate, hold thirty-three state governorships, and control thirty-two state legislatures.

Brexit’s victory and Trump’s triumph come off the back of a deeper cultural movement. It is a movement that rejects the doctrines of political correctness, identity politics, diversity, and equality in favour of greater intellectual rigour and personal freedom. Trump’s gift to this movement has been to expand the Overton Window. As an indirect consequence of his uncouthness, the boundaries of public discourse have been expanded dramatically.

Throughout his campaign, the media treated Trump as a joke. He hasn’t got a hope in Hades, they claimed. In the end, however, they were proven wrong. Trump won through a mixture of hard-line policies on immigration and a rejection of political correctness and far-left politics. And he won through his astounding ability to market himself to the American people.

The first thing to note is that Trump thrives on scandal. Much of this ability emanates from his already tarnished reputation as a rude, uncouth bully and womaniser. Trump has never denied these facets of his personality (in some cases he has even emphasised them). What this means is that those who voted for Trump did so despite the significant faults in his character. Consequently, accusations involving sex or money (the two things people truly care about) have little effect on him.

Then there is his skill as an emotional manipulator. Trump appeals directly to the emotional sensibilities of the people by using fear-mongering rhetoric to circumvent the mind’s critical faculties. Rather than emphasising the importance of maintaining the integrity of immigration law, Trump chooses to emphasise the crimes – rapes, murders, drug offences – committed by some illegal immigrants. After this, Trump promotes anger by setting up an out-group as the enemy. As a result, Trump implies not only that he is the best man to solve these issues, but that anyone who opposes him is somehow anti-American.

Finally, there is Trump’s use of simplicity and repetition as persuasive tools. Nuanced and boring statements can be taken out of context. By contrast, simple and heavily repetitive statements are harder to take out of context. But, more importantly, such statements are also more likely to be believed.

Much of Trump’s use of simplicity has its basis in his relationship with language. Trump speaks at a fourth-grade level and averages one syllable per word. While it would be easy to dismiss this as unsophisticated or lowbrow, it is important to remember that small words have a stronger and more immediate emotional impact, are accessible to a wider audience, and are considered more believable. The cognitive fluency bias means that the easier something is to understand, the more likely it is to be believed. As a consequence, Trump’s use of small, simple words means he is more likely to be understood and, therefore, more likely to be believed.

Perhaps the most important aspect of Trump’s magnetism is his ability to bypass the traditional mediums of communication and appeal directly to the American people. Unlike Hillary Clinton, who relied upon celebrity support and the mainstream media, Trump and his supporters used social media to appeal directly to voters. The lesson is clear: voters like politicians to speak to them as equals, not preach to them from on high.

DEMAND-SIDE ECONOMICS VERSUS SUPPLY-SIDE ECONOMICS


On May 9th, 2018, the YouTube channel Juice Media uploaded a video entitled “Honest Government Ad: Trickle Down Economics.” In the video, the rather obnoxious and condescending female presenter tells the audience that the reason Australia has “one of the fastest growing inequality rates in the world” is trickle-down economics, which she defines as “when we [the government] piss on you and tell you it’s raining.”

According to the video, tax cuts for investors, entrepreneurs, and business are directly correlated with poverty and the lack of wage growth in Australia. The presenter argues that the government cuts taxes on the rich while simultaneously claiming that they don’t have enough money for healthcare (which would be a lot more effective if people took responsibility for their own health), renewable energy (which is really an excuse to take control of the energy market), and the ABC (which doesn’t deserve a cent of anyone’s money).

The primary problem with the video is that the theory it attacks does not actually exist. There is not a single economic theory that can be identified as trickle-down economics (also known as trickle-down theory). No reputable economist has ever used the term, nor has any ever presented an argument that conforms to what it is supposed to be. As Thomas Sowell (1930 – ) wrote in his book, Basic Economics:

“There have been many economic theories over the centuries, accompanied by controversies among different schools and economists, but one of the most politically prominent economic theories today is one that has never existed among economists: the trickle-down theory. People who are politically committed to policies of redistributing income and who tend to emphasise the conflicts between business and labour rather than their mutual interdependence often accuse those opposed to them of believing that benefits must be given to the wealthy in general, or to business in particular, in order that these benefits will eventually trickle down to the masses of ordinary people. But no recognised economist of any school of thought has ever had any such theory or made any such proposal.”

The key to understanding why political players disparage pro-capitalist and pro-free market economic policies as trickle-down economics is understanding how economics is used to deceive and manipulate. Political players understand that simple and emotionally charged arguments tend to be more effective because very few people understand actual economics. Anti-capitalists and anti-free marketeers, therefore, use the term trickle-down economics to disparage economic policy that disproportionately benefits the wealthy in the short term but increases the standard of living for all people in the long term.

The economic theory championed by liberals (read: leftists) is demand-side economics. Classical economics rejected demand-side economic theory for two reasons. First, manipulating demand is futile because demand is the result of production, not its cause. Second, it is (supposedly) impossible to over-produce something. The French economist, Jean-Baptiste Say (1767 – 1832), demonstrated the irrelevance of demand-side economics by pointing out that demand is derived from the supply of goods and services to the market. As a consequence of the works of Jean-Baptiste Say, the British economist, David Ricardo (1772 – 1823), and other classical economists, demand-side economic theory lay dormant for more than a century.

One classical economist, however, was prepared to challenge this view. The English economist, Thomas Robert Malthus (1766 – 1834), challenged the anti-demand stance of classical economics by arguing that the recession Great Britain experienced in the aftermath of the Napoleonic Wars (1803 – 1815) was caused by a failure of demand. In other words, purchasing power fell below the quantity of goods and services in the market. Malthus wrote:

“A nation must certainly have the power of purchasing all that it produces, but I can easily conceive it not to have the will… You have never I think taken sufficiently into consideration the wants and tastes of mankind. It is not merely the proportion of commodities to each other but their proportion to the wants and tastes of mankind that determines prices.”

Using this as his basis, Malthus argued that goods and services on the market could outstrip demand if consumers choose not to spend their money. Malthus believed that while production could increase demand, it was powerless to create the will to consume among individuals.

Demand-side economics works on the theory that economic growth can be stimulated by increasing the demand for goods and services. The American economist, J.D. Foster, the Norman B. Ture Fellow in the Economics of Fiscal Policy at the Heritage Foundation, argued that demand-side economics holds that the economy underperforms because total demand is low and, as a consequence, the supply needed to meet this demand is likewise low.

The American economist, Paul Krugman (1953 – ), and other economists believe that recessions and depressions are the result of a decrease in demand and that the most effective method of revivifying the economy is to stimulate that demand. The way to do this is to engage in large-scale infrastructure projects such as the building of bridges, railways, and highways. These projects create a greater demand for things like steel, asphalt, and so forth. Furthermore, they provide people with wages which they can spend on food, housing, clothing, entertainment, and so on.

Policies based on demand-side economics aim to change the aggregate demand in the economy. Aggregate demand is consumer spending plus investment plus government spending plus net exports (exports minus imports). Demand-side policies are either expansive or contractive. Expansive demand-side policies aim at stimulating spending during a recession. By contrast, contractive demand-side policies aim at reducing expenditure during an inflationary economy.
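To make that identity concrete, here is a minimal Python sketch of the textbook aggregate demand equation, AD = C + I + G + (X - M). The figures are invented purely for illustration and describe no real economy.

# A minimal sketch of the aggregate demand identity AD = C + I + G + (X - M).
# All figures below are invented purely for illustration.
def aggregate_demand(consumption, investment, government_spending, exports, imports):
    """Consumer spending + investment + government spending + net exports."""
    return consumption + investment + government_spending + (exports - imports)

ad = aggregate_demand(consumption=650.0, investment=150.0,
                      government_spending=180.0, exports=120.0, imports=100.0)
print(f"Aggregate demand in this toy economy: {ad:.0f}")  # prints 1000

Expansive policy tries to push one or more of those components up; contractive policy tries to pull them down.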

Demand-side policy can be split into fiscal policy and monetary policy. The purpose of fiscal policy in this regard is to increase aggregate demand. Demand-side fiscal policy can help close a deflationary gap, but it is often not sustainable over the long term and can have the effect of increasing the national debt. When such policies aim at cutting spending and increasing taxes, they tend to be politically unpopular. But when they involve lowering taxes and increasing spending, they tend to be politically popular and therefore easy to execute (of course, their proponents never bother to explain where they plan to get the money from).

In terms of monetary policy, expansive measures aim at increasing aggregate demand while contractive measures aim at decreasing it. Expansive monetary policies are less effective because they are less predictable than contractive ones.

Needless to say, demand-side economics has plenty of critics. According to D.W. McKenzie of the Mises Institute, demand-side economics works on the idea that “there are times when total spending in the economy will not be enough to provide employment to all who want to and should be working.” McKenzie argued that the “notion that the economy as a whole sometimes lacks sufficient drive derives from a faulty set of economic doctrines that focus on the demand side of the aggregate economy.” Likewise, Thomas Sowell argued in Supply-Side Politics that there is too much emphasis placed on demand-side economics to the detriment of supply-side economics. He wrote in an article for Forbes:

“If Keynesian economics stressed the supposed benefit of having government manipulate aggregate demand, supply-side economics stressed what the marketplace could accomplish, once it was freed from government control and taxes.”


John Maynard Keynes

The man who greatly popularised demand-side economics was the British economist, John Maynard Keynes (1883 – 1946). Keynes, along with many other economists, analysed the arguments of the classical economists against the realities of the Great Depression. That analysis led many economists to question classical economics, noting that it failed to explain how financial disasters like the Great Depression could happen.

Keynesian economics challenged the views of the classical economists. In his 1936 book, The General Theory of Employment, Interest and Money (one of the foundational texts of modern macroeconomics), Keynes revivified demand-side economics. According to Keynes, output is determined by the level of aggregate demand. Keynes argued that resources are not scarce in many cases; rather, they are underutilised due to a lack of demand. Therefore, an increase in production requires an increase in demand. Keynes concluded that when this occurs, it is the duty of the government to raise output and total employment by stimulating aggregate demand through fiscal and monetary policy.

The Great Depression is often seen as a failure of capitalism. It popularised Keynesian economics and monetary central planning which, together, “eroded and eventually destroyed the great policy barrier – that is, the old-time religion of balanced budgets – that had kept America a relatively peaceful Republic until 1914.”

David Stockman of the Mises Institute argues that the Great Depression was the result of the delayed consequences of the Great War (1914 – 1918) and financial deformations created by modern central banking. However, the view that the Great Depression was a failure of capitalism is not one shared by every economist. The American economist, Milton Friedman (1912 – 2006), for example, argued that the Great Depression was a failure of monetary policy. Friedman pointed out that the total quantity of money in the United States – currency, bank deposits, and so forth – between 1929 and 1933 declined by one-third. He argued that the Federal Reserve had failed to prevent the decline of the quantity of money despite having the power and obligation to do so. According to Friedman, had the Federal Reserve acted to prevent the decline in the quantity of money, the United States (and subsequently, the world) would only have suffered a “garden variety recession” rather than a prolonged economic depression.

It is not possible to determine the exact dimensions of the Great Depression using quantitative data. What is known, however, is that it caused a great deal of misery and despair among the peoples of the world. Failed macroeconomic policies combined with negative shocks caused the economic output of several countries to fall between twenty-five and thirty-percent between 1929 and 1932/33. In America between 1929 and 1933, production in mines, factories, and utilities fell by more than fifty-percent, stock prices collapsed to 1/10th of what they had been prior to the Wall Street crash, real disposable income fell by twenty-eight percent, and unemployment rose from 1.6 to 12.8 million.

According to an article for the Foundation for Economic Education, What Caused the Great Depression, the Great Depression occurred in three phases. First, the rise of “easy money policies” caused an economic boom followed by a subsequent crash. Second, following the crash, President Herbert Hoover (1874 – 1964) attempted to suppress the self-adjusting aspect of the market by engaging in interventionist policies. This caused a prolonged recession and prevented recovery. Hourly rates dropped by fifty-percent, millions lost their jobs (a reality made worse by the absence of unemployment insurance), prices on agricultural products dropped to their lowest point since the Civil War (1861 – 1865), more than thirty-thousand businesses failed, and hundreds of banks failed. Third, in 1933, the lowest point of the Depression, the newly-elected President Franklin Delano Roosevelt (1882 – 1945) combatted the economic crisis by using “new deal” economic policies to expand interventionist measures into almost every facet of the American economy.


Let’s talk about the New Deal a little bit more. The New Deal was the name for the Keynesian-based economic policies that President Roosevelt used to try and end the Great Depression. It included forty-seven Congress-approved programs that abandoned laissez-faire capitalism and enacted the kind of social and economic reforms that Europe had enjoyed for more than a generation. Ultimately, the New Deal aimed to create jobs, provide relief for farmers, boost manufacturing by building partnerships between the private and public sectors, and stabilise the US financial system.

The New Deal was largely inspired by the events of the Great War. During the War, the US Government had managed to increase economic activity by establishing planning boards to set wages and prices. President Roosevelt took this as proof positive that it was government guidance, not private business, that helped grow the economy. However, Roosevelt failed to realise that the increase in economic activity during the Great War came as the result of inflated war demands, not as the achievement of government planning. Roosevelt believed, falsely, that it was better to have government control the economy in times of crisis rather than relying on the market to correct itself.

The New Deal came in three waves. During his first hundred days in office, President Roosevelt approved the Emergency Banking Act, the Government Economy Act, the Civilian Conservation Corps, the Federal Emergency Relief Act, the Agricultural Adjustment Act, the Emergency Farm Mortgage Act, the Tennessee Valley Authority Act, the Securities Act, the Abrogation of the Gold Payment Clause, the Home Owners Refinancing Act, the Glass-Steagall Banking Act, the National Industrial Recovery Act, the Emergency Railroad Transportation Act, and the Civil Works Administration.

In 1934, President Roosevelt bolstered his initial efforts by pushing through the Gold Reserve Act, the National Housing Act, the Securities Exchange Act, and the Federal Communications Act.

In 1935, the Supreme Court struck down the National Industrial Recovery Act. President Roosevelt, concerned that other New Deal programs could also be in jeopardy, embarked on a litany of programs that would help the poor, the unemployed, and farmers. Second-wave New Deal programs included the Soil Conservation and Domestic Allotment Act, the Emergency Relief Appropriation Act, the Rural Electrification Act, the National Labor Relations Act, the Resettlement Administration, and the Social Security Act.

In 1937, Roosevelt unleashed the third wave of the New Deal, aimed at combating budget deficits. It included the United States Housing Act (Wagner-Steagall), the Bonneville Power Administration, the Farm Tenancy Act, the Farm Security Administration, the Federal National Mortgage Association, the new Agricultural Adjustment Act, and the Fair Labor Standards Act.

According to the historical consensus, the New Deal proved effective in boosting the American economy. Economic growth increased by 1.8% in 1935, 12.9% in 1936, and 3.3% in 1937. It built schools, roads, hospitals, and more, prevented the collapse of the banking system, reemployed millions, and restored confidence among the American people.

Some even claim that the New Deal didn’t go far enough. Adam Cohen, the author of Nothing to Fear: FDR’s Inner Circle and the Hundred Days that Created Modern America, claims that the longevity of the Depression (the American economy didn’t return to pre-Depression prosperity until the 1950s) is evidence that more New Deal spending was needed. Cohen commented that the New Deal had the effect of steadily increasing GDP (gross domestic product) and reducing unemployment. And, what is more, it reimagined the US Federal government as a welfare provider, a stock-market regulator, and a helper of people in financial difficulty.

This is not to say, however, that the New Deal is without its critics. The New Deal was criticised by many conservative businessmen for being too socialist. Others, such as Huey Long (1893 – 1935), criticised it for failing to do enough for the poor. Henry Morgenthau, Jr. (1891 – 1967), the Secretary of the Treasury, confessed before Democrats in the House Ways and Means Committee on May 9th, 1939 that the New Deal had failed as public policy. According to Morgenthau, it failed to produce an economic recovery and did not erase historic unemployment. Instead, it created a recession – the Roosevelt Recession – in 1937, failed to adequately combat unemployment because it created jobs that were only temporary, became the costliest government program in US history, and wasted money.

Conservatives offer supply-side economics as an alternative to demand-side economics. Supply-side economics aims at increasing aggregate supply. According to supply-side economics, the best way to stimulate economic growth or recovery is to lower taxes and thus increase the supply of goods and services. This increase leads, in turn, to lower prices and higher standards of living.

The lower-taxes policy has proved quite popular with politicians. The American businessman and industrialist, Andrew Mellon (1855 – 1937) argued for lower taxes in the 1920s, President John Fitzgerald Kennedy (1917 – 1963) argued for lower taxes in the 1960s, and both President Ronald Reagan (1911 – 2004) and President George Walker Bush (1946 – ) lowered taxes in the 1980s and 2000s, respectively.

Supply-side economics works on the principle that producers will create new and better products if they are allowed to keep their money. Put simply, supply-side economics (supply merely refers to the production of goods and services) works on the theory that cutting taxes on entrepreneurs, investors, and business-people incentivises them to invest more in their endeavours. This money can be invested in capital – industrial machinery, factories, software, office buildings, and so forth.

The idea that lower taxes lead to greater economic prosperity is one of the central tenets of supply-side economics. Supporters of supply-side economics believe that providing financial benefits for investors (cutting capital gains tax, for example) stimulates economic growth. By contrast, high taxes, especially those meted out to businesses, discourage investment and encourage stagnation.

Tax rates and tax revenue are not the same thing; they can move in opposite directions depending on economic factors. The revenue collected from income tax in each year of the Reagan Presidency was higher than the revenue collected during any year of any previous Presidency. It can be argued that people change their economic behaviour according to the way they are taxed. The problem with increasing taxes on the rich is that the rich will use legal, and sometimes illegal, strategies to avoid paying them. A businessman who is forced to pay forty percent of his business’ profits in tax is less likely to increase his productivity. As a consequence, high tax rates on businesses lead to economic stagnation.


Supply-side supporters use the Laffer Curve, devised by Arthur Laffer (1940 – ), an advisor to President Ronald Reagan, to argue that lower taxes can lead to higher tax revenue. The Laffer Curve describes the relationship between tax rates and the amount of tax revenue collected. Laffer’s idea was that as tax rates rise, more revenue is collected, but only up to a certain point. If taxes are increased beyond that point, less revenue is collected because people are no longer willing to make an economic contribution.
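To make that intuition concrete, here is a minimal Python sketch of the Laffer Curve’s logic. It assumes a hypothetical taxable base that shrinks linearly as the tax rate rises; the numbers are invented for illustration only and say nothing about where any real-world peak lies.

# A toy model of the Laffer Curve: revenue = rate * taxable base, where the
# taxable base is assumed (purely for illustration) to shrink as the rate rises.
def tax_revenue(rate, base_at_zero_rate=1000.0):
    taxable_base = base_at_zero_rate * (1.0 - rate)  # base falls to zero at a 100% rate
    return rate * taxable_base

for rate in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"tax rate {rate:.0%}: revenue {tax_revenue(rate):.0f}")
# Revenue is zero at both 0% and 100% and peaks somewhere in between
# (at 50% in this toy model), which is the shape Laffer sketched.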

Tax cuts only work when they reduce the price of engaging in productive behaviour. Daniel Mitchell of the Heritage Foundation stated, in an article entitled A “Supply-Side” Success Story, that tax cuts are not created equal. Mitchell wrote: “Tax cuts based on the Keynesian notion of putting money in people’s pockets in the form of rebates and credits do not work. Supply-side cuts, by contrast, do improve economic performance because they reduce tax rates on work, saving, and investment.” Mitchell used the differences between the 2001 and 2003 tax cuts as evidence for his argument. He pointed out that tax collections fell after the 2001 tax cuts whereas they grew by six percent annually after the 2003 cuts, that job numbers declined after the 2001 cuts whereas net job creation averaged more than 150,000 after the 2003 cuts, and that economic growth averaged 1.9% after the 2001 tax cuts, compared to 4.4% after the 2003 cuts.

Proposals to cut taxes have always been characterised by their opponents as “tax cuts for the rich.” The left believes that tax cuts, especially cuts to the top rate of tax, do not spur economic growth for lower and middle-class people and only serve to widen income inequality. They argue that tax cuts benefit the wealthy because they invest their newfound money in enterprises that benefit themselves. Bernie Sanders (1941 – ), the Independent Senator from Vermont, has argued that “trickle-down economics” is pushed by lobbyists and corporations to expand the wealth of the rich, and opponents of President Ronald Reagan’s tax cuts likewise referred to the policy as “trickle-down economics.”

In reality, the left-wing slander of tax cuts can best be described as “tax lies for the gullible.” The rich do not become wealthy by spending frivolously or by hiding their money under the mattress. The rich become rich because they are prepared to invest their money in new products and ventures that will generate greater wealth. It is far more prudent to give an investor, entrepreneur, or business owner a tax cut because they are more likely to put their newfound wealth to productive use.

According to Prateek Agarwal at Intelligent Economist, supply-side economics is useful for lowering the natural rate of unemployment. Thomas Sowell, a supporter of supply-side economics, claims that while tax cuts are applied primarily to the wealthy, it is the working and middle classes who are the first and primary beneficiaries. This occurs because the wealthy, in Sowell’s view, are more likely to invest more money in their businesses which will provide jobs for the working class.

The purpose of economic policy is to facilitate the economic independence of a nation’s citizens by encouraging prosperity. Demand-side economics and supply-side economics represent two different approaches to achieving this end. Demand-side economics argues that prosperity can be achieved by having the government increase demand by taking control of the economy. By contrast, supply-side economics, which is falsely denounced as “trickle-down economics” by the likes of Juice Media, champions the idea that the best way to achieve prosperity is by withdrawing, as far as humanly possible, government interference from the private sector of the economy. Supply-side economics is the economic philosophy of freedom; demand-side economics is not.

CIVILISATION IN TERMINAL DECLINE


Our society appears to be suffering a terminal decline. At least, that is the conclusion traditionalists and devout Christian believers like myself have been forced to draw. As the old world withers and vanishes, a culture of selfishness, moral relativism, and general immorality has been allowed to grow in its place. The culture that produced Vivaldi, Dickens, Shakespeare, and Aristotle has been replaced with one whose major ambassadors are the likes of Kim Kardashian and Justin Bieber.

The first clue that a monumental change had taken place came in the guise of Princess Diana’s farce of a funeral in 1997, an event that was cynically exploited by politicians and celebrities and recorded for public consumption by round-the-clock news coverage (her funeral would be watched by two-and-a-half billion people). As Gerry Penny of The Conversation noted, Diana’s death marked the beginning of the ‘mediated death’: a death covered by the mass media in such a way that it attracts as much public attention, and therefore revenue, as possible.

Compared to Princess Diana’s, Winston Churchill’s funeral in 1965 was a spectacle of old-world pomp and ceremony. After lying in state for three days, Churchill’s coffin was carried by horse-drawn carriage along the historic streets of London to Saint Paul’s Cathedral. His procession was accompanied by Battle of Britain aircrews, Royal Marines, Life Guards, the three chiefs of staff, Lord Mountbatten, and his own family. The silence that filled the air was broken only by a funerary march and the occasional honorary gunshot.

As with Diana’s funeral, tens of thousands of people came to witness Churchill’s. But unlike Diana’s mourners, who did everything they could to draw attention to themselves, Churchill’s mourners were silent and respectful. They realised that the best way to commemorate a great man was to afford him the respect that his legacy deserved.

Cynics would dismiss Churchill’s funeral as nothing more than a ridiculous display of pomp and ceremony. However, these events serve an important cultural purpose by connecting the individual with his community, his culture, and his heritage. In doing so, they bring about order and harmony.

Winston Churchill was the great Briton of the 20th century. Like Horatio Lord Nelson in the early 19th century, it was Churchill’s leadership that saved Britain from Nazi invasion, and it was his strength and resolve that gave ordinary Britons the courage to endure the worst periods of the War.

And understandably, many Britons felt something approximating personal gratitude towards him. That gratitude was deep enough that, when he died, many felt it their duty to file reverently past his body as it lay in state or to stand in respectful silence as his funeral procession passed. What Churchill’s state funeral did was give the ordinary person the opportunity to pay their own respects and to feel that they had played a part, if only in a minute way, in the celebration of his life.

Winston Churchill’s funeral and Princess Diana’s funeral represent eras that are as foreign to one another as Scotland is to Nepal. While Churchill’s funeral represented heritage and tradition, Princess Diana’s funeral symbolised mass nihilism and self-centredness.

But why has this happened? I believe the answer lies in the dual decline of Western culture and Christianity.

The French philosopher, Chantal Delsol described modern Western culture as being akin to Icarus had he survived the fall. (Icarus, of course, being the figure in Greek mythology whose wax wings melted when he flew too close to the sun). Where once it had been strong, resolute, and proud, it has now become weak, dejected, disappointed, and disillusioned. We have lost confidence in our own traditions and ideals.

Of course, the decline of Western culture has a direct correlation with the more consequential decline of Christianity. It is faith that informs culture and creates civilisation, and the faith that has informed the West has been Christianity. It is the moral ideals rooted in the Judeo-Christian tradition – that I love my neighbour, that my behaviour in this life will determine my fate in the next, that I should forgive my enemies – that form the axiomatic principles that undergird Western civilisation.

This faith has been replaced by an almost reverent belief in globalism, feminism, environmentalism, diversity, equality, and human rights. Our secularism has made us believe that those who came before us were ignorant, superstitious, and conformist. And what has the result of this loss of mass religiosity been? Mass nihilism and a decline in moral values.

But when faith falls so too does culture and civilisation. If we are to revive our civilisation, we must be prepared to acknowledge that tradition, heritage, and religion are not only integral, but vital.