Friday, December 22, 2017

From Dunkirk to North Africa: Britain Revitalizes the War Effort

In early 1940, the British military forces, which had come to western Europe in an effort to slow or stop the advance of the German army, were steadily retreating. Soon they were pinned down around Dunkirk, a town on France’s northern coast.

Inside the British government, some leaders wanted to meet the Nazis at a conference and negotiate a peace treaty. They would have let Hitler continue to control the German army and most of Europe.

Those who favored the idea of a peace conference feared that they’d lose a fight against Hitler. This fear was reasonable, given that the British military needed to be evacuated from Dunkirk to avoid being captured in its entirety.

But peace negotiations would imply and require at least a modicum of trust. Those opposed to a peace conference pointed out that Hitler was not to be trusted.

Inside the British government, there was a tension between those who wanted a peace conference and those who opposed the idea. Prime Minister Winston Churchill would break that tension, as historian Larry Arnn writes:

The day on which Churchill put an end to the idea of a peace conference was May 28, 1940. He walked into the cabinet room and made a stirring speech, which in the diary of Minister of Economic Warfare Hugh Dalton ended with these words: “If this long island story of ours is to end at last, let it end only when each one of us lies choking in his own blood upon the ground.” This speech, which provoked a demonstration of enthusiasm that swept throughout the government, was not a product of any trend or great evolution of history. It spoke in defiance of those forces.

A bit more than a year later, in late 1941 and early 1942, Churchill was developing his strategy for the war. The air war raged over England, but the Germans had not invaded Britain, and the British military was rebounding from the debacle at Dunkirk.

The British were not yet ready for a direct confrontation in Europe with the German military. As historians Peter Maslowski and Allan Millett write,

Reverting to their traditional approach of defeating continental enemies, the British wanted to avoid a direct confrontation with a full-strength Wehrmacht in northern Europe until this confrontation carried no risk of a 1914-1918 stalemate. Instead they urged operations in the Mediterranean theater, where they were already engaged and where their scarce naval, air, and ground forces had some ability to check the Germans and Italians. Churchill stressed that the Mediterranean theater offered many strategic opportunities, since the African littoral could be wrested from the Vichy French forces in Morocco and Algeria and the German-Italian army campaigning in the Libyan-Egyptian area against the British 8th Army. Churchill argued that a 1942 campaign in this area would divert German troops from Russia and strengthen the British war effort. What he did not say was that this campaign would be British-commanded (thus presumably using the greatest Allied expertise in generalship) and help restore the integrity of the British Empire, which Churchill desperately wanted to preserve.

Especially in North Africa, including Egypt, and in the Near East, maintaining some semblance of the British Empire was crucial to keeping the peace. This would become painfully clear after the war.

The French and British presence in the Near East had already become less decisive in the wake of WW1. The Europeans had paid too high a price, both in terms of lives and in terms of money, to continue to devote a high level of resources to keeping the region policed.

Although the Near East had been civilized millennia before Europe, the region’s civilizations had become notably less humane and less peaceful in recent centuries. The French and British presence in broad swaths of the area - much of which had formerly been Ottoman possessions - kept a lid on the periodic feuds and bloodbaths which erupted on a regular basis.

Churchill was, even in the grim days of early 1942, thinking ahead to a postwar era, and thinking of ways to maintain peace in that era. Sadly, the British Empire would continue fading, and more than half a century of postwar violence in the Near East has shown that the withdrawal of French and British occupational troops led to increased bloodshed.

Human civilization paid a high price in order to defeat Hitler. While millions of Germans were freed from his cruel Nazi dictatorship, millions of people in the Near East were left exposed as major European powers could no longer afford to keep the peace there.

Wednesday, December 6, 2017

Global Climatic Instability: A Continuation of Ancient Trends

Variations in the Earth’s climate have caused researchers to look for the causes of such changes. Specifically, certain political groups have asserted that warming and cooling trends could be anthropogenic, and have sought evidence to support this view.

The notion propagated over social media is roughly this: since the increased use of ‘fossil fuels’ (coal, oil, and natural gas) began in the late 1700s, emissions, specifically carbon dioxide, have accumulated in the earth’s atmosphere and thereby initiated changes in the climate.

Long-term trends in the planet’s climate, however, predate large-scale industrialization.

A wide variety of techniques allow researchers to measure conditions prior to the advent of recorded readings from thermometers: core samples from ice or soil; tree-ring evaluation; written records quantifying the expansion and contraction of glaciers, and gauging snowfall and rainfall; and general agricultural and naturalistic records documenting which types of plant thrived in various locations.

From such measurements, it is possible to accurately reconstruct a climatic history of the Earth. The Intergovernmental Panel on Climate Change (IPCC) sets forth a history which includes a Roman Warm Period (ending around 400 A.D.), a Dark Ages Cold Period (ending around 950 A.D.), a Medieval Warm Period (ending around 1250 A.D.), and a Little Ice Age (ending around 1870 A.D.).

Naturally, these dates are not precise, but rather indicate a general and gradual change in a trend. Terminology varies somewhat, as the Medieval Warm Period is, e.g., also known as the Medieval Climate Optimum or the Medieval Climate Anomaly (MCA). The IPCC notes that

New warm-season temperature reconstructions covering the past 2 millennia show that warm European summer conditions were prevalent during 1st century, followed by cooler conditions from the 4th to the 7th century.

What seems, in the short-term context of a century or two, to be a warming trend is, in the larger context of several millennia, merely the global climate emerging from the end of the Little Ice Age.

A graph of the Earth’s temperature over time would look something like a sine wave, with warming and cooling phases following each other over the centuries. The IPCC writes that

Persistent warm conditions also occurred during the 8th–11th centuries, peaking throughout Europe during the 10th century. Prominent periods with cold summers occurred in the mid-15th and early 19th centuries.

This pattern extends backward in time long before the commencement of the large-scale use of fossil fuel. The planet has experienced centuries in which northern Europe and North America had temperatures warm enough to support subtropical plant life.

Conversely, there were centuries in which, e.g., large parts of northern and central Africa experienced temperatures consistent with temperate zones. These extremes occurred at a time when the use of coal, oil, and gas was nearly unknown. The IPCC reports that

There is high confidence that northern Fennoscandia from 900 to 1100 was as warm as the mid-to-late 20th century.

The pattern which emerges, then, is a steady sinusoidal one, with the planet’s climate alternating between warm periods and cool periods every few centuries. This general pattern seems to continue unaffected by any anthropogenic variables.
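
To make the sine-wave idea concrete, here is a minimal sketch of fitting a sinusoid to a reconstructed temperature series. The data below are synthetic, invented purely for illustration; real reconstructions (tree rings, ice cores, and the like) would take their place, and the period and amplitude used here are assumptions, not published values.

```python
# A minimal sketch: least-squares fit of a sine wave to a temperature series.
# The "reconstruction" here is synthetic, generated for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def sinusoid(t, amplitude, period, phase, baseline):
    """Temperature anomaly modeled as a simple sine wave of time t (years)."""
    return amplitude * np.sin(2 * np.pi * t / period + phase) + baseline

rng = np.random.default_rng(0)
years = np.arange(0, 2000, 10)  # 1 A.D. to 2000 A.D., one point per decade
temps = sinusoid(years, 0.4, 1000, 0.5, 0.0) + rng.normal(0, 0.1, years.size)

# p0 supplies rough starting guesses for the four parameters.
params, _ = curve_fit(sinusoid, years, temps, p0=[0.5, 900.0, 0.0, 0.0])
amplitude, period, phase, baseline = params
print(f"fitted amplitude: {amplitude:.2f} deg C; fitted period: {period:.0f} years")
```

Outlying decades (a volcanic cooling, for instance) would depart from the fitted curve without invalidating it, which is the sense in which the sinusoid serves as a best-fit pattern.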

If the use of fossil fuels were capable of altering the earth’s climate, then other human activities of comparable size and scope should also be capable of producing changes in climate.

But other human actions, like the shockingly large number of atomic bombs and hydrogen bombs detonated during the era of frequent bomb tests, have had no long-term measurable effect on the climate. The detonation of over 2,000 nuclear weapons over several decades was not capable of altering the climate.

If the weapons-testing programs were not capable of producing long-term warming or cooling trends (they might have had temporary localized effects), then it’s much less likely that the comparatively smaller use of fossil fuel would generate any change in climate trends.

The reliability of the sinusoid as a best-fit pattern for global climate stands, even though individual data points are sometimes outliers which depart from the best-fit line, as the IPCC indicates:

The evidence also suggests warm conditions during the 1st century, but comparison with recent temperatures is restricted because long-term temperature trends from tree-ring data are uncertain.

It would be noteworthy, even surprising, if the planet weren’t in the midst of a warming or cooling trend. The ‘normal’ condition of the Earth’s climate is to be in constant change.

Going back several thousand years, the Earth’s climate has demonstrated a semi-regular pattern of alternating between several warm centuries and several cool centuries. In light of this historical record, the climate’s current behavior is within ranges established by this pattern.

Because temperature trends in the late twentieth and early twenty-first centuries are consistent with the tendencies observed in previous millennia, there might be no need to search for extraordinary or anthropogenic causes. The IPCC notes that the MCA, a millennium ago, may have been as warm as, or in places warmer than, the current twenty-first-century climate:

In the European Alps region, tree-ring based summer temperature reconstructions show higher temperatures in the last decades than during any time in the MCA, while reconstructions based on lake sediments show as high, or slightly higher temperatures during parts of the MCA compared to most recent decades.

Likewise, the Roman Warm Period, beginning around 250 B.C., was perhaps warmer than the current climate:

The longest summer temperature reconstructions from parts of the Alps show several intervals during Roman and earlier times as warm (or warmer) than most of the 20th century.

In sum, the climate is ‘changing’ only in the sense that it has been continuously and predictably changing for at least the last two millennia. The climate is doing nothing extraordinary.

Observed and measured temperatures, and warming and cooling trends, do not correlate with the onset of industrialization and the widespread use of fossil fuels. The data do not suggest that the trends observed in the climate record are anthropogenic.

Tuesday, December 5, 2017

Chiang in Context: China in the Mid 1930s

As the leader of mainland China from 1928 to 1949, and the leader of Taiwan from 1949 to 1975, Chiang Kai-shek experienced many ups and downs. Throughout his time in mainland China, he wrestled continuously against the communists, either in a state of outright war or in an uneasy truce.

Mao, the leader of the communists, at times found it expedient to lavishly if insincerely praise Chiang. The communists would exploit times of ceasefire during the ongoing Chinese civil war to regroup and rebuild their strength, and then resume their attack.

The war inside mainland China lasted from 1927 to 1949, and so was largely coextensive with Chiang’s tenure as leader there.

Although the struggle was long, there were times when things went well for the ‘Nationalists,’ as those who defended China against communism were called. Historian Jay Taylor writes about the year 1936:

Despite continuing civil wars, the depression, depredations by Japan, and preparation for a general war, the power and authority of the Chinese central government was greater than at any time since the Taiping Uprising. In the spring, displaying military, political, and covert action skills, Chiang had quickly put down another rebellion by the Guangxi Clique and the usual dissidents in Guangdong. The rebels had again charged Chiang with appeasement and dictatorship but essentially the rebellion reflected the ongoing power struggle between the warlords and the central government. In their own provinces, the warlords were more like dictators than Chiang Kai-shek was, and within two years their preferred national leader, Wang Jingwei — at the moment still in Europe — would defect to Japan. Chiang's generous treatment of the incorrigible southerners, even sending them three million central government yuan or fabi in emergency aid, was an act of enlightened self-interest, which is perhaps all one can expect of a national leader.

The Guangxi region has a long cultural history - predating and outlasting Chiang - of being fiercely independent, and not identifying with the rest of mainland China. Mao, then, was not the only headache which Chiang faced.

In the southernmost part of China, and bordering on Guangxi, was the Guangdong region, which likewise had a traditionally independent attitude. Therefore many of Chiang’s troubles came from the south.

Wang Jingwei was a competitor who wanted Chiang’s power, but he would obtain significance only to the extent that he attached himself either to the communists or to the Japanese. In 1932, Chiang and Wang Jingwei had reached a compromise in which Wang assumed political leadership of the Nationalist party, while Chiang led the military and much of the government.

Although the communists would ultimately succeed in subjugating mainland China and ousting Chiang, his two decades of leadership were not an unmitigated tragedy. At many points, things seemed to be going well.

Wednesday, November 22, 2017

The Island Mentality

Geography influences culture. Societies on islands are profoundly affected by their boundaries: England, Sri Lanka, Hawaii, and Cuba are four quite different places, yet all share this factor.

Residents of an island nation find that coming and going are clearly defined events. Leaving or returning to an island requires more planning than leaving or returning to a continental nation.

Because such coming and going both represent the traversal of a significant geographical feature and require more time, effort, and money, island nations develop a distinct self-concept of being set apart.

Going from one continental nation to another can be done with such great ease that people are sometimes not even aware that they’ve done it.

Island nations, then, whether they are sovereign states or parts of other political entities, develop according to certain patterns produced by their geography.

Monday, November 20, 2017

The Carrying Capacity of Planet Earth: Human Population

The term ‘carrying capacity’ is used to designate the maximum population that a certain habitat can support.

A square mile of deciduous forest might maximally support a certain number of squirrels. An inland lake of a certain number of gallons might maximally maintain a certain number of carp. Those are examples of ‘carrying capacity’: the largest number of inhabitants which can be maintained without environmental degradation.
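
Ecologists often formalize this idea with the logistic growth model, in which a population grows quickly while small and levels off as it approaches the habitat’s carrying capacity K. Below is a minimal sketch using the squirrel example; the growth rate and capacity are invented numbers for illustration, not field measurements.

```python
# Logistic growth: each year the population changes by r * N * (1 - N / K),
# where K is the carrying capacity. All numbers are invented for illustration.
def simulate_logistic(n0: float, r: float, k: float, years: int) -> list[float]:
    """Return the population trajectory, one entry per year."""
    history = [n0]
    for _ in range(years):
        n = history[-1]
        history.append(n + r * n * (1 - n / k))
    return history

# A hypothetical square mile of forest supporting at most 200 squirrels.
trajectory = simulate_logistic(n0=10, r=0.5, k=200, years=30)
print(f"year 0: {trajectory[0]:.0f} squirrels")
print(f"year 15: {trajectory[15]:.0f} squirrels")
print(f"year 30: {trajectory[30]:.0f} squirrels")  # levels off near K = 200
```

The population climbs steeply at first, then flattens out near K, which is exactly the behavior the term ‘carrying capacity’ is meant to capture.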

At one time, in the late 1960s and the early 1970s, the popular press and news media carried stories of the Earth nearing its total carrying capacity in terms of human beings. These reports had a certain “scare effect,” and there was talk of striving for “zero population growth.”

Although they were mere sensationalism, these media events do raise a legitimate question: is there an upper limit to the number of human beings which the planet can support?

Any number put forward as an answer to this question will necessarily be an estimate, and a rough one at that. The staggering number of variables in play, and their highly complex relationships to each other, are compounded by the constantly changing technology which allows for increasing food production.

In a sustainable and renewable way, without environmental degradation, how much food, clean water, and clean air can be available? How many people can inhabit planet Earth?

Reasoned estimates point to some number greater than 100 billion. But how much greater, nobody’s really willing to guess.

These estimates factor in a “first world” standard of living for the population: telephone, television, running water and other indoor plumbing, HVAC, electricity, etc., and around 200 square feet of indoor living space per person.
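
As a rough plausibility check (my own back-of-envelope arithmetic, not a figure from those estimates), the living space alone turns out to be a surprisingly small footprint:

```python
# Back-of-envelope arithmetic only: the living space for 100 billion people
# at 200 square feet each, compared with Earth's land area. This ignores
# farmland, infrastructure, and everything else such a population would need.
people = 100e9
sq_ft_per_person = 200
sq_ft_per_sq_mile = 5280 ** 2                # 27,878,400 square feet

living_area = people * sq_ft_per_person / sq_ft_per_sq_mile
earth_land = 57.5e6                          # approx. land area in square miles

print(f"{living_area:,.0f} square miles of living space")
print(f"about {100 * living_area / earth_land:.1f}% of Earth's land area")
# prints roughly 717,000 square miles, about 1.2% of the land area
```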

This is quite a rosy scenario, and much brighter than the doom and gloom presented by those who thought that a “population bomb” would soon cause global misery. (The Population Bomb was actually the title of a 1968 book whose authors believed that the Earth was near the upper limit of its carrying capacity.)

If the planet is nowhere near maximum population, then why is there hunger and famine? Why are there regions without sufficient water? Why is there poverty?

Those conditions are the result of bad decisions by people: a few honest mistakes, but mostly a lot of greed and corruption.

If the Earth’s population were 1,000,000 - roughly one-hundredth of one percent of what it is now - there would still be someone without enough proper food, without clean water, and living in poverty.

Even with only a tiny fraction of its current population, the planet would still be home to malnutrition, unclean drinking water, and poverty.

Surprisingly, as the population has grown, the percentage of people living in poverty has decreased. A larger percentage of the Earth’s population lived in poverty 4,000 years ago than now.

A steadily growing population (as opposed to a rapidly growing or erratically growing one) is the best environment for economic growth.

Interestingly, a “zero growth” population, or a declining one, tends to be worse for the environment than a growing one. A smaller supply of young people - young workers - nudges industry away from environmentally friendly options, even when such technology is available, because those options are labor-intensive.

Wednesday, October 4, 2017

Human Sacrifice in Norse Mythology

Most, and probably all, civilizations in their earliest phases have practiced proto-religions which routinely included human sacrifice. From the Greeks and Romans to the Sumerians and Egyptians, myth and magic constituted these pre-religious varieties of polytheism.

Attempts to manipulate nature are termed ‘magic,’ and the mythology of the Norse is no exception. One of the first authors to note this was Tacitus, the ethnographer whose chief work about northern Europe is titled Germania. Two scholars, E.O.G. Turville-Petre and Edgar Charles Polomé, write:

In his Germania, Tacitus described the worship of a goddess, Nerthus, on an island, probably in the Baltic Sea. Whatever symbol represented her was kept hidden in a grove and taken around once a year in a covered chariot. During her pageant, there was rejoicing and peace, and all weapons were laid aside. Afterward, she was bathed in a lake and returned to her grove, but those who participated in her lustration were drowned in the lake as a sacrifice to thank her for her blessings.

By killing human beings as offerings to the Norse deities, the Norse hoped to gain some good fortune: a military victory, good weather, human fertility, or a bountiful harvest.

The Norse were not monolithic. Among them were many different tribes, each of which had its own variant of the foundational mythology. The Semnones occupied regions in southern Scandinavia and in what is now Germany.

Sacrifice often was conducted in the open or in groves and forests. The human sacrifice to the tribal god of the Semnones, described by Tacitus, took place in a sacred grove; other examples of sacred groves include the one in which Nerthus usually resides. Tacitus does, however, mention temples in Germany, though they were probably few. Old English laws mention fenced places around a stone, tree, or other object of worship. In Scandinavia, men brought sacrifice to groves and waterfalls.

Human sacrifice would continue to be common not only among the Norse, but also among other European tribes, until roughly the era of Charlemagne. Made emperor in 800 A.D., he fostered education and culture, expanding his Frankish influence eastward and northward.

The Danish National Museum concludes that “archaeological finds from recent years show that human sacrifice was a reality in Viking Age Denmark.”

Monday, August 28, 2017

An Unlikely Bond: Henry Ford and Mohandas Gandhi

People like Henry Ford, a wealthy industrialist and a famous inventor, are more likely to receive fan letters than to write them, but Ford did write at least one. In July 1941, he wrote a letter to Mohandas Gandhi, expressing his admiration for Gandhi’s work.

The letter took two months to reach Gandhi. Mail from Detroit to India was slowed by WW2.

When Ford wrote the letter, the United States had not yet entered the war. Gandhi received it on December 8, 1941, the same day President Roosevelt delivered his war message to Congress and the United States declared war.

But Henry Ford hadn’t written about the war. He was merely expressing his admiration for Gandhi’s work.

One link between Ford and Gandhi was that both men had a thorough understanding of industrial processes and economics. Gandhi’s political work in India was carried out through concrete mechanical processes: forming thread on a spinning wheel, and deriving table salt by evaporating seawater.

Both men understood the economic, social, and political impacts of such mechanical processes.

In response to Ford’s letter, Gandhi sent him a spinning wheel, which Ford displayed in his museum in Dearborn, Michigan. The spinning wheel is still there.

Friday, August 11, 2017

Social Class Structure in Mexico in the Early 19th Century

At the beginning of the 1800s, a system of social classes structured Mexican society, politics, and economics. At the top were the criollos, colonists born in Mexico to European parents. Along with the criollos were the European settlers themselves, who, however, by this time were few in number.

The fragmentation of Mexican society extended to subgroups within the major groups. The criollos, e.g., were divided among themselves regarding how far suffrage should extend.

Further down were the Indians, the Native Americans living in Mexico. The imperial government in Spain guaranteed them certain protections: they would have specified lands and be given a certain immunity from the colonial government.

The Spanish government didn’t do this out of the purest altruism: by stabilizing the Indians and giving them at least a minimal status, the imperial government in Spain hoped to create a roadblock to the ambitions of the criollos, who wanted increasing influence in the colonial administration, and eventually independence.

The Spanish clergy worked toward the same goals as the Spanish imperial government, but with contrasting motives. As one history textbook, World History: Patterns of Interaction (McDougal-Littell, 2007), notes, “Spanish priests worked” in Mexico “for better treatment of Native Americans.”

Mexican society included a spectrum of other social classes, based on race, ethnicity, and culture: mestizos and zambos and about a half-dozen more.

But by the 1820s, the Mexicans succeeded in gaining independence from Spain. The social classes were now seen as classes of citizenship. Starting in Mexico City after the revolution, the right to vote was clearly defined and limited to certain demographic groups. As historian Irving Levinson writes,

By 1846, this pattern of exclusion applied to the entire nation. In that year, the federal government promulgated election regulations limiting suffrage to members of eight groups: land owners, mine owners and operators, military officers, clergy, magistrates, manufacturers, and members of learned professions. For some of these categories, such as that of landowner, the state also set minimum income requirements. By definition, owners of small farms and ranches, rural laborers, industrial workers, common tradespersons, and mine workers could not vote, let alone run for office.

The politics of Mexico, as a newly-independent nation, became the politics of class warfare.

As this class system asserted itself in the first two decades after Mexico gained its independence, it weakened Mexican society. Levinson continues:

In 1846, deep and violent disputes whose origins lay in the country’s colonial past divided Mexico. The first and most important of these chasms lay along lines of ethnicity and race.

Mexico’s internal social divisions would cause it to lose its first major war: the fissures in Mexican society prevented unified support for the war effort. The defeat is linked to a significant shift in Mexican partisan politics, and the new postwar trends led to the era known in Mexican history as “La Reforma.”

Wednesday, May 10, 2017

Exploring Sharia - Hadd and Hudud

As Islam continues to spread geographically, and to intensify itself in those regions it already occupies, an understanding of the intersection between Islamic culture and Islamic law helps to explain current events.

Sharia is a distillation of legal principles from the ‘trilogy’ - the three texts which form the core of Islam: the Qur’an, the Hadith, and the Sira.

The foundation of Sharia is the Hadd. While there is room for debate and interpretation in much of Sharia, the Hadd is considered the non-negotiable bedrock of Islamic civil law.

Hudud is simply the plural form of Hadd, and both words are used in scholarship. A glossary published in Oxford Islamic Studies explains that Hadd is a

Limit or prohibition; pl. hudud. A punishment fixed in the Quran and hadith for crimes considered to be against the rights of God. The six crimes for which punishments are fixed are theft (amputation of the hand), illicit sexual relations (death by stoning or one hundred lashes), making unproven accusations of illicit sex (eighty lashes), drinking intoxicants (eighty lashes), apostasy (death or banishment), and highway robbery (death).

Islamic scholars find room for interpretation regarding any crime which doesn’t fit into one of the six categories listed above:

Punishment for all other crimes is left to the discretion of the court; these punishments are called tazir.

While the forms of Sharia vary from one nation to another, the core of Hadd is immutable, and forms a standard within Islam. Saudi Arabia has long enforced Hadd, and

recently fundamentalist ideologies have demanded the reintroduction of hudud, especially in Sudan, Iran, and Afghanistan.

Pakistan approved legislation in February 1979 which incorporated Hadd into its civil code; subsequently, stoning, whipping, and amputation of hands have been considered governmental functions, not religious ones, within the Islamic Republic of Pakistan.

Wednesday, April 12, 2017

Carl Menger Develops Economic Thought

During his years, from 1873 to 1903, as a professor of economics at the University of Vienna, Carl Menger developed both the ‘marginal utility theory’ and the ‘subjective theory of value.’

What are these concepts, and how do they affect our daily economic activities and experiences?

The ‘subjective theory of value’ tells us that it is in the mind of the individual buyer that a relative value is established for anything that might be purchased. If one person really likes coffee, he’ll be willing to pay more for it than a person who doesn’t care that much for the coffee.

This shows that the ‘value’ of the coffee is not located in the coffee, but rather in the mind. It’s the same coffee, but one person might be willing to pay $1 for it, while another person is willing to pay $2 for it.

The ‘value’ is also not determined by relative scarcity or abundance. Two goods which are equally rare, or equally common, may yet fetch two different prices.

The ‘marginal utility theory’ tells us that the amount of practical use or pleasure we obtain from each unit of a good varies. If I’m a student getting ready for a new academic year, the first pencil I buy is probably of great value to me; the second pencil, less so. I might buy a package of ten pencils, but I probably won’t buy a package of 1,000 pencils. Why?

The difference between the first pencil and the second pencil is that the second pencil has less value to me, but only slightly less. The second pencil still adds significantly to my perceived utility, so I buy it.

At some point the value of an additional pencil is very small. The difference between having one pencil or having two is significant. The difference between having 999 pencils or having 1,000 pencils is not significant.
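
The pencil example can be made concrete with a small sketch. The utility schedule below is invented for illustration: each successive pencil is assumed to be worth 70 percent of the one before it, and the buyer keeps purchasing as long as the next pencil is worth more to him than its price.

```python
# Diminishing marginal utility with an invented schedule: the value of the
# nth pencil to this buyer shrinks geometrically as n grows.
def marginal_utility(nth_pencil: int, first_value: float = 2.00,
                     decay: float = 0.7) -> float:
    """Dollar value this buyer places on the nth pencil (hypothetical)."""
    return first_value * decay ** (nth_pencil - 1)

price = 0.50
quantity = 0
while marginal_utility(quantity + 1) > price:
    quantity += 1

print(f"at ${price:.2f} per pencil, this buyer stops at {quantity} pencils")
# The 1,000th pencil would be worth a vanishingly small fraction of a cent,
# which is why nobody buys the package of 1,000.
```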

Carl Menger’s development of these ideas was a milestone in the history of economics. As the Encyclopedia Britannica notes,

Goods are valuable because they serve various uses whose importance differs. Menger used this insight to resolve the diamond-water paradox that Adam Smith posed but never solved.

The ‘diamond-water paradox’ refers to the fact that, although water is necessary to life and diamonds are not, we usually pay much more for diamonds. But a man dying of thirst in the desert will reverse the relative prices, and pay more for a glass of water than for a diamond.

The notion of marginal utility applies not only to goods, but also to labor:

Menger also used it to refute the view, popularized by David Ricardo and Karl Marx, that the value of goods derives from the value of labour used to produce them. Menger proved just the opposite: that the value of labour derives from the value of the goods it produces, which is why, for example, the best professional basketball players or most popular actors are paid so much.

Before Menger, much of ‘classical’ economics focused on impersonal concepts such as aggregate supply and aggregate demand, relative scarcity and relative abundance, and the cost of labor and material required to produce a given good.

The work of Carl Menger supersedes the thought of Ricardo, Marx, and even Adam Smith, not only because it offers an insight into the ‘diamond-water paradox,’ but also because, as Joseph Salerno writes, it transforms economic thought by introducing, and focusing upon, the psychology of the individual buyer or seller.

This brings us to the question of value which so vexed, and ultimately defeated, the Classical economists. Because they were tragically unable to grasp that specific quantities and not entire classes of goods were the object of human action, the Classical economists dropped ‘use value’ from their analysis. But Menger, with his unblinking focus on individual action, easily recognized the profound significance of the concept of the marginal unit - the quantity of a good relevant to choice - for the whole of economic theory.

Marginal utility theory also explains how and why marginal price changes: usually downward - after paying for one slice of cake, I’m full, and I’m likely to offer a lower price for the second or third slice. It is important to note cases in which the marginal price goes up: I’m likely to offer a disproportionately higher price for 32 chess figures than for 31 of them, and a much higher price for 52 playing cards than for 51 of them, because the final unit completes the set.
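
The set-completion case can be sketched the same way, again with invented numbers: an incomplete chess set is worth little more than scrap, so the marginal value jumps at the 32nd piece.

```python
# Rising marginal value at set completion, with invented numbers: a chess
# set is only playable once all 32 pieces are present.
def set_value(pieces_owned: int, complete_set: float = 40.00,
              per_piece: float = 0.25) -> float:
    """Total value of a partial or complete 32-piece chess set (hypothetical)."""
    if pieces_owned >= 32:
        return complete_set
    return pieces_owned * per_piece  # incomplete sets are worth near-scrap

print(f"marginal value of the 31st piece: ${set_value(31) - set_value(30):.2f}")
print(f"marginal value of the 32nd piece: ${set_value(32) - set_value(31):.2f}")
# The 32nd piece is worth over a hundred times the 31st, because it
# converts a pile of parts into a playable set.
```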

Carl Menger’s economic writings have continued to shape economic thought throughout the twentieth and twenty-first centuries.

Monday, March 20, 2017

The Berlin Conference: Avoiding War in Africa

In the late 1800s, various European nations were eager to explore Africa. To prevent these expeditions from getting out of control, a major international gathering was held in Berlin.

Many different countries sent representatives to this meeting. They made agreements so that the competition between the European nations remained moderate.

This ended the fear that rivalry between different groups of European settlers could turn into war. (Four centuries earlier, in 1494, an agreement between Spain and Portugal prevented war in South America, where both of those nations had settlers.)

Beyond preventing war, the Berlin Conference helped establish some other regulations for Africa. The treaty which contained these agreements was called the “General Act” of the Berlin Conference. As historian Andrew Zimmerman writes,

The General Act guaranteed a free flow of commerce along the African coast and in the Congo and Niger Rivers and their tributaries. The signatories also vowed to fight slavery in Africa; “watch over the preservation of the native tribes, and to care for the improvement of the conditions of their moral and material well-being”; guarantee “freedom of conscience and religious toleration” for foreigners and Africans alike; and protect “Christian missionaries, scientists and explorers.”

While the treaty avoided a war between the European nations and made provisions for the welfare of Africans, it created tensions with the Ottoman Empire. Egypt and the Sudan, along with other regions of northern and eastern Africa, were still part of the Ottoman Empire.

The Ottomans operated a thriving slave trade in the late 1800s and early 1900s. Although slavery had been eliminated in Europe and North America, it still existed in many areas in the Middle East.

The slave traders from the Ottoman Empire were concerned that the Europeans in Africa would prevent them from capturing Africans and sending them to be sold in the slave markets of Istanbul or Arabia. This led to frictions between Ottoman representatives and European governments.

Wednesday, March 1, 2017

The Berlin Conference: Fighting Slavery

What was the net effect of European colonization in Africa? Was this activity merely the expression of greed for power and greed for materials? That is the story often presented. Is there another side to this narrative?

The documentation of the Berlin Conference, at which the various European powers organized territory in Africa, offers a glimpse into a different set of motives.

The agreements made in Berlin committed the Europeans to reducing and eventually eliminating slavery and the slave trade in Africa. Although slavery had ended in the United States by 1865, it continued in Brazil until 1888, and in the Ottoman Empire, slaves were publicly bought and sold until 1908.

At the time of the Berlin Conference, then, slavery was still a very real problem in the world. As historian Andrew Zimmerman writes,

The European partition of Africa, given formal sanction at the Berlin West Africa Conference of 1884-1885, is often treated as an expression of exuberant nationalism in which each nation, vying for what Germans would come to call a “place in the sun,” sought to outdo the others in sticking their flags in far-flung territories. In fact, it was an expression of exuberant humanitarianism, guaranteed by such state power as the signatories of the General Act of the Berlin Conference were willing to provide.

Not only did the Berlin Conference seek to end slavery and the slave trade in Africa, it worked also to promote religious freedom, and to preserve the native cultures of the tribes in its territories.

This led to increased tension and conflict with Muslim lands, because Saudi Arabia and Qatar were still importing African slaves well into the twentieth century. Qatar didn’t formally make slavery illegal until 1952, and Saudi Arabia, along with Yemen, not until 1962; reports of forced labor in the region persisted as recently as 2017.

After the Berlin Conference, in the early 1890s, caravans of slaves were still transported through Ethiopia on their way to the Ottoman Empire. Eventually, the signatories to the Berlin Conference were able to largely reduce, but not entirely eliminate, slavery and the slave trade in Africa.

Tuesday, February 21, 2017

Britain’s Bronze Age: Massive Non-Anthropogenic Climate Change

Traditional textbooks explain that the Neolithic Age was followed by the Bronze Age, which in turn was succeeded by the Iron Age. Because, however, these phases of technological development emerged at substantially different times in various places, it was possible for Neolithic societies, Bronze Age societies, and Iron Age societies to exist simultaneously, albeit in separate locations.

This situation prevents simple definitions of these three ages as beginning, even approximately, at some stated time, or ending at some stated time. In some locations, the Bronze Age was ending at the same time that it was beginning in other places. As historian Jason Urbanus writes,

Some 3,000 years ago, throughout Britain, broad changes in settlement patterns, society, and technology were slowly bringing an end to what archeologists call the British Bronze Age (2500 - 800 B.C.). In the coming centuries, the Iron Age would emerge.

The Bronze Age began and ended at different times in different places: Greece and China, among others, entered and left their Bronze Ages on schedules quite different from Britain’s. And another type of change was happening in Britain contemporaneously with that region’s Bronze Age: climate change.

Paleontologists, archeologists, and historians document several massive non-anthropogenic climate swings throughout history. A “Roman Warm Period” (250 B.C. to 400 A.D.) and a “Medieval Warm Period” (950 A.D. to 1250 A.D.) represent temperatures comparable to, or warmer than, those observed at the end of the twentieth century or the beginning of the twenty-first century.

Likewise, a “Dark Ages Cold Period” (450 A.D. to 950 A.D.) and a “Little Ice Age” (1300 A.D. to 1870 A.D.) were the greatest global coolings recorded during historical times (as opposed to the even colder Ice Ages of prehistoric times). As its Bronze Age was ending, Britain experienced another climate change.

The key feature of these historic climate trends was that they were both massive and non-anthropogenic.

But in the wetlands of East Anglia, referred to as the Fenland, a transformation of another sort, both more conspicuous and tangible, was taking place. Climate change was gradually causing water levels to rise, and, as marshland increased, vital dry land became scarcer. The solution for one small settlement was to build its homes on pylons directly above the water.

The hypothesis that these “stilt houses” were built to maximize arable land is questionable. Many early societies built houses on pylons over water, swamps, or other types of land.

In most cases, the purpose for the stilts was not to preserve agricultural land. Often, it was to escape flooding, or to protect the houses from rodent infestation.

Saturday, January 28, 2017

The Zhou Dynasty Emerges

Chinese history usually begins with the Xia dynasty as the first royal family. This dynasty ruled approximately from 2070 B.C. to 1600 B.C., but texts and archeology are scant, giving little concrete information.

The sixteenth emperor in the Xia lineage seems to have married a queen who treated the people cruelly. Zi Lu, who was known as ‘Tang’ and who founded the Shang dynasty, overthrew this sixteenth and last Xia emperor.

The Shang era lasted from around 1500 B.C. to 1045 B.C. and ended when the Shang were overthrown. One possible hypothesis concerning this dethroning is that the Shang had grown complacent, depending upon the services of smaller allied kingdoms in matters of defense.

If this hypothesis is true, a reasonable comparison could be made to the way in which the Carolingians supplanted the Merovingians. As one history book explains:

Outside the Shang domains were the domains of allied and rival polities. To the west were the fierce Qiang, who probably spoke an early form of Tibetan. Between the Shang capital and the Qiang was a frontier state called Zhou, which shared most of the material culture of the Shang.

The date of 1045 B.C. or 1046 B.C. for the establishment of the Zhou as successors to the Shang is relatively precise, unlike the somewhat more generalized dates for earlier events among China’s imperial dynasties.

Once the dynasty was established, its earlier phase came to be referred to as the ‘Western Zhou’ due to the location of the imperial capital:

This state rose against the Shang and defeated it. The first part of the Zhou Dynasty is called the Western Zhou period.

The ‘Western Zhou’ phase lasted from 1045 B.C. to 771 B.C., when the capital was relocated. A new capital was established in the East after a Zhou king was assassinated.

Its capital was in the west near modern Xi’an in Shaanxi province (to distinguish it from the Eastern Zhou, after the capital was moved to near modern Luoyang in Henan province.)

While the ‘Eastern Zhou’ manifested increased literacy and cultural advancements, it never attained the military and political strength which the Western Zhou had.

What gave the Western Zhou empire its brawn? One hypothesis is its internal political structure; a competing hypothesis points to its being closer in time to its founding as a tenacious frontier monarchy.

At the center of the Western Zhou political structure was the Zhou king, who was simultaneously ritual head of the royal lineage and supreme lord of the nobility. Rather than attempt to rule all of their territories directly, the early Zhou rulers sent out relatives and trusted subordinates to establish walled garrisons in the conquered territories, creating a decentralized, quasi-feudal system. The king’s authority was maintained by rituals of ancestor worship and court visits.

Perhaps as the decades and centuries went by, the Zhou monarchs forgot their rough origins and the doggedness which came from them. One incident, around 806 B.C., illustrates how political systems worked:

A younger son of King You was made a duke and sent east to establish the state of Zheng in a swampy area that needed to be drained. This duke and his successors nevertheless spent much of their time at the Zhou court, serving as high ministers.

This would have been a mere thirty or forty years prior to the assassination which ended the Western Zhou period. The duke’s presence at court, instead of out supervising work, may be a telling example of the dynasty’s softening. Again, comparisons to the Merovingian-Carolingian succession are invited.

It is not always clear whether to refer to Zhou monarchs as kings or as emperors. In either case, their system of delegating oversight of provinces can be compared to Rome’s imperial system.

The texts above are quoted from Pre-Modern East Asia: to 1800 (A Cultural, Social, and Political History), written by Patricia Ebrey, Anne Walthall, and James Palais.

Tuesday, January 17, 2017

Canada Fails to Confront

In July 2012, Brian Lilley, reporting for the Toronto Sun, brought to Canada’s attention the fact that an internal document, produced for the Royal Canadian Mounted Police (RCMP), failed to take seriously Islam’s threats to peace, safety, and security.

The early months of 2011 were filled with news of the ‘Arab Spring,’ a brief and ill-fated movement toward the establishment of constitutional republics in several Muslim majority states in the mid-East and in northern Africa.

Although idealistic, the movement failed, and in the end, succeeded in merely replacing cruel secular dictators with cruel Islamic governments. Egypt and Libya now organized their states around words like Qur’an, Jihad, Hadith, and Shari’a.

The political leaders within the RCMP, hoping to appease or mollify activist Muslims in Canada, issued instructions in a booklet for RCMP officers dealing with the public. In addition, as Brian Lilley writes, the booklet attempted to affirm the oppressive governments taking root in the Near East:

The document also makes a defense of the Muslim Brotherhood, the Islamist group that now controls much of Egypt. During his campaign for the Egyptian presidency, Mohammed Morsi, a man closely tied to the Brotherhood, told a crowd, “The Qur’an is our constitution. The Prophet Mohammed is our leader. Jihad is our path and death for the sake of Allah is our most lofty aspiration.”

Islamic terror attacks, both in North America and around the world, in the years following the internal publication of this RCMP text revealed the document to have been naive.

Saturday, January 14, 2017

Soviet Aggression Against Poland

Toward the end of WW2, the brutality of the Soviet Army became more and more visible to the world. Beyond what was militarily necessary to win battles, the Soviet Socialist forces committed war crimes of horrifying cruelty.

Early in the war, prior to mid-1941, the USSR had made at least a few attempts to hide its savagery. When a million Poles were taken to work camps inside the Soviet Union, where they were enslaved, and when an estimated 15,000 to 22,000 Polish officers were murdered in a mass execution in Katyn Forest, Soviet authorities made an effort to prevent the news media from reporting about these events.

Also in 1941, the USSR rounded up and executed Polish labor leaders: an ironic crime, because the Soviet Union presented itself as a workers’ paradise.

By August 1944, the Soviets made little effort to hide their actions. The advancing Soviet Army paused its attack on Warsaw in order to give the Nazis time to finish butchering thousands of Poles in the Warsaw Uprising. The Soviet Socialists were entirely complicit in these murders.

The USSR continued this pattern up to the very end of the war, as historians Stan Evans and Herbert Romerstein write:

A fourth atrocity occurred in the spring of 1945. With the war winding down and the defeat of Hitler certain, sixteen Polish leaders were summoned to Moscow to negotiate postwar arrangements for their country. A promise of safe passage was given, but in the familiar Soviet manner broken, as all sixteen were arrested and imprisoned. Again, the common feature, beyond the usual treachery and deception, was the meaning of such episodes for postwar Poland. By these actions, the Soviets were systematically liquidating Polish leaders who could have resisted the Red takeover of their country.

Joseph Stalin seemed more intent on killing people than on winning the war. Soviet troops spent time and energy murdering and raping civilian populations - efforts which could have been used to defeat the enemy on the battlefield.

By war’s end, millions of Poles were dead at the hands of the USSR. Weakened, Poland was in no position to resist the postwar occupational dictatorship.

Thursday, January 12, 2017

Bulgaria: a Nation Seeks Freedom

What was going on in southeastern Europe in the 14th century? Why is the year 1389 so deeply ingrained in the history of Yugoslavia?

In that year, Islamic armies defeated the combined defenses of Serbs, Albanians, and Hungarians, and began an extended military occupation of the Balkans and of southeastern Europe in general.

In the 14th century, the various ethnic groups of southeastern Europe fell under the oppression of the Muslims. For almost five hundred years, they were geographically and politically isolated from the scientific and cultural advancements of Europe.

With the help of numerous rebellions and their own intellectual advancements, first the Serbs, and then the Greeks, returned to the larger cultural community of Europe. The British poet Lord Byron died in 1824 while aiding Greece’s ultimately successful effort to throw off the tyranny of the Islamic occupational armies.

In 1876, the Bulgarians also began to assert their desire for freedom. The first major step in this liberation was the April Uprising of that year: although many of them died in this rebellion, the Bulgarians began to believe more and more in the possibility that they could stop the Muslim aggression and be free.

Many Europeans cheered the Bulgarian desire for liberty: Victor Hugo, Giuseppe Garibaldi, Otto von Bismarck, William Gladstone, Charles Darwin, Konstantin Jirecek, Leo Tolstoy, Fyodor Dostoyevsky, Ivan Turgenev, Dmitri Mendeleev, and others.

A glance over that list of names shows that Christians and atheists, scientists and poets, Englishmen, Russians, and Germans all united in their desire to see Bulgaria liberated from Islamic oppression.

Six months after this April uprising, the European great powers gathered at a conference in Constantinople, where the Bulgarian question was one of the central matters in the diplomacy of the era.

One of the many heroic Bulgarians who died in the quest for freedom was Hristo Botev (also spelled Khristo Botev), who demonstrated that his poetry was backed up by his selfless actions as he sought liberty for his fellow countrymen.