A Century of Financial History
Source: Thomson Reuters Datastream, Bloomberg, SNB, Corestone Calculations
Throughout history, financial markets have had their ups and downs, booms and busts. Browse through groundbreaking books, innovations and events which rocked and shaped financial markets, and observe how Swiss institutional investors adapted to the increasing complexity and succeeded over the last 100 years.
Note: all indices in local currency, unhedged.
A “real-life” portfolio composite of actual Swiss institutional investors. Sub-asset-class estimates were derived from publicly available historical data and annual reports.
The Federal Reserve Act created a system of private and public entities. There were to be at least eight and no more than twelve private regional Federal Reserve banks. Twelve were established, each with various branches, a board of directors, and district boundaries. The Federal Reserve Board, consisting of seven members, was created as the governing body of the Fed. Each member is appointed by the President of the U.S. and confirmed by the U.S. Senate. In 1935, the Board was renamed and restructured. Also created as part of the Federal Reserve System were a 12-member Federal Advisory Committee and a single new United States currency, the Federal Reserve Note. The Act thus created a national currency and a monetary system that could respond effectively to stresses in the banking system and create a stable financial system. Beyond these goals, the Act also provided many other functions and financial services for the economy, such as check clearing and collection for all members of the Federal Reserve.
With the passing of the Federal Reserve Act, Congress required that all nationally chartered banks become members of the Federal Reserve System. These banks were required to purchase specified non-transferable stock in their regional Federal Reserve banks and to set aside a stipulated amount of non-interest-bearing reserves with their respective reserve banks. (Since 1980, all depository institutions have been required to set aside reserves with the Federal Reserve, and all are entitled to certain Federal Reserve services.) State-chartered banks were given the option of becoming members of the Federal Reserve System, and if they exercised that option they became subject to supervision, in part, by the Federal Reserve System. Member banks became entitled to access discounted loans at the discount window of their respective reserve banks, to a 6% annual dividend on their Federal Reserve stock, and to other services.
The Russian Revolution was a pair of revolutions in Russia in 1917 which dismantled the Tsarist autocracy and led to the rise of the Soviet Union. The Russian Empire collapsed with the abdication of Emperor Nicholas II, and the old regime was replaced by a provisional government during the first revolution of February 1917 (March in the Gregorian calendar; the older Julian calendar was in use in Russia at the time). Alongside it arose grassroots community assemblies (called “soviets”) which contended for authority. In the second revolution that October, the Provisional Government was toppled and all power was given to the soviets.
The February Revolution (March 1917) was a revolution focused around Petrograd (now Saint Petersburg), the capital of Russia at that time. In the chaos, members of the Imperial parliament (the Duma) assumed control of the country, forming the Russian Provisional Government which was heavily dominated by the interests of large capitalists and the noble aristocracy. The army leadership felt they did not have the means to suppress the revolution, resulting in Nicholas’s abdication. The soviets, which were dominated by soldiers and the urban industrial working class, initially permitted the Provisional Government to rule, but insisted on a prerogative to influence the government and control various militias. The February Revolution took place in the context of heavy military setbacks during the First World War (1914–18), which left much of the Russian Army in a state of mutiny.
A period of dual power ensued, during which the Provisional Government held state power while the national network of soviets, led by socialists, had the allegiance of the lower classes and, increasingly, the left-leaning urban middle class. During this chaotic period there were frequent mutinies, protests and many strikes. Many socialist political organizations were engaged in daily struggle and vied for influence within the Duma and the soviets, central among which were the Bolsheviks (“Ones of the Majority”) led by Vladimir Lenin, who campaigned for an immediate end to the war, land to the peasants, and bread to the workers. When the Provisional Government chose to continue fighting the war with Germany, the Bolsheviks and other socialist factions were able to exploit virtually universal disdain towards the war effort as justification to advance the revolution further. The Bolsheviks turned workers’ militias under their control into the Red Guards (later the Red Army), over which they exerted substantial control.
In the October Revolution (November in the Gregorian calendar), the Bolsheviks led an armed insurrection by workers and soldiers in Petrograd that successfully overthrew the Provisional Government, transferring all its authority to the soviets, with the capital being relocated to Moscow shortly thereafter. The Bolsheviks had secured a strong base of support within the soviets and, as the now supreme governing party, established a federal government dedicated to reorganizing the former empire into the world’s first socialist republic, practicing soviet democracy on a national and international scale. The promise to end Russia’s participation in the First World War was honored promptly with the Bolshevik leaders signing the Treaty of Brest-Litovsk with Germany in March 1918. To further secure the new state, the Cheka was established; it functioned as a revolutionary security service that sought to weed out and punish those considered to be “enemies of the people” in campaigns consciously modeled on similar events during the French Revolution.
Soon after, civil war erupted among the “Reds” (Bolsheviks), the “Whites” (counter-revolutionaries), the independence movements and the non-Bolshevik socialists. It continued for several years, during which the Bolsheviks defeated both the Whites and all rival socialists and thereafter reconstituted themselves as the Communist Party. In this way, the Revolution paved the way for the creation of the Union of Soviet Socialist Republics (USSR) in 1922. While many notable historical events occurred in Moscow and Petrograd, there was also a visible movement in cities throughout the state, among national minorities throughout the empire and in the rural areas, where peasants took over and redistributed land.
The Great Depression was a severe worldwide economic depression that took place mostly during the 1930s, originating in the United States. The timing of the Great Depression varied across nations; in most countries it started in 1929 and lasted until 1941. It was the longest, deepest, and most widespread depression of the 20th century. In the 21st century, the Great Depression is commonly used as an example of how far the world’s economy can decline.
The depression started in the United States after a major fall in stock prices that began around September 4, 1929, and became worldwide news with the stock market crash of October 29, 1929 (known as Black Tuesday). Between 1929 and 1932, worldwide gross domestic product (GDP) fell by an estimated 15%. By comparison, worldwide GDP fell by less than 1% from 2008 to 2009 during the Great Recession. Some economies started to recover by the mid-1930s. However, in many countries, the negative effects of the Great Depression lasted until the beginning of World War II.
The Great Depression had devastating effects in countries both rich and poor. Personal income, tax revenue, profits and prices dropped, while international trade plunged by more than 50%. Unemployment in the U.S. rose to 25% and in some countries rose as high as 33%.
Cities all around the world were hit hard, especially those dependent on heavy industry. Construction was virtually halted in many countries. Farming communities and rural areas suffered as crop prices fell by about 60%. Facing plummeting demand with few alternative sources of jobs, areas dependent on primary sector industries such as mining and logging suffered the most.
The Tariff Act of 1930 (codified at 19 U.S.C. ch. 4), otherwise known as the Smoot–Hawley Tariff or Hawley–Smoot Tariff, was an act implementing protectionist trade policies sponsored by Senator Reed Smoot and Representative Willis C. Hawley and was signed into law on June 17, 1930. The act raised U.S. tariffs on over 20,000 imported goods.
The tariffs under the act (not counting duty-free imports) were the second-highest in the U.S. in 100 years, exceeded by a small margin by the Tariff of 1828. The Act and the retaliatory tariffs imposed by America’s trading partners in response were major factors in the reduction of American exports and imports by more than half during the Depression. Although economists disagree by how much, the consensus view among economists and economic historians is that “the passage of the Smoot–Hawley Tariff exacerbated the Great Depression”.
First passenger flight: Wilbur Wright takes an employee along for a ride
Army Airfield established at College Park, Md., by Wilbur Wright, making it the longest continuously operating airport in the world today
Orville Wright opens the first commercial flight school in Montgomery, Ala.
Burgess Co. becomes the first licensed commercial aircraft manufacturer
Silas Christofferson carries passengers by hydroplane between San Francisco and Oakland harbors
National Air Mail service inaugurated
KLM begins operation, making it the oldest carrier in the world still operating under its original name
Sydney Airport opens for commercial service
Minneapolis-St. Paul International Airport opens for commercial service
International air service is offered by Aeromarine West Indies Airways between Key West, Fla., and Havana, Cuba
First permanent airport and commercial terminal used solely for commercial flights opens at Flughafen Devau near Königsberg, East Prussia
Aeromarine Airways of Cleveland, Ohio, is established as the first airline ticketing agency
First transcontinental non-stop flight
Congress adopts the Air Commerce Act of 1926, which authorized the Secretary of Commerce to designate air routes, develop air navigation systems, and license pilots and aircraft
Deutsche Luft Hansa (now known as Lufthansa) begins scheduled service in Germany
First flight lands at Candler Field, today’s busiest U.S. airport – Hartsfield-Jackson Atlanta International Airport
Pan American Airlines inaugurates its first passenger flight from Miami to San Juan by way of Belize and Managua
First female flight attendant, Ellen Church, is hired by Boeing Air Transport (now United Airlines)
United Airlines begins flying coast to coast with a Boeing 247 flight lasting nearly 20 hours
Boeing designs the 307 Stratoliner, the first commercial aircraft with a pressurized cabin
Pan American inaugurates passenger flights across the Pacific Ocean
Pan American begins transatlantic passenger service
New York Municipal Airport opens
Many commercial airlines and airports suspend commercial traffic to support World War II military efforts
Transatlantic route is the world’s most traveled air route
De Havilland Comet becomes the world’s first commercial jet airliner
Pan American initiates its New York to London route with the Boeing 707
Beijing Capital International Airport opens
American Airlines offers first domestic jetliner flights with routes from New York to Los Angeles
Concorde jet flies first supersonic passenger flight
Airline Deregulation Act is signed into law, removing government control over fares, routes and market entry
First frequent flier program introduced
Almost half of all flights worldwide take place in the U.S.
First ticketless travel becomes available
Boeing produces twin-engine 777, the first aircraft produced via computer-aided design and engineering
First airline tickets are sold via the Internet
First web-based passenger check-in and online boarding passes
Transportation Security Administration established in response to September 11 attacks
Airbus A380 enters commercial service capable of carrying 850 passengers
Transportation Security Administration formally accepts airport scanners as the primary method of pre-flight screening
The New Deal was a series of federal programs, public work projects, financial reforms and regulations enacted in the United States during the 1930s in response to the Great Depression. Some of these federal programs included the Civilian Conservation Corps (CCC), the Civil Works Administration (CWA), the Farm Security Administration (FSA), the National Industrial Recovery Act of 1933 (NIRA) and the Social Security Administration (SSA). These programs included support for farmers, the unemployed, youth and the elderly, as well as new constraints and safeguards on the banking industry and changes to the monetary system. Most programs were enacted between 1933 and 1938, though some came later. They included both laws passed by Congress and presidential executive orders, most during the first term of the presidency of Franklin D. Roosevelt. The programs focused on what historians refer to as the “3 Rs”: relief for the unemployed and poor, recovery of the economy back to normal levels, and reform of the financial system to prevent a repeat depression.

The New Deal produced a political realignment, making the Democratic Party the majority (as well as the party that held the White House for seven out of the nine presidential terms from 1933 to 1969), with its base in liberal ideas, the South, traditional Democrats, big-city machines, and the newly empowered labor unions and ethnic minorities. The Republicans were split, with conservatives opposing the entire New Deal as an alleged enemy of business and growth and liberals accepting some of it and promising to make it more efficient. The realignment crystallized into the New Deal coalition that dominated most presidential elections into the 1960s, while the opposing conservative coalition largely controlled Congress from 1939 to 1964.
By 1936, the term “liberal” typically was used for supporters of the New Deal and “conservative” for its opponents. From 1934 to 1938, Roosevelt was assisted in his endeavors by a “pro-spender” majority in Congress (drawn from two-party, competitive, non-machine, progressive and left party districts). In the 1938 midterm election, Roosevelt and his liberal supporters lost control of Congress to the bipartisan conservative coalition. Many historians distinguish between a “First New Deal” (1933–1934) and a “Second New Deal” (1935–1938), with the second one more liberal and more controversial.
The “First New Deal” (1933–1934) dealt with the pressing banking crises through the Emergency Banking Act and the 1933 Banking Act. The Federal Emergency Relief Administration (FERA) provided $500 million ($9.45 billion today) for relief operations by states and cities, while the short-lived CWA gave locals money to operate make-work projects in 1933–1934. The Securities Act of 1933 was enacted to prevent a repeated stock market crash. The controversial work of the National Recovery Administration (NRA) was also part of the First New Deal.
The “Second New Deal” in 1935–1938 included the Wagner Act to protect labor organizing, the Works Progress Administration (WPA) relief program (which made the federal government by far the largest single employer in the nation), the Social Security Act and new programs to aid tenant farmers and migrant workers. The final major items of New Deal legislation were the creation of the United States Housing Authority and the FSA, which both occurred in 1937; and the Fair Labor Standards Act of 1938, which set maximum hours and minimum wages for most categories of workers. The FSA was also one of the oversight authorities of the Puerto Rico Reconstruction Administration, which administered relief efforts to Puerto Rican citizens affected by the Great Depression.
The economic downturn of 1937–1938 and the bitter split between the American Federation of Labor (AFL) and Congress of Industrial Organizations (CIO) labor unions led to major Republican gains in Congress in 1938. Conservative Republicans and Democrats in Congress joined in the informal conservative coalition. By 1942–1943, they shut down relief programs such as the WPA and the CCC and blocked major liberal proposals. Nonetheless, Roosevelt turned his attention to the war effort and won reelection in 1940 and 1944. Furthermore, the Supreme Court declared the NRA and the first version of the Agricultural Adjustment Act (AAA) unconstitutional, but the AAA was rewritten and then upheld. Republican president Dwight D. Eisenhower (1953–1961) left the New Deal largely intact, even expanding it in some areas. In the 1960s, Lyndon B. Johnson’s Great Society used the New Deal as inspiration for a dramatic expansion of liberal programs, which Republican Richard Nixon generally retained. However, after 1974 the call for deregulation of the economy gained bipartisan support. The New Deal regulation of banking (Glass–Steagall Act) lasted until it was suspended in the 1990s.
Several New Deal programs remain active and those operating under the original names include the Federal Deposit Insurance Corporation (FDIC), the Federal Crop Insurance Corporation (FCIC), the Federal Housing Administration (FHA) and the Tennessee Valley Authority (TVA). The largest programs still in existence today are the Social Security System and the Securities and Exchange Commission (SEC).
The Glass–Steagall legislation describes four provisions of the U.S. Banking Act of 1933 separating commercial and investment banking. The article 1933 Banking Act describes the entire law, including the legislative history of the provisions covered here.
(The common name comes from the names of the Congressional sponsors, Senator Carter Glass and Representative Henry B. Steagall. A separate 1932 law described in the article Glass–Steagall Act of 1932 had the same sponsors, and is also referred to as the Glass–Steagall Act.)
The separation of commercial and investment banking prevented securities firms and investment banks from taking deposits, and prevented commercial Federal Reserve member banks from:
– dealing in non-governmental securities for customers;
– investing in non-investment grade securities for themselves;
– underwriting or distributing non-governmental securities;
– affiliating (or sharing employees) with companies involved in such activities.
Starting in the early 1960s, federal banking regulators’ interpretations of the Act permitted commercial banks, and especially commercial bank affiliates, to engage in an expanding list and volume of securities activities. Congressional efforts to “repeal the Glass–Steagall Act”, referring to those four provisions (and then usually to only the two provisions that restricted affiliations between commercial banks and securities firms), culminated in the 1999 Gramm–Leach–Bliley Act (GLBA), which repealed the two provisions restricting affiliations between banks and securities firms.
By that time, many commentators argued Glass–Steagall was already “dead”. Most notably, Citibank’s 1998 affiliation with Salomon Smith Barney, one of the largest US securities firms, was permitted under the Federal Reserve Board’s then-existing interpretation of the Glass–Steagall Act. In November 1999, President Bill Clinton publicly declared “the Glass–Steagall law is no longer appropriate”.
Some commentators have stated that the GLBA’s repeal of the affiliation restrictions of the Glass–Steagall Act was an important cause of the financial crisis of 2007–2008. Economics Nobel prize laureate Joseph Stiglitz, for instance, argued that “[w]hen repeal of Glass-Steagall brought investment and commercial banks together, the investment-bank culture came out on top.” Economists at the Federal Reserve, such as Chairman Ben Bernanke, have argued that the activities linked to the financial crisis were not prohibited (or, in most cases, even regulated) by the Glass–Steagall Act.
The General Theory of Employment, Interest and Money was written by the English economist John Maynard Keynes. The book, generally considered to be his magnum opus, is largely credited with creating the terminology and shape of modern macroeconomics. Published in February 1936, it sought to bring about a revolution, commonly referred to as the “Keynesian Revolution”, in the way economists thought, especially in relation to the proposition that a market economy tends naturally to restore itself to full employment after temporary shocks.
Regarded widely as the cornerstone of Keynesian thought, the book challenged the established classical economics and introduced important concepts such as the consumption function, the multiplier, the marginal efficiency of capital, the principle of effective demand and liquidity preference.
World War II (often abbreviated to WWII or WW2), also known as the Second World War, was a global war that lasted from 1939 to 1945, although related conflicts began earlier. It involved the vast majority of the world’s countries—including all of the great powers—eventually forming two opposing military alliances: the Allies and the Axis. It was the most widespread war in history, and directly involved more than 100 million people from over 30 countries. In a state of total war, the major participants threw their entire economic, industrial, and scientific capabilities behind the war effort, erasing the distinction between civilian and military resources.
World War II was the deadliest conflict in human history, marked by 50 million to 85 million fatalities, most of which were civilians in the Soviet Union and China. It included massacres, the deliberate genocide of the Holocaust, strategic bombing, starvation, disease and the first use of nuclear weapons in history.
The Empire of Japan aimed to dominate Asia and the Pacific and was already at war with the Republic of China in 1937, but the world war is generally said to have begun on 1 September 1939 with the invasion of Poland by Nazi Germany and subsequent declarations of war on Germany by France and the United Kingdom. From late 1939 to early 1941, in a series of campaigns and treaties, Germany conquered or controlled much of continental Europe, and formed the Axis alliance with Italy and Japan. Under the Molotov–Ribbentrop Pact of August 1939, Germany and the Soviet Union partitioned and annexed territories of their European neighbours, Poland, Finland, Romania and the Baltic states. The war continued primarily between the European Axis powers and the coalition of the United Kingdom and the British Commonwealth, with campaigns including the North Africa and East Africa campaigns, the aerial Battle of Britain, the Blitz bombing campaign, and the Balkan Campaign, as well as the long-running Battle of the Atlantic. On 22 June 1941, the European Axis powers launched an invasion of the Soviet Union, opening the largest land theatre of war in history, which trapped the major part of the Axis military forces into a war of attrition. In December 1941, Japan attacked the United States and European colonies in the Pacific Ocean, and quickly conquered much of the Western Pacific.
The Axis advance halted in 1942 when Japan lost the critical Battle of Midway, and Germany and Italy were defeated in North Africa and then, decisively, at Stalingrad in the Soviet Union. In 1943, with a series of German defeats on the Eastern Front, the Allied invasion of Sicily and the Allied invasion of Italy which brought about Italian surrender, and Allied victories in the Pacific, the Axis lost the initiative and undertook strategic retreat on all fronts. In 1944, the Western Allies invaded German-occupied France, while the Soviet Union regained all of its territorial losses and invaded Germany and its allies. During 1944 and 1945 the Japanese suffered major reverses in mainland Asia in South Central China and Burma, while the Allies crippled the Japanese Navy and captured key Western Pacific islands.
The war in Europe concluded with an invasion of Germany by the Western Allies and the Soviet Union, culminating in the capture of Berlin by Soviet troops, the suicide of Adolf Hitler and the subsequent German unconditional surrender on 8 May 1945. Following the Potsdam Declaration by the Allies on 26 July 1945 and the refusal of Japan to surrender under its terms, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki on 6 and 9 August respectively. With an invasion of the Japanese archipelago imminent, the possibility of additional atomic bombings and the Soviet invasion of Manchuria, Japan formally surrendered on 2 September 1945. Thus ended the war in Asia, cementing the total victory of the Allies.
World War II changed the political alignment and social structure of the world. The United Nations (UN) was established to foster international co-operation and prevent future conflicts. The victorious great powers—China, France, the Soviet Union, the United Kingdom, and the United States—became the permanent members of the United Nations Security Council. The Soviet Union and the United States emerged as rival superpowers, setting the stage for the Cold War, which lasted for the next 46 years. Meanwhile, the influence of European great powers waned, while the decolonisation of Africa and Asia began. Most countries whose industries had been damaged moved towards economic recovery. Political integration, especially in Europe, emerged as an effort to end pre-war enmities and to create a common identity.
The Bretton Woods Conference, formally known as the United Nations Monetary and Financial Conference, was the gathering of 730 delegates from all 44 Allied nations at the Mount Washington Hotel, situated in Bretton Woods, New Hampshire, United States, to regulate the international monetary and financial order after the conclusion of World War II.
The conference was held from July 1–22, 1944. Agreements were signed that, after legislative ratification by member governments, established the International Bank for Reconstruction and Development (IBRD) and the International Monetary Fund (IMF).
The Bretton Woods Conference had three main results: (1) Articles of Agreement to create the IMF, whose purpose was to promote stability of exchange rates and financial flows. (2) Articles of Agreement to create the IBRD, whose purpose was to speed reconstruction after the Second World War and to foster economic development, especially through lending to build infrastructure. (3) Other recommendations for international economic cooperation. The Final Act of the conference incorporated these agreements and recommendations.
Within the Final Act, the most important part in the eyes of the conference participants and for the later operation of the world economy was the IMF agreement. Its major features were:
– An adjustably pegged foreign exchange market rate system: Exchange rates were pegged to gold. Governments were only supposed to alter exchange rates to correct a “fundamental disequilibrium.”
– Member countries pledged to make their currencies convertible for trade-related and other current account transactions. There were, however, transitional provisions that allowed for indefinite delay in accepting that obligation, and the IMF agreement explicitly allowed member countries to regulate capital flows. The goal of widespread current account convertibility did not become operative until December 1958, when the currencies of the IMF’s Western European members and their colonies became convertible.
– As it was possible that exchange rates thus established might not be favourable to a country’s balance of payments position, governments had the power to revise them by up to 10% from the initially agreed level (“par value”) without objection by the IMF; a worked example of this margin appears after this list. The IMF could concur in or object to changes beyond that level. The IMF could not force a member to undo a change, but could deny the member access to the resources of the IMF.
– All member countries were required to subscribe to the IMF’s capital. Membership in the IBRD was conditioned on being a member of the IMF. Voting in both institutions was apportioned according to formulas giving greater weight to countries contributing more capital (“quotas”).
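To make the par-value margin concrete, here is a minimal sketch in Python; the function names and example figures are our own illustrations, not part of the IMF Articles.

```python
# Minimal sketch of the Bretton Woods par-value rule described above:
# a member could move its exchange rate up to 10% from the agreed par
# value on its own; larger changes required IMF concurrence.
# Names and figures are hypothetical, for illustration only.

def change_pct(par_value: float, proposed_rate: float) -> float:
    """Percentage deviation of a proposed rate from the agreed par value."""
    return abs(proposed_rate - par_value) / par_value * 100.0

def needs_imf_concurrence(par_value: float, proposed_rate: float) -> bool:
    """True if the proposed change exceeds the 10% unilateral margin."""
    return change_pct(par_value, proposed_rate) > 10.0

# Hypothetical par value of 4.20 currency units per U.S. dollar:
par = 4.20
print(needs_imf_concurrence(par, 4.50))  # ~7.1% change  -> False (unilateral)
print(needs_imf_concurrence(par, 4.80))  # ~14.3% change -> True  (IMF must concur)
```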
During the final stage of World War II, the United States detonated two nuclear weapons over the Japanese cities of Hiroshima and Nagasaki on August 6 and 9, 1945, respectively. The United States dropped the bombs after obtaining the consent of the United Kingdom, as required by the Quebec Agreement. The two bombings killed at least 129,000 people, most of whom were civilians. They remain the only use of nuclear weapons in the history of warfare.
In the final year of the war, the Allies prepared for what was anticipated to be a very costly invasion of the Japanese mainland. This undertaking was preceded by a conventional and firebombing campaign that destroyed 67 Japanese cities. The war in Europe had concluded when Germany signed its instrument of surrender on May 8, 1945. As the Allies turned their full attention to the Pacific War, the Japanese faced the same fate. The Allies called for the unconditional surrender of the Imperial Japanese armed forces in the Potsdam Declaration on July 26, 1945, the alternative being “prompt and utter destruction”. The Japanese ignored the ultimatum and the war continued.
By August 1945, the Allies’ Manhattan Project had produced two types of atomic bomb, and the 509th Composite Group of the United States Army Air Forces (USAAF) was equipped with the specialized Silverplate version of the Boeing B-29 Superfortress that could deliver them from Tinian in the Mariana Islands. Orders for atomic bombs to be used on four Japanese cities were issued on July 25. On August 6, one of its B-29s dropped a Little Boy uranium gun-type bomb on Hiroshima. Three days later, on August 9, a Fat Man plutonium implosion-type bomb was dropped by another B-29 on Nagasaki. The bombs immediately devastated their targets. Over the next two to four months, the acute effects of the atomic bombings killed 90,000–146,000 people in Hiroshima and 39,000–80,000 people in Nagasaki; roughly half of the deaths in each city occurred on the first day. Large numbers of people continued to die from the effects of burns, radiation sickness, and other injuries, compounded by illness and malnutrition, for many months afterward. In both cities, most of the dead were civilians, although Hiroshima had a sizable military garrison.
Japan announced its surrender to the Allies on August 15, six days after the bombing of Nagasaki and the Soviet Union’s declaration of war. On September 2, the Japanese government signed the instrument of surrender, effectively ending World War II. The atomic bombings’ immediate and long-term consequences for military strategy (including the new realm of nuclear warfare), human health, and international relations, as well as their impact on the social and political character of subsequent world history and popular culture, have been extensively studied. The ethical and legal justification for the bombings is still debated to this day.
The first patent for the field-effect transistor principle was filed in Canada by Austro-Hungarian physicist Julius Edgar Lilienfeld on October 22, 1925, but Lilienfeld published no research articles about his devices, and his work was ignored by industry. In 1934 German physicist Dr. Oskar Heil patented another field-effect transistor. There is no direct evidence that these devices were built, but later work in the 1990s showed that one of Lilienfeld’s designs worked as described and gave substantial gain. Legal papers from the Bell Labs patent show that William Shockley and a co-worker at Bell Labs, Gerald Pearson, had built operational versions from Lilienfeld’s patents, yet they never referenced this work in any of their later research papers or historical articles.
John Bardeen, William Shockley and Walter Brattain at Bell Labs, 1948.
The Bell Labs work on the transistor emerged from war-time efforts to produce extremely pure germanium “crystal” mixer diodes, used in radar units as a frequency mixer element in microwave radar receivers. A parallel project on germanium diodes at Purdue University succeeded in producing the good-quality germanium semiconducting crystals that were used at Bell Labs. Early tube-based circuits did not switch fast enough for this role, leading the Bell team to use solid-state diodes instead.
After the war, Shockley decided to attempt to build a triode-like semiconductor device. He secured funding and lab space, and went to work on the problem with Bardeen and Brattain. John Bardeen eventually developed a new branch of quantum mechanics known as surface physics to account for the “odd” behavior they saw, and Bardeen and Walter Brattain eventually succeeded in building a working device.
The key to the development of the transistor was a deeper understanding of electron mobility in a semiconductor. It was realized that if there were some way to control the flow of electrons from the emitter to the collector of this newly discovered diode (discovered 1874; patented 1906), one could build an amplifier. For instance, if one placed contacts on either side of a single type of crystal, the current would not flow through it. However, if a third contact could then “inject” electrons or holes into the material, the current would flow.
Actually doing this appeared to be very difficult. If the crystal were of any reasonable size, the number of electrons (or holes) required to be injected would have to be very large, making it less than useful as an amplifier because it would require a large injection current to start with. That said, the whole idea of the crystal diode was that the crystal itself could provide the electrons over a very small distance, the depletion region. The key appeared to be to place the input and output contacts very close together on the surface of the crystal on either side of this region.
Brattain started working on building such a device, and tantalizing hints of amplification continued to appear as the team worked on the problem. Sometimes the system would work but then stop working unexpectedly. In one instance a non-working system started working when placed in water. The electrons in any one piece of the crystal would migrate about due to nearby charges. Electrons in the emitters, or the “holes” in the collectors, would cluster at the surface of the crystal where they could find their opposite charge “floating around” in the air (or water). Yet they could be pushed away from the surface with the application of a small amount of charge from any other location on the crystal. Instead of needing a large supply of injected electrons, a very small number in the right place on the crystal would accomplish the same thing.
Their understanding solved the problem of needing a very small control area to some degree. Instead of needing two separate semiconductors connected by a common, but tiny, region, a single larger surface would serve. The emitter and collector leads would both be placed very close together on the top, with the control lead placed on the base of the crystal. When current was applied to the “base” lead, the electrons or holes would be pushed out, across the block of semiconductor, and collect on the far surface. As long as the emitter and collector were very close together, this should allow enough electrons or holes between them to allow conduction to start.
An early witness of the phenomenon was Ralph Bray, a young graduate student. He joined the germanium effort at Purdue University in November 1943 and was given the tricky task of measuring the spreading resistance at the metal-semiconductor contact. Bray found a great many anomalies, such as internal high-resistivity barriers in some samples of germanium. The most curious phenomenon was the exceptionally low resistance observed when voltage pulses were applied. This effect remained a mystery because nobody realised, until 1948, that Bray had observed minority carrier injection – the effect that was identified by William Shockley at Bell Labs and made the transistor a reality.
Bray wrote: “That was the one aspect that we missed, but even had we understood the idea of minority carrier injection… we would have said, ‘Oh, this explains our effects.’ We might not necessarily have gone ahead and said, ‘Let’s start making transistors,’ open up a factory and sell them… At that time the important device was the high back voltage rectifier”.
The Bell team made many attempts to build such a system with various tools, but generally failed. Setups where the contacts were close enough were invariably as fragile as the original cat’s whisker detectors had been, and would work briefly, if at all. Eventually they had a practical breakthrough. A piece of gold foil was glued to the edge of a plastic wedge, and then the foil was sliced with a razor at the tip of the triangle. The result was two very closely spaced contacts of gold. When the plastic was pushed down onto the surface of a crystal and voltage applied to the other side (on the base of the crystal), current started to flow from one contact to the other as the base voltage pushed the electrons away from the base towards the other side near the contacts. The point-contact transistor had been invented.
On 15 December 1947: “When the points were, very close together got voltage amp about 2 but not power amp. This voltage amplification was independent of frequency 10 to 10,000 cycles”.
On 16 December 1947: “Using this double point contact, contact was made to a germanium surface that had been anodized to 90 volts, electrolyte washed off in H2O and then had some gold spots evaporated on it. The gold contacts were pressed down on the bare surface. Both gold contacts to the surface rectified nicely… The separation between points was about 4×10⁻³ cm. One point was used as a grid and the other point as a plate. The bias (D.C.) on the grid had to be positive to get amplification… power gain 1.3 voltage gain 15 on a plate bias of about 15 volts”.
Brattain and H. R. Moore made a demonstration to several of their colleagues and managers at Bell Labs on the afternoon of 23 December 1947, often given as the birth date of the transistor. The “PNP point-contact germanium transistor” operated as a speech amplifier with a power gain of 18 in that trial. In 1956 John Bardeen, Walter Houser Brattain, and William Bradford Shockley were honored with the Nobel Prize in Physics “for their researches on semiconductors and their discovery of the transistor effect”.
Twelve people are mentioned as directly involved in the invention of the transistor at Bell Laboratories.
At the same time, some European scientists pursued the idea of solid-state amplifiers. In August 1948, German physicists Herbert F. Mataré (1912–2011) and Heinrich Welker (1912–1981), working at Compagnie des Freins et Signaux Westinghouse in Aulnay-sous-Bois, France, applied for a patent on an amplifier based on the minority carrier injection process, which they called the “transistron”. Since Bell Labs did not make a public announcement of the transistor until June 1948, the transistron was considered to have been developed independently. Mataré had first observed transconductance effects during the manufacture of silicon diodes for German radar equipment during WWII. Transistrons were commercially manufactured for the French telephone company and military, and in 1953 a solid-state radio receiver with four transistrons was demonstrated at the Düsseldorf Radio Fair.
The Cold War was a state of geopolitical tension after World War II between powers in the Eastern Bloc (the Soviet Union and its satellite states) and powers in the Western Bloc (the United States, its NATO allies and others). Historians do not fully agree on the dates, but a common timeframe is the period between 1947, the year the Truman Doctrine, a U.S. foreign policy pledging to aid nations threatened by Soviet expansionism, was announced, and either 1989, when communism fell in Eastern Europe, or 1991, when the Soviet Union collapsed. The term “cold” is used because there was no large-scale fighting directly between the two sides, but they each supported major regional wars known as proxy wars.
The Cold War split the temporary wartime alliance against Nazi Germany, leaving the Soviet Union and the United States as two superpowers with profound economic and political differences. The USSR was a Marxist–Leninist state led by its Communist Party of the Soviet Union, which in turn was dominated by a leader with different titles over time, and a small committee called the Politburo. The Party controlled the press, the military, the economy and many organizations. It also controlled the other states in the Eastern Bloc, and funded Communist parties around the world, sometimes in competition with Communist China, particularly following the Sino-Soviet split of the 1960s. In opposition stood the capitalist West, led by the United States, a federal republic with a two-party presidential system. The First World nations of the Western Bloc were generally liberal democratic with a free press and independent organizations, but were economically and politically entwined with a network of banana republics and other authoritarian regimes throughout the Third World, most of which were the Western Bloc’s former colonies. Some major Cold War frontlines such as Vietnam, Indonesia, and the Congo were still Western colonies in 1947.
A small neutral bloc arose with the Non-Aligned Movement; it sought good relations with both sides. The two superpowers never engaged directly in full-scale armed combat, but they were heavily armed in preparation for a possible all-out nuclear world war. Each side had a nuclear strategy that discouraged an attack by the other side, on the basis that such an attack would lead to the total destruction of the attacker—the doctrine of mutually assured destruction (MAD). Aside from the development of the two sides’ nuclear arsenals, and their deployment of conventional military forces, the struggle for dominance was expressed via proxy wars around the globe, psychological warfare, massive propaganda campaigns and espionage, rivalry at sports events, and technological competitions such as the Space Race.
The first phase of the Cold War began in the first two years after the end of the Second World War in 1945. The USSR consolidated its control over the states of the Eastern Bloc, while the United States began a strategy of global containment to challenge Soviet power, extending military and financial aid to the countries of Western Europe (for example, supporting the anti-communist side in the Greek Civil War) and creating the NATO alliance. The Berlin Blockade (1948–49) was the first major crisis of the Cold War. With the victory of the communist side in the Chinese Civil War and the outbreak of the Korean War (1950–53), the conflict expanded. The USSR and USA competed for influence in Latin America and the decolonizing states of Africa and Asia. Meanwhile, the Hungarian Revolution of 1956 was stopped by the Soviets. The expansion and escalation sparked more crises, such as the Suez Crisis (1956), the Berlin Crisis of 1961, and the Cuban Missile Crisis of 1962. Following the Cuban Missile Crisis, a new phase began that saw the Sino-Soviet split complicate relations within the communist sphere, while US allies, particularly France, demonstrated greater independence of action. The USSR crushed the 1968 Prague Spring liberalization program in Czechoslovakia, and the Vietnam War (1955–75) ended with the defeat of the US-backed Republic of Vietnam, prompting further adjustments.
By the 1970s, both sides had become interested in making allowances in order to create a more stable and predictable international system, ushering in a period of détente that saw Strategic Arms Limitation Talks and the US opening relations with the People’s Republic of China as a strategic counterweight to the Soviet Union. Détente collapsed at the end of the decade with the beginning of the Soviet–Afghan War in 1979. The early 1980s were another period of elevated tension, with the Soviet downing of Korean Air Lines Flight 007 (1983) and the “Able Archer” NATO military exercises (1983). The United States increased diplomatic, military, and economic pressures on the Soviet Union, at a time when the communist state was already suffering from economic stagnation. In the mid-1980s, the new Soviet leader Mikhail Gorbachev introduced the liberalizing reforms of perestroika (“reorganization”, 1987) and glasnost (“openness”, c. 1985) and ended Soviet involvement in Afghanistan. Pressures for national independence grew stronger in Eastern Europe, especially Poland. Gorbachev meanwhile refused to use Soviet troops to bolster the faltering Warsaw Pact regimes as had occurred in the past. The result in 1989 was a wave of revolutions that peacefully (with the exception of the Romanian Revolution) overthrew all of the communist regimes of Central and Eastern Europe. The Communist Party of the Soviet Union itself lost control and was banned following an abortive coup attempt in August 1991. This in turn led to the formal dissolution of the USSR in December 1991 and the collapse of communist regimes in other countries such as Mongolia, Cambodia and South Yemen. The United States remained as the world’s only superpower.
The Cold War and its events have left a significant legacy. It is often referred to in popular culture, especially in media featuring themes of espionage (e.g., the internationally successful James Bond movie franchise) and the threat of nuclear warfare.
The Marshall Plan (officially the European Recovery Program, ERP) was an American initiative to aid Western Europe, in which the United States gave over $13 billion (nearly $140 billion in current dollar value as of September 2017) in economic assistance to help rebuild Western European economies after the end of World War II. The plan was in operation for four years beginning on April 8, 1948. The goals of the United States were to rebuild war-torn regions, remove trade barriers, modernise industry, make Europe prosperous once more, and prevent the spread of Communism. The Marshall Plan required a lessening of interstate barriers and a dropping of many regulations, and it encouraged increases in productivity and trade union membership, as well as the adoption of modern business procedures.
The Marshall Plan aid was divided amongst the participant states roughly on a per capita basis. A larger amount was given to the major industrial powers, as the prevailing opinion was that their resuscitation was essential for general European revival. Somewhat more aid per capita was also directed towards the Allied nations, with less for those that had been part of the Axis or remained neutral. The largest recipient of Marshall Plan money was the United Kingdom (receiving about 26% of the total), followed by France (18%) and West Germany (11%). Some eighteen European countries received Plan benefits. Although offered participation, the Soviet Union refused Plan benefits, and also blocked benefits to Eastern Bloc countries, such as East Germany and Poland. The United States provided similar aid programs in Asia, but they were not part of the Marshall Plan.
However, the plan’s role in the rapid recovery has been debated. Most historians reject the idea that it alone miraculously revived Europe, since the evidence shows that a general recovery was already under way. The Marshall Plan’s accounting reflects that aid accounted for less than 3% of the combined national income of the recipient countries between 1948 and 1951, which implies an increase in GDP growth of only 0.3%.
Bradford DeLong and Barry Eichengreen conclude it was “History’s Most Successful Structural Adjustment Program”. They state:
It was not large enough to have significantly accelerated recovery by financing investment, aiding the reconstruction of damaged infrastructure, or easing commodity bottlenecks. We argue, however, that the Marshall Plan did play a major role in setting the stage for post-World War II Western Europe’s rapid growth. The conditions attached to Marshall Plan aid pushed European political economy in a direction that left its post-World War II “mixed economies” with more “market” and less “controls” in the mix.
Jacob Magid argues:
there is little evidence that direct economic effects account for the Marshall Plan’s success. Instead, the indirect economic effects, particularly in the implementation of liberal capitalistic policies, and the political effects, particularly the ideal of European integration and government-business partnerships, are the major reasons for Europe’s unsurpassed growth.
The initiative was named after United States Secretary of State George Marshall. The plan had bipartisan support in Washington, where the Republicans controlled Congress and the Democrats controlled the White House with Harry S. Truman as President. The Plan was largely the creation of State Department officials, especially William L. Clayton and George F. Kennan, with help from the Brookings Institution, as requested by Senator Arthur H. Vandenberg, chairman of the Senate Foreign Relations Committee. Marshall spoke of an urgent need to help the European recovery in his address at Harvard University in June 1947. The purpose of the Marshall Plan was to aid in the economic recovery of nations after WWII and to reduce the influence of Communist parties within them. To combat the effects of the Marshall Plan, the USSR developed its own economic plan, known as the Molotov Plan. It pumped large amounts of resources from the Eastern Bloc countries to the USSR.
The phrase „equivalent of the Marshall Plan“ is often used to describe a proposed large-scale economic rescue program.
The Great Leap Forward (Chinese: 大跃进; pinyin: Dà Yuèjìn) of the People’s Republic of China (PRC) was an economic and social campaign by the Communist Party of China (CPC) from 1958 to 1962. The campaign was led by Chairman Mao Zedong and aimed to rapidly transform the country from an agrarian economy into a socialist society through rapid industrialization and collectivization. However, it is widely considered to have caused the Great Chinese Famine.
Chief changes in the lives of rural Chinese included the incremental introduction of mandatory agricultural collectivization. Private farming was prohibited, and those engaged in it were persecuted and labeled counter-revolutionaries. Restrictions on rural people were enforced through public struggle sessions and social pressure, although people also experienced forced labor. Rural industrialization, officially a priority of the campaign, saw “its development… aborted by the mistakes of the Great Leap Forward”.
Historians widely agree that the Great Leap resulted in tens of millions of deaths. A lower-end estimate is 18 million, while extensive research by Yu Xiguang suggests the death toll from the movement is closer to 55.6 million. Historian Frank Dikötter asserts that “coercion, terror, and systematic violence were the foundation of the Great Leap Forward” and that it “motivated one of the most deadly mass killings of human history”.
The years of the Great Leap Forward saw economic regression, with 1958 through 1962 being one of only two periods between 1953 and 1976 (the other being the Cultural Revolution) in which China’s economy shrank. Political economist Dwight Perkins argues, “enormous amounts of investment produced only modest increases in production or none at all. … In short, the Great Leap was a very expensive disaster.”
In subsequent conferences in March 1960 and May 1962, the negative effects of the Great Leap Forward were studied by the CPC, and Mao was criticized in the party conferences. Moderate Party members like President Liu Shaoqi and Deng Xiaoping rose to power, and Chairman Mao was marginalized within the party, leading him to initiate the Cultural Revolution in 1966.
Solar power is the conversion of energy from sunlight into electricity, either directly using photovoltaics (PV), indirectly using concentrated solar power, or a combination. Concentrated solar power systems use lenses or mirrors and tracking systems to focus a large area of sunlight into a small beam. Photovoltaic cells convert light into an electric current using the photovoltaic effect.
Photovoltaics were initially used solely as a source of electricity for small and medium-sized applications, from the calculator powered by a single solar cell to remote homes powered by an off-grid rooftop PV system. Commercial concentrated solar power plants were first developed in the 1980s. The 392 MW Ivanpah installation, located in the Mojave Desert of California, is the largest concentrating solar power plant in the world.
As the cost of solar electricity has fallen, the number of grid-connected solar PV systems has grown into the millions and utility-scale solar power stations with hundreds of megawatts are being built. Solar PV is rapidly becoming an inexpensive, low-carbon technology to harness renewable energy from the Sun. The current largest photovoltaic power station in the world is the 850 MW Longyangxia Dam Solar Park, in Qinghai, China.
The International Energy Agency projected in 2014 that under its “high renewables” scenario, by 2050 solar photovoltaics and concentrated solar power would contribute about 16 and 11 percent, respectively, of worldwide electricity consumption, and solar would be the world’s largest source of electricity. Most solar installations would be in China and India. As of 2016, solar power provides just 1% of total worldwide electricity production but is growing at 33% per annum.
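As a rough sanity check on these growth figures, the short sketch below compounds the 2016 share at the quoted rate; it assumes, unrealistically, that the 33% annual growth persists unchanged.

```python
# Back-of-envelope check on the figures above. Assumption (ours): the ~33%
# annual growth rate quoted for 2016 persists unchanged, which real markets
# will not sustain indefinitely.
import math

share_2016 = 0.01   # ~1% of worldwide electricity production in 2016
growth = 0.33       # ~33% growth per annum
target = 0.16       # ~16% PV share in the IEA "high renewables" 2050 scenario

years = math.log(target / share_2016) / math.log(1.0 + growth)
print(f"~{years:.1f} years of uninterrupted 33% growth")  # ~9.7 years
```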
In the 1960s and 1970s, important structural changes eventually led to the breakdown of international monetary management. One change was the development of a high level of monetary interdependence. The stage was set for monetary interdependence by the return to convertibility of the Western European currencies at the end of 1958 and of the Japanese yen in 1964. Convertibility facilitated the vast expansion of international financial transactions, which deepened monetary interdependence.
Growth of international currency markets
Another aspect of the internationalization of banking has been the emergence of international banking consortia. Since 1964 various banks had formed international syndicates, and by 1971 over three quarters of the world’s largest banks had become shareholders in such syndicates. Multinational banks can and do make huge international transfers of capital not only for investment purposes but also for hedging and speculating against exchange rate fluctuations.
These new forms of monetary interdependence made possible huge capital flows. During the Bretton Woods era, countries were reluctant to alter exchange rates formally even in cases of structural disequilibria. Because such changes had a direct impact on certain domestic economic groups, they came to be seen as political risks for leaders. As a result, official exchange rates often became unrealistic in market terms, providing a virtually risk-free temptation for speculators. They could move from a weak to a strong currency hoping to reap profits when a revaluation occurred. If, however, monetary authorities managed to avoid revaluation, they could return to other currencies with no loss. The combination of risk-free speculation with the availability of huge sums was highly destabilizing.
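The asymmetry described above can be illustrated with a toy payoff calculation; every rate and amount below is hypothetical.

```python
# Toy payoff of the "one-way bet" against a fixed peg described above.
# All names and numbers are hypothetical, for illustration only.

capital = 1_000_000.0   # starting sum in weak-currency units
official_rate = 4.0     # weak units per strong unit at the official peg

strong_held = capital / official_rate  # sell weak currency, buy strong

# Case 1: the strong currency is revalued 10% against the weak one.
profit_if_revalued = strong_held * (official_rate * 1.10) - capital

# Case 2: the peg holds; convert back at the unchanged official rate.
profit_if_peg_holds = strong_held * official_rate - capital

print(f"{profit_if_revalued:,.0f}")   # 100,000 gain if revaluation occurs
print(f"{profit_if_peg_holds:,.0f}")  # 0 loss (ignoring costs) if the peg holds
```

The speculator’s downside is limited to transaction costs, which is why the text calls the temptation virtually risk-free.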
A second structural change that undermined monetary management was the decline of U.S. hegemony. The U.S. was no longer the dominant economic power it had been for more than two decades. By the mid-1960s, the E.E.C. and Japan had become international economic powers in their own right. With total reserves exceeding those of the U.S., higher levels of growth and trade, and per capita income approaching that of the U.S., Europe and Japan were narrowing the gap between themselves and the United States.
The shift toward a more pluralistic distribution of economic power led to increasing dissatisfaction with the privileged role of the U.S. dollar as the international currency. As in effect the world’s central banker, the U.S., through its deficit, determined the level of international liquidity. In an increasingly interdependent world, U.S. policy greatly influenced economic conditions in Europe and Japan. In addition, as long as other countries were willing to hold dollars, the U.S. could carry out massive foreign expenditures for political purposes—military activities and foreign aid—without the threat of balance-of-payments constraints.
Dissatisfaction with the political implications of the dollar system was increased by détente between the U.S. and the Soviet Union. The Soviet military threat had been an important force in cementing the U.S.-led monetary system. The U.S. political and security umbrella helped make American economic domination palatable for Europe and Japan, which had been economically exhausted by the war. As gross domestic product grew in European countries, trade grew. When common security tensions lessened, the transatlantic dependence on defence concerns loosened, allowing latent economic tensions to surface.
Reinforcing the relative decline in U.S. power and the dissatisfaction of Europe and Japan with the system was the continuing decline of the dollar—the foundation that had underpinned the post-1945 global trading system. The Vietnam War and the refusal of the administration of U.S. President Lyndon B. Johnson to pay for it and its Great Society programs through taxation resulted in an increased dollar outflow to pay for the military expenditures, and in rampant inflation, which led to the deterioration of the U.S. balance of trade position. In the late 1960s, the dollar was overvalued relative to the U.S. trading position, while the German Mark and the yen were undervalued; naturally, the Germans and the Japanese had no desire to revalue and thereby make their exports more expensive, whereas the U.S. sought to maintain its international credibility by avoiding devaluation. Meanwhile, the pressure on government reserves was intensified by the new international currency markets, with their vast pools of speculative capital moving around in search of quick profits.
In contrast, upon the creation of Bretton Woods, with the U.S. producing half of the world's manufactured goods and holding half its reserves, it was at first possible to meet the twin burdens of international management and the Cold War. Throughout the 1950s Washington sustained a balance of payments deficit to finance loans, aid, and troops for allied regimes. But during the 1960s the costs of doing so became less tolerable. By 1970 the U.S. held under 16% of international reserves. Adjustment to these changed realities was impeded by the U.S. commitment to fixed exchange rates and by the U.S. obligation to convert dollars into gold on demand.
By 1968, the attempt to defend the dollar at a fixed peg of $35/ounce, the policy of the Eisenhower, Kennedy and Johnson administrations, had become increasingly untenable. Gold outflows from the U.S. accelerated, and despite gaining assurances from Germany and other nations to hold gold, the unbalanced fiscal spending of the Johnson administration had transformed the dollar shortage of the 1940s and 1950s into a dollar glut by the 1960s. In 1967, the IMF agreed in Rio de Janeiro to replace the tranche division set up in 1946. Special drawing rights (SDRs) were set as equal to one U.S. dollar, but were not usable for transactions other than between banks and the IMF. Nations were required to accept holding SDRs equal to three times their allotment, and interest would be charged, or credited, to each nation based on their SDR holding. The original interest rate was 1.5%.
The intent of the SDR system was to prevent nations from buying pegged gold and selling it at the higher free market price, and give nations a reason to hold dollars by crediting interest, at the same time setting a clear limit to the amount of dollars that could be held.
A negative balance of payments, growing public debt incurred by the Vietnam War and Great Society programs, and monetary inflation by the Federal Reserve caused the dollar to become increasingly overvalued. The drain on US gold reserves culminated with the London Gold Pool collapse in March 1968. By 1970, the U.S. had seen its gold coverage deteriorate from 55% to 22%. This, in the view of neoclassical economists, represented the point where holders of the dollar had lost faith in the ability of the U.S. to cut budget and trade deficits.
In 1971 more and more dollars were being printed in Washington, then pumped overseas to pay for government expenditure on military and social programs. In the first six months of 1971, an estimated $22 billion in assets left the U.S. In response, on 15 August 1971, Nixon issued Executive Order 11615 pursuant to the Economic Stabilization Act of 1970, unilaterally imposing 90-day wage and price controls and a 10% import surcharge and, most importantly, "closing the gold window", making the dollar inconvertible to gold directly, except on the open market. Unusually, this decision was made without consulting members of the international monetary system or even his own State Department, and was soon dubbed the Nixon Shock.
The August shock was followed by efforts under U.S. leadership to reform the international monetary system. Throughout the autumn of 1971, a series of multilateral and bilateral negotiations among the Group of Ten countries took place, seeking to redesign the exchange rate regime.
Meeting in December 1971 at the Smithsonian Institution in Washington D.C., the Group of Ten signed the Smithsonian Agreement. The US pledged to peg the dollar at $38/ounce with 2.25% trading bands, and other countries agreed to appreciate their currencies versus the dollar. The group also planned to balance the world financial system using special drawing rights alone.
The agreement failed to encourage discipline by the Federal Reserve or the United States government. The Federal Reserve was concerned about an increase in the domestic unemployment rate due to the devaluation of the dollar. In an attempt to undermine the efforts of the Smithsonian Agreement, the Federal Reserve lowered interest rates in pursuit of a previously established domestic policy objective of full national employment. With the Smithsonian Agreement, member countries anticipated a return flow of dollars to the U.S., but the reduced interest rates within the United States caused dollars to continue to flow out of the U.S. and into foreign central banks. The inflow of dollars into foreign banks continued the monetization of the dollar overseas, defeating the aims of the Smithsonian Agreement. As a result, the free-market gold price continued to put pressure on the dollar's official rate; soon after a 10% devaluation was announced in February 1973, Japan and the EEC countries decided to let their currencies float. This proved to be the beginning of the collapse of the Bretton Woods system. The end of Bretton Woods was formally ratified by the Jamaica Accords in 1976. By the early 1980s, all industrialised nations were using floating currencies.
The Black–Scholes, or Black–Scholes–Merton, model is a mathematical model of a financial market containing derivative investment instruments. From the partial differential equation in the model, known as the Black–Scholes equation, one can deduce the Black–Scholes formula, which gives a theoretical estimate of the price of European-style options and shows that the option has a unique price regardless of the risk of the security and its expected return (instead replacing the security's expected return with the risk-neutral rate). The formula led to a boom in options trading and provided mathematical legitimacy to the activities of the Chicago Board Options Exchange and other options markets around the world. It is widely used, although often with adjustments and corrections, by options market participants. Many empirical tests have shown that the Black–Scholes price is "fairly close" to observed prices, although there are well-known discrepancies such as the "option smile".
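For reference, in its standard textbook form for a European call on a non-dividend-paying stock (S: spot price, K: strike, T: time to expiry, r: risk-free rate, σ: volatility, N: standard normal CDF), the formula reads:

```latex
C(S, T) = S\,N(d_1) - K e^{-rT} N(d_2),
\qquad
d_1 = \frac{\ln(S/K) + \left(r + \tfrac{\sigma^2}{2}\right) T}{\sigma \sqrt{T}},
\qquad
d_2 = d_1 - \sigma \sqrt{T}.
```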
Based on works previously developed by market researchers and practitioners such as Louis Bachelier, Sheen Kassouf and Ed Thorp, Fischer Black and Myron Scholes demonstrated in the late 1960s that a dynamic revision of a portfolio removes the expected return of the security, thus inventing the risk-neutral argument. In 1970, after they attempted to apply the formula to the markets and incurred financial losses due to a lack of risk management in their trades, they decided to return to their domain area, the academic environment. After three years of effort, the formula, named in their honor for making it public, was finally published in 1973 in an article entitled "The Pricing of Options and Corporate Liabilities" in the Journal of Political Economy. Robert C. Merton was the first to publish a paper expanding the mathematical understanding of the options pricing model, and coined the term "Black–Scholes options pricing model". Merton and Scholes received the 1997 Nobel Memorial Prize in Economic Sciences for their work, the committee citing their discovery of the risk-neutral dynamic revision as a breakthrough that separates the option from the risk of the underlying security. Though ineligible for the prize because of his death in 1995, Black was mentioned as a contributor by the Swedish Academy.
The key idea behind the model is to hedge the option by buying and selling the underlying asset in just the right way and, as a consequence, to eliminate risk. This type of hedging is called "continuously revised delta hedging" and is the basis of more complicated hedging strategies such as those engaged in by investment banks and hedge funds.
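Concretely, the hedge ratio being revised is the option's delta, which for a European call under the model has a closed form (same notation as above):

```latex
\Delta = \frac{\partial C}{\partial S} = N(d_1),
```

so the replicating portfolio holds N(d_1) shares of the underlying per option written and is rebalanced as S and t move.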
The model's assumptions have been relaxed and generalized in many directions, leading to a plethora of models that are currently used in derivative pricing and risk management. It is the insights of the model, as exemplified in the Black–Scholes formula, that are frequently used by market participants, as distinguished from the actual prices. These insights include no-arbitrage bounds and risk-neutral pricing (thanks to continuous revision). Further, the Black–Scholes equation, a partial differential equation that governs the price of the option, enables pricing using numerical methods when an explicit formula is not possible.
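The equation in question, for the value V(S, t) of a derivative on a non-dividend-paying underlying, is

```latex
\frac{\partial V}{\partial t}
+ \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}
+ r S \frac{\partial V}{\partial S}
- r V = 0,
```

solved backwards from the payoff at expiry, by finite differences or other numerical schemes when no closed form exists.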
The Black–Scholes formula has only one parameter that cannot be directly observed in the market: the average future volatility of the underlying asset, though it can be inferred from the prices of other options. Since the option value (whether put or call) is increasing in this parameter, the observed price can be inverted to produce a "volatility surface" that is then used to calibrate other models, e.g. for OTC derivatives.
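A minimal sketch of that inversion for a single quote, assuming a plain European call on a non-dividend-paying asset and using simple bisection (production systems use faster root-finders and market conventions, but they rely on the same monotonicity):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black–Scholes price of a European call on a non-dividend-paying asset."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    """Invert the call price for sigma by bisection; valid because the
    Black–Scholes price is monotonically increasing in volatility."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: recover the volatility implied by an observed call price.
price = bs_call(S=100, K=105, T=0.5, r=0.03, sigma=0.20)
print(implied_vol(price, S=100, K=105, T=0.5, r=0.03))  # ~0.20
```

Repeating this across strikes and maturities yields the volatility surface mentioned above.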
The 1973 oil crisis began in October 1973 when the members of the Organization of Arab Petroleum Exporting Countries proclaimed an oil embargo. The embargo was targeted at nations perceived as supporting Israel during the Yom Kippur War. The initial nations targeted were Canada, Japan, the Netherlands, the United Kingdom and the United States, with the embargo later extended to Portugal, Rhodesia and South Africa. By the end of the embargo in March 1974, the price of oil had risen from US$3 per barrel to nearly $12 globally; US prices were significantly higher. The embargo caused an oil crisis, or "shock", with many short- and long-term effects on global politics and the global economy. It was later called the "first oil shock", followed by the 1979 oil crisis, termed the "second oil shock".
The embargo was a response to American involvement in the 1973 Yom Kippur War. Six days after Egypt and Syria launched a surprise military campaign against Israel, the US supplied Israel with arms. In response to this, the Organization of Arab Petroleum Exporting Countries (OAPEC, consisting of the Arab members of OPEC plus Egypt and Syria) announced an oil embargo against Canada, Japan, the Netherlands, the United Kingdom and the United States.
The crisis had a major impact on international relations and created a rift within NATO. Some European nations and Japan sought to disassociate themselves from United States foreign policy in the Middle East to avoid being targeted by the boycott. Arab oil producers linked any future policy changes to peace between the belligerents. To address this, the Nixon Administration began multilateral negotiations with the combatants. They arranged for Israel to pull back from the Sinai Peninsula and the Golan Heights. By January 18, 1974, US Secretary of State Henry Kissinger had negotiated an Israeli troop withdrawal from parts of the Sinai Peninsula. The promise of a negotiated settlement between Israel and Syria was enough to convince Arab oil producers to lift the embargo in March 1974.
Independently, OAPEC members agreed to use their leverage over the world price-setting mechanism for oil to stabilize their incomes by raising world oil prices after the recent failure of negotiations with Western oil companies.
The embargo occurred at a time of rising petroleum consumption by industrialized countries and coincided with a sharp increase in oil imports by the world’s largest oil consumer, the United States. In the aftermath, targeted countries initiated a wide variety of policies to contain their future dependency.
The 1973 "oil price shock", with the accompanying 1973–74 stock market crash, was regarded as the first discrete event since the Great Depression to have a persistent effect on the US economy.
The embargo’s success demonstrated Saudi Arabia’s diplomatic and economic power. It was the largest oil exporter and a politically and religiously conservative kingdom.
Home computers were a class of microcomputers entering the market in 1977 and becoming common during the 1980s. They were marketed to consumers as affordable and accessible computers that, for the first time, were intended for the use of a single nontechnical user. These computers were a distinct market segment that typically cost much less than business, scientific or engineering-oriented computers of the time, such as the IBM PC, and were generally less powerful in terms of memory and expandability. However, a home computer often had better graphics and sound than contemporary business computers. Their most common use was playing video games, but they were also regularly used for word processing, doing homework, and programming.
Home computers were usually not electronic kits; they were sold already manufactured, in stylish metal or plastic enclosures. There were, however, commercial kits like the Sinclair ZX80, which straddled the home and home-built categories since the purchaser could assemble the unit from a kit.
Advertisements in the popular press for early home computers were rife with possibilities for their practical use in the home, from cataloging recipes to personal finance to home automation, but these were seldom realized in practice. For example, using a typical 1980s home computer as a home automation appliance would require the computer to be kept powered on at all times and dedicated to this task. Personal finance and database use required tedious data entry.
By contrast, advertisements in the specialty computer press often simply listed specifications. If no packaged software was available for a particular application, the home computer user could program one—provided they had invested the requisite hours to learn computer programming, as well as the idiosyncrasies of their system. Since most systems shipped with the BASIC programming language included on the system ROM, it was easy for users to get started creating their own simple applications. Many users found programming to be a fun and rewarding experience, and an excellent introduction to the world of digital technology.
The line between "business" and "home" computer market segments blurred or vanished completely once IBM PC compatibles became commonly used in the home, since both categories of computers now typically use the same processor architectures, peripherals, operating systems, and applications. Often the only difference may be the sales outlet through which they are purchased. Another change from the home computer era is that the once-common endeavour of writing one's own software programs has almost vanished from home computer use.
A GPS satellite is a satellite used by the NAVSTAR Global Positioning System (GPS). The first satellite in the system, Navstar 1, was launched February 22, 1978. The GPS satellite constellation is operated by the 50th Space Wing of the United States Air Force.
The GPS satellites circle the Earth at an altitude of about 20,000 km (12,427 miles) and complete two full orbits every day.
GPS is a global navigation satellite system that provides geolocation and time information to a GPS receiver anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites.
The GPS does not require the user to transmit any data, and it operates independently of any telephonic or internet reception, though these technologies can enhance the usefulness of the GPS positioning information. The GPS provides critical positioning capabilities to military, civil, and commercial users around the world. The United States government created the system, maintains it, and makes it freely accessible to anyone with a GPS receiver.
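As a quick sanity check on the orbit figures above, Kepler's third law reproduces the roughly twice-daily orbit from the quoted altitude (the Earth radius and gravitational parameter are standard reference values, not from the text, and all figures are approximate):

```python
from math import pi, sqrt

MU = 398_600.4418       # Earth's gravitational parameter, km^3/s^2
R_EARTH = 6_371.0       # mean Earth radius, km
altitude = 20_200.0     # approximate GPS orbital altitude, km

a = R_EARTH + altitude            # semi-major axis of a circular orbit
T = 2 * pi * sqrt(a**3 / MU)      # orbital period from Kepler's third law

print(f"Period: {T / 3600:.2f} hours")  # ~11.97 hours, i.e. two orbits per day
```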
The GPS project was launched by the U.S. Department of Defense in 1973 for use by the United States military and became fully operational in 1995. It was opened to civilian use in the 1980s. Advances in technology and new demands on the existing system have since led to efforts to modernize the GPS and implement the next generation of GPS Block IIIA satellites and the Next Generation Operational Control System (OCX). Announcements from Vice President Al Gore and the White House in 1998 initiated these changes, and in 2000 the U.S. Congress authorized the modernization effort, GPS III. During the 1990s, GPS quality was deliberately degraded by the United States government in a program called "Selective Availability"; this was discontinued in May 2000 under a law signed by President Bill Clinton. New GPS receiver devices using the L5 frequency, scheduled for release beginning in 2018, are expected to offer much higher accuracy, pinpointing a device to within 30 centimeters, or just under one foot.
In addition to GPS, other systems are in use or under development, mainly because the US government can selectively deny access to the system, as happened to the Indian military in 1999 during the Kargil War, or degrade the service at any time.
In the 1970s, "less developed countries" (LDCs) was the common term for markets that were less "developed" (by objective or subjective measures) than the developed countries such as the United States, Japan, and those in Western Europe. These markets were supposed to provide greater potential for profit but also more risk from various factors like patent infringement. This term was replaced by "emerging market". The term is misleading in that there is no guarantee that a country will move from "less developed" to "more developed"; although that is the general trend in the world, countries can also move from "more developed" to "less developed".
Originally coined in 1981 by then World Bank economist Antoine van Agtmael, the term is sometimes loosely used as a replacement for emerging economies, but really signifies a business phenomenon that is not fully described by or constrained to geography or economic strength; such countries are considered to be in a transitional phase between developing and developed status. Examples of emerging markets include many countries in Africa, most countries in Eastern Europe, some countries of Latin America, some countries in the Middle East, Russia and some countries in Southeast Asia. Emphasizing the fluid nature of the category, political scientist Ian Bremmer defines an emerging market as "a country where politics matters at least as much as economics to the markets".
The Plaza Accord or Plaza Agreement was an agreement between the governments of France, West Germany, Japan, the United States, and the United Kingdom, to depreciate the U.S. dollar in relation to the Japanese yen and German Deutsche Mark by intervening in currency markets. The five governments signed the accord on September 22, 1985 at the Plaza Hotel in New York City.
Between 1980 and 1985 the dollar had appreciated by about 50% against the Japanese yen, Deutsche Mark, French franc and British pound, the currencies of the next four biggest economies at the time. This caused considerable difficulties for American industry, but at first its lobbying was largely ignored by government. The financial sector was able to profit from the rising dollar, and a depreciation would have run counter to the Reagan administration's plans for bringing down inflation. A broad alliance of manufacturers, service providers, and farmers responded by running an increasingly high-profile campaign asking for protection against foreign competition.
Major players included grain exporters, car producers, engineering companies like Caterpillar Inc., as well as high-tech companies including IBM and Motorola. By 1985, their campaign had acquired sufficient traction for Congress to begin considering passing protectionist laws. The prospect of trade restrictions spurred the White House to begin the negotiations that led to the Plaza Accord.
The justification for the dollar's devaluation was twofold: to reduce the U.S. current account deficit, which had reached 3.5% of GDP, and to help the U.S. economy emerge from a serious recession that began in the early 1980s. The U.S. Federal Reserve System under Paul Volcker had halted the stagflation crisis of the 1970s by raising interest rates, but this resulted in the dollar becoming so overvalued that industry in the U.S. (particularly the automobile industry) grew less competitive in the global market.
Devaluing the dollar made U.S. exports cheaper to purchase for its trading partners, which in turn allegedly meant that other countries would buy more American-made goods and services.
The exchange rate value of the dollar versus the yen declined by 51% from 1985 to 1987. Most of this devaluation was due to the $10 billion spent by the participating central banks. Currency speculation caused the dollar to continue its fall after the end of coordinated interventions. Unlike some similar financial crises, such as the Mexican and the Argentine financial crises of 1994 and 2001 respectively, this devaluation was planned, done in an orderly, pre-announced manner and did not lead to financial panic in the world markets. The Plaza Accord was successful in reducing the U.S. trade deficit with Western European nations but largely failed to fulfill its primary objective of alleviating the trade deficit with Japan. This deficit was due to structural conditions that were insensitive to monetary policy, specifically trade conditions. The manufactured goods of the United States became more competitive in the exports market but were still largely unable to succeed in the Japanese domestic market due to Japan’s structural restrictions on imports. The Louvre Accord would be signed in 1987 to halt the continuing decline of the U.S. dollar.
The signing of the Plaza Accord was significant in that it reflected Japan's emergence as a real player in managing the international monetary system. However, the recessionary effects of the strengthened yen in Japan's export-dependent economy created an incentive for the expansionary monetary policies that led to the Japanese asset price bubble of the late 1980s. It is thus postulated that the Plaza Accord contributed to the Japanese asset price bubble, which progressed into a protracted period of deflation and low growth in Japan known as the Lost Decade.
Basel I was the first round of deliberations by central bankers from around the world; in 1988, the Basel Committee on Banking Supervision (BCBS) in Basel, Switzerland, published a set of minimum capital requirements for banks. This is also known as the 1988 Basel Accord, and it was enforced by law in the Group of Ten (G-10) countries in 1992. A new set of rules known as Basel II was later developed with the intent to supersede the Basel I accords. However, Basel II was criticized by some for allowing banks to take on additional types of risk, which was considered part of the cause of the US subprime financial crisis that started in 2008. In fact, bank regulators in the United States took the position of requiring a bank to follow whichever set of rules (Basel I or Basel II) gave the more conservative approach for that bank. Because of this it was anticipated that only the few very largest US banks would operate under the Basel II rules, the others being regulated under the Basel I framework. Basel III was developed in response to the financial crisis; it does not supersede either Basel I or II but focuses on different issues, primarily related to the risk of a bank run.
Basel I, that is, the 1988 Basel Accord, is primarily focused on credit risk and the appropriate risk-weighting of assets. Assets of banks were classified and grouped into five categories according to credit risk, carrying risk weights of 0% (for example cash, bullion, home-country debt like Treasuries), 20% (securitisations such as mortgage-backed securities (MBS) with the highest AAA rating), 50% (municipal revenue bonds, residential mortgages), 100% (for example, most corporate debt), with some assets given no rating. Banks with an international presence are required to hold capital equal to 8% of their risk-weighted assets (RWA).
Tier 1 capital ratio = Tier 1 capital / all RWA
Total capital ratio = (Tier 1 + Tier 2 + Tier 3 capital) / all RWA
Leverage ratio = total capital / average total assets
Banks are also required to report off-balance-sheet items such as letters of credit, unused commitments, and derivatives. These all factor into the risk weighted assets. The report is typically submitted to the Federal Reserve Bank as HC-R for the bank-holding company and submitted to the Office of the Comptroller of the Currency (OCC) as RC-R for just the bank.
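To illustrate how the pieces above fit together, here is a minimal sketch of a Basel I-style calculation using the category weights described in the text; the balance sheet and capital figures are entirely hypothetical, and the conversion of off-balance-sheet items is omitted for brevity:

```python
# Risk weights as described in the text (Basel I credit-risk categories).
RISK_WEIGHTS = {
    "cash_and_sovereign": 0.00,      # cash, bullion, home-country government debt
    "aaa_mbs": 0.20,                 # highest-rated securitisations
    "residential_mortgages": 0.50,   # also municipal revenue bonds
    "corporate_debt": 1.00,          # most corporate exposures
}

# Hypothetical exposures, in millions (illustrative only).
assets = {
    "cash_and_sovereign": 500,
    "aaa_mbs": 300,
    "residential_mortgages": 1200,
    "corporate_debt": 800,
}

# Risk-weighted assets: each exposure scaled by its category weight.
rwa = sum(amount * RISK_WEIGHTS[category] for category, amount in assets.items())

tier1, tier2 = 90, 40  # hypothetical capital, in millions
tier1_ratio = tier1 / rwa
total_ratio = (tier1 + tier2) / rwa  # must be at least 8% under Basel I

print(f"RWA: {rwa:.0f}m, Tier 1 ratio: {tier1_ratio:.1%}, Total ratio: {total_ratio:.1%}")
```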
From 1988 this framework was progressively introduced in member countries of G-10, comprising 13 countries as of 2013: Belgium, Canada, France, Germany, Italy, Japan, Luxembourg, Netherlands, Spain, Sweden, Switzerland, United Kingdom and the United States of America.
Over 100 other countries also adopted, at least in name, the principles prescribed under Basel I. The efficacy with which the principles are enforced varies, even within nations of the Group.
The World Wide Web (abbreviated WWW or the Web) is an information space where documents and other web resources are identified by Uniform Resource Locators (URLs), interlinked by hypertext links, and can be accessed via the Internet. English scientist Tim Berners-Lee invented the World Wide Web in 1989. He wrote the first web browser computer program in 1990 while employed at CERN in Switzerland. The Web browser was released outside CERN in 1991, first to other research institutions starting in January 1991 and to the general public on the Internet in August 1991.
The World Wide Web has been central to the development of the Information Age and is the primary tool billions of people use to interact on the Internet. Web pages are primarily text documents formatted and annotated with Hypertext Markup Language (HTML). In addition to formatted text, web pages may contain images, video, audio, and software components that are rendered in the user’s web browser as coherent pages of multimedia content.
Embedded hyperlinks permit users to navigate between web pages. Multiple web pages with a common theme, a common domain name, or both, make up a website. Website content can largely be provided by the publisher, or interactively where users contribute content or the content depends upon the users or their actions. Websites may be mostly informative, primarily for entertainment, or largely for commercial, governmental, or non-governmental organisational purposes.
The Cold War period of 1985–1991 began with the rise of Mikhail Gorbachev as leader of the Soviet Union. Gorbachev was a revolutionary leader for the USSR, as he was the first to promote liberalization of the political landscape (Glasnost) and capitalist elements into the economy (Perestroika); prior to this, the USSR had been strictly prohibiting liberal reform and maintained an inefficient centralized economy. The USSR, despite facing massive economic difficulties, was involved in a costly arms race with the United States under President Ronald Reagan. Regardless, the USSR began to crumble as liberal reforms proved difficult to handle and capitalist changes to the centralized economy were badly transitioned and caused major problems. After a series of revolutions in Soviet Bloc states, and a failed coup by conservative elements opposed to the ongoing reforms, on New Year’s Eve 1991 the Soviet Union collapsed and the Cold War came to an end.
2 August 1990 – 28 February 1991
The Gulf War (2 August 1990 – 28 February 1991), codenamed Operation Desert Shield (2 August 1990 – 17 January 1991) for operations leading to the buildup of troops and defense of Saudi Arabia and Operation Desert Storm (17 January 1991 – 28 February 1991) in its combat phase, was a war waged by coalition forces from 35 nations led by the United States against Iraq in response to Iraq’s invasion and annexation of Kuwait.
The war is also known under other names, such as the Persian Gulf War, First Gulf War, Gulf War I, Kuwait War, First Iraq War or Iraq War, before the term "Iraq War" became identified instead with the 2003 Iraq War (also referred to in the US as "Operation Iraqi Freedom"). The Iraqi Army's occupation of Kuwait that began 2 August 1990 was met with international condemnation and brought immediate economic sanctions against Iraq by members of the UN Security Council. Together with the UK's prime minister Margaret Thatcher (who had fiercely resisted the invasion of the Falkland Islands by Argentina a decade earlier), George H. W. Bush deployed US forces into Saudi Arabia and urged other countries to send their own forces to the scene. An array of nations joined the coalition, forming the largest military alliance since World War II. The great majority of the coalition's military forces were from the US, with Saudi Arabia, the United Kingdom and Egypt as leading contributors, in that order. Kuwait and Saudi Arabia paid around US$32 billion of the US$60 billion cost.
The war was marked by the introduction of live news broadcasts from the front lines of the battle, principally by the US network CNN. The war has also earned the nickname Video Game War after the daily broadcast of images from cameras on board US bombers during Operation Desert Storm.
The initial conflict to expel Iraqi troops from Kuwait began with an aerial and naval bombardment on 17 January 1991, continuing for five weeks. This was followed by a ground assault on 24 February. This was a decisive victory for the coalition forces, who liberated Kuwait and advanced into Iraqi territory. The coalition ceased its advance and declared a ceasefire 100 hours after the ground campaign started. Aerial and ground combat was confined to Iraq, Kuwait, and areas on Saudi Arabia’s border. Iraq launched Scud missiles against coalition military targets in Saudi Arabia and against Israel.
The European Economic Community (EEC) was a regional organisation which aimed to bring about economic integration among its member states. It was created by the Treaty of Rome of 1957. Upon the formation of the European Union (EU) in 1993, the EEC was incorporated and renamed as the European Community (EC). In 2009 the EC’s institutions were absorbed into the EU’s wider framework and the community ceased to exist.
The Community’s initial aim was to bring about economic integration, including a common market and customs union, among its six founding members: Belgium, France, Italy, Luxembourg, the Netherlands and West Germany. It gained a common set of institutions along with the European Coal and Steel Community (ECSC) and the European Atomic Energy Community (EURATOM) as one of the European Communities under the 1965 Merger Treaty (Treaty of Brussels). In 1993, a complete single market was achieved, known as the internal market, which allowed for the free movement of goods, capital, services, and people within the EEC. In 1994, the internal market was formalised by the EEA agreement. This agreement also extended the internal market to include most of the member states of the European Free Trade Association, forming the European Economic Area covering 15 countries.
Upon the entry into force of the Maastricht Treaty in 1993, the EEC was renamed the European Community to reflect that it covered a wider range than economic policy. This was also when the three European Communities, including the EC, were collectively made to constitute the first of the three pillars of the European Union, which the treaty also founded. The EC existed in this form until it was abolished by the 2009 Treaty of Lisbon, which incorporated the EC's institutions into the EU's wider framework and provided that the EU would "replace and succeed the European Community".
The EEC was also known as the Common Market in the English-speaking countries and sometimes referred to as the European Community even before it was officially renamed as such in 1993.
The North American Free Trade Agreement (NAFTA; Spanish: Tratado de Libre Comercio de América del Norte, TLCAN; French: Accord de libre-échange nord-américain, ALÉNA) is an agreement signed by Canada, Mexico, and the United States, creating a trilateral trade bloc in North America. The agreement came into force on January 1, 1994. It superseded the Canada–United States Free Trade Agreement between the U.S. and Canada.
NAFTA has two supplements: the North American Agreement on Environmental Cooperation (NAAEC) and the North American Agreement on Labor Cooperation (NAALC).
Most economic analyses indicate that NAFTA has been beneficial to the North American economies and the average citizen, but has harmed a small minority of workers in industries exposed to trade competition. Economists hold that withdrawing from NAFTA, or renegotiating it in a way that re-establishes trade barriers, would adversely affect the U.S. economy and cost jobs; Mexico, however, would be affected far more severely by job losses and reduced economic growth over both the short and long term.
The Mexican peso crisis was a currency crisis sparked by the Mexican government’s sudden devaluation of the peso against the U.S. dollar in December 1994, which became one of the first international financial crises ignited by capital flight.
During the 1994 presidential election, the incumbent administration embarked on expansionary fiscal and monetary policy. The Mexican treasury began issuing short-term debt instruments denominated in domestic currency with a guaranteed repayment in U.S. dollars, attracting foreign investors. Mexico enjoyed investor confidence and new access to international capital following its signing of the North American Free Trade Agreement (NAFTA). However, a violent uprising in the state of Chiapas, as well as the assassination of the presidential candidate Luis Donaldo Colosio, resulted in political instability, causing investors to place an increased risk premium on Mexican assets.
In response, the Mexican central bank intervened in the foreign exchange markets to maintain the Mexican peso’s peg to the U.S. dollar by issuing dollar-denominated public debt to buy pesos. The peso’s strength caused demand for imports to increase, resulting in a trade deficit. Speculators recognized an overvalued peso and capital began flowing out of Mexico to the United States, increasing downward market pressure on the peso. Under election pressures, Mexico purchased its own treasury securities to maintain its money supply and avert rising interest rates, drawing down the bank’s dollar reserves. Supporting the money supply by buying more dollar-denominated debt while simultaneously honoring such debt depleted the bank’s reserves by the end of 1994.
The central bank devalued the peso on December 20, 1994, and foreign investors' fear led to an even higher risk premium. To discourage the resulting capital flight, the bank raised interest rates, but the higher costs of borrowing merely hurt economic growth. Unable to sell new issues of public debt or efficiently purchase dollars with devalued pesos, Mexico faced a default. Two days later, the bank allowed the peso to float freely, after which it continued to depreciate. The Mexican economy experienced severe inflation of around 52%, and mutual funds began liquidating Mexican assets as well as emerging market assets in general. The effects spread to economies in Asia and the rest of Latin America. The United States organized a $50 billion bailout for Mexico in January 1995, administered by the IMF with the support of the G7 and the Bank for International Settlements. In the aftermath of the crisis, several of Mexico's banks collapsed amidst widespread mortgage defaults. The Mexican economy experienced a severe recession, and poverty and unemployment increased.
Netscape Navigator was the name of Netscape's web browser from versions 1.0 through 4.8. The first beta versions of the browser were released in 1994, known first as Mosaic and then as Mosaic Netscape, until a legal challenge from the National Center for Supercomputing Applications (makers of NCSA Mosaic, which many of Netscape's founders had helped develop) led to the name change to Netscape Navigator. The company's name also changed from Mosaic Communications Corporation to Netscape Communications Corporation.
The browser was easily the most advanced available and was therefore an instant success, becoming market leader while still in beta. Netscape’s feature-count and market share continued to grow rapidly after version 1.0 was released. Version 2.0 added a full mail reader called Netscape Mail, thus transforming Netscape from a mere web browser to an Internet suite. During this period, both the browser and the suite were known as Netscape Navigator. Around the same time, AOL started bundling their software with Microsoft’s Internet Explorer.
The World Trade Organization (WTO) is an intergovernmental organization that regulates international trade. The WTO officially commenced on 1 January 1995 under the Marrakesh Agreement, signed by 123 nations on 15 April 1994, replacing the General Agreement on Tariffs and Trade (GATT), which had commenced in 1948. It is the largest international economic organization in the world. The WTO deals with regulation of trade in goods, services and intellectual property between participating countries by providing a framework for negotiating trade agreements and a dispute resolution process aimed at enforcing participants' adherence to WTO agreements, which are signed by representatives of member governments and ratified by their parliaments. Most of the issues that the WTO focuses on derive from previous trade negotiations, especially from the Uruguay Round (1986–1994).
The WTO's current Director-General is Roberto Azevêdo, who leads a staff of over 600 people in Geneva, Switzerland. A trade facilitation agreement, part of the Bali Package of decisions, was agreed by all members on 7 December 2013, the first comprehensive agreement in the organization's history. On 23 January 2017, the amendment to the WTO Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement marked the first time since the organization opened its doors in 1995 that WTO accords had been amended; this change should secure for developing countries a legal pathway to access affordable remedies under WTO rules.
The Asian financial crisis was a period of financial crisis that gripped much of East Asia beginning in July 1997 and raised fears of a worldwide economic meltdown due to financial contagion.
The crisis started in Thailand (known in Thailand as the Tom Yum Goong crisis; Thai: วิกฤตต้มยำกุ้ง) with the financial collapse of the Thai baht after the Thai government was forced to float the baht due to lack of foreign currency to support its currency peg to the U.S. dollar. At the time, Thailand had acquired a burden of foreign debt that made the country effectively bankrupt even before the collapse of its currency. As the crisis spread, most of Southeast Asia and Japan saw slumping currencies, devalued stock markets and other asset prices, and a precipitous rise in private debt.
Indonesia, South Korea, and Thailand were the countries most affected by the crisis. Hong Kong, Laos, Malaysia and the Philippines were also hurt by the slump. Brunei, China, Singapore, Taiwan, and Vietnam were less affected, although all suffered from a loss of demand and confidence throughout the region. Japan was also affected, though less significantly.
Foreign debt-to-GDP ratios rose from 100% to 167% in the four large Association of Southeast Asian Nations (ASEAN) economies in 1993–96, then shot up beyond 180% during the worst of the crisis. In South Korea, the ratios rose from 13% to 21% and then as high as 40%, while the other northern newly industrialized countries fared much better. Only in Thailand and South Korea did debt service-to-exports ratios rise.
Although most of the governments of Asia had seemingly sound fiscal policies, the International Monetary Fund (IMF) stepped in to initiate a $40 billion program to stabilize the currencies of South Korea, Thailand, and Indonesia, economies particularly hard hit by the crisis. The efforts to stem a global economic crisis did little to stabilize the domestic situation in Indonesia, however. After 30 years in power, President Suharto was forced to step down on 21 May 1998 in the wake of widespread rioting that followed sharp price increases caused by a drastic devaluation of the rupiah. The effects of the crisis lingered through 1998. In 1998 growth in the Philippines dropped to virtually zero. Only Singapore and Taiwan proved relatively insulated from the shock, but both suffered serious hits in passing, the former due to its size and geographical location between Malaysia and Indonesia. By 1999, however, analysts saw signs that the economies of Asia were beginning to recover. After the 1997 Asian Financial Crisis, economies in the region worked toward financial stability and better financial supervision.
Until 1999, Asia attracted almost half of the total capital inflow into developing countries. The economies of Southeast Asia in particular maintained high interest rates attractive to foreign investors looking for a high rate of return. As a result, the region's economies received a large inflow of money and experienced a dramatic run-up in asset prices. At the same time, the regional economies of Thailand, Malaysia, Indonesia, Singapore and South Korea experienced high GDP growth rates of 8–12% in the late 1980s and early 1990s. This achievement was widely acclaimed by financial institutions, including the IMF and the World Bank, and was known as part of the "Asian economic miracle".
The Russian financial crisis (also called Ruble crisis or the Russian Flu) hit Russia on 17 August 1998. It resulted in the Russian government and the Russian Central Bank devaluing the ruble and defaulting on its debt. The crisis had severe impacts on the economies of many neighboring countries. Meanwhile, James Cook, the senior vice president of The U.S. Russia Investment Fund, suggested the crisis had the positive effect of teaching Russian banks to diversify their assets.
On 17 August 1998, the Russian government devalued the ruble, defaulted on domestic debt, and declared a moratorium on repayment of foreign debt. On that day the Russian government and the Central Bank of Russia issued a "Joint Statement" announcing, in essence, that certain state securities (GKOs and OFZs) would be transformed into new securities and that, in addition to widening the currency band, the authorities intended to allow the RUB/USD rate to move more freely within the wider band.
At the time, the Moscow Interbank Currency Exchange (or "MICEX") set a daily "official" exchange rate through a series of interactive auctions based on written bids submitted by buyers and sellers. When the buy and sell prices matched, this "fixed" or "settled" the official MICEX exchange rate, which would then be published by Reuters. The MICEX rate was (and is) commonly used by banks and currency dealers worldwide as the reference exchange rate for transactions involving the Russian ruble and foreign currencies.
From 17 to 25 August 1998, the ruble steadily depreciated on the MICEX, moving from 6.43 to 7.86 RUB/USD. On 26 August 1998, the Central Bank terminated dollar-ruble trading on the MICEX, and the MICEX did not fix a ruble-dollar rate that day.
On 2 September 1998 the Central Bank of the Russian Federation decided to abandon the "floating peg" policy and float the ruble freely. By 21 September 1998 the exchange rate had reached 21 rubles per US dollar, meaning the ruble had lost two-thirds of its value in less than a month.
On 28 September 1998 Boris Fyodorov was discharged from the position of the Head of the State Tax Service.
The moratorium imposed by the Joint Statement expired on 15 November 1998, and the Russian government and Central Bank did not renew it.
Long-Term Capital Management L.P. (LTCM) was a hedge fund management firm based in Greenwich, Connecticut that used absolute-return trading strategies combined with high financial leverage. The firm’s master hedge fund, Long-Term Capital Portfolio L.P., collapsed in the late 1990s, leading to an agreement on September 23, 1998, among 16 financial institutions—which included Bankers Trust, Barclays, Bear Stearns, Chase Manhattan Bank, Credit Agricole, Credit Suisse First Boston, Deutsche Bank, Goldman Sachs, JP Morgan, Lehman Brothers, Merrill Lynch, Morgan Stanley, Paribas, Salomon Smith Barney, Societe Generale, and UBS—for a $3.6 billion recapitalization (bailout) under the supervision of the Federal Reserve.
LTCM was founded in 1994 by John W. Meriwether, the former vice-chairman and head of bond trading at Salomon Brothers. Members of LTCM's board of directors included Myron S. Scholes and Robert C. Merton, who shared the 1997 Nobel Memorial Prize in Economic Sciences for a "new method to determine the value of derivatives". Initially successful, with annualized returns (after fees) of over 21% in its first year, 43% in the second and 41% in the third, LTCM lost $4.6 billion in less than four months in 1998 following the 1997 Asian financial crisis and the 1998 Russian financial crisis, requiring financial intervention by the Federal Reserve; the fund was liquidated and dissolved in early 2000.
The euro (sign: €; code: EUR) is the official currency of the European Union. Currently 19 of 28 member states use the euro (eurozone). It is the second most traded currency in the foreign exchange market after the United States dollar. The euro is subdivided into 100 cents (defined as eurocent).
The currency is also officially used by the institutions of the European Union and four other European countries, as well as unilaterally by two others, and is consequently used daily by some 337 million Europeans as of 2015. Outside Europe, a number of overseas territories of EU members also use the euro as their currency. Additionally, 210 million people worldwide as of 2013 use currencies pegged to the euro.
The euro is the second largest reserve currency as well as the second most traded currency in the world after the United States dollar. As of January 2017, with more than €1.1 trillion in circulation, the euro has one of the highest combined values of banknotes and coins in circulation in the world, having surpassed the US dollar.
The name euro was officially adopted on 16 December 1995 in Madrid. The euro was introduced to world financial markets as an accounting currency on 1 January 1999, replacing the former European Currency Unit (ECU) at a ratio of 1:1 (US$1.1743). Physical euro coins and banknotes entered circulation on 1 January 2002, making it the day-to-day operating currency of its original members, and by May 2002 it had completely replaced the former currencies. The euro subsequently dropped to US$0.8252 within two years (26 October 2000), but it has traded above the US dollar since the end of 2002, peaking at US$1.6038 on 18 July 2008. Since late 2009, the euro has been immersed in the European sovereign-debt crisis, which has led to the creation of the European Financial Stability Facility as well as other reforms aimed at stabilising the currency. In July 2012, the euro fell below US$1.21 for the first time in two years, following concerns raised over Greek debt and Spain's troubled banking sector.
In the 1960s the Office of the Comptroller of the Currency issued aggressive interpretations of Glass–Steagall to permit national banks to engage in certain securities activities. Although most of these interpretations were overturned by court decisions, by the late 1970s bank regulators began issuing Glass–Steagall interpretations that were upheld by courts and that permitted banks and their affiliates to engage in an increasing variety of securities activities. Starting in the 1960s banks and non-banks developed financial products that blurred the distinction between banking and securities products, as they increasingly competed with each other.
Separately, starting in the 1980s, Congress debated bills to repeal Glass–Steagall’s affiliation provisions (Sections 20 and 32). In 1999 Congress passed the Gramm–Leach–Bliley Act, also known as the Financial Services Modernization Act of 1999, to repeal them. Eight days later, President Bill Clinton signed it into law.
The dot-com bubble (also known as the dot-com boom, the dot-com crash, the Y2K crash, the Y2K bubble, the tech bubble, the Internet bubble, the dot-com collapse, and the information technology bubble) was a historic economic bubble and period of excessive speculation that occurred roughly from 1997 to 2001, a period of extreme growth in the usage and adaptation of the Internet by businesses and consumers. During this period, many Internet-based companies, commonly referred to as dot-coms, were founded, many of which failed.
During 2000–2002, the bubble collapsed. Some companies, such as Pets.com and Webvan, failed completely and shut down. Others, such as Cisco, whose stock declined by 86%, and Qualcomm, lost a large portion of their market capitalization but survived, and some companies, such as eBay and Amazon.com, later recovered and surpassed their stock price peaks during the bubble.
Several companies that produced network equipment were irrevocably damaged by the debt taken on to fund their expansion and went bankrupt, causing what is known as the telecoms crash. However, some communications companies that supplied equipment and outsourced manufacturing, such as Cisco and Qualcomm, were able to survive. Similarly, in Europe, mobile phone companies overspent on 3G licences, which drove them deep into debt and contributed to the telecoms crash. The investments in infrastructure were far out of proportion to both current and projected cash flow.
WorldCom, led by Bernard Ebbers, was the subject of accounting scandals. WorldCom’s stock price fell drastically when the accounting scandal was publicized, and, in July 2002, it filed the largest corporate bankruptcy ever at the time.
The September 11 attacks (also referred to as 9/11) were a series of four coordinated terrorist attacks by the Islamic terrorist group al-Qaeda on the United States on the morning of Tuesday, September 11, 2001. The attacks killed 2,996 people, injured over 6,000 others, and caused at least $10 billion in infrastructure and property damage.
Four passenger airliners operated by two major U.S. passenger air carriers (United Airlines and American Airlines) — all of which departed from airports in the northeastern United States bound for California — were hijacked by 19 al-Qaeda terrorists. Two of the planes, American Airlines Flight 11 and United Airlines Flight 175, were crashed into the North and South towers, respectively, of the World Trade Center complex in New York City. Within an hour and 42 minutes, both 110-story towers collapsed, with debris and the resulting fires causing partial or complete collapse of all other buildings in the World Trade Center complex, including the 47-story 7 World Trade Center tower, as well as significant damage to ten other large surrounding structures. A third plane, American Airlines Flight 77, was crashed into the Pentagon (the headquarters of the United States Department of Defense) in Arlington County, Virginia, leading to a partial collapse of the building’s western side. The fourth plane, United Airlines Flight 93, was initially steered toward Washington, D.C., but crashed into a field in Stonycreek Township near Shanksville, Pennsylvania, after its passengers tried to overcome the hijackers. 9/11 was the single deadliest incident for firefighters and law enforcement officers in the history of the United States, with 343 and 72 killed respectively.
Suspicion quickly fell on al-Qaeda. The United States responded by launching the War on Terror and invading Afghanistan to depose the Taliban, which had harbored al-Qaeda. Many countries strengthened their anti-terrorism legislation and expanded the powers of law enforcement and intelligence agencies to prevent terrorist attacks. Although al-Qaeda’s leader, Osama bin Laden, initially denied any involvement, in 2004 he claimed responsibility for the attacks. Al-Qaeda and bin Laden cited U.S. support of Israel, the presence of U.S. troops in Saudi Arabia, and sanctions against Iraq as motives. After evading capture for almost a decade, Osama bin Laden was located and killed in Pakistan by SEAL Team Six of the U.S. Navy in May 2011.
The destruction of the World Trade Center and nearby infrastructure caused serious damage to the economy of Lower Manhattan and had a significant effect on global markets, resulting in the closing of Wall Street until September 17 and the civilian airspace in the U.S. and Canada until September 13. Many closings, evacuations, and cancellations followed, out of respect or fear of further attacks. Cleanup of the World Trade Center site was completed in May 2002, and the Pentagon was repaired within a year. On November 18, 2006, construction of One World Trade Center began at the World Trade Center site. The building was officially opened on November 3, 2014. Numerous memorials have been constructed, including the National September 11 Memorial & Museum in New York City, the Pentagon Memorial in Arlington County, Virginia, and the Flight 93 National Memorial in a field in Stonycreek Township near Shanksville, Pennsylvania.
China became a member of the World Trade Organization (WTO) on 11 December 2001. The admission of China to the WTO was preceded by a lengthy process of negotiations and required significant changes to the Chinese economy. It signified China’s deeper integration into the world economy.
Until the 1970s, China’s economy was managed by the communist government and was kept closed from other economies. Together with political reforms, China in the early 1980s began to open its economy and signed a number of regional trade agreements. China gained observer status with GATT and from 1986, began working towards joining that organization. China aimed to be included as a WTO founding member (which would validate it as a world economic power), but this attempt was thwarted because the United States, European countries, and Japan requested that China first undertake various reforms, including tariff reductions, the opening of its markets, and changes to its industrial policies.
In March 2001, the federal popular initiative „Yes to Europe!“ (« Oui à l’Europe ! »), which called for the opening of accession negotiations with the EU, was rejected by 76.8% of voters.
The Iraq War was a protracted armed conflict that began in 2003 with the invasion of Iraq by a United States-led coalition that overthrew the government of Saddam Hussein. The conflict continued for much of the next decade as an insurgency emerged to oppose the occupying forces and the post-invasion Iraqi government. An estimated 151,000 to 600,000 or more Iraqis were killed in the first 3–4 years of conflict. The US became re-involved in 2014 at the head of a new coalition; the insurgency and many dimensions of the civil armed conflict continue. The invasion occurred as part of a declared war against international terrorism and its sponsors under the administration of US President George W. Bush following the September 11 terror attacks.
The invasion began on 20 March 2003, with the US, joined by the United Kingdom and several coalition allies, launching a „shock and awe“ bombing campaign. Iraqi forces were quickly overwhelmed as U.S. forces swept through the country. The invasion led to the collapse of the Ba’athist government; Saddam was captured during Operation Red Dawn in December of that same year and executed by a military court three years later. However, the power vacuum following Saddam’s demise and the mismanagement of the occupation led to widespread sectarian violence between Shias and Sunnis, as well as a lengthy insurgency against U.S. and coalition forces. Many violent insurgent groups were supported by Iran and al-Qaeda in Iraq. The United States responded with a troop surge in 2007. The winding down of U.S. involvement in Iraq accelerated under President Barack Obama. The US formally withdrew all combat troops from Iraq by December 2011.
The Bush administration based its rationale for the war principally on the assertion that Iraq, which had been viewed by the US as a rogue state since the Persian Gulf War, possessed weapons of mass destruction (WMDs) and that the Iraqi government posed an immediate threat to the United States and its coalition allies. Select U.S. officials accused Saddam of harbouring and supporting al-Qaeda, while others cited the desire to end a repressive dictatorship and bring democracy to the people of Iraq. After the invasion, no substantial evidence was found to verify the initial claims about WMDs, while claims of Iraqi officials collaborating with al-Qaeda were proven false. The rationale and the misrepresentation of US prewar intelligence faced heavy criticism both domestically and internationally, with President Bush falling from the record-high approval ratings he enjoyed after 9/11 to become one of the most unpopular presidents in US history.
In the aftermath of the invasion, Iraq held multi-party elections in 2005. Nouri al-Maliki became Prime Minister in 2006 and remained in office until 2014. The al-Maliki government enacted policies that were widely seen as having the effect of alienating the country’s Sunni minority and worsening sectarian tensions. In the summer of 2014, the Islamic State of Iraq and the Levant (ISIL) launched a military offensive in Northern Iraq and declared a worldwide Islamic caliphate, eliciting another military response from the United States and its allies. The Iraq War caused over a hundred thousand civilian deaths and tens of thousands of military deaths, the majority of which occurred as a result of the insurgency and civil conflicts between 2004 and 2007.
Historically, decisions about where to place financial assets were based on various criteria, with financial return being predominant. However, there have always been plenty of other criteria for deciding where to place one’s money, from political considerations to heavenly reward. It was in the 1950s and 60s that the vast pension funds managed by the trade unions recognised the opportunity to affect the wider social environment with their capital assets: in the United States, the International Brotherhood of Electrical Workers invested its not inconsiderable capital in developing affordable housing projects, whilst the United Mine Workers invested in health facilities.
In the 1970s, the worldwide abhorrence of the apartheid regime in South Africa led to one of the most renowned examples of selective disinvestment along ethical lines. As a response to a growing call for sanctions against the regime, the Reverend Leon Sullivan, a board member of General Motors in the United States, drew up a code of conduct in 1971 for doing business with South Africa. What became known as the Sullivan Principles attracted a great deal of attention, and several reports were commissioned by the US government to examine how many US companies were investing in South African companies that contravened the code. The conclusions of the reports led to mass disinvestment by the US from many South African companies. The resulting pressure applied to the South African regime by its business community added great weight to the growing impetus for the system of apartheid to be abandoned.
In the 1960s and 1970s, Milton Friedman, in direct response to the prevailing mood of philanthropy, argued that social responsibility adversely affects a firm’s financial performance and that regulation and interference from „big government“ will always damage the macro economy. His contention that the valuation of a company or asset should be predicated almost exclusively on the pure bottom line (with the costs incurred by social responsibility deemed non-essential) underwrote the belief prevalent for most of the 20th century. Towards the end of the century, however, a contrary theory began to gain ground. In 1988, James S. Coleman wrote an article in the American Journal of Sociology entitled „Social Capital in the Creation of Human Capital“, which challenged the dominance of the concept of self-interest in economics and introduced the concept of social capital into the measurement of value.
A new form of pressure was also applied: acting in coalition with environmental groups, investors used the leveraging power of their collective assets to encourage companies and capital markets to incorporate environmental and social challenges into their day-to-day decision-making. One such group, the Ceres coalition, today represents one of the world’s strongest investment groups, with over 60 institutional investors from the U.S. and Europe managing over $4 trillion in assets.
Although the concept of selective investment was not new (the demand side of the investment market has a long history of investors wishing to control the effects of their investments), what began to develop at the turn of the 21st century was a response from the supply side of the equation. The investment market began to pick up on the growing need for products geared towards what was becoming known as the responsible investor. In 1998, John Elkington, co-founder of the business consultancy SustainAbility, published Cannibals with Forks: The Triple Bottom Line of 21st Century Business, in which he identified a newly emerging cluster of non-financial considerations that should be included among the factors determining a company’s or equity’s value. He coined the phrase „triple bottom line“, referring to the financial, environmental and social factors included in the new calculation. At the same time, the strict division between the environmental sector and the financial sector began to break down. In the City of London in 2002, Chris Yates-Smith, a member of the international panel chosen to oversee the technical construction, accreditation and distribution of the Organic Production Standard and founder of one of the City of London’s leading branding consultancies, established one of the first environmental finance research groups. The informal group of financial leaders, City lawyers and environmental stewardship NGOs became known as The Virtuous Circle; its brief was to examine the nature of the correlation between environmental and social standards and financial performance. Several of the world’s big banks and investment houses began to respond to the growing interest in the ESG investment market with the provision of sell-side services; among the first were the Brazilian bank Unibanco and Mike Tyrell’s Jupiter Fund in London, which used ESG-based research to provide both HSBC and Citicorp with selective investment services in 2001.
In the early years of the new millennium, the major part of the investment market still accepted the historical assumption that ethically directed investments were by their nature likely to reduce financial return. Philanthropy was not known to be a highly profitable business, and Friedman had provided a widely accepted academic basis for the argument that the costs of behaving in an ethically responsible manner would outweigh the benefits. However, these assumptions were beginning to be fundamentally challenged. In 1998, two journalists, Robert Levering and Milton Moskowitz, brought out the Fortune 100 Best Companies to Work For, initially a listing in Fortune magazine, then a book compiling a list of the best-practising companies in the United States with regard to corporate social responsibility and how their financial performance fared as a result. Of the three areas of concern that ESG represented, the environmental and the social had received most of the public and media attention, not least because of growing fears concerning climate change. Moskowitz brought the spotlight onto the corporate governance aspect of responsible investment. His analysis concerned how the companies were managed, what the stockholder relationships were and how the employees were treated. He argued that improving corporate governance procedures did not damage financial performance; on the contrary, it maximised productivity, ensured corporate efficiency and led to the sourcing and utilisation of superior management talent. In the early 2000s, the success of Moskowitz’s list and its impact on companies’ ease of recruitment and brand reputation began to challenge the historical assumptions regarding the financial effect of ESG factors. In 2011, Alex Edmans, a finance professor at Wharton, published a paper in the Journal of Financial Economics showing that the 100 Best Companies to Work For outperformed their peers in stock returns by 2–3% a year over 1984–2009 and delivered earnings that systematically exceeded analyst expectations.
In 2005, however, a quantum leap was taken in the integration of ESG considerations into the mainstream investment market. The United Nations Environment Programme Finance Initiative commissioned a report from the international law firm Freshfields Bruckhaus Deringer on the interpretation of the law with respect to investors and ESG issues. The conclusions of the report were startling. Freshfields concluded that not only was it permissible for investment companies to integrate ESG issues into investment analysis but it was arguably part of their fiduciary duty to do so. In 2014, the Law Commission (England and Wales) confirmed that there was no bar on pension trustees and others from taking account of ESG factors when making investment decisions.
Where Friedman had provided the academic support for the argument that integrating ESG-type factors into financial practice would reduce financial performance, numerous reports began to appear in the early years of the century that supported arguments to the contrary. In 2006, Oxford University’s Michael Barnett and New York University’s Robert Salomon published an influential study which concluded that the two sides of the argument might even be complementary: they propounded a curvilinear relationship between social responsibility and financial performance, in which both highly selective and fully non-selective investment practices could maximise the financial performance of a portfolio, while the only route likely to damage performance was a middle way of moderately selective investment. Besides the large investment companies and banks taking an interest in ESG matters, an array of investment companies specifically dealing with responsible investment and ESG-based portfolios began to spring up throughout the financial world.
The development of ESG factors as considerations in investment analysis is now widely assumed by the investment industry to be all but inevitable. The evidence of a link between performance on ESG issues and financial performance is growing, and the combination of fiduciary duty and a wide recognition of the need for the long-term sustainability of investments means that environmental, social and corporate governance concerns are becoming increasingly important in the investment market. ESG has become less a question of philanthropy than of practicality.
There has been wide uncertainty and debate as to what to call the inclusion of intangible factors relating to the sustainability and ethical impact of investments. Names have ranged from the early use of buzz words such as „green“ and „eco“, to the wide array of possible descriptions for the types of investment analysis – „responsible investment“, „socially responsible investment“ (SRI), „ethical“, „extra-financial“, „long horizon investment“ (LHI), „enhanced business“, „corporate health“, „non-traditional“, and others. But the term ESG has now become fairly widely accepted as the predominant label. A survey of 350 global investment professionals conducted in 2008 by AXA Investment Managers and AQ Research, led by Dr Raj Thamotheram, director of responsible investment at AXA, concluded that although ESG and „sustainable“ were the most commonly used names for the new data integrated into mainstream investment analysis, the vast majority of professionals preferred the term ESG to describe such data.
Interest in ESG (Environmental, Social, Governance) and sustainable investing runs strong among plan participants, according to Natixis’ 2016 Survey of Defined Contribution Plan Participants. In fact, more than six in ten agreed they would be more likely to contribute, or to increase their contributions, to their retirement plan if they knew their investments were doing social good.
An Inconvenient Truth is a 2006 American documentary film directed by Davis Guggenheim about former United States Vice President Al Gore’s campaign to educate citizens about global warming via a comprehensive slide show that, by his own estimate made in the film, he has given more than a thousand times. The idea to document his efforts came from producer Laurie David, who saw his presentation at a town-hall meeting on global warming, which coincided with the opening of The Day After Tomorrow. Laurie David was so inspired by Gore’s slide show that she, with producer Lawrence Bender, met with Guggenheim to adapt the presentation into a film.
Premiering at the 2006 Sundance Film Festival and opening in New York City and Los Angeles on May 24, 2006, the documentary was a critical and box office success, winning two Academy Awards for Best Documentary Feature and Best Original Song. The film grossed $24 million in the U.S. and $26 million at the international box office, becoming the tenth highest grossing documentary film to date in the United States. Since the film’s release, An Inconvenient Truth has been credited for raising international public awareness of global warming and reenergizing the environmental movement. The documentary has also been included in science curricula in schools around the world, which has spurred some controversy. A sequel to the film, titled An Inconvenient Sequel: Truth to Power, was released on July 28, 2017.
The financial crisis of 2007–2008, also known as the global financial crisis and the 2008 financial crisis, is considered by many economists to have been the worst financial crisis since the Great Depression of the 1930s.
It began in 2007 with a crisis in the subprime mortgage market in the US, and developed into a full-blown international banking crisis with the collapse of the investment bank Lehman Brothers on September 15, 2008. Excessive risk-taking by banks such as Lehman Brothers helped to magnify the financial impact globally. Massive bail-outs of financial institutions and other palliative monetary and fiscal policies were employed to prevent a possible collapse of the world financial system. The crisis was nonetheless followed by a global economic downturn, the Great Recession. The European debt crisis, a crisis in the banking system of the European countries using the euro, followed later.
The Dodd–Frank Act was enacted in the US in the aftermath of the crisis to „promote the financial stability of the United States“. The Basel III capital and liquidity standards were adopted by countries around the world.
Impact investing refers to investments „made into companies, organizations, and funds with the intention to generate a measurable, beneficial social or environmental impact alongside a financial return“. The term was coined in 2007 by the Rockefeller Foundation, which went on to launch its Impact Investing initiative.
Impact investments provide capital to address social and/or environmental issues. They can be made in either emerging or developed markets, and depending on the goals of the investors, can „target a range of returns from below-market to above-market rates“. Impact investors actively seek to place capital in businesses, nonprofits, and funds in industries such as renewable energy, basic services including housing, healthcare, and education, microfinance, and sustainable agriculture. Impact investing occurs across asset classes; for example, private equity/venture capital, debt, and fixed income.
Institutional investors, notably North American and European development finance institutions, pension funds and endowments, have played a leading role in the development of impact investing.
A blockchain, originally block chain, is a continuously growing list of records, called blocks, which are linked and secured using cryptography. Each block typically contains a hash pointer as a link to a previous block, a timestamp and transaction data. By design, blockchains are inherently resistant to modification of the data. It is „an open, distributed ledger that can record transactions between two parties efficiently and in a verifiable and permanent way“. For use as a distributed ledger, a blockchain is typically managed by a peer-to-peer network collectively adhering to a protocol for validating new blocks. Once recorded, the data in any given block cannot be altered retroactively without the alteration of all subsequent blocks, which requires collusion of the network majority.
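To make the hash-pointer structure described above concrete, the following is a minimal, illustrative sketch in Python. It is not the bitcoin implementation: it omits proof-of-work, Merkle trees and the peer-to-peer validation protocol, and all class and field names here are hypothetical choices for illustration. It shows only the core idea that each block stores the hash of its predecessor, so a retroactive edit breaks every later link.

```python
import hashlib
import json
import time

class Block:
    """A single block, linked to its predecessor via that block's hash."""
    def __init__(self, index, previous_hash, transactions, timestamp=None):
        self.index = index
        self.previous_hash = previous_hash    # hash pointer to the prior block
        self.transactions = transactions      # the block's payload
        self.timestamp = timestamp or time.time()
        self.hash = self.compute_hash()

    def compute_hash(self):
        # Hash the block's contents; any change to the data changes this digest.
        payload = json.dumps({
            "index": self.index,
            "previous_hash": self.previous_hash,
            "transactions": self.transactions,
            "timestamp": self.timestamp,
        }, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

class Blockchain:
    def __init__(self):
        # The genesis block has no predecessor, so its pointer is a sentinel.
        self.chain = [Block(0, "0" * 64, [])]

    def add_block(self, transactions):
        previous = self.chain[-1]
        self.chain.append(Block(previous.index + 1, previous.hash, transactions))

    def is_valid(self):
        # Recompute every hash and check every link; a retroactive edit to
        # block k breaks the pointer stored in block k+1.
        for prev, curr in zip(self.chain, self.chain[1:]):
            if curr.hash != curr.compute_hash():
                return False
            if curr.previous_hash != prev.hash:
                return False
        return True

chain = Blockchain()
chain.add_block([{"from": "alice", "to": "bob", "amount": 5}])
chain.add_block([{"from": "bob", "to": "carol", "amount": 2}])
print(chain.is_valid())                          # True
chain.chain[1].transactions[0]["amount"] = 500   # tamper with history
print(chain.is_valid())                          # False: the edit is detected
```

Running the sketch prints True for the untouched chain and False after the tampering line: exactly the property that makes retroactive alteration detectable without a central authority.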
Blockchains are secure by design and are an example of a distributed computing system with high Byzantine fault tolerance. A blockchain can therefore achieve decentralized consensus. This makes blockchains potentially suitable for recording events, medical records, and other records-management activities, such as identity management, transaction processing, documenting provenance, food traceability and voting.
The first blockchain was conceptualized in 2008 by an anonymous person or group known as Satoshi Nakamoto and implemented in 2009 as a core component of bitcoin, where it serves as the public ledger for all transactions. The invention of the blockchain for bitcoin made it the first digital currency to solve the double-spending problem without the need for a trusted authority or central server. The bitcoin design has been the inspiration for other applications.
The European debt crisis (often also referred to as the Eurozone crisis or the European sovereign debt crisis) is a multi-year debt crisis that has been taking place in the European Union since the end of 2009. Several eurozone member states (Greece, Portugal, Ireland, Spain and Cyprus) were unable to repay or refinance their government debt or to bail out over-indebted banks under their national supervision without the assistance of third parties like other Eurozone countries, the European Central Bank (ECB), or the International Monetary Fund (IMF).
The detailed causes of the debt crisis varied. In several countries, private debts arising from a property bubble were transferred to sovereign debt as a result of banking system bailouts and government responses to slowing economies post-bubble. The structure of the eurozone as a currency union (i.e., one currency) without fiscal union (e.g., different tax and public pension rules) contributed to the crisis and limited the ability of European leaders to respond. European banks own a significant amount of sovereign debt, such that concerns regarding the solvency of banking systems or sovereigns are negatively reinforcing.
As concerns intensified in early 2010 and thereafter, leading European nations implemented a series of financial support measures such as the European Financial Stability Facility (EFSF) and the European Stability Mechanism (ESM). The ECB also contributed to solving the crisis by lowering interest rates and providing cheap loans of more than one trillion euros in order to maintain money flows between European banks. On 6 September 2012, the ECB calmed financial markets by announcing free, unlimited support for all eurozone countries involved in a sovereign-state bailout or precautionary programme from the EFSF/ESM, through yield-lowering Outright Monetary Transactions (OMT).
Return to economic growth and improved structural deficits enabled Ireland and Portugal to exit their bailout programmes in July 2014. Greece and Cyprus both managed to partly regain market access in 2014. Spain never officially received a bailout programme. Its rescue package from the ESM was earmarked for a bank recapitalisation fund and did not include financial support for the government itself.
The crisis has had significant adverse economic and labour-market effects, with unemployment rates in Greece and Spain reaching 27%, and was blamed for subdued economic growth, not only in the eurozone but across the entire European Union. As such, it can be argued to have had a major political impact on the ruling governments in 10 out of 19 eurozone countries, contributing to power shifts in Greece, Ireland, France, Italy, Portugal, Spain, Slovenia, Slovakia, Belgium and the Netherlands, as well as outside the eurozone, in the United Kingdom.
The Arab Spring (Arabic: الربيع العربي ar-Rabīʻ al-ʻArabī), also referred to as the Arab revolutions (Arabic: الثورات العربية aṯ-Ṯawrāt al-ʻArabiyyah), was a revolutionary wave of both violent and non-violent demonstrations, protests, riots, coups, foreign interventions, and civil wars in North Africa and the Middle East that began on 17 December 2010 in Tunisia with the Tunisian Revolution.
The effect of the Tunisian Revolution spread strongly to five other countries: Libya, Egypt, Yemen, Syria and Bahrain, where either the regime was toppled or major uprisings and social violence occurred, including civil wars or insurgencies. Sustained street demonstrations took place in Morocco, Iraq, Algeria, Iranian Khuzestan, Lebanon, Jordan, Kuwait, Oman and Sudan. Minor protests occurred in Djibouti, Mauritania, the Palestinian National Authority, Saudi Arabia, and Moroccan-controlled Western Sahara. A major slogan of the demonstrators in the Arab world was ash-shaʻb yurīd isqāṭ an-niẓām („the people want to bring down the regime“).
The wave of initial revolutions and protests faded by mid-2012, as many Arab Spring demonstrations were met with violent responses from authorities, as well as from pro-government militias and counter-demonstrators. These attacks were answered with violence from protestors in some cases. Large-scale conflicts resulted—the Syrian Civil War, Iraqi insurgency and the following civil war, the Egyptian Crisis and coup, the Libyan Civil War, and the Yemeni Crisis and following civil war.
A power struggle continued after the immediate response to the Arab Spring. While leadership changed and regimes were held accountable, power vacuums opened across the Arab world. Ultimately it came down to a contentious battle between a consolidation of power by religious elites and the growing support for democracy in many Muslim-majority states. The early hopes that these popular movements would end corruption, increase political participation, and bring about greater economic equity quickly collapsed in the wake of the counterrevolutionary moves of the deep state in Egypt, the regional and international interventions in Bahrain and Yemen, and the destructive civil wars in Syria and Libya.
Some have referred to the succeeding and still ongoing conflicts as the Arab Winter. As of January 2018, only the uprising in Tunisia has resulted in a transition to constitutional democratic governance.
Brexit is the prospective withdrawal of the United Kingdom (UK) from the European Union (EU).
In a referendum on 23 June 2016, 51.9% of the participating UK electorate (the turnout was 72.2% of the electorate) voted to leave the EU. On 29 March 2017, the UK government invoked Article 50 of the Treaty on the European Union. The UK is thus due to leave the EU on 29 March 2019.
UK Prime Minister Theresa May announced that the UK would not seek permanent membership of the single market or the customs union after leaving the EU and promised to repeal the European Communities Act 1972 and incorporate existing European Union law into UK domestic law. A new government department, the Department for Exiting the European Union (DExEU), was created in July 2016, with Eurosceptic David Davis appointed its first Secretary of State. Negotiations with the EU officially started in June 2017.
The UK joined the European Communities (EC) in 1973, with membership confirmed by a referendum in 1975. In the 1970s and 1980s, withdrawal from the EC was advocated mainly by Labour Party members and trade union figures. From the 1990s, the main advocates of withdrawal were the newly founded UK Independence Party (UKIP) and an increasing number of Eurosceptic Conservative Party members.
There is strong agreement among economists and a broad consensus in existing economic research that Brexit is likely to reduce the UK’s real per-capita income in the medium and long term. Studies on effects that have already materialised since the referendum show annual losses of £404 for the average British household and a loss of 1.3% of UK GDP. Brexit is likely to reduce immigration from European Economic Area (EEA) countries to the UK and poses challenges for UK higher education and academic research. The size of the „divorce bill“ (the sum of money demanded by the EU from the UK for the departure), the prospect of Scottish secession, Britain’s international agreements, relations with the Republic of Ireland, and the borders with France and between Gibraltar and Spain remain uncertain. The precise impact on the UK depends on whether it is a „hard“ Brexit (whereby the UK leaves the EU and does not join EFTA or the EEA) or a „soft“ Brexit (whereby the UK joins EFTA or the EEA, or enters into a special agreement with the EU that retains significant access to the EU’s single market).
The Good, the Bad and the Ugly
Corestone about the (re)rise of China:
Quoting a 1960s Italo-Western (The Good, the Bad and the Ugly, 1966) may seem an odd choice, and it is not meant to imply that China mirrors the Wild West of that era, when the rule of law was absent in vast areas of the US. However, when looking at China today, people familiar with the country do see ‘the Good, the Bad and the Ugly’ in certain areas of life, sectors of the economy and politics. Reflecting on all three and putting them in perspective gives a better basis for assessing and evaluating current economic and political developments.
«Swiss Pension System in Dire Straits»
An aging society, low or no investment returns, unrealistic expectations and regulatory demands: the Swiss pension system is overstretched. Digitization could provide vital solutions, Jens Pongratz says in an exclusive essay for finews.first.
Why just 'best-in-class' ain’t good enough
Corestone’s view in the Dutch Financial Investigator on why a more sophisticated and holistic approach to portfolio construction will lead to more stable and better outcomes for investors, and why this already requires an adjusted approach to screening and selection.
New fee models in asset management?
Is the time ripe for new fee models in the asset management industry? Some asset managers and investors seem to think so and have started introducing alternatives.
Aligning incentives between investors and asset managers has been a long-time challenge. The latest, potentially industry-disrupting attempt that has been making headlines since the beginning of the year comes from GPIF, the USD 1.5 trillion Japanese Government Pension Investment Fund.