Category Archives: Economics


Last month the city of Nelson, BC, said no to drive-thrus. There’s only one in the town anyway, but city councillors voted to prevent any more appearing. Councillor Deb Kozak described it as “a very Nelson” thing to do.

Nelson may be slightly off the mean when it comes to small towns—many a draft dodger settled there back in the Vietnam War era, and pot-growing allowed Nelson to better weather the downturn of the forest industry that occurred back in the 80s—but at the same time, dumping on drive-thrus is something that could only happen in a smaller urban centre.

The move is in support of controlling carbon pollution, of course: no more idling cars lined up down the block (hello, Fort McMurray?!). But what I like about it is that the new by-law obliges people to get out of their cars, to enjoy a little facetime with another human being, instead of leaning out their car window, shouting into a tinny speaker mounted in a plastic sign.

For all the degree of change being generated by the digital revolution, and for all the noise I’ve made about that change in this blog, there are two revolutions of recent decades that have probably had greater effect: the revolution in settlement patterns that we call urbanization, and the revolution in economic scale that we call globalization. Both are probably more evident in smaller cities and towns than anywhere else.

Grain elevators, Milestone, Saskatchewan, about 1928

Both of my parents grew up in truly small prairie towns; my mother in Gilbert Plains, Manitoba, present population about 750; my father in Sedgewick, Alberta, present population about 850. Sedgewick’s population has dropped some 4% in recent years, despite concurrent overall growth in Alberta of some 20%. Both these towns were among the hundreds arranged across the Canadian prairies, marked off by rust-coloured grain elevators rising above the horizon, set roughly every seven miles along the rail lines. The towns were spaced that way because half that distance was gauged doable by horse and wagon for all the surrounding farmers.

I grew up in Grande Prairie, Alberta, a town which officially became a city while I still lived there. The three blocks of Main Street that I knew were anchored at one end by the Co-op Store, where all the farmers shopped, and at the other by the pool hall, where all the young assholes like me hung out. In between were Lilge Hardware, operated by the Lilge brothers, Wilf and Clem, Joe’s Corner Coffee Shop, and Ludbrooks, which offered “variety” as “the spice of life,” and where we as kids would shop for board games, after saving our allowance money for months at a time.

Grande Prairie is virtually unrecognizable to me now; that is, it looks much like every other small and large city across the continent: the same ‘big box’ stores surround it as surround Prince George, and Regina, and Billings, Montana, I’m willing to bet. Instead of Lilge Hardware, Joe’s Corner Coffee Shop and Ludbrooks we have Walmart, Starbucks and Costco. This is what globalization looks like when it arrives in your own backyard.

Eighty per cent of Canadians live in urban centres now, as opposed to less than 30 per cent at the beginning of the 20th century. And those urban centres now look pretty much the same wherever you go, once the geography is removed. It’s a degree of change that snuck up on us far more stealthily than has the digital revolution, with its dizzying pace, but it’s a no less disruptive transformation.

I couldn’t wait to get out of Grande Prairie when I was a teenager. The big city beckoned with diversity, anonymity, and vigour. Maybe if I was young in Grande Prairie now I wouldn’t feel the same need, given that I could now access anything there that I could in the big city. A good thing? Bad thing?

There’s no saying. Certain opportunities still exist only in the truly big centres of course, cities like Tokyo, New York or London. If you want to make movies it’s still true that you better get yourself to Los Angeles. But they’re not about to ban drive-thrus in Los Angeles. And that’s too bad.

Brainstorming Becalmed

‘Brainstorming’ originated as a creative process back in the 50s, and it’s still remarkably popular today in both opinion and practice, especially within business circles.  The practice sees a number of people get together to ‘free associate’ and ‘toss out ideas’ in a fast-paced, noncritical context.  The emphasis is on quantity, not quality; the more ideas the better.

The belief behind brainstorming is that the group, once freed from the restraints of collective judgment, will come up with more and better ideas than will an individual working alone.

Except that it isn’t true.

Jessica Gale photo, morgueFile

This is for me perhaps the single most intriguing point made by Susan Cain in her recent book Quiet: The Power of Introverts in a World That Can’t Stop Talking.  According to Cain, studies dating as far back as 1963 have quite conclusively shown that, when it comes to either creativity or efficiency, working in groups produces fewer and poorer results than when people work in quiet, concentrated solitude.

Go figure.  I’m reminded of the likewise commonly held misconception that the ‘venting’ of anger or resentment is good for us.  This belief holds that when we suppress feelings like anger, when we ‘bottle it up,’ the effort leads to all sorts of possible afflictions, from ulcers to insomnia.  Women are held to be particularly vulnerable, because of greater societal expectations of ‘ladylike’ behavior.

Well, once again, for quite some time now, science has been definitively showing that venting anger feeds rather than diminishes the flame.  Anger is generally far more destructive—of both our health and our relationships—when it is expressed than when it is suppressed, when it is allowed to diffuse over time.

The implications of the ‘brainstorming doesn’t work’ finding are especially significant when it comes to matters like the physical layout of the workplace.  Most of us know that when we take on a creative challenge, any form of distraction or interruption, whether it be background noise or a phone call, can be an impediment to our best work.  Thus if employers wish to get the best results from their employees, it follows that those employees should be provided with an environment where quiet concentration is possible.  Chock-a-block cubicles in a noisy workspace fall far short of this mark, I would suggest, never mind the kind of collective open-space chaos that one often sees in the high-tech working world.

There is, however, one equally interesting corollary to the fallacy of face-to-face brainstorming.  Electronic brainstorming does seem to work.  The so-called ‘hive mind’ has validity.  When academics work together on research projects, the results tend to be more influential than when they work in greater isolation or face-to-face.  Wikis are after all a kind of electronic brainstorming, and they have been shown to produce outcomes that no individual could hope to.

The key here is of course that such online collaboration is essentially ‘brainstorming in solitude.’  Online teamwork can be accomplished from individual places supporting both silence and focus.  It also tends to happen at a much slower pace than the classic brainstorming session.  Online brainstorming (if we can even properly call it that) may be the optimum balance between individual and group work.

Multitasking is a related practice that has become the norm in the contemporary workplace, almost an admired skill.  We can proudly perform numerous tasks at once, keep various undertakings moving forward simultaneously.  It’s worth remembering, however, that we can never in fact pay full attention to two things at once, much less several things.  We have simply learned to switch rapidly from one to the other.  Someone now needs to do a study as to whether multitasking—juggling numerous pieces of fruit at once—does in fact deliver better results than tossing one apple at a time into the air, and thus being able to pay full and close attention to the challenge.  All at once may look flashier than one thing at a time, but is it actually more productive?

The quiet, never mind silence, that allows for focused and full attention is a prized commodity in today’s accelerated world.  The lesson here, it seems to me, is that this precious commodity may not only be good for the soul; it’s good for business.

Income Inequality Is Increasing Everywhere… Except Latin America

Income inequality—wherein the rich get richer and the poor poorer, relatively speaking—is increasing almost everywhere.  Even in the booming economies of China and India, the gap is growing.

A recent report by the Conference Board of Canada confirms that the condition exists here as well.  The report notes that income inequality peaked in the late 1990s, and that “even though higher commodity demand and prices helped Canada’s economy grow faster from 2000 to 2010 than most of its peers, including the United States, income inequality did not decline.”

Les Chatfield photo

It seems the only general exceptions to this noxious trend are parts of southern Africa, and, interestingly, Latin America.  A 2012 study by the World Bank, as reported in the Guardian, offers some explanation: “For decades, Latin America was notorious for some of the widest income gaps in the world, but a combination of favourable economic conditions and interventionist policies by left-leaning governments in Brazil and other countries has brought it more closely in line with international norms.”

So as the income gap has been expanding in nearly all parts of the world, Central and South America have been tacking steadily into the winds of the ‘free market forces’ that have been widening income disparity across the planet.

As usual, however, that’s not the end of the story.  Because while these two opposing trends have been underway, overall poverty in the world has been on the decrease.  Millions of people have in recent years been lifted up out of poverty, especially in countries like China and India.  In Brazil too, the last decade is reported to have seen 20 million people escape poverty.

And income inequality in many Latin American countries, including Brazil, is still high.  It’s just that it’s been getting better, whereas in the more ‘developed’ countries, the gap between rich and poor has widened in recent years.  Of the 16 countries the Conference Board designates as Canada’s peers, just five have seen income disparity shrink since the mid-90s.  Rank those peers plus Canada, 17 countries in all, from lowest to highest growth in inequality, and Canada comes in 12th highest.  The U.S. ranks highest of all.  Between 1980 and 2007, the income of the richest 1% of Americans rose 197%.

In the States, there is of course heated debate as to why the gap has been growing so steadily.  In today’s world of what Al Gore calls “robosourcing,” where technology is displacing many low-skilled workers, the changes are often attributed to what have traditionally—and fatalistically—been labeled “market forces.”  Other, more progressive economists, like Paul Krugman, challenge that view, pointing instead to the decline of unions, stagnating minimum wage rates, deregulation, and government policies that favor the wealthy.

There is less debate as to why the gap has been expanding in India and China, where it’s generally recognized that the typical income level for those working in urban industries has been fast outpacing the average income of those who remain at work in rural, agricultural areas.

But if we consider the example of Latin America, where numerous decidedly “left leaning” governments have held power in recent times, the explanation for growing North American income inequality offered by Krugman and his cohort seems the more convincing.  Leaders like Evo Morales in Bolivia and the late Hugo Chavez in Venezuela, whatever your view of them, have not been shy about enacting programs of genuine income redistribution.  And their policies would seem to be at least part of the reason why income disparity has been improving in Latin America while deteriorating elsewhere.

As I’ve commented elsewhere in this blog, a jobless recovery from the great recession of 2008, along with stagnating median incomes, seems a crucial problem for countries everywhere these days.  While it flies in the face of the free-market, anti-government views that have held so much sway for so many years in the U.S. and Canada, the evidence emanating from Latin America suggests it’s time we recognized that a more, not less, interventionist government role may be what increased economic fairness requires.


Marx Was Right

Those politicos who chant the competition-as-salvation mantra, especially those in America, may find it hard to believe, but not so long ago many prominent U.S. businessmen and politicians were singing the praises of corporate monopoly.  Incredibly, given America’s current climate of opinion—where the word government, never mind socialism, seems a dirty word—just 100 years ago it was widely believed that four basic industries with “public callings”—telecommunications, transportation, banking and energy—were best instituted as government-sanctioned monopolies.  The most successful of the corporate entities to occupy this place of economic privilege was the American Telephone and Telegraph Company (AT&T), and here’s what its then president, Theodore Vail, had to say about the social value of competition: “In the long run… the public as a whole has never benefited by destructive competition.”

Groucho’s older brother Karl (kidding)

Karl Marx may have been wrong about many things, including what best motivates the average human being, but he was certainly not wrong when he suggested that capitalism tends directly toward monopoly.  How could it not, when the most durable means of defeating the competition will always be to simply eliminate it?  By 1913, AT&T had been remarkably successful in doing just that, and its monopoly would survive undiminished until 1982, when a settlement with the Reagan administration’s Justice Department set in motion the breakup of AT&T into the seven so-called ‘Baby Bells.’

(Before you conclude that it’s only right-thinking, right-leaning governments, like Reagan’s, that can properly control corporate America, know that it was also a Republican administration, under President Taft, that condoned AT&T’s ascendancy to monopoly in 1913.)

Tim Wu, in his book The Master Switch (cited last week in this blog), has postulated “the cycle” as continuously operative in the communications industries (all the way from telegraph to TV), whereby technical innovation gives birth to an initially wide-open trade, but where soon enough corporate consolidation leads to singular business empires.  It’s worth noting that by 2006, AT&T had, via some truly brutal business practices, essentially reunited its pre-breakup empire, leaving only two of the Baby Bells, Verizon and Qwest, still intact and independent.

The latest example of the tendency toward monopoly in Canada can be seen readily at play in the federal government’s efforts to boost competition among the oligopoly of this country’s big three telephone providers, Telus, Bell and Rogers.  Evidence suggests that, prior to the government’s most recent intervention—in 2008 reserving wireless spectrum for new companies like Mobilicity, Wind and Public Mobile—Canadians paid some of the highest mobile phone charges in the world.  Since their entry into the marketplace, these three rookie players have—what a surprise—struggled to prosper, even survive, in the face of fierce competition from the triad of telecom veterans.  All three ‘Canadian babies’ are now said to be up for sale, and the feds, to their credit, stepped in earlier this year to block a takeover of Wind Mobile by Telus Corp.

Former Baby Bell Verizon—now referred to in comparison to Canadian telecoms as “giant” or “huge”—is reported to be circling Canada’s wireless market, rumoured to be considering a bid on either Wind Mobile or Mobilicity.  Facilitating this move—and setting off alarm bells (no pun intended) near the Canadian cultural core—is a recent legislative relaxation of formerly stringent foreign ownership rules, which now allows foreign takeovers of telecoms with less than 10 per cent of the market.

Wu’s book asks if the internet will succumb to the same cycle of amalgamation that so many other electronic media have.  His answer: too soon to tell, but history teaches us to keep a wary eye.  And if you consider Apple’s cozy relationship with AT&T over the iPhone, or the fact that Google and Verizon have courted, you’d have to agree with his concern.  Wu concludes his book with an advocacy of what he terms “The Separations Principle,” an enforced separation of “those who develop information, those who control the network infrastructure on which it travels, and those who control the tools or venues of access” to that information.

The internet, given its decentralized construction, is not easy to consolidate, but no one should feel confident that today’s corporate titans won’t try.  Nor should we underestimate their ability to succeed in that effort.



Andrew Blum, in his new book Tubes, describes a scene taking place late in the summer of 1969—an excited group of grad students had gathered on an otherwise placid Saturday afternoon in the courtyard of Boelter Hall on the UCLA campus.  This was the summer of Woodstock and, even though they were exclusively science geeks, no doubt a few of them wore bell bottom pants, and more than one moustache adorned an eager young face.  Somebody clutched a bottle of champagne.

The occasion was the arrival of the very first “IMP”—interface message processor—a 900-pound behemoth costing $80,000 ($500,000 in today’s dollars) which was nevertheless referred to as a “minicomputer.”  It had been air freighted from Boston by the engineering firm of Bolt, Beranek and Newman, who a few years earlier had signed a one million dollar contract with the U.S. Department of Defense to develop an impervious computer network to be called the ARPANET.

This celebratory event harkened directly back to 1957, when a Polish-born engineer named Paul Baran landed a new job at the RAND Corporation, where he soon became involved in a project to design a nuclear-attack-proof communications system for the U.S. military.  Elvis Presley made his seventh and final appearance on The Ed Sullivan Show that year, and in October of 1957 the Soviet Union launched Sputnik 1, the first earth satellite, traumatizing the American psyche and prompting the federal government to call for the building of more backyard bomb shelters across the nation.

Baran began experimenting with different communication models, and soon hit upon the advantages of a ‘fishnet’ design, as opposed to the ‘star’ design employed by the then-monopoly AT&T system.  The AT&T model featured a focal hub through which nearly all information flowed, making it especially vulnerable to a nuclear attack.  The fishnet model had no vulnerable centre; if one part of it was damaged or destroyed, any bit of sent information could still reach its final destination by flowing through an alternate set of nodes.
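
The survivability difference is easy to make concrete with a toy sketch (the node names and five-node layout here are invented for illustration, nothing like the real network): knock out the hub of a star and every outpost is stranded, while a fishnet simply routes around a dead node.

```python
from collections import deque

def reachable(graph, start, dead):
    """Breadth-first search from 'start', skipping the destroyed node."""
    if start == dead:
        return set()
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour != dead and neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

# Star: every outpost talks only through the central hub.
star = {"hub": ["a", "b", "c", "d"],
        "a": ["hub"], "b": ["hub"], "c": ["hub"], "d": ["hub"]}

# Fishnet: each node has a couple of neighbours, no centre at all.
mesh = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d", "e"],
        "d": ["a", "c"], "e": ["c"]}

# Destroy the star's hub: node 'a' can no longer reach anyone.
print(reachable(star, "a", dead="hub"))   # only {'a'}: the outpost is stranded
# Destroy a fishnet node: 'a' still reaches c, d and e by going through d.
print(reachable(mesh, "a", dead="b"))
```

The same logic, scaled up, is why damage to any one part of a packet-switched network leaves the rest able to carry traffic.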

These occurrences in turn relate directly to the birth of computers connected via the internet, the single most disruptive technological transformation humankind has witnessed.  Economists have labeled these kinds of innovations general purpose technology (GPT), that is, technology which can be applied in a great many situations, work and play, thus producing far greater social and economic impact than any industry-specific invention.  Prior GPTs include steam power, electricity and the internal combustion engine.

Recent studies—Race Against the Machine, by MIT professors Erik Brynjolfsson and Andrew McAfee is a particularly apt and concise summation—have shown that, while productivity has grown with the onset of digital technology, employment has not.  Median family income has stagnated in recent years, and the last decade is the first since the Great Depression which saw no overall net gain in job creation.  We have the phenomenon of the ‘jobless recovery’ from the crash of 2008, and Brynjolfsson and McAfee are clear about a major contributing factor that’s often been ignored by both business and government:

“… there has been relatively little talk about the role of acceleration of technology.  It may seem paradoxical that faster progress can hurt wages and jobs for millions of people but we argue that’s what’s been happening.”

The military origins of the internet are unmistakable, but I think we can safely assume that the Generals never foresaw that their nuke-proof system would one day be used by so many civilians in so many different circumstances.  Similarly, we’ve done a poor job of foreseeing the economic consequences of rapidly evolving digital technologies, in part because this evolution has been so incredibly rapid.

Ned Ludd and his followers were wrong about the loss of jobs inherent in the steam-powered weaving machines they were smashing back in the early 1800s.  As the industrial revolution progressed, more, not fewer jobs were created.  Those new jobs were more often in the city, not the village or farm, but there were nevertheless more of them, and what’s more they often required more than just an able body.  Sometimes they even required genuine creativity.

We’re living in a new post-revolutionary world, where again jobs are not what they used to be, and where right now there may be fewer of them.   Whether we can adapt to, work with rather than against the new machines to create a new and better society is the question once again at hand.



The Robots Are Coming! The Robots Are Coming!

“Technology will get to everybody eventually.”

Jaron Lanier said the above during an interview with a writer from a news and entertainment website, this during Lanier’s book tour for his latest publication: Who Owns the Future?  Lanier is the internet apostate I wrote about earlier this year who once championed open-source culture, but who now suggests that digital technology is economically undermining the entire middle class.

He offers this startling example of as much in the ‘Prelude’ to his new book:

“At the height of its power, the photography company Kodak employed more than 140,000 people and was worth $28 billion. They even invented the first digital camera. But today Kodak is bankrupt, and the new face of digital photography has become Instagram. When Instagram was sold to Facebook for a billion dollars in 2012, it employed only thirteen people.”

Lanier is suggesting that the musicians, video store clerks and journalists who have already seen their livelihoods erased or eroded by the internet are just the canaries in the coalmine, members of the first wave of economic casualties.  Soon the driverless Google cars will be taking down taxi drivers, caregivers will be replaced by robots, and all diagnoses of illness will be arrived at online.  The digital revolution is coming for us all, and it’s not a matter of if, but when.

The same chilling note is struck in a terrific article by Kevin Drum in the May/June 2013 issue of Mother Jones.  Drum points out that the development of Artificial Intelligence (AI) has been steady but not spectacular since the first programmable computers appeared around 1940.  That’s because the human brain is an amazingly complex processor; even today, after more than seven decades of exponential increase in the power of computers, they still operate at about one thousandth of the power of a human brain.

The thing is, exponential is the key word here.  Anyone who has ever looked at an exponential growth curve plotted on a graph knows that for a long time the line runs fairly flat, but with the doubling effect that comes with exponential growth, the curve eventually begins a very steep climb.  Many people believe that we’re now at the base of that sharp rise.  Henry Markram, a neuroscientist working at the Swiss Federal Institute of Technology in Lausanne, thinks he will be able to successfully model the human brain by 2020.
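
The flat-then-steep shape is easy to reproduce with toy numbers; the 18-month doubling time and the one-thousandth starting point below are illustrative assumptions (the latter echoes Drum’s figure), not anyone’s forecast:

```python
# Suppose computing power starts at 1/1000 of human-brain capacity
# and doubles every 18 months (a Moore's-law-style assumption).
level = 1 / 1000
years = 0
while level < 1:          # keep doubling until parity with the brain
    level *= 2
    years += 1.5
print(f"parity reached after ~{years:.1f} years")   # 10 doublings, 15 years
```

Note that the final doubling alone covers as much absolute ground as all the previous doublings combined; that last jump is the “sharp rise” at whose base we may now be standing.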

He may or may not be right in that prediction, but even if he is wrong, the question won’t be if, only when.  And when we combine that eventual reality with Lanier’s telling Kodak-to-Instagram employment factoid above, there appear to be grounds for genuine concern.  Finally, after many years of dire (or celebratory) predictions, labor may be about to go into real oversupply.  If these ideas are at all accurate, robots will soon be displacing human employment just about everywhere you look.  Accountants, teachers, architects, the last of the assembly-line workers, even writers; we’re all vulnerable.  As Drum sees it, capital, not labor, will be the commodity in short supply in the near future, and that bodes well only for those folks who already have plenty of capital.

One of the conditions that follows from an oversupply of labor and an increased demand for capital is of course that wealth will flow from those earning salaries to those holding the capital.  The proverbial rich will get richer, the poor poorer.  And this condition is of course extant and growing, especially in the U.S.—an escalating income inequality between the 99 and 1 per cents.

History tells us that times of high unemployment are dangerous times for us all, often leading to unrest that in turn leads to illusory socio-economic solutions: communism, fascism, anti-immigration laws, and the like.  What to do?  Well, anticipate the problem, first of all.  Our leaders need to make some contingency plans.

A tax on capital?  Not likely any time soon, certainly not in America.  But if indeed the coming economic reality is that more and more people will be without work, while a select few citizens will be ever more wealthy, the concept of ‘income redistribution’ needs to come into play, one way or another.  And orderly, democratic economic reform beats the hell out of rioting in the streets.


Oligarchs of the Internet

Steve Jobs had no use for philanthropy.  There is no record of him having made any charitable donations during his lifetime, despite his immense wealth.  Jobs never signed Bill Gates and Warren Buffett’s ‘billionaires’ pledge’ to give at least half of his fortune to charity, as have more than 100 other exceptionally wealthy individuals from around the globe.  He also condoned sweatshop conditions—for children—at Apple manufacturing sites in China.  Apple employs about 700,000 people via subcontractors, according to The New York Times, but almost none of them work in the U.S.  Steve had no problem with any of this.

(His wife, Laurene Powell Jobs, has a better record than Steve when it comes to giving back, having emerged from his shadow after his death to contribute actively to a number of worthy causes, especially education.)

Mark Zuckerberg did sign Buffett’s pledge, but it’s also the case that last year Facebook spent nearly $2.5 million lobbying in Washington against tougher privacy laws, and for immigration reform that would allow the employment of immigrant IT workers at lower wages.  Like Jobs, Zuckerberg is tax-averse, and Facebook actually succeeded in paying no taxes last year, despite profits of more than a billion dollars.

How about Google, the ‘Don’t be evil’ corporation?  Sergey Brin and his wife have been generous, particularly in giving to the battle against Parkinson’s disease (Brin carries the flawed Parkinson’s gene), but a recent article in Wired magazine strikes a disturbing note. The piece recounts how, in 2009, a low-life drug dealer named David Whitaker was looking for leniency from the US Food and Drug Administration after being busted for selling steroids and human growth hormones online from a base of operations in Mexico.  He told the FDA that he had marketed his sometimes-phony drugs into the US using Google AdWords, something supposedly expressly prohibited by Google’s policies.  All he had needed to do, it seems, was work directly with Google reps to tailor his website into something more ‘educational’—no ‘buy now’ buttons, no photos of drugs, that sort of thing—and, even though he made no attempt to conceal the true nature of his business, Google was happy to help.  The Feds were initially skeptical, but set up Whitaker in a new, bogus operation as a sting, to test whether he was telling the truth.  This time his venture would include sales of RU-486, the abortion pill, which is normally taken only under the supervision of a medical doctor.

A few months later it was abundantly clear that Whitaker was indeed being truthful.  The feds took legal action, and in 2011, Google settled out of court, paying a $500 million fine.  A brief statement from the company admitted that, “With hindsight, we shouldn’t have allowed these ads on Google in the first place.”

Jay Gould

Great success brings great size, and with size comes a kind of corporate momentum that inevitably stresses sales over principles.  It’s not an across-the-board phenomenon—the post-CEO Bill Gates being the obvious exception—but it’s clear that many of today’s internet lords are not necessarily cut from cloth any different than were the notorious robber barons of the past, men like Jay Gould, who in 1869 attempted to corner the market in gold, hoping that a higher gold price would raise the price of wheat, prompting western farmers to sell and ship their breadstuffs eastward, boosting freight business for the Erie Railroad, which Gould was trying to take control of.  The ploy may have been complex, even ingenious, but it also brought Gould infamy, eventually forcing him out of an ownership position with the railroad.

Today’s web oligarchs enjoy much higher approval ratings than did their 19th century corporate predecessors.  It’s been reported that some members of Occupy Wall Street stopped to mourn at the impromptu memorial site set up outside the Apple Store in Manhattan, following Steve Jobs’ death.

We serfs of the imperial internet realm can do better.  Great product does not always mean a worthy producer, and the ends never justify the means.  We can demand more from the web-based corporations and corporation heads who profit so handsomely from our purchases and our use of their services.  We can, at the very least, expect generosity.




The Apostate

Jaron Lanier is an interesting case.  He’s an author, musician, composer and computer scientist who, back in the 80s, was a pioneer in virtual reality software and hardware.  (The company he founded after leaving Atari in 1985 was the first to sell VR goggles and gloves.)

Most interestingly, these days Lanier is a sharp critic of what he terms “digital Maoism,” the open-source groupthink which says that not only all information, but all creative content should be free and subject to appropriation (for mashups, for instance).  As you might expect, he’s been subject to considerable blowback from the online community for taking this position, and Lanier freely admits that he was once a card-carrying member of this same club, believing that any musicians or journalists who were going broke because of the digital revolution were simply dinosaurs, unable to adapt to a new environment that would soon provide them other means of financial support, if only they were a little patient and properly innovative.  The problem is, as Lanier writes in his 2010 book You Are Not a Gadget, “None of us was ever able to give the dinosaurs any constructive advice about how to survive.”

And so, currently, we have a world where creators—be they artists, musicians, writers or filmmakers—face massive competition and constant downward pressure on what they can charge for their product.  This while a few of what Lanier labels the “Lords of the Clouds”—those very able but still very lucky entrepreneurs who were at the right place at the right time with the right idea (think the owners of YouTube and Google)—have amassed huge fortunes.

These conditions have delivered a new feudal world where, according to Lanier, we again have starving peasants and rich lords, where formerly middle-class creators struggle to survive in the face of competition from what he adroitly describes as those people ‘living in their van,’ or those who are mere hobbyists, creating art as an after-work pastime, or perhaps because they can pay their monthly bills with an inheritance.  Important artists find themselves, like Renaissance artists of old, looking to rely on the beneficence of wealthy patrons.  “Patrons gave us Bach and Michelangelo,” rues Lanier, “but it’s unlikely patrons would have given us Vladimir Nabokov, the Beatles, or Stanley Kubrick.”

There’s little doubt that the digital revolution has been fairly disastrous for the creative community, at least once you combine it with the global economic tanking of 2008-09.  (See last week’s post: ‘DEP.’)  As is so often the case, however, the picture is not so simple.  Another huge factor in the plethora of creative product out there at rock-bottom prices is the advent of new production technology.  It’s now a whole lot easier than it was back in the 1970s to make a movie, record some music, or publish your book.  The means of production have evolved to where just about anyone can get their hands on those means and begin creating, then distributing.  More supply against the same (or less) demand means lower prices: the invisible, emotionally indiscriminate hand of capitalism at work.  The former gatekeepers—the major record labels, publishing houses and movie studios—have lost their decisive positions at the entryway, and this in the end has to be a good thing.  It’s just that the change has not come without a flip side, one Lanier does a nice job of illuminating.

Back in 1990, Francis Coppola was interviewed by his wife for Hearts of Darkness: A Filmmaker’s Apocalypse, a documentary she was shooting about the extraordinary travails Coppola had faced in completing his career-defining opus Apocalypse Now.  Coppola had this to say about the future of filmmaking: “My great hope is that … one day some little fat girl in Ohio is gonna be the new Mozart and make a beautiful film with her father’s camera, and for once the so-called ‘professionalism’ about movies will be destroyed forever, and it will become an art form.”

Be careful what you wish for.




I began teaching part-time at the Vancouver Film School in the mid-eighties, for what I then thought was fairly decent remuneration.  I still teach at Langara College part-time, for wages that are two dollars less per hour than I was paid in 1986.  In Canada, disposable income has increased by just 10% since 1990; this while inflation totaled about 60% over that time.  In the U.S., one in two recent college graduates was unemployed or underemployed in 2012.  It’s all because of DEP.
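Those two figures—10% nominal growth against 60% inflation—are worth running through the standard real-income formula, since the result is starker than either number alone suggests.  A minimal sketch (the percentages are the approximate ones quoted above, not official statistics):

```python
# Back-of-the-envelope: what a ~10% rise in nominal disposable income
# looks like after ~60% cumulative inflation over the same period.
nominal_growth = 0.10   # disposable income change, approx., as quoted
inflation = 0.60        # cumulative price-level change, approx.

# Real change = (1 + nominal) / (1 + inflation) - 1
real_change = (1 + nominal_growth) / (1 + inflation) - 1
print(f"Real change in disposable income: {real_change:.1%}")
```

In other words, a nominal 10% raise over that span works out to a real decline of roughly 31%—DEP in a single line of arithmetic.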

DEP is an acronym of my own invention, abbreviating Downward Economic Pressure.  You’re welcome to use it anytime; feel free.  If your daughter, having completed an expensive university degree, seems able to secure little more than a minimum wage service job, you can shrug and simply say, “It’s DEP.”  Those auto workers in Ontario who have recently watched their jobs waft gently over the border to Mexico and the southern States?  They can put it all into proper perspective by just mumbling, “More DEP.”

Economic power (and with it political power) is shifting eastward to Asia, and southward, to places like Brazil.  The glory days of North American prosperity are waning.  It’s a trend we’ve all heard of, but it’s hard to appreciate just how significant that trend is, or how lasting its effects may be.  Can anyone say ‘British Empire,’ once the largest empire the world has ever known?

The median salary in Mexico is less than 3000 Canadian dollars; in Canada the median wage is about $46,000.  That about explains it, but again it’s difficult to overstate the long-term implications of this divide, once global capitalism has its way.  And yes, I know that a few American companies have of late moved their manufacturing facilities back to the States, and I’ve heard that some plants in China have recently relocated to Vietnam, where labour costs are still lower, but in the end the net effect is the same.  We in the West are losing our position of comparative affluence, and it ain’t coming back, not in our lifetimes.

Because this is precisely what ‘globalization’ means—the leveling of economic benefits across the globe.  Like water siphoned from one bucket to another, the liquid eventually finds a common level.  If wages are rising in India, they are falling in Canada.  And so it must be.

Or does it?  Someone like Linda McQuaig, the Canadian writer and social critic, doesn’t think so.  McQuaig, in books like The Cult of Impotence, argues that it’s indeed possible for governments to counteract the free-market effects of globalization with tools like the so-called ‘Tobin Tax,’ a levy proposed by Nobel laureate James Tobin on international currency conversions.  And it may be possible, with some obvious benefits, but it’s not likely to happen, given the political power of those folks involved in international currency conversions.
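The mechanics of such a levy are simple enough to sketch.  A purely illustrative example follows—the 0.1% rate is one commonly cited in discussions of Tobin-style taxes, not Tobin’s own specific figure, and the function name is mine:

```python
# Illustrative only: the levy a Tobin-style tax would skim from a
# single currency conversion. The 0.1% default rate is a commonly
# discussed example figure, not a rate from Tobin's proposal.
def tobin_levy(amount, rate=0.001):
    """Return the levy charged on a currency conversion of `amount`."""
    return amount * rate

# A $10-million conversion at a 0.1% levy:
print(tobin_levy(10_000_000))  # 10000.0
```

The point of such a tax is less the revenue from any one trade than the cumulative drag it puts on high-volume speculative currency flows.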

But then, in contemplating the moral high ground held by someone from the left end of the political spectrum like McQuaig, I’m given to recall a tale told in Granta magazine, years back, by a former British shipbuilder.  When some technological change came around in the shipbuilding trade, requiring a shift of work from one union section to another—from iron to copper pipe or some such territorial advance—many members of the displaced union faced unemployment.  The industry’s solution, under pressure from powerful unions, was an arrangement whereby the former pipefitter stood over the shoulder of the new pipefitter, ensuring that the job was done correctly.  Both workers were paid equally.  Needless to say, the British shipbuilding industry slipped quietly away.

George Osborne, Britain’s Chancellor of the Exchequer, austerity champion and someone who, a recent study indicates, appears more often in British nightmares than any other public figure, goes about these days saying that western nations must “do or decline.”  He may be right about that, and it can all be a little daunting, but, as I say, the trend is not going away, and maybe it shouldn’t.  Morally, can we assert that Western peoples, with their consumer lifestyle and broad social safety net, have any sort of inherent right to the preferred position?  Certainly colonialism put this sort of east-to-west economic flow in place, and perhaps we’re now simply witness to the final, just outcome of colonialism.

Equally perplexing is the question of whether the planet’s environment can sustain the rise of the non-western world to the same acquisitive lifestyle that exists in Europe and North America.  All those cars, paid holidays, and insured medical expenses.  Time will tell, and the telling of this tale of macro-economic adjustment will get your attention, you may be sure.  We are all, it seems, subject to the supposedly ancient Chinese curse; we are all living in interesting times.