Tag Archives: economy

Facetime

Last month the city of Nelson, BC, said no to drive-thrus. There’s only one in the town anyway, but city councillors voted to prevent any more appearing. Councillor Deb Kozak described it as “a very Nelson” thing to do.

Nelson may be slightly off the mean when it comes to small towns—many a draft dodger settled there back in the Vietnam War era, and pot-growing allowed Nelson to better weather the downturn of the forest industry that occurred back in the 80s—but at the same time, dumping on drive-thrus is something that could only happen in a smaller urban centre.

The move is in support of controlling carbon pollution, of course: no more idling cars lined up down the block (hello, Fort McMurray?!). But what I like about it is that the new by-law obliges people to get out of their cars, to enjoy a little facetime with another human being, instead of leaning out their car window, shouting into a tinny speaker mounted on a plastic sign.

For all the degree of change being generated by the digital revolution, and for all the noise I’ve made about that change in this blog, there are two revolutions of recent decades that have probably had greater effect: the revolution in settlement patterns that we call urbanization, and the revolution in economic scale that we call globalization. Both are probably more evident in smaller cities and towns than anywhere else.

Grain elevators, Milestone, Saskatchewan, about 1928

Both of my parents grew up in truly small prairie towns; my mother in Gilbert Plains, Manitoba, present population about 750; my father in Sedgewick, Alberta, present population about 850. Sedgewick’s population has dropped some 4% in recent years, despite a concurrent overall growth rate in Alberta of some 20%. Both these towns were among the hundreds arranged across the Canadian prairies, marked off by rust-coloured grain elevators rising above the horizon, set roughly every seven miles along the rail lines. They were spaced that way because a trip of half that distance was gauged doable by horse and wagon for the surrounding farmers.

I grew up in Grande Prairie, Alberta, a town which officially became a city while I still lived there. The three blocks of Main Street that I knew were anchored at one end by the Co-op Store, where all the farmers shopped, and at the other by the pool hall, where all the young assholes like me hung out. In between were Lilge Hardware, operated by the Lilge brothers, Wilf and Clem; Joe’s Corner Coffee Shop; and Ludbrooks, which offered “variety” as “the spice of life,” and where we as kids would shop for board games, after saving our allowance money for months at a time.

Grande Prairie is virtually unrecognizable to me now; that is, it looks much like every other small or large city across the continent: the same ‘big box’ stores surround it as surround Prince George, Regina and Billings, Montana, I’m willing to bet. Instead of Lilge Hardware, Joe’s Corner Coffee Shop and Ludbrooks, we have Walmart, Starbucks and Costco. This is what globalization looks like when it arrives in your own backyard.

80% of Canadians live in urban centres now, as opposed to less than 30% at the beginning of the 20th century. And those urban centres now look pretty much the same wherever you go, once the geography is removed. It’s a degree of change that snuck up on us far more stealthily than has the digital revolution, with its dizzying pace, but it’s a no less disruptive transformation.

I couldn’t wait to get out of Grande Prairie when I was a teenager. The big city beckoned with diversity, anonymity, and vigour. Maybe if I was young in Grande Prairie now I wouldn’t feel the same need, given that I could now access anything there that I could in the big city. A good thing? Bad thing?

There’s no saying. Certain opportunities still exist only in the truly big centres of course, cities like Tokyo, New York or London. If you want to make movies it’s still true that you better get yourself to Los Angeles. But they’re not about to ban drive-thrus in Los Angeles. And that’s too bad.

Brainstorming Becalmed

‘Brainstorming’ originated as a creative process back in the 50s, and it’s still remarkably popular today in both opinion and practice, especially within business circles.  The practice sees a number of people get together to ‘free associate’ and ‘toss out ideas’ in a fast-paced, noncritical context.  The emphasis is on quantity, not quality; the more ideas the better.

The belief behind brainstorming is that the group, once freed from the restraints of collective judgment, will come up with more and better ideas than will an individual working alone.

Except that it isn’t true.

Jessica Gale photo, morgueFile

This is for me perhaps the single most intriguing point made by Susan Cain in her recent book Quiet: The Power of Introverts in a World That Can’t Stop Talking.  According to Cain, studies dating as far back as 1963 have quite conclusively shown that, when it comes to either creativity or efficiency, working in groups produces fewer and poorer results than when people work in quiet, concentrated solitude.

Go figure.  I’m reminded of the likewise commonly held misconception that the ‘venting’ of anger or resentment is good for us.  This belief holds that when we suppress feelings like anger, when we ‘bottle it up,’ the effort leads to all sorts of possible afflictions, from ulcers to insomnia.  Women are held to be particularly vulnerable, because of greater societal expectations of ‘ladylike’ behavior.

Well, once again, for quite some time now, science has been definitively showing that venting anger feeds rather than diminishes the flame.  Anger is generally far more destructive—of both our health and our relationships—when it is expressed than when it is suppressed and allowed to dissipate over time.

The implications of the ‘brainstorming doesn’t work’ finding are especially significant when it comes to matters like the physical layout of the workplace.  Most of us know that when we take on a creative challenge, any form of distraction or interruption, whether it be background noise or a phone call, can be an impediment to our best work.  Thus if employers wish to get the best results from their employees, it follows that those employees should be provided with an environment where quiet concentration is possible.  Chock-a-block cubicles in a noisy workspace fall far short of this mark, I would suggest, never mind the kind of collective open-space chaos that one often sees in the high-tech working world.

There is, however, one equally interesting corollary to the fallacy of face-to-face brainstorming.  Electronic brainstorming does seem to work.  The so-called ‘hive mind’ has validity.  When academics collaborate on research projects remotely, the results tend to be more influential than when they work in greater isolation or face-to-face.  Wikis are after all a kind of electronic brainstorming, and they have been shown to produce outcomes that no individual could hope to match.

The key here is of course that such online collaboration is essentially ‘brainstorming in solitude.’  Online teamwork can be carried out from individual workspaces that allow for both silence and focus.  It also tends to happen at a much slower pace than the classic brainstorming session.  Online brainstorming (if we can even properly call it that) may be the optimum balance between individual and group work.

Multitasking is a related practice that has become the norm in the contemporary workplace, almost an admired skill.  We proudly perform numerous tasks at once, keeping various undertakings moving forward simultaneously.  It’s worth remembering, however, that we can never in fact pay full attention to two things at once, much less several things.  We have simply learned to switch rapidly from one to the other.  Someone now needs to do a study as to whether multitasking—juggling numerous pieces of fruit at once—does in fact deliver better results than tossing one apple at a time into the air, and thus being able to pay full and close attention to the challenge.  All at once may look flashier than one thing at a time, but is it actually more productive?

The quiet, never mind silence, that allows for focused and full attention is a prized commodity in today’s accelerated world.  The lesson here, it seems to me, is that this precious commodity may not only be good for the soul; it’s good for business.

Income Inequality Is Increasing Everywhere… Except Latin America

Income inequality—wherein the rich get richer and the poor poorer, relatively speaking—is increasing almost everywhere.  Even in the booming Asian economies of China and India, the gap is growing.

A recent report by the Conference Board of Canada confirms that the condition exists here as well.  The report notes that income inequality rose to a peak in the late 1990s, and that, “even though higher commodity demand and prices helped Canada’s economy grow faster from 2000 to 2010 than most of its peers, including the United States, income inequality did not decline.”

Les Chatfield photo

It seems the only general exceptions to this noxious trend are parts of southern Africa, and, interestingly, Latin America.  A 2012 study by the World Bank, as reported in the Guardian, offers some explanation: “For decades, Latin America was notorious for some of the widest income gaps in the world, but a combination of favourable economic conditions and interventionist policies by left-leaning governments in Brazil and other countries has brought it more closely in line with international norms.”

So as the income gap has been expanding in nearly all parts of the world, Central and South America have been tacking steadily into the winds of ‘free market forces’ that have been widening income disparity across the planet.

As usual, however, that’s not the end of the story.  Because while these two opposing trends have been underway, overall poverty in the world has been on the decrease.  Millions of people have in recent years been lifted up out of poverty, especially in countries like China and India.  In Brazil too, the last decade is reported to have seen 20 million people escape poverty.

And income inequality in many Latin American countries, including Brazil, is still high.  It’s just that it’s been getting better, whereas in the more ‘developed’ countries, the gap between rich and poor has widened in recent years.  Of the 16 countries which the Conference Board has designated as Canada’s peers, just five have seen income disparity shrink since the mid-90s.  If those peers and Canada (17 countries in all) are ranked from lowest to highest growth in inequality, Canada ranks 12th highest.  The U.S. ranks highest of all.  Between 1980 and 2007, the income of the richest 1% of Americans rose 197%.

In the States, there is of course heated debate as to why the gap has been growing so steadily.  In today’s world of what Al Gore calls “robosourcing,” where technology is displacing many low-skilled workers, the changes are often attributed to what have been traditionally—and fatalistically—labeled “market forces.”  Other, more progressive economists like Paul Krugman challenge that view, instead pointing to the decline of unions, stagnating minimum wage rates, deregulation, and government policies that favor the wealthy.

There is less debate as to why the gap has been expanding in India and China, where it’s generally recognized that the typical income level for those working in urban industries has been fast outpacing the average income of those who remain at work in rural, agricultural areas.

But if we consider the example of Latin America, where numerous decidedly “left leaning” governments have held power in recent times, the explanation of growing North American income inequality offered by Krugman and his cohorts would seem the more convincing.  Leaders like Evo Morales in Bolivia and the late Hugo Chavez in Venezuela, whatever your view of them, have not been shy about enacting programs of genuine income redistribution.  And their policies would seem to be at least part of the reason why income disparity has been improving in Latin America, while deteriorating elsewhere.

So as I’ve commented elsewhere in this blog, the problems of a jobless recovery from the great recession of 2008, along with stagnating median incomes, seem crucial for countries everywhere these days.  While it flies in the face of the free-market, anti-government views which have held so much sway for so many years in the U.S. and Canada, the evidence emanating from Latin America would suggest that maybe it’s time we recognized that increased economic fairness may require a more, not less, interventionist government role.

 

Marx Was Right

Those politicos who chant the competition-as-salvation mantra, especially those in America, may find it hard to believe, but not so long ago many prominent U.S. businessmen and politicians were singing the praises of corporate monopoly.  Incredibly, given America’s current climate of opinion—where the word government, never mind socialism, seems a dirty word—just 100 years ago, it was widely believed that there were four basic industries with “public callings”—telecommunications, transportation, banking and energy—that were best instituted as government-sanctioned monopolies.  The most successful of the corporate entities to occupy this place of economic privilege was the American Telephone and Telegraph Company (AT&T), and here’s what its then president, Theodore Vail, had to say about the social value of competition: “In the long run… the public as a whole has never benefited by destructive competition.”

Groucho’s older brother Karl (kidding)

Karl Marx may have been wrong about many things, including what best motivates the average human being, but he was certainly not wrong when he suggested that capitalism tends directly toward monopoly.  How could it not, when the most durable means of defeating the competition will always be to simply eliminate it?  In 1913, AT&T had been remarkably successful in doing just that, and its monopoly would survive undiminished until 1982, when the Reagan administration oversaw the breakup of AT&T into the seven so-called ‘Baby Bells.’

(Before you conclude that it’s only right-thinking, right-leaning governments, like Reagan’s, that can properly control corporate America, know that it was also a Republican administration, under President Taft, that condoned the ascendancy to monopoly by AT&T in 1913.)

Tim Wu, in his book The Master Switch (cited last week in this blog), has postulated “the cycle” as continuously operative in the communications industries (all the way from telegraph to TV), whereby technical innovation gives birth to an initially wide-open trade, but where soon enough corporate consolidation leads to singular business empires.  It’s worth noting that by 2006, AT&T had, via some truly brutal business practices, essentially reunited its pre-breakup empire, leaving only two of the Baby Bells, Verizon and Qwest, still intact and independent.

The latest example of the tendency toward monopoly in Canada can be seen readily at play in the federal government’s efforts to boost competition among the oligopoly of this country’s big three telephone providers, Telus, Bell and Rogers.  Evidence suggests that, prior to the government’s most recent intervention—in 2008 reserving wireless spectrum for new companies like Mobilicity, Wind and Public Mobile—Canadians paid some of the highest mobile phone charges in the world.  Since their entry into the marketplace, these three rookie players have—what a surprise—struggled to prosper, even to survive, in the face of fierce competition from the triad of telecom veterans.  All three ‘Canadian babies’ are now said to be up for sale, and the feds, to their credit, stepped in earlier this year to block a takeover of Wind Mobile by Telus Corp.

Former Baby Bell Verizon—now referred to in comparison to Canadian telecoms as “giant” or “huge”—is reported to be circling Canada’s wireless market, rumoured to be considering a bid on either Wind Mobile or Mobilicity.  Facilitating this move—and setting off alarm bells (no pun intended) near the Canadian cultural core—is a recent legislative relaxation of formerly stringent foreign ownership rules to allow foreign takeovers of telecoms with less than 10 per cent of the market.

Wu’s book asks if the internet will succumb to the same cycle of amalgamation that so many other electronic media have.  His answer: too soon to tell, but history teaches us to keep a wary eye.  And if you consider Apple’s cozy relationship with AT&T over the iPhone, or the fact that Google and Verizon have been courting each other, you’d have to agree with his concern.  Wu concludes his book with an advocacy of what he terms “The Separations Principle,” an enforced separation of “those who develop information, those who control the network infrastructure on which it travels, and those who control the tools or venues of access” to that information.

The internet, given its decentralized construction, is not easy to consolidate, but no one should feel confident that today’s corporate titans won’t try.  Nor should we underestimate their ability to succeed in that effort.

 

Copyright and Wrong

The digital revolution has ushered in a new culture war.  On one side are the creators: the artists, musicians, filmmakers and writers who, by law, hold copyright over the works they have originated.  Joining them—significantly—are the entities, most of them corporations, which have traditionally promoted and sold their creations: the major record labels, the publishers, the film and video distributors.

Lined up on the opposite side of the battlefield, thumping on their shields, are the digital audience legions, all of us who read, listen to or watch copyrighted material online, often without payment.  Joining this army are those corporations whose revenue is found in monetizing websites that traffic in copyrighted (as well as uncopyrighted) material, chief among them of course Google, which owns Youtube.

The history of this warfare is interesting, with many incremental skirmishes along the way.  Back in the days of vinyl, no one knew how to replicate an LP at home.  The record labels ruled, and their reputation as rapacious ogres was opportunistically well earned.  Tape came along, and with it rerecording, but the rerecorder still had to first buy the record.  Then music was digitized, processed into a computer file that could be copied indefinitely, with essentially no loss in quality.  And then came the nuclear bomb, although again the shock waves went out incrementally… the internet.  For the creators and their corporate allies, all hell broke loose.  And the market has never fully recovered.

A peace settlement for this war is remarkably difficult to achieve, and thus it drags on.  The defeat of Napster in 2001 seemed a clear victory for the creators, but as of 2011, music sales in the US were still at less than half of what they were in 1999.  As mentioned in an earlier post, Douglas and McIntyre, the largest independent publisher in Canada, filed for bankruptcy in 2012.  MGM, owner of the single most successful movie franchise in history—the James Bond films—sold for $5 billion in 2004, but is now valued at less than $2 billion.  The list of dead or wounded grows steadily.

One basic distinction that has to be upheld, if ever this war is to be over, is the one between those who ‘remix’ copyrighted material, mashing it up in their own artistic creations, and those who simply copy and sell unadulterated copies of other people’s work.  Remixers are artists unto themselves, complementing rather than threatening the appropriated work.  Unlicensed resellers are simply thieves who deserve prosecution.

So too should we distinguish between individual downloaders of copyrighted material and the file-sharing and ‘locker’ sites that facilitate their downloading.  These sites have always employed the old dodge of “We don’t actually do any illegal uploading or downloading ourselves, and we’re not responsible for any of our users who do.”  It doesn’t wash.  Here we should bust the dealers, not the users.

Too often even this distinction is not maintained by zealous corporations who, NRA-like in their paranoia, blindly prosecute any and all perceived violations of copyright laws, targeting even the smallest infraction.  Lawrence Lessig, in his book Remix: Making Art and Commerce Thrive in the Hybrid Economy, recounts how lawyers from Universal Music Group threatened the mother of a two-year-old with a $150,000 fine after she posted a 29-second video of her son dancing to a Prince song on Youtube.  Give me strength.

Most importantly, I think, this war needs to be seen as a war of corporate interests, none of them more altruistically motivated than any other.  As Danny Goldberg, former head of Warner Records, is quoted as saying in Robert Levine’s Free Ride: How Digital Parasites Are Destroying the Culture Business and How the Culture Business Can Fight Back, a recent and effective rebuttal to Lessig’s book: “What happened is this extraordinarily powerful financial juggernaut entered society and changed the rules about intellectual property.  But that was not necessarily inevitable, and it wasn’t driven by all of these consumers—it was driven by people who made billions of dollars changing the rules.”

The creators are of course the ones who must be protected in this conflict.  The corporate interests who have sided with them have simply done so as technology changed and the commercial advantage swung from them to other corporate interests.  Just as Facebook lobbies mightily for weaker privacy laws, Google lobbies mightily for weaker copyright laws.  Why?  Because the effort serves their bottom lines.

There is very little moral high ground in the copyright culture war.  There is only the area off to the side of the main battleground, where the creators continue to suffer and die.


The Robots Are Coming! The Robots Are Coming!

“Technology will get to everybody eventually.”

Jaron Lanier said the above during an interview with a writer from Salon.com, the news and entertainment website; this during Lanier’s book tour for his latest publication, Who Owns the Future?  Lanier is the internet apostate I wrote about earlier this year who once championed open-source culture, but who now suggests that digital technology is economically undermining the entire middle class.

He offers this startling example in the ‘Prelude’ to his new book:

“At the height of its power, the photography company Kodak employed more than 140,000 people and was worth $28 billion. They even invented the first digital camera. But today Kodak is bankrupt, and the new face of digital photography has become Instagram. When Instagram was sold to Facebook for a billion dollars in 2012, it employed only thirteen people.”

Lanier is suggesting that the musicians, video store clerks and journalists who have already seen their livelihoods erased or eroded by the internet are just the canaries in the coalmine, members of the first wave of economic casualties.  Soon the driverless Google cars will be taking down taxi drivers, caregivers will be replaced by robots, and all diagnoses of illness will be arrived at online.  The digital revolution is coming for us all, and it’s not a matter of if, but when.

The same chilling note is struck in a terrific article by Kevin Drum in the May/June 2013 issue of Mother Jones.  Drum points out that the development of Artificial Intelligence (AI) has been steady but not spectacular since the invention of the first programmable computers in the 1940s.  That’s because the human brain is an amazingly complex processor, and even today, after more than seven decades of exponential increase in the power of computers, they are still operating at about one thousandth of the power of a human brain.

The thing is, exponential is the key word here.  Anyone who has ever looked at an exponential growth curve plotted on a graph knows that for a long time the line runs fairly flat, but with the doubling effect that comes with exponential growth, the curve eventually begins a very steep climb.  Many people believe that we’re now at the base of that sharp rise.  Henry Markram, a neuroscientist working at the Swiss Federal Institute of Technology in Lausanne, thinks he will be able to successfully model the human brain by 2020.
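Since the argument turns entirely on the shape of a doubling curve, here is a minimal sketch in Python (with a starting value and number of periods chosen purely for illustration) of why such a curve looks flat for a long while and then climbs very steeply.

```python
# A minimal illustration of the doubling effect described above: starting from
# a small base, repeated doubling looks nearly flat for many periods, then the
# curve turns sharply upward.  The start value and period count are arbitrary.

def doubling_curve(start: float, periods: int) -> list[float]:
    """Return the value at each period when the quantity doubles every period."""
    values = []
    current = start
    for _ in range(periods):
        values.append(current)
        current *= 2
    return values

if __name__ == "__main__":
    curve = doubling_curve(start=1.0, periods=20)
    peak = curve[-1]
    for period, value in enumerate(curve):
        # Crude text plot: bar length is proportional to the value,
        # scaled so that the final value spans the full width.
        bar = "#" * max(1, int(60 * value / peak))
        print(f"period {period:2d}: {value:>8.0f}  {bar}")
```

Run it and the first dozen bars are barely distinguishable; only in the last few periods does the curve shoot upward, which is the point being made about where computing power may now sit.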

He may or may not be right in that prediction, but again, if he is wrong, it won’t be about if, only when.  And when we combine that eventual reality with Lanier’s telling Kodak-to-Instagram employment factoid above, there appear to be grounds for genuine concern.  Finally, after many years of dire (or celebratory) predictions, labor may be about to go into real oversupply.  If these ideas are at all accurate, robots will soon be displacing human employment just about everywhere you look.  Accountants, teachers, architects, the last of the assembly-line workers, even writers; we’re all vulnerable.  As Drum sees it, capital, not labor, will be the commodity in short supply in the near future, and that bodes well only for those folks who already have plenty of capital.

One of the conditions that follows from an oversupply of labor and an increased demand for capital is of course that wealth will flow from those earning salaries to those holding the capital.  The proverbial rich will get richer, the poor poorer.  And this condition is of course extant and growing, especially in the U.S.—an escalating income inequality between the 99 per cent and the 1 per cent.

History tells us that times of high unemployment are times dangerous to us all, often leading to unrest that in turn leads to illusory socio-economic solutions—communism, fascism, anti-immigration laws, etc.  What to do?  Well, anticipate the problem, first of all.  Our leaders need to make some contingency plans.

A tax on capital?  Not likely any time soon, certainly not in America.  But if indeed the coming economic reality is that more and more people will be without work, while a select few citizens will be ever more wealthy, the concept of ‘income redistribution’ needs to come into play, one way or another.  And orderly, democratic economic reform beats the hell out of rioting in the streets.

 

Oligarchs of the Internet

Steve Jobs had no use for philanthropy.  There is no record of him having made any charitable donations during his lifetime, despite his immense wealth.  Jobs never signed Bill Gates and Warren Buffett’s ‘billionaire’s pledge’ to give at least half of his fortune to charity, as have more than 100 other exceptionally wealthy individuals from around the globe.  He also condoned sweatshop conditions—for children—at Apple manufacturing sites in China.  Apple employs about 700,000 people via subcontractors, according to The New York Times, but almost none of them work in the U.S.  Steve had no problem with any of this.

(His wife, Laurene Powell Jobs, has a better record than Steve when it comes to giving back, having emerged from his shadow after his death to contribute actively to a number of worthy causes, especially education.)

Mark Zuckerberg did sign Buffett’s pledge, but it’s also the case that last year Facebook spent nearly $2.5 million lobbying in Washington against tougher privacy laws, and for immigration reform that would allow the employment of immigrant IT workers at lower wages.  Like Jobs, Zuckerberg is tax-averse, and Facebook actually succeeded in paying no taxes last year, despite profits of more than a billion dollars.

How about Google, the ‘Don’t be evil’ corporation?  Sergey Brin and his wife have been generous, particularly in giving to the battle against Parkinson’s disease (Brin carries a gene mutation linked to Parkinson’s), but a recent article in Wired magazine strikes a disturbing note.  The piece recounts how, in 2009, a low-life drug dealer named David Whitaker was looking for leniency from the US Food and Drug Administration after being busted for selling steroids and human growth hormones online from a base of operations in Mexico.  He told the FDA that he had marketed his sometimes-phony drugs in the US using Google AdWords, something supposedly expressly prohibited by Google’s policies.  All he had needed to do, it seems, was work directly with Google reps to tailor his website into something more ‘educational’—no ‘buy now’ buttons, no photos of drugs, that sort of thing—and, even though he made no attempt to conceal the true nature of his business, Google was happy to help.  The feds were initially skeptical, but set up Whitaker in a new, bogus operation as a sting, to test whether he was telling the truth.  This time his venture would include sales of RU-486, the abortion pill, which is normally taken only under the supervision of a medical doctor.

A few months later it was abundantly clear that Whitaker was indeed being truthful.  The feds took legal action, and in 2011, Google settled out of court, paying a $500 million fine.  A brief statement from the company admitted that, “With hindsight, we shouldn’t have allowed these ads on Google in the first place.”

Jay Gould

Great success brings great size, and with size comes a kind of corporate momentum that inevitably stresses sales over principles.  It’s not an across-the-board phenomenon—the post-CEO Bill Gates being the obvious exception—but it’s clear that many of today’s internet lords are not necessarily cut from cloth any different than were the notorious robber barons of the past, men like Jay Gould, who in 1869 attempted to corner the market in gold, hoping that a higher gold price would push up the price of wheat, prompting western farmers to sell, which would send breadstuffs shipping eastward and boost freight business for the Erie Railroad, which Gould was trying to take control of.  The ploy may have been complex, even ingenious, but it also brought Gould infamy, eventually forcing him out of an ownership position with the railroad.

Today’s web oligarchs enjoy much higher approval ratings than did their 19th century corporate predecessors.  It’s been reported that some members of Occupy Wall Street stopped to mourn at the impromptu memorial site set up outside the Apple Store in Manhattan, following Steve Jobs’ death.

We serfs of the imperial internet realm can do better.  Great product does not always mean a worthy producer, and the ends never justify the means.  We can demand more from the web-based corporations and corporation heads who profit so handsomely from our purchases and our use of their services.  We can, at the very least, expect generosity.


Luddites Unite!

‘Luddite’ has in recent years come to function as a generic pejorative, describing an unthinking, head-in-the-sand type afraid of all new forms of technology.  It’s an unfair use of the term.

‘Luddite’ originates with a short-lived (1811 to 1817) protest movement among British textile workers, men who went about, usually under cover of darkness, destroying the weaving frames then being introduced to newly emerging ‘factories.’  These were the early days of the industrial revolution, and the new manufacturing facilities being attacked were supplanting the cottage-based industry the Luddites were a part of, leading to widespread unemployment, and therefore genuine hardship.  It was an age long before the existence of any kind of social safety net, times when employers were free to hire children, and they did so, at reduced wages, since the new machines were much easier to operate, requiring little of the skill possessed by the adult artisans being left behind.  (Just as causative of the sufferings of these newly unemployed were the prolonged Napoleonic wars that the British government was incessantly engaged in back then, at great economic expense.)  My point being that the Luddites were not opposed to technology per se; they were simply striking back at machinery which was making their lives well and truly miserable.  People were literally starving.

The term ‘Neo-Luddite’ has emerged in our day, referring, in author Kirkpatrick Sale’s words, to “a leaderless movement of passive resistance to consumerism and the increasingly bizarre and frightening technologies of the Computer Age.”  The vast majority of the people involved in this modern-day movement eschew violence, counting among their members prominent academics like the late Theodore Roszak, and eminent men of letters like Wendell Berry.  Again, the movement is not anti-technology per se, only anti-certain-kinds-of-technology, that is technology which might be described as anti-community.

Back in the 1970s, I read, quite avidly I might add, Ivan Illich’s book Tools for Conviviality, which, from its very title, you might construe to be Neo-Luddite in its intent.  And you’d be largely right.  Illich condemned the use of machines like the very ones the British Luddites were smashing back in the 19th century—factory-based, single-purpose machines meant first of all for greater generation of the owner’s monetary profit.  The interesting thing is that Illich considered the then-ubiquitous pay phone a properly convivial tool.  Anyone could use it, as often or seldom as they chose, to their own end, and the phone facilitated communication between individuals, that is, community.

It’s not hard to see where all this is going.  Illich’s book was directly influential upon Lee Felsenstein, considered by many to be a father of the personal computer.  Felsenstein was a member of the legendary Homebrew Computer Club, which first met in Silicon Valley in 1975, spawning the founders of various microcomputer companies, including Steve Wozniak of Apple.  The original ethos espoused by members of the Club stressed peer-to-peer support, open-source information, and the autonomous operation of an individually owned machine.

Were he still alive, Ivan Illich would undoubtedly think of the personal computer and the smart phone as convivial tools.  But Illich had another concern associated with current technology—the rise of a managerial class of experts, people who were in a position to co-opt technical knowledge and expertise, and eventually control industries like medicine, agriculture and education.  Would the lords of the computer age—those who control Google, Facebook, Apple—be considered by Illich to be members of a new managerial elite?

It’s not easy to say.  I suspect Illich would indeed think of the CEOs of companies like Toyota, General Electric, and Royal Dutch Shell as members of a managerial elite, ultimately alienating workers from their own employment.  But what of Larry Page, the CEO of Google, whose company claims “Don’t be evil” as its motto; is he too one of the new internet overlords?

The Neo-Luddites are right in saying that what we must all do is carefully discriminate among new forms of technology.  We must consider the control, the intent, the final gain associated with each type.  Convivial technology adds to our independence, as well as our efficiency.  It informs and empowers the user, not an alternate owner, nor the cloud-based controller of the medium.  If we could all make the distinction between convivial and non-convivial technology, it might make Luddites of us all.

 

Surfing the Information Sea

Not long ago I was online looking for a pair of waterproof sandals.  My family was heading off to Costa Rica for a wedding the following week, and I’d read that the hiking trails through many of the national parks in that tropical country could get wet and muddy.  I didn’t buy or order anything in the end, but, sure enough, whenever I was online during the next few days, there were ads staring back at me for just that kind of footwear.  God bless the folks at Google.

This is nothing terribly new.  The change began back in December of 2009, when Google, quietly, unceremoniously, began customizing our search results according to whatever information they can garner about us by tracking our online activities.  We’re talking everything here from our social media pursuits, to our political leanings, to our ‘window’ shopping.  It’s a whole new era where the giants of the online world—Google, Apple, Facebook, Microsoft—are all engaged in a furious race to gather as much data as they can on us, so that they can then sell it to advertisers.

They’ve created what MoveOn.org board president Eli Pariser calls a “filter bubble” around each of us, and the implications of this process are profound.  You’d like to think that there’s an element of objectivity involved when you search for any particular information online.  You’re looking to discover the best electric lawnmower, or the dates of the Wars of the Roses, or how to make yogurt at home, and you assume that Google will simply bring you unbiased info from the most popular or the most authoritative sites.  Don’t be too sure.  We are all increasingly living within our own unique information bubble, as determined by the Google algorithm, and that determination is made with money in mind.
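Google’s actual ranking machinery is of course proprietary, but a toy sketch can make the ‘filter bubble’ idea concrete.  Everything below (the result list, the topic labels, the interest weights) is invented for illustration; the point is only that once results are re-scored against a profile built from tracked activity, two people typing the same query see different orderings.

```python
# A toy illustration of personalized re-ranking (not Google's actual algorithm).
# Each result carries a generic relevance score; results whose topics overlap
# with an interest profile inferred from tracked activity get boosted, so the
# final ordering differs from user to user.

from dataclasses import dataclass

@dataclass
class Result:
    url: str
    base_score: float   # generic relevance to the query
    topics: set[str]    # topics the page covers

def personalized_rank(results: list[Result], interests: dict[str, float]) -> list[Result]:
    """Order results by base relevance plus a boost for matching interests."""
    def score(r: Result) -> float:
        boost = sum(interests.get(topic, 0.0) for topic in r.topics)
        return r.base_score + boost
    return sorted(results, key=score, reverse=True)

if __name__ == "__main__":
    results = [
        Result("outdoor-gear.example/sandals", 0.70, {"shopping", "hiking"}),
        Result("encyclopedia.example/history-of-sandals", 0.75, {"history"}),
        Result("travel-blog.example/costa-rica-trails", 0.65, {"hiking", "travel"}),
    ]
    # Hypothetical interest profile built from tracked browsing.
    recent_shopper = {"shopping": 0.3, "hiking": 0.2}
    for r in personalized_rank(results, recent_shopper):
        print(r.url)
```

With the shopper’s profile, the retail page jumps to the top; with an empty profile, the encyclopedia entry would lead.  That, in miniature, is the bubble.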

It’s an unsettling prospect, especially when combined with what Nicholas Carr first suggested back in 2008 in an article in The Atlantic magazine—that Google is also making us stupid.  Carr feels that online reading has essentially become an ADD (Attention Deficit Disorder) experience, one where we skim, multitask, and skip from one information bit to another, all in a disorderly process very different from the “deep reading” we used to do in the past.  Feeling guilty?  I confess.  I think I may have developed an online reading technique which has me reading mostly just the opening sentence of each paragraph in whatever article or post I arrive at on the net.  ‘Surfing’ may be an old but still perfect descriptor of the process—staying up on the surface of the article, moving fast, perhaps enjoying the ride, but rarely stopping to ponder or examine what lies beneath.

Carr quite rightly points out that science has in recent times shown that our brain is not a static entity, even in adulthood.  It regularly rewires itself according to the stimuli or exercise we give it.  The likelihood, therefore, is that by reading online the way I do, I am training my brain to be skittish, incapable of sustained attention, and very easily bored.  The intellectual laziness made possible by the web means that my knowledge base remains shallow, if fairly diverse.

When the Internet first arrived in our lives, it seemed too good to be true, and I guess it was.  At least it was too good to last.  A medium that originally seemed entirely open and accessible, free of central control, there to simply accommodate the free flow of ideas and information, has since those heady days steadily closed in upon itself.  And it has done so under the sustained pressure of commercial capitalism, with all its many rewards and penalties.  Alas, just as in the days before cyberspace, it seems there is no free surfboard ride.  As we skim the surface of the information sea, like the surfer riding toward both rocks and sand, we should keep our heads up.  Maybe we should even consider jumping off, into the deeper water, before we hit the shore.


MOOC Learning

Academia is said to be one of the societal institutions that has best resisted the changes that have come with the e-revolution.  MOOCs may be changing all that.  MOOC is the acronym for Massive Open Online Course—a post-secondary course in which anyone with an internet connection can sign up, complete the coursework online, and receive feedback and ultimately recognition for finishing the course.  Class sizes can be huge, over 150,000 in some instances, and a MOOC is sometimes run alongside a regular bums-in-seats-in-the-lecture-hall class.

MOOCs (pronounce it the same way a cow pronounces everything, then add the hard ‘C’) made their first appearance in 2008, and institutions as venerable as Stanford and Harvard have since introduced them.  Why not, since, like so many web-based operations, they’re cheap to set up and ready-made for promotion.  Fees are usually not charged to MOOC students, but I suspect that will soon begin to change.

But think of it.  You too can enroll in a course from Harvard, presented by an eminent Harvard professor, if only virtually.  It’s more of the greater democratization so often brought about by the internet.  All good thus far.

As is so frequently the case with digital innovation, however, the picture is not a straightforward one.   There is little genuine accreditation that comes with completion of a MOOC.  You may receive some sort of ‘certificate of completion’ for a single course, but there’s no degree forthcoming from passing a set number of MOOCs.  Sorry folks; no Harvard degree available via your laptop set upon the kitchen table.

The attrition rate is also high for MOOCs.  Many students who have eagerly signed up find it difficult to stay with and succeed at an online course from the unstructured isolation of the kitchen table.  The potential for cheating is another obvious issue.

Back to the upside for a moment though.  With MOOCs, learners are engaged in an interactive form of schooling which, research tells us, is considerably better than the traditional bums-in-seats model.  MOOCs are typically constructed via modules, shorter lessons which require the passing of a concluding quiz to demonstrate that the student has grasped the modular content and is thus ready to move on.  If not, then the material is easily reviewed, and the quiz retaken.  It’s a form of individualized learning which has obvious advantages over the scenario where a student, having failed to comprehend the message being delivered orally by the sage professor at her lectern, is obliged to raise his hand and make his failure known to the entire student assemblage.
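For what it’s worth, the module-then-quiz gating just described is simple enough to sketch in a few lines of Python.  The module names, the passing threshold and the simulated quiz below are all hypothetical; the sketch only shows the ‘pass the quiz before moving on, otherwise review and retake’ loop.

```python
# A minimal sketch of module/quiz gating: a learner cannot advance to the next
# module until the concluding quiz is passed; a failed quiz simply sends them
# back to review and retake it.  Names and threshold are hypothetical.

from typing import Callable

PASS_MARK = 0.8  # assumed passing threshold (fraction of questions correct)

def run_course(modules: list[str], take_quiz: Callable[[str], float]) -> None:
    for module in modules:
        attempts = 0
        while True:
            attempts += 1
            score = take_quiz(module)  # returns fraction correct, 0.0 to 1.0
            if score >= PASS_MARK:
                print(f"Passed '{module}' with {score:.0%} on attempt {attempts}")
                break  # gate opens: on to the next module
            print(f"Scored {score:.0%} on '{module}': review the material and retake the quiz")

if __name__ == "__main__":
    # Simulated learner whose quiz score improves with each attempt.
    attempt_history: dict[str, int] = {}
    def simulated_quiz(module: str) -> float:
        attempt_history[module] = attempt_history.get(module, 0) + 1
        return min(1.0, 0.5 + 0.2 * attempt_history[module])

    run_course(["Module 1: Basics", "Module 2: Applications"], simulated_quiz)
```

The retry loop, not the code, is the pedagogical point: the failure is private, and the review happens at the student’s own pace rather than in front of the whole lecture hall.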

One of the most interesting aspects of the MOOC phenomenon emerged with one of the early Stanford MOOCs, when the regular, in-class students began staying away from lectures, preferring to do the online modules along with their MOOC brethren.

But online learning is also, as we all know, hardly restricted to delivery by august institutions of higher formal education.  Anyone who has ever typed “How to…” in the Youtube search box knows that much.  The Khan Academy, started by MIT and Harvard grad Salman Khan in 2006, now has more than 3600 mini-lessons available via Youtube.  A website like Skillshare offers lessons on everything from how to make better meatballs, to how to create an Android app.  At Skillshare you can sign up as either a teacher or a student, although as a teacher your course proposal must be vetted by the Skillshare powers-that-be.  Nevertheless, Skillshare courses are a bargain.  For one course I recently looked at, the fee was just $15 for the first 250 students to sign up.

But here’s the real kicker from a Skillshare course on how to become a Skillshare teacher.  The course is presented in just two video modules over three weeks, including office hours, and the instructor advises that you’ll need to set aside an hour per week to complete the class.  “By the end of this workshop,” gushes the young woman offering this golden opportunity, “you will be able to teach an excellent class.”  Well, to employ a pre-revolutionary term, what utter codswallop.  No one, neither Gandhi nor Einstein, should be guaranteed the ability to teach an excellent class after taking a part-time, three-week workshop.  With the internet, especially when it comes to start-ups, you’ll always want to watch your wallet.

The most significant downside to online learning is of course that it lends itself far better to certain kinds of subject matter than it does to others.  It works best with subjects where there is one and only one right answer.   That or a very defined skill, say the proper way to prune a fruit tree.  Any subject where individual interpretation, subtle analysis, critique, or indeed genuine creativity is required is not so easily adapted to a MOOC template.  Whether a computer will ever write something so sublime as King Lear is one thing; whether a computer will ever be able to legitimately grade hundreds of essays on the same work is another.

Quite simply, and despite all those from C. P. Snow on down who have argued so persuasively for the melding of arts and sciences, there are certain studies—those where the goal is insight into and appreciation of the ineffable—that will never lend themselves well to the MOOC model.  Praise be.