Dark Matter

“The internet as we once knew it is officially dead.” —Ronald Deibert, in Black Code

Although born of the military (see Origins, from the archives of this blog), the internet in its infancy was seen as a force for democracy, transparency and the empowerment of individual citizens. The whole open-source, ‘information wants to be free’ advocacy ethos emerged, and many optimistically saw it as heralding a new age of increased ‘bottom-up’ power.

Mike Licht photo

And to a considerable extent this has proven to be the case. Political and economic authority has been undermined, greater public transparency has been achieved, and activist groups everywhere have found it easier to organize and exert influence. In more recent years, however, the dark, countervailing side of the internet has also become increasingly apparent, and all of us should be aware of its presence, and perhaps we should all be afraid.

Certainly Ronald Deibert’s 2013 book Black Code: Inside the Battle for Cyberspace should be required reading for anyone who still thinks the internet is a safe and free environment in which to privately gather information, exchange ideas, and find community. Deibert is Director of the Citizen Lab at the Munk School of Global Affairs, University of Toronto, and in that role he has had ample opportunity to peer into the frightening world of what he terms the “cyber-security industrial complex.” In an economy still operating under the shadow of the Great Recession, this complex is a growth industry now estimated to be worth as much as $150 billion annually.

It consists of firms like UK-based Gamma International, Endgame, headquartered in Atlanta, and Stockholm-based Ericsson. What these companies offer are software products that bypass nearly all existing anti-virus systems in order to:

  • Monitor and record your emails, chats and IP communications, including Skype, once thought to be the most secure form of online communication.
  • Extract files from your hard drive and send them to the owners of the product, without you ever knowing it’s happened.
  • Activate the microphone or camera in your computer for surveillance of the room your computer sits in.
  • Pinpoint the geographic location of your wireless device.

These products can do all this and more, and they can do it in real time. Other software packages offered for sale by these companies will monitor social media networks on a massive scale. As reported by the London Review of Books, one such company, ThorpeGlen, recently mined a week’s worth of call data from 50 million mobile-phone users in Indonesia, as a kind of sales demo of its services.

The clients for these companies include, not surprisingly, oppressive regimes in countries like China, Iran and Egypt. And to offer some sense of why this market is so lucrative, The Wall Street Journal reported that a security hacking package was offered for sale in Egypt by Gamma for $559,279 US. Apparently the system also comes with a training staff of four.

Some of these services would be illegal if employed within Canada, but, for instance, if you are an Iranian émigré living in Canada who is active in opposition to the current Iranian regime, this legal restriction is of very little comfort. Those people interested in whom you’re corresponding with do not reside in Canada.

And even in countries like the US and Canada, as Edward Snowden has shown us, the national security agencies are not to be trusted to steer clear of our personal affairs. As Michael Hayden, former Director of the CIA, told documentary filmmaker Alex Gibney, “We steal secrets,” and none of us should be naïve enough to believe that the CIA, if they should have even the remotest interest, won’t steal our personal secrets.

All of us have to get over our collective fear of terrorist attacks and push back on the invasion of our privacy currently underway on the web. The justification for this invasion simply isn’t there. You are about as likely to die in a terrorist attack as you are from a piano falling on your head.

Neither should any of us assume that, as we have ‘done nothing wrong,’ we need not be concerned with the vulnerability to surveillance that exists for all the information about us stored online. Twenty years ago, if we had thought that any agency, government or private, was looking to secretly tap our phone line, we would have been outraged, and then demanded an end to it. That sort of intervention took a search warrant, justified in court. It should be no different on the web.

An Education?

The conference was titled, “The Next New World.” It took place last month in San Francisco, and was hosted by Thomas Friedman, columnist for The New York Times and author of The World Is Flat. Friedman has been writing about the digital revolution for years now, and his thinking on the matter is wide-ranging and incisive.

In his keynote address, Friedman describes “an inflection” that occurred coincident with the Great Recession of 2008—the technical transformations that began with the personal computer, continued with the internet, and are ongoing with smart phones and the cloud. Friedman is not the first to note that this transformation is the equivalent of what began in 1450 with the invention of the printing press, the so-called Gutenberg revolution. The difference is that the Gutenberg revolution took 200 years to sweep through society. The digital revolution has taken two decades.

Friedman and his co-speakers at the conference are right in articulating that today’s revolution has meant that there is a new social contract extant, one based not upon high wages for middle skills (think auto manufacturing or accounting), but upon high wages for high skills (think data analysis or mobile programming). Everything from driving cars to teaching children to milking cows has been overtaken by digital technology in the last 20 years, and so the average employee is now faced with a workplace where wages and benefits don’t flow from a commitment to steady long-term work, but where constant innovation is required for jobs that last an average of 4.6 years. As Friedman adds—tellingly I think—in today’s next new world, “no one cares what you know.” They care only about what you can do.

Friedman adds in his address that the real focus of the discussions at the conference can be distilled into two questions: “How [in this new world] does my kid get a job?” and, “How does our local school or university need to adapt?”

All well and good. Everyone has to eat, never mind grow a career or pay a mortgage. What bothers me however, in all these worthwhile discussions, is the underlying assumption that the education happening at schools and universities should essentially equate to job training. I’ve checked the Oxford; nowhere does that esteemed dictionary define education as training for a job. The closest it comes is to say that education can be training “in a particular subject,” not a skill.

I would contend that what a young person knows, as opposed to what they can do, should matter to an employer. What’s more, I think it should matter to all of us. Here’s a definitional point for education from the Oxford that I was delighted to see: “an enlightening experience.”

A better world requires a better educated populace, especially women. For the human race to progress (perhaps survive), more people need to understand the lessons of history. More people have to know how to think rationally, act responsibly, and honour compassion, courage and commitment. None of that necessarily comes with job training for a data analyst or mobile programmer.

And maybe, if the range of jobs available out there is narrowing to ever more specific, highly technical, high-skill work, applicable to an ever narrower set of industries, then that set of industries should be taking on a greater role in instituting the needed training regimes. Maybe as an addendum to what can be more properly termed ‘an education.’

I’m sure that Friedman and his conference colleagues would not disagree with the value of an education that stresses knowledge, not skills. And yes, universities have become too elitist and expensive everywhere, especially in America. But my daughter attends Quest University in Squamish, British Columbia, where, in addition to studying mathematics and biology, she is obliged to take courses in Rhetoric, Democracy and Justice, and Global Perspectives.

Not exactly the stuff likely to land her a job in Silicon Valley, you might say, and I would have to reluctantly agree. Certainly those courses will make her a better citizen, something the world is in dire need of, but I would also argue that a degree in “Liberal Arts and Sciences” in fact better qualifies her for that job, because those courses will teach her how to better formulate an argument, better understand the empowerment (and therefore the greater job satisfaction) that comes with the democratic process, and better appreciate the global implications of practically all we do workwise these days.

Damn tootin’ that education in liberal arts and sciences better qualifies her for that job in Silicon Valley. That and every other job out there.

The Age of Surveillance

“Today’s world would have disturbed and astonished George Orwell.” —David Lyon, Director, Surveillance Studies Centre, Queen’s University

When Orwell wrote 1984, he imagined a world where pervasive surveillance was visual, achieved by camera. Today’s surveillance is of course much more about gathering information, but it is every bit as all-encompassing as that depicted by Orwell in his dystopian novel. Whereas individual monitoring in 1984 was at the behest of a superstate personified as ‘Big Brother,’ today’s omnipresent watching comes via an unholy alliance of business and the state.

Most of it occurs when we are online. In 2011, Max Schrems, an Austrian studying law in Silicon Valley, asked Facebook to send him all the data the company had collected on him. (Facebook was by no means keen to meet his request; as a European, Schrems was able to take advantage of the fact that Facebook’s European headquarters are in Dublin, and Ireland has far stricter privacy laws than we have on this side of the Atlantic.) He was shocked to receive a CD containing a PDF of more than 1,200 pages. The information tracked every login, chat message, ‘poke’ and post Schrems had ever made on Facebook, including those he had deleted. Additionally, a map showed the precise locations of all the photos tagging Schrems that a friend had posted from her iPhone while they were on vacation together.

Facebook accumulates this dossier of information in order to sell your digital persona to advertisers, as do Google, Skype, YouTube, Yahoo! and just about every other major corporate entity operating online. If ever there was a time when we wondered how and if the web would become monetized, we now know the answer. The web is an advertising medium, just as television and radio are; it’s just that the advertising is ‘targeted’ at you via a comprehensive individual profile that these companies have collected and happily offered to their advertising clients, in exchange for their money.

How did our governments become involved? Well, the 9/11 terrorist attacks kicked off their participation most definitively. Those horrific events provided the rationale for governments everywhere to begin monitoring online communication, and to pass laws making it legal wherever necessary. And now it seems they routinely ask the Googles and Facebooks of the world to hand over the information they’re interested in, and the Googles and Facebooks comply, without ever telling us they have. In one infamous instance, Yahoo! complied with a Chinese government request to provide information on two dissidents, Wang Xiaoning and Shi Tao, and this complicity led directly to the imprisonment of both men. Sprint has actually automated a system to handle requests from government agencies for information, one that charges a fee, of course!

It’s all quite incredible, and we consent to it every time we toggle that “I agree” box under the “terms and conditions” of privacy policies we will never read. The terms of service you agree to on Skype, for instance, allow Skype to change those terms any time they wish to, without your notification or permission.

And here’s the real rub on today’s ‘culture of surveillance:’ we have no choice in the matter. Use of the internet is, for almost all of us, no longer a matter of socializing, or of seeking entertainment; it is where we work, where we carry out the myriad of tasks necessary to maintain the functioning of our daily life. The choice to not create an online profile that can then be sold by the corporations which happen to own the sites we operate within is about as realistic as is the choice to never leave home. Because here’s the other truly disturbing thing about surveillance in the coming days: it’s not going to remain within the digital domain.

Coming to a tree near you?
BlackyShimSham photo

In May of this year, Canadian federal authorities used facial recognition software to bust a phony passport scheme being operated out of Quebec and BC by organized crime figures. It seems Passport Canada has been using the software since 2009, but it’s only become truly effective in the last few years. It’s not at all difficult to imagine that further advances in this software will soon have security cameras everywhere able to recognize you wherever you go. Already such cameras can read your car’s license plate number as you speed over a bridge, enabling the toll to be sent to your residence, for payment at your convenience. Thousands of these cameras continue to be installed in urban, suburban and, yes, even rural areas every year.

Soon enough, evading surveillance will be nearly impossible, whether you’re online or walking in the woods. Big Brother meets Big Data.

What We Put Up With

The sky was new. It was a thick, uniform, misty grey, but I was told there were no clouds up there. I’d never seen this before, and was skeptical. How could this be? It was the humidity, I was told. It got like that around here on hot summer days.

The year was 1970; I was 17, part of a high school exchange program that had taken me and a fair number of my friends to the Trenton-Belleville area of southern Ontario. We’d been squired about in buses for days, shuffling through various museums and historic sites, sometimes bored, sometimes behaving badly (my buddy Ken, blowing a spliff in the washroom cubicle at the back of the bus, would surely be considered bad form), sometimes, not often, left to our own devices. On this day we’d been driven to the sandy shores of Lake Ontario, where what was shockingly, appallingly new, much newer than the leaden sky, was out there in the shallow water.

Small signs were attached to stakes standing in the water, just offshore. They read, “Fish for Fun.”

I couldn’t believe it. How could this be allowed to happen? How could people put up with this? As a kid from a small town in northern Alberta, I’d never seen anything like it.

It was a kind of accelerated future shock, as if I had been suddenly propelled forward in time to a new, meta-industrialized world where this was the accepted reality. In this cowardly new world, lakes would be so polluted that eating fish caught in them was unsafe (at 17, I’d caught my share of fish, and always eaten them), and this was how people dealt with the problem. With a lame attempt at cheery acquiescence.

When I think about it, my 17-year-old self would have had a great deal of trouble believing many of the realities that we live with today. Setting aside all the literally incredible changes wrought by the digital revolution—where we walk around with tiny computers in our hands, able to instantly send and/or receive information from anywhere in the world—here are a few more mundane examples of contemporary realities that would have had me shaking my teenage head in utter disbelief:

  • Americans buy more than 200 bottles of water per person every year, spending more than $20 billion in the process.
  • People everywhere scoop up their dog’s excrement, deposit it into small plastic bags that they then carry with them to the nearest garbage receptacle. (Here’s a related—and very telling—factoid, first pointed out to me in a top-drawer piece by New York Times Columnist David Brooks: there are now more American homes with dogs than there are homes with children.)
  • On any given night in Canada, some 30,000 people are homeless. One in 50 of them is a child.

There are more examples I could give of current actualities my teen incarnation would scarcely have believed, but, to backtrack for a moment in the interests of fairness, pollution levels in Lake Ontario are in fact lower today than they were in 1970, although the lake can hardly be considered pristine. As the redoubtable Elizabeth May, head of Canada’s Green Party, points out in a recent statement, many of the worst environmental problems of the 70s have been effectively dealt with—toxic pesticides, acid rain, depletion of the ozone layer—but only because worthy activists like her fought long and hard for those solutions.

jronaldlee photo

The fact is that we are a remarkably adaptable species, able to adjust to all manner of hardships, injustice and environmental degradation, so long as those changes come about slowly, and we are given to believe there’s not much we as individuals can do about it. Never has the metaphor of the frog in the slowly heating pot of water been more apropos than it is to the prospect of man-made climate change, for instance.

It’s not the cataclysmic changes that are going to get us. It’s the incremental ones.


Where Have All the Dollars Gone

Sir Robert Borden addressing the troops, 1917
Biblioarchives

In March of this year, lawyers acting on behalf of the Canadian government asserted that the government has no special obligation to Afghan military veterans as a result of a pledge made by Prime Minister Robert Borden back in 1917, on the eve of The Battle of Vimy Ridge. Borden assured the soldiers then preparing for battle in France (more than 10,000 of them would be killed or wounded) that none of them would subsequently “have just cause to reproach the government for having broken faith with the men who won and the men who died.”

The March court filing came about as a result of Canadian soldiers earlier suing over the government’s new “veterans charter,” which changes the pension-for-life settlements provided to soldiers from previous wars to a system where Afghan veterans receive lump sum payments for non-economic losses, such as losing limbs. It’s not difficult for any of us to understand that this change is all about our government saving money.

Also in March of this year, the Vancouver School Board announced a budget shortfall of $18.2 million. Reluctantly, the Board is considering an array of cutbacks, including school closures, ending music programs, and keeping schools closed for the entire week surrounding Remembrance Day. My kids have now moved on past public schools, but I clearly remember, for all the years they were enrolled in Vancouver schools, a steady and discouraging series of annual cutbacks: librarians disappearing, field trips becoming rare events indeed, at one point even the announcement that the temperature in schools would be turned down.

I lack the expertise to offer any detailed economic analysis as to why our governments are these days unable to meet obligations to veterans and school children that they were able to meet in the past, but here’s one bit of crude economic breakdown that causes even greater wonder. The Gross Domestic Product per capita in Canada in 1960 averaged $12,931 US; in 2012 it averaged $35,992 US, adjusted for inflation. In other words, the country today produces nearly three times as much in the way of goods and services per citizen as it did back in 1960, presumably enabling the government to collect far more in the way of taxes, per person, than it could five-plus decades ago. And yet we can no longer support veterans and school children in the way we did back then.
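For anyone who wants to check that “nearly three times” figure, here is a minimal back-of-the-envelope sketch using only the two numbers quoted above; the figures themselves are as reported, and how they were inflation-adjusted is the source’s assumption, not mine:

```python
# Rough check of the GDP-per-capita comparison quoted above (US dollars, inflation adjusted).
gdp_per_capita_1960 = 12_931
gdp_per_capita_2012 = 35_992

ratio = gdp_per_capita_2012 / gdp_per_capita_1960
print(f"2012 output per person is {ratio:.2f} times the 1960 figure")  # ~2.78, i.e. nearly three times
```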

A large part of the explanation is of course that governments these days are addressing a whole lot of social concerns that they weren’t in 1960, never mind in 1917, everything from drug and alcohol treatment centres, to the parents of autistic children, to modern dance troupes. It may well be that this growing list of demands outstrips the three-times-bigger ratio of available government funds. It may even be enough for one to lament what Washington Post columnist Charles Krauthammer (an example of that rare beast, the genuinely thoughtful conservative pundit) calls “the ever-growing Leviathan state.” It may… or it may not.

One theory has it that, in the post-war decades, right up until the 70s, the Canadian economy was legitimately growing: more products, more services, more jobs. Since the 80s, any increase in, or even maintenance of, profit ratios (and thus disposable incomes) has come as the result of increased ‘efficiency’: fewer people producing more goods and services through better technology and less waste. (More cutbacks anyone?) If that’s true, then things are only going to get worse, as these finite efficiencies produce ever-diminishing returns.

Whatever the final explanation, it’s not likely a simple one, and whatever the economic answer, it’s not likely to be easily achieved. Too often a current government has only to promise fewer taxes for voters to flock in their direction, regardless of known scandal, evident mean-spiritedness, or obvious cronyism. We tend to assume that the ensuing government cutbacks won’t arrive at our door. And so long as they don’t we remain generally unrepentant for our self-centeredness. The moment they do—the moment an alcoholic, or autistic child, or modern dancer appears inside our door—our attitudes tend to shift.

Thus, as we stand witnessing a time of waning of western economic domination (see DEP, from the archives of this blog), it seems we can be sure of only one thing: it’s a matter of priorities. If school-age children and wounded veterans are not our priority, then who is?


Fear of Identity Erosion

A few weeks ago, I finally got around to watching Sound and Fury, the Academy Award-nominated documentary released in 2000 about two families struggling over whether their deaf children should receive cochlear implants. These tiny electronic devices are surgically implanted, and will usually improve hearing in deaf patients, but—it is feared by the families featured in Sound and Fury—this improvement will come at the expense of “deaf culture.”

The film is an absorbing exploration of what we mean by culture and identity, and how critically important these concepts are to us. Because here’s the thing—the parents of one of the children being considered for cochlear implants (who are themselves deaf) choose not to have the operation, even though their child has asked for it, and even though it will in all likelihood significantly improve their young daughter’s hearing.

Why? Because improved hearing will negatively affect their daughter’s inclusion in the deaf tribe. I use that word advisedly, because it seems that is what identification comes down to for nearly all of us—inclusion in a group, or tribe. We identify ourselves via gender, language, race, nation, occupation, family role, sexual orientation, etc.—ever more narrowed groupings—until we arrive at that final, fairly specific definition of who we are. And these labels are incredibly valued by us. We will fight wars over these divisions, enact discriminatory laws, and cleave families apart, all in order to preserve them.

And here’s the other point that the film makes abundantly clear: technology forces change. I’m told that American Sign Language (ASL) is the equivalent of any other, fully developed spoken language, even to the point where there are separate dialects within ASL. The anxiety felt by the parents of the deaf daughter about the loss of deaf culture is entirely justified—to the extent that cochlear implant technology could potentially eradicate ASL, and this language (like any other language) is currently a central component of deaf culture. With the steady advance of implant technology, the need for deaf children to learn ASL could steadily decrease, to the point where the language eventually atrophies and dies. And with it deaf culture?

Possibly, yes, at least in terms of how deaf culture is presently defined. To their credit, it seems that the parents featured in Sound and Fury eventually relented, granting their child the surgery, but they did so only after fierce and sustained resistance to the idea. And so it goes with ‘identity groupings.’ We are threatened by their erosion, and we will do all manner of irrational, at times selfish and destructive things to prevent that erosion.

My friend Rafi, in a recent and fascinating blog post, announced that this year, he and his family will mostly forego the Passover rituals which have for so long been a defining Jewish tradition. He writes that, after a sustained re-reading and contemplation of ‘The Haggadah,’ the text meant to be read aloud during the Passover celebrations, he found the message simply too cruel, too “constructed to promote fear and exclusion.” “I’m done with it,” he announces.

Well, at the risk of offending many Jewish people in many places, more power to him. He does a courageous and generous thing when he says no more “us and them,” no more segregation, no more division.

All cultures, all traditions can bring with them a wonderful richness—great music, food, dance, costumes, all of it. But they can also bring insecurity, antipathy and conflict, conflict which can often result directly in people suffering.

Everyone benefits from knowing who they are, where they came from culturally. But no one should fear revising traditions; no one should slavishly accept that all cultural practices or group identities must continue exactly as they are, and have been. Technology may force change upon you, but regardless, recognize that change, whatever its source, is relentless. Anyone who thinks they can preserve cultural traditions perfectly intact within that relentless context of change is fooling themselves. And neither should anyone think that all cultural traditions are worth preserving.

New identities are always possible. Acceptance and inclusion are the goals, not exclusion and fear. It takes time, careful thought, and sometimes courage, but every human being can arrive at a clear individual understanding of who they are and what is important to them. Choose traditions which welcome others and engender the greater good. Reject those which don’t. If you can do this, and I don’t mean to diminish the challenge involved, you’ll know who you are, and you’ll undoubtedly enjoy a rich cultural life.

Storytelling 3.0 – Part 2

We tend to forget—at least I do—that, in the history of storytelling, movies came before radio. By about 15 years. The first theatre devoted exclusively to showing motion picture entertainment opened in Pittsburgh in 1905. It was called The Nickelodeon. The name became generic, and by 1910, about 26 million Americans visited a nickelodeon every week. It was a veritable techno-entertainment explosion.

The thing is, anyone at all—if they could either buy or create the product—could rent a hall, then charge admission to see a movie. To this very day, you are free to do this.

When radio rolled around—about 1920—this arrangement was obviously not on. It’s a challenge to charge admission to a radio broadcast. In fact, the first radio broadcasts were intended to sell radios; this was their original economic raison d’être.

Sadly, it very quickly became illegal to broadcast without a government-granted license. (Oddly enough, the first licensed radio broadcast again originated from Pittsburgh.) And almost as quickly, sponsorship became a part of radio broadcasting. The price of admission was the passive audio receipt of an advertisement for a product or service.

An exhibit in the Henry Ford Museum, furnished as a 1930s living room, commemorating the radio broadcast by Orson Welles of H. G. Wells’ The War of the Worlds.
Maia C photo

Radio shows were much easier and cheaper to produce than movies, and they weren’t always communal in the way movies were; that is, they were not always a shared experience. (Although they could be—many a family sat around the radio in the middle part of the 20th century, engrossed in stories about Superman or The Black Museum.)

More importantly, as with book publishing, the gatekeepers were back with radio, and they were both public and private. No one could operate a radio station without a government license, and no one could gain access to a radio studio without permission from the station owner.

Then came television with the same deal in place, only more so. TV shows were more expensive to produce, but like radio, they lent themselves to a more private viewing, and access to the medium for storytellers was fully restricted, from the outset. As with radio, and until recently, TV was ‘free;’ the only charge was willing exposure to an interruptive ‘commercial.’

With the advent of each of these storytelling mediums, the experience has changed, for both storyteller and audience member. Live theatre has retained some of the immediate connection with an audience that began back in the caves (for my purposes, the storyteller in theatre is the playwright), and radio too has kept some of that immediacy, given that so much of it is still produced live. But the true face-to-face storytelling connection is gone with electronic media, and whenever the audience member is alone as opposed to in a group, the experience is qualitatively different. The kind of community that is engendered by electronic media—say fans of a particular TV show—is inevitably more isolated, more disparate than that spawned within a theatre.

The first commercial internet providers came into being in the late 1980s, and we have since lived through a revolution as profound as the Gutenberg revolution. Like reading, the internet consumer experience is almost always private, but like movies, the access to the medium is essentially unrestricted, for both storyteller and story receiver.

And that, in the end, is surprising and wonderful. Economics aside for a moment, I think it’s undeniably true that never, in all our history, has the storyteller been in a more favorable position than today.

What does this mean for you and me? Well, many things, but let me climb onto an advocacy box for a minute to stress what I think is the most significant benefit for all of us. Anyone can now be a storyteller, in the true sense of the word, that is, a person with a story to tell and an audience set to receive it. For today’s storyteller, because of the internet, the world is your oyster, ready to shuck.

Everyone has a story to tell, that much is certain. If you’ve been alive long enough to gain control of grunt and gesture, you have a story to tell. If you have learned to set down words, you’re good to go on the internet. And I’m suggesting that all of us should tell ours. Specifically, what I’m advocating is that you write a blog, a real, regular blog like this one, or something as marvelously simple as my friend Rafi’s. Sure, tweeting or updating your Facebook page is mini-blogging, but no, you can do better than that.

Start a real blog—lots of sites offer free hosting—then keep it up. Tell the stories of your life, past and present; tell them for yourself, your family, your friends. Your family for one will be grateful, later if not right away. If you gain an audience beyond yourself, your family and friends, great, but it doesn’t matter a hoot. Blog because you now can; it’s free and essentially forever. Celebrate the nature of the new storytelling medium by telling a story, your story.

Storytelling 3.0 – Part 1

Leighton Cooke photo

With apologies to Marshall McLuhan, when it comes to story, the medium is not the message. Yet the medium certainly affects reception of the message. As I’ve written earlier in this blog, storytelling began even before we had language. Back in our species’ days in caves, whether it was the events of the day’s hunt, or what was to be discovered beyond the distant mountain, I’m quite certain our ancestors told stories to one another with grunt and gesture.

Once we began to label things with actual words, oral language developed rapidly and disparately, into many languages. The medium was as dynamic as it’s ever been. It was immediate, face-to-face, and personal. Stories became ways in which we could explain things, like how we got here, or why life was so arbitrary, or what the bleep that big bright orb was which sometimes rose in the night sky and sometimes didn’t. Or stories became a way in which we could scare our children away from wandering into the forest alone and getting either lost or eaten.

Then, some 48 centuries back, in Egypt, it occurred to some bright soul that words could be represented by symbols. Hieroglyphics—among the earliest writing systems—appeared. The art of communication has never been the same. The great oral tradition of storytelling began to wane, superseded by written language, a medium that is both more rigid and more exclusive. To learn to read and understand, as opposed to listen and understand, was more arduous, difficult enough that it had to be taught, and then not until a child was old enough to grasp the meaning and system behind written words.

It was not until about 1000 BC that the Phoenicians developed a more phonetic alphabet, which in turn became the basis for the Greek, Hebrew and Aramaic alphabets, and thus the alphabet I use to type this word. The Phoenician alphabet was wildly successful, spreading quickly into Africa and Europe, in part because the Phoenicians were so adept at sailing and trading all around the Mediterranean Sea. More importantly though, it was successful because it was much more easily learned, and it could be adapted to different languages.

We are talking a revolutionary change here. Prior to this time, written language was, to echo Steven Fischer in A History of Writing, an instrument of power used by the ruling class to control access to information. The larger population had been, for some 38 centuries—and to employ a modern term—illiterate, and thus royalty and the priesthood had been able to communicate secretively and exclusively among themselves, to their great advantage. It’s not hard to imagine how the common folk back then must have at times regarded written language as nearly magical, as comprised of mysterious symbols imbued with supernatural powers.

We are arriving at the nub of it now, aren’t we? Every medium of communication, whether it be used for telling stories or not, brings people together, but some media do it better than others. Stories build communities, and this is a point not lost on writers as divergent as Joseph Conrad and Rebecca Solnit. In his luminous Preface to The Nigger of the ‘Narcissus’, published in 1897, Conrad writes that the novelist speaks to “the subtle but invincible conviction of solidarity that knits together the loneliness of innumerable hearts.” For a story to succeed, we must identify with the characters in it, and Solnit writes in 2013, in The Faraway Nearby, that by identification we mean that “I extend solidarity to you, and who and what you identify with builds your own identity.”

Stories are powerful vehicles, with profound potential benefits for humanity. But they can also bring evil. As Solnit has also written, stories can be used “to justify taking lives, even our own, by violence or by numbness and the failure to live.” The Nazis had a story to tell, all about why life was difficult, and who was to blame, and how we might make life better.

The content of the story matters; the intent of the storyteller matters. And the medium by which the story is told has its effect. As storytelling media have evolved through time, the story is received differently, by different people. Sometimes that’s a good thing; sometimes it isn’t.

To be continued…

Facetime

Last month the city of Nelson, BC, said no to drive-thrus. There’s only one in the town anyway, but city councillors voted to prevent any more appearing. Councillor Deb Kozak described it as “a very Nelson” thing to do.

Nelson may be slightly off the mean when it comes to small towns—many a draft dodger settled there back in the Vietnam War era, and pot-growing allowed Nelson to better weather the downturn of the forest industry that occurred back in the 80s—but at the same time, dumping on drive-thrus is something that could only happen in a smaller urban centre.

The move is in support of controlling carbon pollution, of course: no more idling cars lined up down the block (Hello, Fort McMurray?!). But what I like about it is that the new by-law obliges people to get out of their cars, to enjoy a little facetime with another human being, instead of leaning out their car window, shouting into a tinny speaker mounted in a plastic sign.

For all the degree of change being generated by the digital revolution, and for all the noise I’ve made about that change in this blog, there are two revolutions of recent decades that have probably had greater effect: the revolution in settlement patterns that we call urbanization, and the revolution in economic scale that we call globalization. Both are probably more evident in smaller cities and towns than anywhere else.

Grain elevators, Milestone, Saskatchewan, about 1928

Both of my parents grew up in truly small prairie towns; my mother in Gilbert Plains, Manitoba, present population about 750; my father in Sedgewick, Alberta, present population about 850. Sedgewick’s population has dropped some 4% in recent years, despite a concurrent overall growth rate in Alberta of some 20%. Both these towns were among the hundreds arranged across the Canadian prairies, marked off by rust-coloured grain elevators rising above the horizon, set roughly every seven miles along the rail lines, a distance chosen because half that far was gauged doable by horse and wagon for all the surrounding farmers.

I grew up in Grande Prairie, Alberta, a town which officially became a city while I still lived there. The three blocks of Main Street that I knew were anchored at one end by the Co-op Store, where all the farmers shopped, and at the other by the pool hall, where all the young assholes like me hung out. In between were Lilge Hardware, operated by the Lilge brothers, Wilf and Clem, Joe’s Corner Coffee Shop, and Ludbrooks, which offered “variety” as “the spice of life,” and where we as kids would shop for board games, after saving our allowance money for months at a time.

Grande Prairie is virtually unrecognizable to me now; that is, it looks much like every other small and large city across the continent: the same ‘big box’ stores surround it as surround Prince George, Regina, and Billings, Montana, I’m willing to bet. Instead of Lilge Hardware, Joe’s Corner Coffee Shop and Ludbrooks we have Walmart, Starbucks and Costco. This is what globalization looks like, when it arrives in your own backyard.

80% of Canadians live in urban centres now, as opposed to less than 30% at the beginning of the 20th century. And those urban centres now look pretty much the same wherever you go, once the geography is removed. It’s a degree of change that snuck up on us far more stealthily than has the digital revolution, with its dizzying pace, but it’s a no less disruptive transformation.

I couldn’t wait to get out of Grande Prairie when I was a teenager. The big city beckoned with diversity, anonymity, and vigour. Maybe if I was young in Grande Prairie now I wouldn’t feel the same need, given that I could now access anything there that I could in the big city. A good thing? Bad thing?

There’s no saying. Certain opportunities still exist only in the truly big centres of course, cities like Tokyo, New York or London. If you want to make movies it’s still true that you better get yourself to Los Angeles. But they’re not about to ban drive-thrus in Los Angeles. And that’s too bad.

Handprints in the Digital Cave

There are now more than 150 million blogs on the internet. 150 million! That’s as if every second American were writing a blog; or, by the same imaginary measure, as if every single Russian were blogging, plus about another seven million people besides.
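A quick back-of-the-envelope check of that comparison; the population figures below are my own rough assumptions (circa 2013), while the blog count is the 150 million quoted above:

```python
# Rough sanity check of the "every second American / every Russian plus seven million" comparison.
blogs = 150_000_000
us_population = 316_000_000      # assumption: roughly 316 million Americans, circa 2013
russia_population = 143_000_000  # assumption: roughly 143 million Russians, circa 2013

print(f"Blogs per American: {blogs / us_population:.2f}")                          # ~0.47, about every second American
print(f"Blogs beyond one per Russian: {(blogs - russia_population) / 1e6:.0f} million")  # ~7 million left over
```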

The explosion seems to have come back in 2003, when, according to Technorati, there were just 100,000 “web-logs.” Six months later there were a million. A year later there were more than four million. And on it has gone. Today, according to Blogging.com, more than half a million new blog posts go up every day.

doozle photo

Why do bloggers blog? Well, it’s not for the money. I’ve written on numerous occasions in this blog about how the digital revolution has undermined the monetization of all manner of modern practices, whether it be medicine, music or car mechanics. And writing, as we all know, is no different. Over the last year or so, I slightly revised several of my blog posts to submit them to Digital Journal, a Toronto-based online news service which prides itself on being “a pioneer” in revenue-sharing with its contributors. I’ve submitted six articles thus far and seen them all published. My earnings to date: $4.14.

It ain’t a living. In fact, Blogging.com tells us that only eight per cent of bloggers make enough from their blogs to feed their family, and that more than 80% of bloggers never make as much as $100 from their blogging.

Lawrence Lessig, Harvard Law Professor and regular blogger since 2002, writes in his book Remix: Making Art and Commerce Thrive in a Hybrid Economy that, “much of the time, I have no idea why I [blog].” He goes on to suggest that, when he blogs, it has to do with an “RW” (Read/Write) ethic made possible by the internet, as opposed to the “RO” (Read Only) media ethic predating the internet. For Lessig, the introduction of the capacity to ‘comment’ was a critical juncture in the historical development of blogs, enabling an exchange between bloggers and their blog readers that, to this day, Lessig finds both captivating and “insanely difficult.”

I’d agree with Lessig that the interactive nature of blog writing is new and important and critical to the growth of blogging, but I’d also narrow the rationale down some. The final click in posting to a blog comes atop the ‘publish’ button. Now some may view that term as slightly pretentious, even a bit of braggadocio, but here’s the thing. It isn’t. That act of posting is very much an act of publishing, now that we live in a digital age. That post goes public, globally so, and likely forever. How often could that be said about a bit of writing ‘published’ in the traditional sense, on paper?

Sure, that post is delivered into a sea of online content where it will likely be swamped, immediately and unread, but nevertheless that post now has a potential readership of billions, and its existence is essentially permanent. If that isn’t publishing, I don’t know what is.

I really don’t care much if anyone reads my blog. As many of my friends and family members like to remind me, I suck at promoting my blog, and that’s because, like too many writers, I find the act of self-promotion uncomfortable. Neither do I expect to ever make any amount of money from this blog. I blog as a creative outlet, and in order to press my blackened hand against the wall of the digital cave. And I take comfort in knowing that the chances of my handprint surviving through the ages are far greater than those of our ancestors who had to employ an actual cave wall, gritty and very soon again enveloped in darkness.

I suspect that there are now more people writing—and a good many of them writing well, if not brilliantly—than at any time in our history. And that is because of the opportunity to publish on the web. No more hidebound gatekeepers to circumvent. No more expensive and difficult distribution systems to navigate. Direct access to a broad audience, at basically no cost, and in a medium that in effect will never deteriorate.

More people writing—expressing themselves in a fully creative manner—than ever before. That’s a flipping wonderful thing.