Storytelling 3.0 – Part 2

We tend to forget—at least I do—that, in the history of storytelling, movies came before radio. By about 15 years. The first theatre devoted exclusively to showing motion picture entertainment opened in Pittsburgh in 1905. It was called The Nickelodeon. The name became generic, and by 1910, about 26 million Americans visited a nickelodeon every week. It was a veritable techno-entertainment explosion.

The thing is, anyone at all—if they could either buy or create the product—could rent a hall, then charge admission to see a movie. To this very day, you are free to do this.

When radio rolled around—about 1920—this arrangement was obviously not on. It’s a challenge to charge admission to a radio broadcast. In fact, the first radio broadcasts were intended to sell radios; this was their original economic raison d’être.

Sadly, very quickly it became illegal to broadcast without a government-granted license. (Oddly enough, the first licensed radio broadcast again originated from Pittsburgh.) And almost as quickly, sponsorship became a part of radio broadcasting. The price of admission was the passive audio receipt of an advertisement for a product or service.

An exhibit in the Henry Ford Museum, furnished as a 1930s living room, commemorating the radio broadcast by Orson Welles of H. G. Wells’ The War of the Worlds. Maia C photo

Radio shows were much easier and cheaper to produce than movies, and they weren’t always communal in the way movies were; that is, they were not always a shared experience. (Although they could be—many a family sat around the radio in the middle of the 20th century, engrossed in stories about Superman or The Black Museum.)

More importantly, as with book publishing, the gatekeepers were back with radio, and they were both public and private. No one could operate a radio station without a government license, and no one could gain access to a radio studio without permission from the station owner.

Then came television with the same deal in place, only more so. TV shows were more expensive to produce, but like radio, they lent themselves to a more private viewing, and access to the medium for storytellers was fully restricted, from the outset. As with radio, and until recently, TV was ‘free’; the only charge was willing exposure to an interruptive ‘commercial.’

With the advent of each of these storytelling media, the experience has changed, for both storyteller and audience member. Live theatre has retained some of the immediate connection with an audience that began back in the caves (for my purposes, the storyteller in theatre is the playwright), and radio too has kept some of that immediacy, given that so much of it is still produced live. But the true face-to-face storytelling connection is gone with electronic media, and whenever the audience member is alone as opposed to in a group, the experience is qualitatively different. The kind of community that is engendered by electronic media—say fans of a particular TV show—is inevitably more isolated, more disparate than that spawned within a theatre.

The first commercial internet providers came into being in the late 1980s, and we have since lived through a revolution as profound as Gutenberg’s. Like reading, the internet consumer experience is almost always private, but like movies, the access to the medium is essentially unrestricted, for both storyteller and story receiver.

And that, in the end, is surprising and wonderful. Economics aside for a moment, I think it’s undeniably true that never, in all our history, has the storyteller been in a more favorable position than today.

What does this mean for you and me? Well, many things, but let me climb onto an advocacy box for a minute to stress what I think is the most significant benefit for all of us. Anyone can now be a storyteller, in the true sense of the word, that is, a person with a story to tell and an audience set to receive it. For today’s storyteller, because of the internet, the world is your oyster, ready to shuck.

Everyone has a story to tell, that much is certain. If you’ve been alive long enough to gain control of grunt and gesture, you have a story to tell. If you have learned to set down words, you’re good to go on the internet. And I’m suggesting that all of us should. Specifically, what I’m advocating is that you write a blog: a real, regular blog like this one, or something as marvelously simple as my friend Rafi’s. Sure, tweeting or updating your Facebook page is mini-blogging, but no, you can do better than that.

Start a real blog—lots of sites offer free hosting—then keep it up. Tell the stories of your life, past and present; tell them for yourself, your family, your friends. Your family for one will be grateful, later if not right away. If you gain an audience beyond yourself, your family and friends, great, but it doesn’t matter a hoot. Blog because you now can; it’s free and essentially forever. Celebrate the nature of the new storytelling medium by telling a story, your story.

Storytelling 3.0 – Part 1

Leighton Cooke photo

With apologies to Marshall McLuhan, when it comes to story, the medium is not the message. Yet the medium certainly affects reception of the message. As I’ve written earlier in this blog, storytelling began even before we had language. Back in our species’ days in caves, whether it was the events of the day’s hunt, or what was to be discovered beyond the distant mountain, I’m quite certain our ancestors told stories to one another with grunt and gesture.

Once we began to label things with actual words, oral language developed rapidly and disparately, into many languages. The medium was as dynamic as it’s ever been. It was immediate, face-to-face, and personal. Stories became ways in which we could explain things, like how we got here, or why life was so arbitrary, or what the bleep that big bright orb was which sometimes rose in the night sky and sometimes didn’t. Or stories became a way in which we could scare our children away from wandering into the forest alone and getting either lost or eaten.

Then, somewhere back about 48 centuries ago, in Egypt, it occurred to some bright soul that words could be represented by symbols. Hieroglyphics—one of the first systems of writing—appeared. The art of communication has never been the same. The great oral tradition of storytelling began to wane, superseded by written language, a medium that is both more rigid and more exclusive. To learn to read and understand, as opposed to listen and understand, was more arduous, difficult enough that it had to be taught, and then not until a child was old enough to grasp the meaning and system behind written words.

It was not until about 1000 BC that the Phoenicians developed a more phonetic alphabet, which in turn became the basis for the Greek, Hebrew and Aramaic alphabets, and thus the alphabet I use to type this word. The Phoenician alphabet was wildly successful, spreading quickly into Africa and Europe, in part because the Phoenicians were so adept at sailing and trading all around the Mediterranean Sea. More importantly though, it was successful because it was much more easily learned, and it could be adapted to different languages.

We are talking a revolutionary change here. Prior to this time, written language was, to echo Steven Fischer in A History of Writing, an instrument of power used by the ruling class to control access to information. The larger population had been, for some 38 centuries—and to employ a modern term—illiterate, and thus royalty and the priesthood had been able to communicate secretively and exclusively among themselves, to their great advantage. It’s not hard to imagine how the common folk back then must have at times regarded written language as nearly magical, as comprised of mysterious symbols imbued with supernatural powers.

We are arriving at the nub of it now, aren’t we? Every medium of communication, whether it be used for telling stories or not, brings people together, but some media do it better than others. Stories build communities, and this is a point not lost on writers as divergent as Joseph Conrad and Rebecca Solnit. In his luminous Preface to The Nigger of the ‘Narcissus’, published in 1897, Conrad writes that the novelist speaks to “the subtle but invincible conviction of solidarity that knits together the loneliness of innumerable hearts.” For a story to succeed, we must identify with the characters in it, and Solnit writes in 2013, in The Faraway Nearby, that by identification we mean that “I extend solidarity to you, and who and what you identify with builds your own identity.”

Stories are powerful vehicles, with profound potential benefits for humanity. But they can also bring evil. As Solnit has also written, stories can be used “to justify taking lives, even our own, by violence or by numbness and the failure to live.” The Nazis had a story to tell, all about why life was difficult, and who was to blame, and how we might make life better.

The content of the story matters; the intent of the storyteller matters. And the medium by which the story is told has its effect. As storytelling media have evolved through time, the story is received differently, by different people. Sometimes that’s a good thing; sometimes it isn’t.

To be continued…

Facetime

Last month the city of Nelson, BC, said no to drive-thrus. There’s only one in the town anyway, but city councillors voted to prevent any more appearing. Councillor Deb Kozak described it as “a very Nelson” thing to do.

Nelson may be slightly off the mean when it comes to small towns—many a draft dodger settled there back in the Vietnam War era, and pot-growing allowed Nelson to better weather the downturn of the forest industry that occurred back in the 80s—but at the same time, dumping on drive-thrus is something that could only happen in a smaller urban centre.

The move is in support of controlling carbon pollution, of course: no more idling cars lined up down the block (hello, Fort McMurray?!). But what I like about it is that the new by-law obliges people to get out of their cars, to enjoy a little facetime with another human being, instead of leaning out their car window, shouting into a tinny speaker mounted in a plastic sign.

For all the change being generated by the digital revolution, and for all the noise I’ve made about that change in this blog, there are two revolutions of recent decades that have probably had greater effect: the revolution in settlement patterns that we call urbanization, and the revolution in economic scale that we call globalization. Both are probably more evident in smaller cities and towns than anywhere else.

Grain elevators, Milestone, Saskatchewan, about 1928

Both of my parents grew up in truly small prairie towns: my mother in Gilbert Plains, Manitoba, present population about 750; my father in Sedgewick, Alberta, present population about 850. Sedgewick’s population has dropped some 4% in recent years, despite a concurrent overall growth rate in Alberta of some 20%. Both these towns were among the hundreds arranged across the Canadian prairies, marked off by rust-coloured grain elevators rising above the horizon, set roughly every seven miles along the rail lines. That spacing was chosen because half that distance was gauged doable by horse and wagon for all the surrounding farmers.

I grew up in Grande Prairie, Alberta, a town which officially became a city while I still lived there. The three blocks of Main Street that I knew were anchored at one end by the Co-op Store, where all the farmers shopped, and at the other by the pool hall, where all the young assholes like me hung out. In between were Lilge Hardware, operated by the Lilge brothers, Wilf and Clem, Joe’s Corner Coffee Shop, and Ludbrooks, which offered “variety” as “the spice of life,” and where we as kids would shop for board games, after saving our allowance money for months at a time.

Grande Prairie is virtually unrecognizable to me now; that is, it looks much like every other small and large city across the continent: the same ‘big box’ stores surround it as surround Prince George, Regina and Billings, Montana, I’m willing to bet. Instead of Lilge Hardware, Joe’s Corner Coffee Shop and Ludbrooks, we have Walmart, Starbucks and Costco. This is what globalization looks like, when it arrives in your own backyard.

80% of Canadians live in urban centres now, as opposed to less than 30% at the beginning of the 20th century. And those urban centres now look pretty much the same wherever you go, once the geography is removed. It’s a degree of change that snuck up on us far more stealthily than has the digital revolution, with its dizzying pace, but it’s a no less disruptive transformation.

I couldn’t wait to get out of Grande Prairie when I was a teenager. The big city beckoned with diversity, anonymity, and vigour. Maybe if I was young in Grande Prairie now I wouldn’t feel the same need, given that I could now access anything there that I could in the big city. A good thing? Bad thing?

There’s no saying. Certain opportunities still exist only in the truly big centres of course, cities like Tokyo, New York or London. If you want to make movies it’s still true that you better get yourself to Los Angeles. But they’re not about to ban drive-thrus in Los Angeles. And that’s too bad.

Handprints in the Digital Cave

There are now more than 150 million blogs on the internet. 150 million! That’s as if every second American were writing a blog; or, by the same imaginary measure, as if every single Russian were blogging, plus about another seven million people.

The explosion seems to have come back in 2003, when, according to Technorati, there were just 100,000 “web-logs.” Six months later there were a million. A year later there were more than four million. And on it has gone. Today, according to Blogging.com, more than half a million new blog posts go up every day.

doozle photo

Why do bloggers blog? Well, it’s not for the money. I’ve written on numerous occasions in this blog about how the digital revolution has undermined the monetization of all manner of modern practices, whether it be medicine, music or car mechanics. And writing, as we all know, is no different. Over the last year or so, I slightly revised several of my blog posts to submit them to Digital Journal, a Toronto-based online news service which prides itself on being “a pioneer” in revenue-sharing with its contributors. I’ve submitted six articles thus far and seen them all published. My earnings to date: $4.14.

It ain’t a living. In fact, Blogging.com tells us that only eight per cent of bloggers make enough from their blogs to feed their family, and that more than 80% of bloggers never make as much as $100 from their blogging.

Lawrence Lessig, Harvard Law professor and regular blogger since 2002, writes in his book Remix: Making Art and Commerce Thrive in a Hybrid Economy that, “much of the time, I have no idea why I [blog].” He goes on to suggest that, when he blogs, it has to do with an “RW” (Read/Write) ethic made possible by the internet, as opposed to the “RO” (Read/Only) media ethic predating the internet. For Lessig, the introduction of the capacity to ‘comment’ was a critical juncture in the historical development of blogs, enabling an exchange between bloggers and their blog readers that, to this day, Lessig finds both captivating and “insanely difficult.”

I’d agree with Lessig that the interactive nature of blog writing is new and important and critical to the growth of blogging, but I’d also narrow the rationale down some. The final click in posting to a blog comes atop the ‘publish’ button. Now some may view that term as slightly pretentious, even a bit of braggadocio, but here’s the thing. It isn’t. That act of posting is very much an act of publishing, now that we live in a digital age. That post goes public, globally so, and likely forever. How often could that be said about a bit of writing ‘published’ in the traditional sense, on paper?

Sure, that post is delivered into a sea of online content where it is likely to be swamped immediately and go unread, but nevertheless that post now has a potential readership of billions, and its existence is essentially permanent. If that isn’t publishing, I don’t know what is.

I really don’t care much if anyone reads my blog. As many of my friends and family members like to remind me, I suck at promoting my blog, and that’s because, like too many writers, I find the act of self-promotion uncomfortable. Neither do I expect to ever make any amount of money from this blog. I blog as a creative outlet, and in order to press my blackened hand against the wall of the digital cave. And I take comfort in knowing that the chances of my handprint surviving through the ages are far greater than those of our ancestors who had to employ an actual cave wall, gritty and very soon again enveloped in darkness.

I suspect that there are now more people writing—and a good many of them writing well, if not brilliantly—than at any time in our history. And that is because of the opportunity to publish on the web. No more hidebound gatekeepers to circumvent. No more expensive and difficult distribution systems to navigate. Direct access to a broad audience, at basically no cost, and in a medium that in effect will never deteriorate.

More people writing—expressing themselves in a fully creative manner—than ever before. That’s a flipping wonderful thing.

Words

My own view on the ‘proper’ use of language is radical, though not so radical as some. I am told of a UBC professor who believes that, “If you used it, it’s a word.” I would amend that statement to read, “If you used it—and it was understood by the listener in the way you intended it to be understood—it’s a word.”

Rafel Miro photo

I’m employing the classic communication model here, where sender, message and receiver must all be present in order for communication to take place, and I do believe that clarity is the prime consideration when attempting to communicate with the written or spoken word. Honesty might be my second consideration, and all the niceties of language—the elements of style—would follow, a distant third.

Words are meant to communicate, and communication is meant to move you somehow, either intellectually or emotionally, depending upon the kind of writing or speaking being done. But nowhere should it be maintained that there is a proper way to communicate with words, that there is one and only one correct way to string words together.

And yet of course there is. We have the rules of grammar, and we have the dictionary. The dictionary tells us that there is one and only one correct way to spell a word, and the rules of grammar tell us that there is only one way to correctly construct sentences.

Well, to not put too fine a word upon it, hogwash. Shakespeare never had a dictionary or grammar text to refer to, and most of us would agree that no fellow has ever strung English words together better than he, and he invented some dillies (how about “fell swoop”?). It makes no more sense to say that there are rules to govern writing than it does to say there are rules to govern painting, or sculpture, or theatre. Writing is an art form like any other, and to impose rules upon it is an act of stultification.

I’m with Bill Bissett, subversive poet of deserved renown whose work can be found on his “offishul web site,” work like this pithy gem (from scars on th seehors):

IT USD 2 B

yu cud get sum toilet papr

nd a newspapr both 4

a dollr fiftee

 

now yu cant  

yu gotta make a chois 

Bissett points out in his essay why I write like ths that it was the invention of the printing press that precipitated the standardization of language:

previous to that era peopul wud spell th same words diffrentlee  evn in th same correspondens  chek th lettrs btween qween elizabeth first n sir waltr raleigh  different spellings  different tones  different emphasis  sound  all part uv th changing meenings  

Once again it seems it was technology determining change, change which in this case undoubtedly impoverished words as a creative tool.

It was the Victorians who truly imposed a final set of rules upon the English language—the first Oxford Dictionary appeared in 1884—and generically speaking, there has rarely been a more noxious bunch populating the earth.

The French have the Académie française, “official moderator of the French language,” there “to work, with all possible care and diligence, to give our language definite rules.” The Academy of course admits a few new words to the French language each year, mostly to replace odious English words that have crept into use in French, but again, it is hard to imagine a more officious and objectionable pomp of bureaucrats than these self-appointed jury members. (Did you catch me inventing “pomp,” and, more importantly, did you grasp my meaning?)

Language evolves, daily, as must any art if it is to remain an art. It must constantly be in search of the novel, for there is precious little else remaining when it comes to the recognition of art than that it be new. Those who would stand in opposition to this evolution stand with those charming Victorians who offered up as their sole necessary justification, “It’s not done.”

Yes, the too-indulgent use of words can be tedious and problematic (has anyone actually read Finnegans Wake?), but even more problematically tendentious are the language police manning the checkpoints in defense of a hopeless, conservative cause. If you want to say, “There is data to support my argument,” as opposed to “There are data…”, go ahead. Those who would condemn you for it are snobs, snobs with a fascist bent, and not the least deserving of the respect they seek. If you consider it a word, and you think it likely to be understood in the way you intend, go ahead, fire away, use it. Feel free.

Exponential End

Computers are now more than a million times faster than they were when the first hand calculator appeared back in the 1960s. (Jack Kilby, an engineer working at Texas Instruments, had invented the first integrated circuit in 1958.) This incredible, exponential increase was predicted by ‘Moore’s Law,’ first formulated in 1965: the number of transistors on an integrated circuit doubles approximately every two years.

Another way to state this Law (which is not a natural ‘law’ at all, but an observational prediction) is to say that each generation of transistors will be half the size of the last. This is obviously a finite process, with an end in sight.  Well, in our imaginations at least.
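Just to put rough numbers to that claim, here is a back-of-the-envelope sketch in Python. It assumes, purely for illustration, that the doubling clock starts at the Law’s 1965 formulation, ticks every two years, and that speed scales with transistor count; none of that is exact, but it shows how the arithmetic gets you to ‘a million times faster.’

# A rough, illustrative check of Moore's Law against the "million times faster" claim.
# Assumptions (mine, not exact figures): doubling starts in 1965, happens every two years,
# and speed scales roughly with transistor count.
start_year = 1965
doubling_period_years = 2

def predicted_multiplier(year):
    """Transistor-count (and, loosely, speed) multiplier predicted since start_year."""
    doublings = (year - start_year) / doubling_period_years
    return 2 ** doublings

for year in (1985, 2005, 2013):
    print(year, f"{predicted_multiplier(year):,.0f}x")
# 1985 -> about 1,000x; 2005 -> about 1,000,000x; 2013 -> about 16,000,000x

Twenty doublings is about forty years, which is how you get from the mid-1960s to ‘more than a million times faster’ today.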

The implications of this end are not so small. As we all know, rapidly evolving digital technology has hugely impacted nearly every sector of our economy, and with those changes has come disruptive social change, but also rapid economic growth. The two largest arenas of economic growth in the U.S. in recent years have been Wall Street and Silicon Valley; Wall Street has prospered on the manipulation of money, via computers, while Silicon Valley (silicon being the material of the wafer on which semiconductor chips are usually built) has prospered upon the growing ubiquity of computers themselves.

Intel has predicted that the end of this exponential innovation will come anywhere between 2013 and 2018. Moore’s Law itself predicts the end at 2020. Gordon Moore himself—he who formulated the Law—said in a 2005 interview that, “In terms of size [of transistors] you can see that we’re approaching the size of atoms, which is a fundamental barrier.” Well, in 2012 a team working at the University of New South Wales announced the development of the first working transistor consisting of a single atom. That sounds a lot like the end of the line.

In November of last year, a group of eminent semiconductor experts met in Washington to discuss the current state of semiconductor innovation, as well as its worrisome future. These men (alas, yes, all men) are worried about the future of semiconductor innovation because it seems that there are a number of basic ideas about how innovation can continue past the coming ‘end,’ but none of these ideas has emerged as more promising than the others, and any one of them is going to be very expensive. We’re talking a kind of paradigm shift, from microelectronics to nanoelectronics, and, as is often the case, the early stages of a fundamentally new technology are much more costly than the later stages, when the new technology has been scaled up.

And of course research dollars are more difficult to secure these days than they have been in the past. Thus the additional worry that the U.S., which has for decades led the world in digital innovation, is going to be eclipsed by countries like China and Korea that are now investing more in R&D than is the U.S. The 2013 budget sequestration cuts have, for instance, directly impacted certain university research budgets, causing programs to be cancelled and researchers to be laid off.

Bell Labs 1934

One of the ironies of the situation, for those of us who consider corporate monopoly to be abhorrent, became evident when a speaker at the conference mentioned working at Bell Labs back in the day when Ma Bell (AT&T) operated as a monopoly and funds at the Labs were virtually unlimited. Among the technologies originating at Bell Labs are the transistor, the laser, and the UNIX operating system.

It’s going to be interesting, because the need is not going away. The runaway train that is broadband appetite, for instance, is not slowing down; by 2015 it’s estimated that there will be 16 times as much video clamoring to get online as there is today.

It’s worth noting that predictions about Moore’s Law lasting only about another decade have been made for the last 30 years. And futurists like Ray Kurzweil and Bruce Sterling believe that exponential innovation will continue on past the end of its current course, due in large part to a ‘Law of Accelerating Returns,’ leading ultimately to ‘The Singularity,’ where computers surpass human intelligence.

Someone should tell those anxious computer scientists who convened last November in Washington not to worry. Computers will solve this problem for us.

Guns

The Gaiety Theatre became a church, then a parking lot. pinkmoose photo

The game was derived directly from ‘the westerns’ we watched every Saturday afternoon at the Gaiety Theatre in downtown Grande Prairie, wherein the final act of every movie consisted of the good guy and bad guys (the baddies always outnumbered our hero) running around and shooting at one another. “Guns” we called it. “Let’s play guns!” we would shout, and soon we’d be lurking/sneaking around the immediate neighbourhood houses, blasting away at one another with toy weapons, inciting many an argument as to whether I had or had not “Got ya!” If indeed you were struck by an imaginary bullet, a dramatic tumble to the ground was required, followed by rapid expiration.

Let no one ever doubt the influential power of the ascendant mass medium of the day. As I’ve written elsewhere on this blog, I grew up without television, but those Saturday matinees were more than enough to have us pretending at the gun violence that is all too real in the adult world. Video games seem an even more powerful enactment of the gun fantasy that can grip children, but the difference may be marginal. I doubt that movies have lost much influence over young people today, and I further suspect that in the majority of Hollywood movies today at least one gun still appears. Check out how many of today’s movie ads or posters feature menacing men with guns, with those guns usually prominent in the foreground. Sex sells, but so, it seems, do guns.

And of course the rest of the world, including those of us in Canada, looks with horror upon the pervasive, implacable gun culture in the U.S., wondering how it is that even the slaughter of twenty elementary school children isn’t enough to curb the ready availability of guns. Because, from a rational perspective, the facts are incontrovertible: more guns do not mean greater safety, quite the opposite. You are far more likely to die of a gunshot in the U.S. than you are in any other developed country. There are roughly 90 guns for every 100 Americans; the next closest country is Serbia, at about 58 per 100. In Canada it’s about 30. Australia, 15. Russia, 9. And a higher rate of mental illness does not mean greater gun violence. It’s pure and it’s simple: more guns mean more gun violence, more people being shot and killed.

But we are, by and large, not rational animals, and no amount of logical argument is going to convince members of the gun lobby that gun ownership should be restricted. It’s an emotional and psychological attachment that cannot be broken without causing increased resentment, anger, anxiety and a sense of humiliating diminution. Guns are fetishes to those who desire them, sacred objects that allow the owner to feel elevated in status, elevated to a position of greater independence and potency. After all a gun will allow you to induce fear in others.

And yes the American obsession with guns has historical roots, the revolution and the second amendment to the constitution and all that, but, as Michael Moore so brilliantly pointed out in this animated sequence in Bowling for Columbine, much more essentially it has to do with fear. People enamored of gun ownership feel threatened; without a gun they feel powerless in the face of threats from people they view as dangerously different from themselves. And nothing but nothing empowers like a gun.

You might think that people who love guns do not wish to play with them. Guns are not toys to these people, you might say; they are genuine tools used to protect their owners, mostly from all those other people out there who also own guns. But just down the road from where we live on Galiano is a shooting range. On quiet Sunday afternoons we invariably hear the sound of gunfire echoing through the trees, as gun aficionados shoot repeatedly at targets, trying to do exactly the same thing over and over again: hit the bull’s eye. Those people are indeed playing with their guns; they are recreating with their guns. Why? Because it makes them feel better.

Successful movie genres are manifestations of broadly felt inner conflicts; in the case of westerns those conflicts are around issues of freedom and oppression. And the western may still be the most successful of all movie genres, remaining dominant from the very birth of dramatic film (The Great Train Robbery, 1903), right through to the 1970s (McCabe and Mrs. Miller, 1971). The problem is that the western offered ‘gunplay’ as the answer to oppression, and therefore the suggestion that everyone should have a gun. But once everyone has a gun, everyone is afraid. And once you are afraid, no one is taking away your gun.

Intermittent

I began this blog in January of this year, with the intent that I would post regularly, weekly in fact. I have done so since then, with the exception of a few holiday hiatuses, and I am about to begin another one today. I will resume posting in the New Year, but on a more intermittent basis. Intermittent better reflects life I think, on Galiano and elsewhere. There may be a pattern to our behaviour, and repeated seasons to life’s larger arc, but the flow of our daily lives is an irregular one, interrupted by surprises, nice and otherwise. Intermittent is a better fit.

Cars

I drove a car just as soon as I was legally able to. Couldn’t wait. A learner’s permit was obtainable in Alberta at age 14 back then, so within days of my 14th birthday I was happily out on the road, behind the wheel of a freedom machine. I owned my first car, a light blue Volkswagen Fastback, by the time I was 18.

epSos.de photo

My own son, who is now 24, has never owned a car, and professes no interest in doing so.  It was my suggestion, not his, that he obtain a driver’s license, since I believed, perhaps naively, that it enhanced his chances for gainful employment.  My son’s cousin, same age, similarly has no interest in driving.  His friend Mendel, a year younger, has never bothered with the driver’s license.

They all own mobile devices of course, and if they ever had to choose between a car and a smart phone it would not be a difficult choice, and the auto industry would not be the beneficiary.

Times change.  And yet, more than ever, Canada is a suburban, car-dependent nation.  Two-thirds of us live in suburban neighbourhoods and three-quarters of us still drive to work, most of the time alone.  Vancouver, where I spend most of my time, now has the worst traffic congestion in all of North America, this year finally overtaking perennial frontrunner Los Angeles.

If ever a technology is in need of a revolution it has to be cars. As uber venture capitalist (and Netscape co-founder) Marc Andreessen has been pointing out of late, most cars sit idle most of the time, like 90% of the time. And the actual figure for occupancy on car trips is 1.2 persons per journey.

Car co-ops, and car-sharing companies like Zipcar or Car2Go, point the way. Many people have begun sharing, rather than owning, a car. But if you take the numbers mentioned above and add in the coming phenomenon of the Google robot car, the potential transportation picture becomes truly intriguing.

Driverless cars are now legal on public roads in Nevada, California and Florida.  Since 2011, there have been two collisions involving Google’s robot cars.  In one incident, the car was under human control at the time; in the other the robotic car was rear-ended while stopped at a traffic light.  We might assume that a human was driving the car that rear-ended the robot.

What if no one owned a car? What if you could simply order up a driverless car ride on your smart phone any time, anywhere? Your robot car would arrive at your door; it might stop to pick someone else up en route, but it would then drop you off at the entranceway to wherever it is you wish to go. You would pay a fee for this service of course, but it would be minor in comparison to what you now pay if you own and regularly drive a car.

And of course the need for cars would nosedive, because these robotic cars would be in use nearly all of the time, say 90% of the time.  Car numbers would plummet, meaning traffic congestion would be a thing of the past.  And it keeps going: garages, driveways, parking lots would begin to disappear.  Our urban landscape, which has essentially been designed to accommodate cars, would begin to transform.  A lot more green space would become available.
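To put rough, purely illustrative numbers to that idea: take the utilization figures mentioned above (today’s cars idle about 90% of the time, shared robot cars in use about 90% of the time) and a hypothetical fleet of a million privately owned cars, and the arithmetic sketches out like this.

# Illustrative only: how many shared robot cars might cover the same driving demand
# as a privately owned fleet. The fleet size is hypothetical; the utilization figures
# are the rough ones mentioned above.
private_utilization = 0.10   # today's cars are in use roughly 10% of the time
shared_utilization = 0.90    # a shared robot car might be in use roughly 90% of the time
private_fleet = 1_000_000    # hypothetical number of privately owned cars

cars_in_use_on_average = private_fleet * private_utilization        # average cars on the road at once
shared_fleet_needed = cars_in_use_on_average / shared_utilization   # cars needed at the higher utilization
print(f"{shared_fleet_needed:,.0f} shared cars")  # about 111,000, roughly one-ninth the fleet

On those assumptions, one shared car does the work of roughly nine privately owned ones; the exact ratio matters less than the direction it points.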

And I haven’t even mentioned the reduction in carbon pollution that would ensue with the reduction in cars, carbon pollution being a problem which just may threaten the stability of civilization in the coming years.

Cars have been with us for about 100 years now. Our relationship with them over that period has at times been tender, at times belligerent, at times top-down, sun-in-your-face, wind-in-your-hair fabulous, at times utterly savage. As for those people who love cars, who fuss over them, restore them, take them out for careful drives only on sunny Sunday afternoons: I hope they keep their cars, as an expensive hobby. For the rest of us, those of us who use cars simply to get from A to B, for whom cars are just a form of convenient transport, the days when we need to own a car are disappearing. For my money, the sooner the better.

Clicktivism

I first joined Amnesty International back in the early 80s.  I still have a thickish file containing carbon copies of the letters I wrote and sent back then, thwacked out over the hum of my portable electric typewriter.  Despite my efforts to keep them informed, A.I. didn’t do a particularly good job of tracking me as I moved about from place to place in the following years, but, nevertheless, on and off, I’ve been sending protest messages under their aegis for some 30 years now.

Scott Schrantz photo

But these days it’s a whole lot easier.  These days I receive an email from them, outlining another outrage by an oppressive government somewhere, and I’m asked to simply ‘sign’ a petition.  They have my details on hand already, so all I need do is click to the petition page and click one more time.  Done.

It’s called ‘clicktivism,’ and, quite rightly, its comparative value is questionable.  In the 2011 book, The Googlization of Everything (And Why We Should Worry), Siva Vaidhyanathan took this somewhat indirect swipe at the practice: “… instead of organizing, lobbying and campaigning… we rely on expressions of disgruntlement as a weak proxy for real political action.  Starting or joining a Facebook protest group suffices for many as political action.”

Writing in The Guardian a year earlier, Micah White made a much more direct attack: “In promoting the illusion that surfing the web can change the world, clicktivism is to activism as McDonalds is to a slow-cooked meal.  It may look like food, but the life-giving nutrients are long gone.”  White points out that clicktivism is largely activism co-opted by the techniques of online marketing.  The greater the emphasis on click-rates, bloated petition numbers and other marketing metrics, the cheaper the value of the message, according to White, resulting in “a race to the bottom of political engagement.”

One thing that hasn’t changed is that organizations like Amnesty pass their contact lists to other like organizations, presumably for compensation, without soliciting consent.  I did sign on with Avaaz, but I’ve never asked to receive emails from SumOfUs, Care2 Action Alerts, the World Society for the Protection of Animals, Plan Canada, the Council of Canadians, All Out, Change.org or Care Canada, but I do.  I will readily admit that many of those emails go unopened.

It’s a difficult phenomenon to come to terms with ethically.  These organizations are undoubtedly staffed by well-meaning people who genuinely believe they are making a difference.  And I’m sure that sometimes they do.  Yet there is also no doubt that the greatly facilitated process that clicktivism represents degrades more on-the-ground forms of political protest, and allows people like myself to make essentially meaningless contributions to worthy causes.  ‘Facilitate’ may be the operative word here, as in facile, meaning, according to Merriam Webster, “too simple; not showing enough thought or effort.”

December 10 is International Human Rights Day, as first proclaimed by the United Nations General Assembly in 1950. Last year Amnesty International organized the sending of more than 1.8 million messages to governments everywhere on that date, asking them to respect the rights of people and communities under threat of persecution. To their credit, in addition to urging their members to send messages, Amnesty is encouraging its members to organize or attend an event in their community or workplace on December 10. They have targeted seven different cases of human rights abuse from around the globe for action. These include Dr. Tun Aung, who was given a 17-year jail sentence in Myanmar last year after attempting to keep the peace between Rakhine Buddhists and Rohingya Muslims; Ihar Tsikhanyuk, who has faced threats, intimidation and beatings in Belarus for attempting to register a group in support of LGBTI rights; and the residents of Badia East, a community in Lagos, Nigeria, many of whom were left destitute and without compensation after authorities destroyed their homes last February.

The problem with my problem with clicktivism is that it pales in comparison to the problems faced by these brave people on a daily basis.  And like so many other new processes made possible by digital technology, the change represented by online activism is not about to reverse itself.  We keep our eyes forward, think critically, and do what we can.  I’ll try to write a letter on December 10.