Fact Not Fiction

“The cool kids are making docs.”

                                            —David Edelstein

When I attended film school, back in the ancient 80s, there was not a single documentary program to be found anywhere across the educational landscape. We attendees were all keenly intent upon becoming the next Martin Scorsese or Francis Coppola, the most successful fiction filmmakers from the first generation of film school brats. We saw documentary film as slightly dusty and quaint, more often suited to arid academia than to the edgy dramatic territory we meant to occupy.

Otrocuenta Desarollo photo

These days documentary programs abound in film schools everywhere, and documentary film is seen as a highly relevant form aggressively focusing our attention upon social and economic issues of immediate concern to all of us.

It’s interesting to consider why this change occurred.

Certainly the greatly increased availability of production and post-production technology (think cameras and computers) has a lot to do with it. Today’s media audience maintains a more forgiving expectation of documentary ‘production values’ (the quality of the sound and picture) than it does for dramatic film. In the documentary world, content rules, and so if you have captured a terrific story using a comparatively cheap digital camera, then edited it on your laptop, you may well be good to go in the marketplace. Searching for Sugar Man would be a prime example. Not so much in the dramatic sphere, where a low-budget look is still likely to prevent you from ever hitting the theatres.

But there’s more to it than that, I think. Today’s generation of film school students is far more determined to effect change than we ever were. We were interested first of all in making films; today’s doc filmmakers seem first of all interested in making a difference. Where filmmaking was an end for us, it is a means to them. Caught up as we were in the countercultural ethos of 70s ‘anti-hero’ movies like Scarecrow or Straight Time, we were willing to focus our lenses upon the downtrodden, the misfits, but we were rarely inclined to take direct aim at problems we nevertheless knew were all around us, problems like air pollution or economic inequality. Contemporary docs like An Inconvenient Truth and Inequality For All show no such reluctance.

And let me be perfectly clear: this change is much for the better. We humans have a ravenous need for stories, and one of the reasons is that we understand, sometimes unconsciously, that stories offer us ‘life lessons.’ They offer us insights into how we should or should not behave in the face of common human problems. To a greater or lesser degree, mind you. Some stories are so simple-minded that whatever insight they may offer is utterly generic, if not banal.

And documentaries, by their very nature, offer us better insights than do dramas. As good as the storytelling is in a dramatic series like Breaking Bad, for instance—and it is very good—it doesn’t necessarily hold any greater relevance to real life than does your typical comic book movie. Walter White is only marginally more real than is Spider-Man.

Not so with Michael Morton, the Texan who spent 25 years in prison before finally being exonerated on all charges, and who is the protagonist of a documentary entitled An Unreal Dream. Morton is the real deal, a genuine American hero.

Conventional TV broadcasters have badly dropped the ball on the burgeoning audience interest in documentaries, as evidenced by a recent Hot Docs study. Despite that fumble, however, because of the rise of the internet, and because of their own commitment, today’s film school students who are drawn to documentary are likely to succeed at making an impact, at changing the world, however incrementally. They are perhaps not entirely typical of the current generation, but they undoubtedly represent a new, different and very worthwhile slice of that generation. And more power to them.

Let the Machines Decide

The GPS device in my car knows the speed limit for the road I’m driving on, and displays that information for me on its screen. Nice. Nobody needs another speeding ticket. But what if my ‘smart car’ refused to go over that limit, even if I wanted it to? You know, the wife shouting from the backseat, about to give birth, the hospital four blocks away, that sort of thing.

David Hilowitz photo

It’s a scenario not far removed from reality. Google’s robotic car has inspired many futurists to imagine a computer that controls not only the speed of your car, but also where it goes, diverting your car away from congestion points toward alternate routes to your destination. Evgeny Morozov is among these futurists, and in a recent article in The Observer, he suggests that computers may soon be in a position to usurp many functions that we have traditionally assigned to government. “Algorithmic regulation,” he calls it. We can imagine government bureaucrats joining the unemployment line to fill out a form that will allow a computer to judge whether they are worthy of benefits or not.

Examples of machines making decisions previously assigned to humans are already easily found. If the ebook downloaded to my Kobo has a hold placed on it, the Vancouver Public Library’s computer will unceremoniously retrieve it from my e-reader upon its due date, regardless of whether I have just 10 more pages to read, and would be willing to pay the overdue fine in order to do so.

But Morozov’s cautionary critique is about a wider phenomenon, and it’s largely the ‘internet of things’ which is fuelling his concern. The internet of things is most pointedly about the process which will see digital chips migrate out of electronic devices, into things which we have until now tended to consider inanimate, non-electronic objects, things like your door, or your mattress. It may well be that in the future a computer somewhere will be informed when you don’t spend the night at home.

Maybe you spent the night on a friend’s couch, after one too many. Maybe you ate some greasy fast food that night too. And maybe you haven’t worked out at your club’s gym for more than six months now. The data-gathering upshot of this at least arguably unhealthy behavior is that you may be considered higher risk by a life insurance company, and so proffered a higher premium.

Presumably there is a human being at the end of this theoretical decision-making chain, but I think we’ve all learned that it’s never safe to assume that digital tech won’t take over any particular role, and whatever final decision is taken as to your insurance risk, it will certainly be informed by data collection done by digital machines.

The most chilling note struck in Morozov’s piece comes, for me, when he quotes Tim O’Reilly, technology publisher and venture capitalist, referring to precisely this industry: “I think that insurance is going to be the native business model for the internet of things.”

Now isn’t that heartening. Corporate insurance as the business model of the near future.

The gist of what is alarming about the prospect of digital machines taking increasing control of our lives is that it suggests that the ‘depersonalization’ we have all been living through for the last three-plus decades is only the beginning. It’s “day one,” as Jeff Bezos likes to say about the digital revolution. It suggests that we can look forward to feeling like a true speck of dust in the infinite cosmic universe of corporate society, with absolutely no living being to talk to should we ever wish to take an unnecessary risk, diverge from the chosen route, or pay the fine instead.

For all the libertarian noise that folks from Silicon Valley make about efficiency and disruption, let no one be fooled: the slick algorithmic regulation that replaces decisions made by people, whether government bureaucrats or not, may be more objective, but it will not bring greater freedom.

Dark Matter

“The internet as we once knew it is officially dead.”

                                            —Ronald Deibert, in Black Code

Although born of the military (see Origins, from the archives of this blog), the internet was, in its infancy, seen as a force for democracy, transparency and the empowerment of individual citizens. The whole open-source, ‘information wants to be free’ advocacy ethos emerged and was optimistically seen by many as heralding a new age of increased ‘bottom-up’ power.

Mike Licht photo

And to a considerable extent this has proven to be the case. Political and economic authority has been undermined, greater public transparency has been achieved, and activist groups everywhere have found it easier to organize and exert influence. In more recent years, however, the dark, countervailing side of the internet has also become increasingly apparent, and all of us should be aware of its presence, and perhaps we should all be afraid.

Certainly Ronald Deibert’s 2013 book Black Code: Inside the Battle for Cyberspace should be required reading for anyone who still thinks the internet is a safe and free environment in which to privately gather information, exchange ideas, and find community. Deibert is Director of the Citizen Lab at the Munk School of Global Affairs, University of Toronto, and in that role he has had ample opportunity to peer into the frightening world of what he terms the “cyber-security industrial complex.” In an economy still operating under the shadow of the great recession, this complex is a growth industry now estimated to be worth as much as $150 billion annually.

It consists of firms like UK-based Gamma International, Endgame, headquartered in Atlanta, and Stockholm-based telecom giant Ericsson. What these companies offer are software products that will, of course, bypass nearly all existing anti-virus systems to:

  • Monitor and record your emails, chats and IP communications, including Skype, once thought to be the most secure form of online communication.
  • Extract files from your hard drive and send them to the owners of the product, without you ever knowing it’s happened.
  • Activate the microphone or camera in your computer for surveillance of the room your computer sits in.
  • Pinpoint the geographic location of your wireless device.

These products can do all this and more, and they can do it in real time. Other software packages offered for sale by these companies will monitor social media networks, on a massive scale. As reported by the London Review of Books, one such company, ThorpeGlen, recently mined a week’s worth of call data from 50 million internet users in Indonesia. They did this as a kind of sales demo of their services.

The clients for these companies include, not surprisingly, oppressive regimes in countries like China, Iran and Egypt. And to offer some sense of why this market is so lucrative, The Wall Street Journal reported that a security hacking package was offered for sale in Egypt by Gamma for $559,279 US. Apparently the system also comes with a training staff of four.

Some of these services would be illegal if employed within Canada, but, for instance, if you are an Iranian émigré living in Canada who is active in opposition to the current Iranian regime, this legal restriction is of very little comfort. Those people interested in whom you’re corresponding with do not reside in Canada.

And even in countries like the US and Canada, as Edward Snowden has shown us, the national security agencies are not to be trusted to steer clear of our personal affairs. As Michael Hayden, former Director of the CIA, told documentary filmmaker Alex Gibney, “We steal secrets,” and none of us should be naïve enough to believe that the CIA, if they should have even the remotest interest, won’t steal our personal secrets.

All of us have to get over our collective fear of terrorist attacks and push back on the invasion of our privacy currently underway on the web. The justification for this invasion simply isn’t there. You are about as likely to die in a terrorist attack as you are from a piano falling on your head.

Neither should any of us assume that, as we have ‘done nothing wrong,’ we need not be concerned with the vulnerability to surveillance that exists for all the information about us stored online. Twenty years ago, if we had thought that any agency, government or private, was looking to secretly tap our phone line, we would have been outraged, and then demanded an end to it. That sort of intervention took a search warrant, justified in court. It should be no different on the web.

An Education?

The conference was titled, “The Next New World.” It took place last month in San Francisco, and was hosted by Thomas Friedman, columnist for The New York Times and author of The World Is Flat. Friedman has been writing about the digital revolution for years now, and his thinking on the matter is wide-ranging and incisive.

In his keynote address, Friedman describes “an inflection” that occurred coincident with the great recession of 2008—the technical transformations that began with the personal computer, continued with the internet, and are ongoing with smart phones and the cloud. Friedman is not the first to note that this transformation is the equivalent of what began in 1450 with the invention of the printing press, the so-called Gutenberg revolution. The difference is that the Gutenberg revolution took 200 years to sweep through society. The digital revolution has taken two decades.

Friedman and his co-speakers at the conference are right in articulating that today’s revolution has meant a new social contract, one based not upon high wages for middle skills (think auto manufacturing or accounting), but upon high wages for high skills (think data analysis or mobile programming). Everything from driving cars to teaching children to milking cows has been overtaken by digital technology in the last 20 years, and so the average employee is now faced with a workplace where wages and benefits don’t flow from a commitment to steady long-term work, but where constant innovation is required for jobs that last an average of 4.6 years. As Friedman adds—tellingly, I think—in today’s next new world, “no one cares what you know.” They care only about what you can do.

Friedman adds in his address that the real focus of the discussions at the conference can be distilled into two questions: “How [in this new world] does my kid get a job?” and, “How does our local school or university need to adapt?”

All well and good. Everyone has to eat, never mind grow a career or pay a mortgage. What bothers me, however, in all these worthwhile discussions, is the underlying assumption that the education happening at schools and universities should essentially equate to job training. I’ve checked the Oxford; nowhere does that esteemed dictionary define education as training for a job. The closest it comes is to say that education can be training “in a particular subject,” not a skill.

I would contend that what a young person knows, as opposed to what they can do, should matter to an employer. What’s more, I think it should matter to all of us. Here’s a definitional point for education from the Oxford that I was delighted to see: “an enlightening experience.”

A better world requires a better educated populace, especially women. For the human race to progress (perhaps survive), more people need to understand the lessons of history. More people have to know how to think rationally, act responsibly, and honour compassion, courage and commitment. None of that necessarily comes with job training for a data analyst or mobile programmer.

And maybe, if the range of jobs available out there is narrowing to ever more specific, high technical-skills work, applicable to an ever more narrow set of industries, then that set of industries should be taking on a greater role in instituting the needed training regimes. Maybe as an addendum to what can be more properly termed ‘an education.’

I’m sure that Friedman and his conference colleagues would not disagree with the value of an education that stresses knowledge, not skills. And yes, universities have become too elitist and expensive everywhere, especially in America. But my daughter attends Quest University in Squamish, British Columbia, where, in addition to studying mathematics and biology, she is obliged to take courses in Rhetoric, Democracy and Justice, and Global Perspectives.

Not exactly the stuff that is likely to land her a job in Silicon Valley, you might say, and I would have to reluctantly agree. But then I would argue that it should. Certainly those courses will make her a better citizen, something the world is in dire need of, but I would also argue that a degree in “Liberal Arts and Sciences” does in fact better qualify her for that Silicon Valley job, because those courses will teach her how to better formulate an argument, better understand the empowerment (and therefore the greater job satisfaction) that comes with the democratic process, and better appreciate the global implications of practically all we do workwise these days.

Damn tootin’ that education in liberal arts and sciences better qualifies her for that job in Silicon Valley. That and every other job out there.

The Age of Surveillance

“Today’s world would have disturbed and astonished George Orwell.”

                                            —David Lyon, Director, Surveillance Studies Centre, Queen’s University

When Orwell wrote 1984, he imagined a world where pervasive surveillance was visual, achieved by camera. Today’s surveillance is of course much more about gathering information, but it is every bit as all-encompassing as that depicted by Orwell in his dystopian novel. Whereas individual monitoring in 1984 was at the behest of a superstate personified as ‘Big Brother,’ today’s omnipresent watching comes via an unholy alliance of business and the state.

Most of it occurs when we are online. In 2011, Max Schrems, an Austrian studying law in Silicon Valley, asked Facebook to send him all the data the company had collected on him. (Facebook was by no means keen to meet his request; as a European, Schrems was able to take advantage of the fact that Facebook’s European headquarters are in Dublin, and Ireland has far stricter privacy laws than we have on this side of the Atlantic.) He was shocked to receive a CD containing more than 1,200 individual PDFs. The information tracked every login, chat message, ‘poke’ and post Schrems had ever made on Facebook, including those he had deleted. Additionally, a map showed the precise locations of all the photos tagging Schrems that a friend had posted from her iPhone while they were on vacation together.

Facebook accumulates this dossier of information in order to sell your digital persona to advertisers, as do Google, Skype, YouTube, Yahoo! and just about every other major corporate entity operating online. If ever there was a time when we wondered how and if the web would become monetized, we now know the answer. The web is an advertising medium, just as television and radio are; it’s just that the advertising is ‘targeted’ at you via a comprehensive individual profile that these companies have collected and happily offered to their advertising clients, in exchange for their money.

How did our governments become involved? Well, the 9/11 terrorist attacks kicked off their participation most definitively. Those horrific events provided rationale for governments everywhere to begin monitoring online communication, and to pass laws making it legal wherever necessary. And now it seems they routinely ask the Googles and Facebooks of the world to hand over the information they’re interested in, and the Googles and Facebooks comply, without ever telling us they have. In one infamous incident, Yahoo! complied with a Chinese government request to provide information on two dissidents, Wang Xiaoning and Shi Tao, and this complicity led directly to the imprisonment of both men. Sprint has now actually automated a system to handle requests from government agencies for information, one that charges a fee, of course!

It’s all quite incredible, and we consent to it every time we tick that “I agree” box under the “terms and conditions” of privacy policies we will never read. The terms of service you agree to on Skype, for instance, allow Skype to change those terms any time they wish, without notifying you or seeking your permission.

And here’s the real rub on today’s ‘culture of surveillance’: we have no choice in the matter. Use of the internet is, for almost all of us, no longer a matter of socializing, or of seeking entertainment; it is where we work, where we carry out the myriad tasks necessary to maintain the functioning of our daily life. The choice not to create an online profile that can then be sold by the corporations which happen to own the sites we operate within is about as realistic as the choice never to leave home. Because here’s the other truly disturbing thing about surveillance in the coming days: it’s not going to remain within the digital domain.

Coming to a tree near you?
BlackyShimSham photo

In May of this year, Canadian federal authorities used facial recognition software to bust a phony passport scheme being operated out of Quebec and BC by organized crime figures. It seems Passport Canada has been using the software since 2009, but it’s only become truly effective in the last few years. It’s not at all difficult to imagine that further advances in this software will soon have security cameras everywhere able to recognize you wherever you go. Already such cameras can read your car’s license plate number as you speed over a bridge, enabling the toll to be sent to your residence, for payment at your convenience. Thousands of these cameras continue to be installed in urban, suburban and yes, even rural areas every year.

Soon enough, evading surveillance will be nearly impossible, whether you’re online or walking in the woods. Big Brother meets Big Data.

What We Put Up With

The sky was new. It was a thick, uniform, misty grey, but I was told there were no clouds up there. I’d never seen this before, and was skeptical. How could this be? It was the humidity, I was told. It got like that around here on hot summer days.

The year was 1970; I was 17, part of a high school exchange program that had taken me and a fair number of my friends to the Trenton-Belleville area of southern Ontario. We’d been squired about in buses for days, shuffling through various museums and historical sites, sometimes bored, sometimes behaving badly (my buddy Ken, blowing a spliff in the washroom cubicle at the back of the bus, would surely be considered bad form), sometimes, not often, left to our own devices. On this day we’d been driven to the sandy shores of Lake Ontario, where what was shockingly, appallingly new, much newer than the leaden sky, was out there in the shallow water.

Small signs were attached to stakes standing in the water, just offshore. They read, “Fish for Fun.”

I couldn’t believe it. How could this be allowed to happen? How could people put up with this? As a kid from a small town in northern Alberta, I’d never seen anything like it.

It was a kind of accelerated future shock, as if I had been suddenly propelled forward in time to a new, meta-industrialized world where this was the accepted reality. In this cowardly new world, lakes would be so polluted that eating fish caught in them was unsafe (at 17, I’d caught my share of fish, and always eaten them), and this was how people dealt with the problem. With a lame attempt at cheery acquiescence.

When I think about it, my 17-year-old self would have had a great deal of trouble believing many of the realities that we live with today. Setting aside all the literally incredible changes wrought by the digital revolution—where we walk around with tiny computers in our hands, able to instantly send and/or receive information from anywhere in the world—here are a few more mundane examples of contemporary realities that would have had me shaking my teenage head in utter disbelief:

  • Americans buy more than 200 bottles of water per person every year, spending more than $20 billion in the process.
  • People everywhere scoop up their dog’s excrement, deposit it into small plastic bags that they then carry with them to the nearest garbage receptacle. (Here’s a related—and very telling—factoid, first pointed out to me in a top-drawer piece by New York Times columnist David Brooks: there are now more American homes with dogs than there are homes with children.)
  • On any given night in Canada, some 30,000 people are homeless. One in 50 of them is a child.

There are more examples I could give of current actualities my teen incarnation would scarcely have believed, but, to backtrack for a moment in the interests of fairness, pollution levels in Lake Ontario are in fact lower today than they were in 1970, although the lake can hardly be considered pristine. As the redoubtable Elizabeth May, head of Canada’s Green Party, points out in a recent statement, many of the worst environmental problems of the 70s have been effectively dealt with—toxic pesticides, acid rain, depletion of the ozone layer—but only because worthy activists like her fought long and hard for those solutions.

jronaldlee photo

The fact is that we are a remarkably adaptable species, able to adjust to all manner of hardships, injustice and environmental degradation, so long as those changes come about slowly, and we are given to believe there’s not much we as individuals can do about it. Never has the metaphor of the frog in the slowly heating pot of water been more apropos than it is to the prospect of man-made climate change, for instance.

It’s not the cataclysmic changes that are going to get us. It’s the incremental ones.

 

Where Have All the Dollars Gone

Sir Robert Borden addressing the troops, 1917
Biblioarchives

In March of this year, lawyers acting on behalf of the Canadian government asserted that the government has no special obligation to Afghan military veterans arising from a pledge made by Prime Minister Robert Borden back in 1917, on the eve of the Battle of Vimy Ridge. Borden assured the soldiers then preparing for battle in France (more than 10,000 of them would be killed or wounded) that none of them would subsequently “have just cause to reproach the government for having broken faith with the men who won and the men who died.”

The March court filing came about as a result of Canadian soldiers earlier suing over the government’s new “veterans charter,” which replaces the pension-for-life settlements provided to soldiers from previous wars with a system where Afghan veterans receive lump-sum payments for non-economic losses, such as losing limbs. It’s not difficult for any of us to understand that this change is all about our government saving money.

Also in March of this year, the Vancouver School Board announced a budget shortfall of $18.2 million. Reluctantly, the Board is considering an array of cutbacks, including school closures, ending music programs, and keeping schools closed for the entire week surrounding Remembrance Day. My kids have now moved on past public schools, but I clearly remember, for all the years they were enrolled in Vancouver schools, a steady and discouraging series of annual cutbacks: librarians disappearing, field trips becoming rare events indeed, at one point even an announcement that the temperature in schools would be turned down.

I lack the expertise to offer any detailed economic analysis as to why our governments are these days unable to meet obligations to veterans and school children that they were able to meet in the past, but here’s one bit of crude economic breakdown that causes even greater wonder. The Gross Domestic Product per capita in Canada in 1960 averaged $12,931 US; in 2012 it averaged $35,992 US, adjusted for inflation. In other words, the country is today producing nearly three times as much in the way of goods and services per citizen as it was back in 1960 ($35,992 divided by $12,931 works out to roughly 2.8), presumably enabling the government to collect far more in the way of taxes, per person, than it could five-plus decades ago. And yet we can no longer support veterans and school children in the way we did back then.

A large part of the explanation is of course that governments these days are addressing a whole lot of social concerns that they weren’t in 1960, never mind in 1917, everything from drug and alcohol treatment centres, to the parents of autistic children, to modern dance troupes. It may well be that this growing list of demands outstrips the near-tripling of available government funds. It may even be enough for one to lament what Washington Post columnist Charles Krauthammer (an example of that rare beast, the genuinely thoughtful conservative pundit) calls “the ever-growing Leviathan state.” It may… or it may not.

One theory has it that, in the post-war decades, right up until the 70s, the Canadian economy was legitimately growing: more products, more services, more jobs. Since the 80s, any increase in, or even maintenance of, profit ratios (and thus disposable incomes) has come as the result of increased ‘efficiency’: fewer people producing more goods and services through better technology and less waste. (More cutbacks anyone?) If that’s true, then things are only going to get worse, as these finite efficiencies produce ever-diminishing returns.

Whatever the final explanation, it’s not likely a simple one, and whatever the economic answer, it’s not likely to be easily achieved. Too often a current government has only to promise lower taxes for voters to flock in its direction, regardless of known scandal, evident mean-spiritedness, or obvious cronyism. We tend to assume that the ensuing government cutbacks won’t arrive at our door. And so long as they don’t, we remain generally unrepentant about our self-centeredness. The moment they do—the moment an alcoholic, or autistic child, or modern dancer appears inside our door—our attitudes tend to shift.

Thus, as we stand witnessing a time of waning western economic domination (see DEP, from the archives of this blog), it seems we can be sure of only one thing: it’s a matter of priorities. If school-age children and wounded veterans are not our priority, then who is?

 

 

Fear of Identity Erosion

A few weeks ago, I finally got around to watching Sound and Fury, the 2000 Academy Award-nominated documentary film about two families struggling over whether their deaf children should receive cochlear implants. These tiny electronic devices are surgically implanted, and will usually improve hearing in deaf patients, but—it is feared by the families featured in Sound and Fury—this improvement will come at the expense of “deaf culture.”

The film is an absorbing exploration of what we mean by culture and identity, and how critically important these concepts are to us. Because here’s the thing—the parents of one of the children being considered for cochlear implants (who are themselves deaf) choose not to have the operation, even though their child has asked for it, and even though it will in all likelihood significantly improve their young daughter’s hearing.

Why? Because improved hearing will negatively affect their daughter’s inclusion in the deaf tribe. I use that word advisedly, because it seems that is what identification comes down to for nearly all of us—inclusion in a group, or tribe. We identify ourselves via gender, language, race, nation, occupation, family role, sexual orientation, etc.—ever more narrowed groupings—until we arrive at that final, fairly specific definition of who we are. And these labels are incredibly valued by us. We will fight wars over these divisions, enact discriminatory laws, and cleave families apart, all in order to preserve them.

And here’s the other point that the film makes abundantly clear: technology forces change. I’m told that American Sign Language (ASL) is the equivalent of any other, fully developed spoken language, even to the point where there are separate dialects within ASL. The anxiety felt by the parents of the deaf daughter about the loss of deaf culture is entirely justified—to the extent that cochlear implant technology could potentially eradicate ASL, and this language (like any other language) is currently a central component of deaf culture. With the steady advance of implant technology, the need for deaf children to learn ASL could steadily decrease, to the point where the language eventually atrophies and dies. And with it deaf culture?

Possibly, yes, at least in terms of how deaf culture is presently defined. To their credit, it seems that the parents featured in Sound and Fury eventually relented, granting their child the surgery, but they did so only after fierce and sustained resistance to the idea. And so it goes with ‘identity groupings.’ We are threatened by their erosion, and we will do all manner of irrational, at times selfish and destructive things to prevent that erosion.

My friend Rafi, in a recent and fascinating blog post, announced that this year, he and his family will mostly forego the Passover rituals which have for so long been a defining Jewish tradition. He writes that, after a sustained re-reading and contemplation of ‘The Haggadah,’ the text meant to be read aloud during the Passover celebrations, he found the message simply too cruel, too “constructed to promote fear and exclusion.” “I’m done with it,” he announces.

Well, at the risk of offending many Jewish people in many places, more power to him. He does a courageous and generous thing when he says no more “us and them,” no more segregation, no more division.

All cultures, all traditions can bring with them a wonderful richness—great music, food, dance, costumes, all of it. But they can also bring insecurity, antipathy and conflict, conflict which can often result directly in people suffering.

Everyone benefits from knowing who they are, where they came from culturally. But no one should fear revising traditions; no one should slavishly accept that all cultural practices or group identities must continue exactly as they are, and have been. Technology may force change upon you, but regardless, recognize that change, whatever its source, is relentless. Anyone who thinks they can preserve cultural traditions perfectly intact within that relentless context of change is fooling themselves. And neither should anyone think that all cultural traditions are worth preserving.

New identities are always possible. Acceptance and inclusion are the goals, not exclusion and fear. It takes time, careful thought, and sometimes courage, but every human being can arrive at a clear individual understanding of who they are and what is important to them. Choose traditions which welcome others and engender the greater good. Reject those which don’t. If you can do this, and I don’t mean to diminish the challenge involved, you’ll know who you are, and you’ll undoubtedly enjoy a rich cultural life.

Storytelling 3.0 – Part 2

We tend to forget—at least I do—that, in the history of storytelling, movies came before radio. By about 15 years. The first theatre devoted exclusively to showing motion picture entertainment opened in Pittsburgh in 1905. It was called The Nickelodeon. The name became generic, and by 1910, about 26 million Americans visited a nickelodeon every week. It was a veritable techno-entertainment explosion.

The thing is, anyone at all—if they could either buy or create the product—could rent a hall, then charge admission to see a movie. To this very day, you are free to do this.

When radio rolled around—about 1920—this arrangement was obviously not on. It’s a challenge to charge admission to a radio broadcast. In fact, the first radio broadcasts were intended to sell radios; this was their original economic raison d’être.

Sadly, very quickly it became illegal to broadcast without a government-granted license. (Oddly enough, the first licensed radio broadcast again originated from Pittsburgh.) And almost as quickly, sponsorship became a part of radio broadcasting. The price of admission was the passive audio receipt of an advertisement for a product or service.

An exhibit in the Henry Ford Museum, furnished as a 1930s living room, commemorating the radio broadcast by Orson Welles of H. G. Wells’ The War of the Worlds.
Maia C photo

Radio shows were much easier and cheaper to produce than movies, and they weren’t always communal in the way movies were; that is, they were not always a shared experience. (Although they could be—many a family sat around the radio in the mid part of the 20th century, engrossed in stories about Superman or The Black Museum.)

More importantly, as with book publishing, the gatekeepers were back with radio, and they were both public and private. No one could operate a radio station without a government license, and no one could gain access to a radio studio without permission from the station owner.

Then came television with the same deal in place, only more so. TV shows were more expensive to produce, but like radio, they lent themselves to a more private viewing, and access to the medium for storytellers was fully restricted, from the outset. As with radio, and until recently, TV was ‘free;’ the only charge was willing exposure to an interruptive ‘commercial.’

With the advent of each of these storytelling mediums, the experience has changed, for both storyteller and audience member. Live theatre has retained some of the immediate connection with an audience that began back in the caves (for my purposes, the storyteller in theatre is the playwright), and radio too has kept some of that immediacy, given that so much of it is still produced live. But the true face-to-face storytelling connection is gone with electronic media, and whenever the audience member is alone as opposed to in a group, the experience is qualitatively different. The kind of community that is engendered by electronic media—say fans of a particular TV show—is inevitably more isolated, more disparate than that spawned within a theatre.

The first commercial internet providers came into being in the late 1980s, and we have since lived through a revolution as profound as the Gutenberg revolution. Like reading, the internet consumer experience is almost always private, but like movies, the access to the medium is essentially unrestricted, for both storyteller and story receiver.

And that, in the end, is surprising and wonderful. Economics aside for a moment, I think it’s undeniably true that never, in all our history, has the storyteller been in a more favorable position than today.

What does this mean for you and me? Well, many things, but let me climb onto an advocacy box for a minute to stress what I think is the most significant benefit for all of us. Anyone can now be a storyteller, in the true sense of the word, that is, a person with a story to tell and an audience set to receive it. For today’s storyteller, because of the internet, the world is your oyster, ready to shuck.

Everyone has a story to tell, that much is certain. If you’ve been alive long enough to gain control of grunt and gesture, you have a story to tell. If you have learned to set down words, you’re good to go on the internet. And I’m suggesting that all of us should. Specifically what I’m advocating is that you write a blog, a real, regular blog like this one, or something as marvelously simple as my friend Rafi’s. Sure, tweeting or updating your Facebook page is mini-blogging, but no, you can do better than that.

Start a real blog—lots of sites offer free hosting—then keep it up. Tell the stories of your life, past and present; tell them for yourself, your family, your friends. Your family for one will be grateful, later if not right away. If you gain an audience beyond yourself, your family and friends, great, but it doesn’t matter a hoot. Blog because you now can; it’s free and essentially forever. Celebrate the nature of the new storytelling medium by telling a story, your story.