
Fear of Identity Erosion

A few weeks ago, I finally got around to watching Sound and Fury, the Academy Award-nominated 2000 documentary about two families struggling over whether their deaf children should receive cochlear implants. These tiny electronic devices are surgically implanted and usually improve hearing in deaf patients, but—the families featured in Sound and Fury fear—this improvement will come at the expense of “deaf culture.”

The film is an absorbing exploration of what we mean by culture and identity, and how critically important these concepts are to us. Because here’s the thing—the parents of one of the children being considered for cochlear implants (parents who are themselves deaf) choose not to have the operation, even though their child has asked for it, and even though it will in all likelihood significantly improve their young daughter’s hearing.

Why? Because improved hearing will negatively affect their daughter’s inclusion in the deaf tribe. I use that word advisedly, because it seems that is what identification comes down to for nearly all of us—inclusion in a group, or tribe. We identify ourselves via gender, language, race, nation, occupation, family role, sexual orientation, etc.—ever narrower groupings—until we arrive at that final, fairly specific definition of who we are. And we value these labels enormously. We will fight wars over these divisions, enact discriminatory laws, and cleave families apart, all in order to preserve them.

And here’s the other point that the film makes abundantly clear: technology forces change. I’m told that American Sign Language (ASL) is the equivalent of any other, fully developed spoken language, even to the point where there are separate dialects within ASL. The anxiety felt by the parents of the deaf daughter about the loss of deaf culture is entirely justified—to the extent that cochlear implant technology could potentially eradicate ASL, and this language (like any other language) is currently a central component of deaf culture. With the steady advance of implant technology, the need for deaf children to learn ASL could steadily decrease, to the point where the language eventually atrophies and dies. And with it deaf culture?

Possibly, yes, at least in terms of how deaf culture is presently defined. To their credit, it seems that the parents featured in Sound and Fury eventually relented, granting their child the surgery, but they did so only after fierce and sustained resistance to the idea. And so it goes with ‘identity groupings.’ We are threatened by their erosion, and we will do all manner of irrational, at times selfish and destructive things to prevent that erosion.

My friend Rafi, in a recent and fascinating blog post, announced that this year, he and his family will mostly forego the Passover rituals which have for so long been a defining Jewish tradition. He writes that, after a sustained re-reading and contemplation of ‘The Haggadah,’ the text meant to be read aloud during the Passover celebrations, he found the message simply too cruel, too “constructed to promote fear and exclusion.” “I’m done with it,” he announces.

Well, at the risk of offending many Jewish people in many places, more power to him. He does a courageous and generous thing when he says no more “us and them,” no more segregation, no more division.

All cultures, all traditions can bring with them a wonderful richness—great music, food, dance, costumes, all of it. But they can also bring insecurity, antipathy and conflict, conflict that often results directly in human suffering.

Everyone benefits from knowing who they are, where they came from culturally. But no one should fear revising traditions; no one should slavishly accept that all cultural practices or group identities must continue exactly as they are, and have been. Technology may force change upon you, but regardless, recognize that change, whatever its source, is relentless. Anyone who thinks they can preserve cultural traditions perfectly intact within that relentless context of change is fooling themselves. And neither should anyone think that all cultural traditions are worth preserving.

New identities are always possible. Acceptance and inclusion are the goals, not exclusion and fear. It takes time, careful thought, and sometimes courage, but every human being can arrive at a clear individual understanding of who they are and what is important to them. Choose traditions which welcome others and engender the greater good. Reject those which don’t. If you can do this, and I don’t mean to diminish the challenge involved, you’ll know who you are, and you’ll undoubtedly enjoy a rich cultural life.

Handprints in the Digital Cave

There are now more than 150 million blogs on the internet. 150 million! That’s as if every second American were writing a blog; by the same imaginary measure, it’s as if every single Russian were blogging, plus about seven million more.

The explosion seems to have begun back in 2003, when, according to Technorati, there were just 100,000 “web-logs.” Six months later there were a million. A year later there were more than four million. And on it has gone. Today, according to Blogging.com, more than half a million new blog posts go up every day.

doozle photo

Why do bloggers blog? Well, it’s not for the money. I’ve written on numerous occasions in this blog about how the digital revolution has undermined the monetization of all manner of modern practices, whether it be medicine, or music or car mechanics. And writing, as we all know, is no different. Over the last year or so, I slightly revised several of my blog posts to submit them to Digital Journal, a Toronto-based online news service which prides itself on being “a pioneer” in revenue-sharing with its contributors. I’ve submitted six articles thus far and seen them all published. My earnings to date: $4.14.

It ain’t a living. In fact, Blogging.com tells us that only eight per cent of bloggers make enough from their blogs to feed their family, and that more than 80% of bloggers never make as much as $100 from their blogging.

Lawrence Lessig, Harvard Law Professor and regular blogger since 2002, writes in his book Remix: Making Art and Commerce Thrive in a Hybrid Economy, that, “much of the time, I have no idea why I [blog].” He goes on to suggest that, when he blogs, it has to do with an “RW” (Read/Write) ethic made possible by the internet, as opposed to the “RO” (Read/Only) media ethic predating the internet. For Lessig, the introduction of the capacity to ‘comment’ was a critical juncture in the historical development of blogs, enabling an exchange between bloggers and their blog readers that, to this day, Lessig finds both captivating and “insanely difficult.”

I’d agree with Lessig that the interactive nature of blog writing is new and important and critical to the growth of blogging, but I’d also narrow the rationale down some. The final click in posting to a blog comes atop the ‘publish’ button. Now some may view that term as slightly pretentious, even a bit of braggadocio, but here’s the thing. It isn’t. That act of posting is very much an act of publishing, now that we live in a digital age. That post goes public, globally so, and likely forever. How often could that be said about a bit of writing ‘published’ in the traditional sense, on paper?

Sure, that post is delivered into a sea of online content where it will likely be submerged immediately and go unread, but nevertheless that post now has a potential readership of billions, and its existence is essentially permanent. If that isn’t publishing, I don’t know what is.

I really don’t care much if anyone reads my blog. As many of my friends and family members like to remind me, I suck at promoting my blog, and that’s because, like too many writers, I find the act of self-promotion uncomfortable. Nor do I expect ever to make any real money from this blog. I blog as a creative outlet, and in order to press my blackened hand against the wall of the digital cave. And I take comfort in knowing that the chances of my handprint surviving through the ages are far greater than those of our ancestors, who had to employ an actual cave wall, gritty and soon enveloped again in darkness.

I suspect that there are now more people writing—and a good many of them writing well, if not brilliantly—than at any time in our history. And that is because of the opportunity to publish on the web. No more hidebound gatekeepers to circumvent. No more expensive and difficult distribution systems to navigate. Direct access to a broad audience, at basically no cost, and in a medium that in effect will never deteriorate.

More people writing—expressing themselves in a fully creative manner—than ever before. That’s a flipping wonderful thing.

Words

My own view on the ‘proper’ use of language is radical, though not so radical as some. I am told of a UBC professor who believes that, “If you used it, it’s a word.” I would amend that statement to read, “If you used it—and it was understood by the listener in the way you intended it to be understood—it’s a word.”

Rafel Miro photo

I’m employing the classic communication model here, where sender, message and receiver must all be present in order for communication to take place, and I do believe that clarity is the prime consideration when attempting to communicate with the written or spoken word. Honesty might be my second consideration, and all the niceties of language—the elements of style—would follow, a distant third.

Words are meant to communicate, and communication is meant to move you somehow, either intellectually or emotionally, depending upon the kind of writing or speaking being done. But nowhere should it be maintained that there is a proper way to communicate with words, that there is one and only one correct way to string words together.

And yet of course there is. We have the rules of grammar, and we have the dictionary. The dictionary tells us that there is one and only one correct way to spell a word, and the rules of grammar tell us that there is only one way to correctly construct sentences.

Well, not to put too fine a word upon it: hogwash. Shakespeare never had a dictionary or grammar text to refer to, and most of us would agree that no fellow has ever strung English words together better than he, and he invented some dillies (How about “fell swoop”?). It makes no more sense to say that there are rules to govern writing than it does to say there are rules to govern painting, or sculpture, or theatre. Writing is an art form like any other, and to impose rules upon it is an act of stultification.

I’m with Bill Bissett, subversive poet of deserved renown whose work can be found on his “offishul web site,” work like this pithy gem (from scars on th seehors):

IT USD 2 B

yu cud get sum toilet papr

nd a newspapr both 4

a dollr fiftee

 

now yu cant  

yu gotta make a chois 

Bissett points out in his essay why I write like ths that it was the invention of the printing press that precipitated the standardization of language:

previous to that era peopul wud spell th same words diffrentlee  evn in th same correspondens  chek th lettrs btween qween elizabeth first n sir waltr raleigh  different spellings  different tones  different emphasis  sound  all part uv th changing meenings  

Once again it seems it was technology determining change, change which in this case undoubtedly impoverished words as a creative tool.

It was the Victorians who truly imposed a final set of rules upon the English language—the first Oxford Dictionary appeared in 1884—and generically speaking, there has rarely been a more noxious bunch populating the earth.

The French have the Académie française, “official moderator of the French language,” there “to work, with all possible care and diligence, to give our language definite rules.” The Academy of course admits a few new words to the French language each year, mostly to replace odious English words that have crept into use in French, but again, it is hard to imagine a more officious and objectionable pomp of bureaucrats than these self-appointed jury members. (Did you catch me inventing “pomp,” and, more importantly, did you grasp my meaning?)

Language evolves, daily, as must any art if it is to remain an art. It must constantly be in search of the novel, for there is precious little else remaining when it comes to the recognition of art than that it be new. Those who would stand in opposition to this evolution stand with those charming Victorians who offered up as their sole necessary justification, “It’s not done.”

Yes, the too-indulgent use of words can be tedious and problematic (Has anyone actually read Finnegans Wake?), but even more problematic are the tendentious language police manning the checkpoints in defense of a hopeless, conservative cause. If you want to say, “There is data to support my argument,” as opposed to “There are data…”, go ahead. Those who would condemn you for it are snobs, snobs with a fascist bent, and not the least deserving of the respect they seek. If you consider it a word, and you think it likely to be understood in the way you intend, go ahead, fire away, use it. Feel free.

Exponential End

Computers are now more than a million times faster than they were when the first hand calculator appeared back in the 1960s. (Jack Kilby, an engineer working at Texas Instruments, invented the first integrated circuit, the chip at the heart of such devices, in 1958.) This incredible, exponential increase was predicted via ‘Moore’s Law,’ first formulated in 1965: the number of transistors on an integrated circuit doubles approximately every two years.

Another way to state this Law (which is not a natural ‘law’ at all, but an observational prediction) is to say that each generation of transistors will be half the size of the last. This is obviously a finite process, with an end in sight.  Well, in our imaginations at least.
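To make the arithmetic concrete, here is a minimal sketch in Python of what that doubling implies. The baseline (the roughly 2,300-transistor Intel 4004 of 1971) and the clean two-year doubling period are illustrative assumptions, not industry data; real chips only loosely track the curve.

# Illustrative sketch of Moore's Law as plain doubling arithmetic.
# Assumed baseline: ~2,300 transistors (Intel 4004, 1971); assumed
# doubling period: exactly two years. Real devices only roughly follow this.
def projected_transistors(year, base_year=1971, base_count=2300, period=2):
    doublings = (year - base_year) / period
    return base_count * 2 ** doublings
for year in (1971, 1981, 1991, 2001, 2011):
    print(year, format(projected_transistors(year), ",.0f"))
# Prints roughly: 2,300 -> 73,600 -> 2.4 million -> 75 million -> 2.4 billion

Twenty doublings over those forty years is a factor of about a million, the same order as the ‘more than a million times faster’ claim above, and it is also why the halving of transistor size cannot continue indefinitely once single atoms come into view.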

The implications of this end are not so small. As we all know, rapidly evolving digital technology has hugely impacted nearly every sector of our economy, and with those changes has come disruptive social change, but also rapid economic growth. The two largest arenas of economic growth in the U.S. in recent years have been Wall Street and Silicon Valley; Wall Street has prospered on the manipulation of money, via computers, while Silicon Valley (silicon being the semiconductor material on which most chips are built) has prospered upon the growing ubiquity of computers themselves.

Intel has predicted that the end of this exponential innovation will come anywhere between 2013 and 2018. Moore’s Law itself predicts the end at 2020. Gordon Moore himself—he who formulated the Law—said in a 2005 interview that, “In terms of size [of transistors] you can see that we’re approaching the size of atoms, which is a fundamental barrier.” Well, in 2012 a team working at the University of New South Wales announced the development of the first working transistor consisting of a single atom. That sounds a lot like the end of the line.

In November of last year, a group of eminent semiconductor experts met in Washington to discuss the current state of semiconductor innovation, as well as its worrisome future. These men (alas, yes, all men) are worried about the future of semiconductor innovation because it seems that there are a number of basic ideas about how innovation can continue past the coming ‘end,’ but none of these ideas has emerged as more promising than the others, and any one of them is going to be very expensive. We’re talking a kind of paradigm shift, from microelectronics to nanoelectronics, and, as is often the case, the early stages of a fundamentally new technology are much more costly than the later stages, when the new technology has been scaled up.

And of course research dollars are more difficult to secure these days than they have been in the past. Thus the additional worry that the U.S., which has for decades led the world in digital innovation, is going to be eclipsed by countries like China and Korea that are now investing more in R&D than is the U.S. The 2013 budget sequestration cuts have, for instance, directly impacted certain university research budgets, causing programs to be cancelled and researchers to be laid off.

Bell Labs 1934

One of the ironies of the situation, for those of us who consider corporate monopoly to be abhorrent, was evident when a speaker at the conference mentioned working at Bell Labs back in the day when Ma Bell (AT&T) operated as a monopoly and funds at the Labs were virtually unlimited. Among the technologies originating at Bell Labs are the transistor, the laser, and the UNIX operating system.

It’s going to be interesting, because the need is not going away. The runaway train that is broadband appetite, for instance, is not slowing down; by 2015, it’s estimated, there will be 16 times as much video clamoring to get online as there is today.

It’s worth noting that predictions of Moore’s Law lasting only about another decade have been made for the last 30 years. And futurists like Ray Kurzweil and Bruce Sterling believe that exponential innovation will continue on past the end of its current course, due in large part to a ‘Law of Accelerating Returns,’ leading ultimately to ‘The Singularity,’ where computers surpass human intelligence.

Someone should tell those anxious computer scientists who convened last November in Washington: not to worry. Computers will solve this problem for us.

Cars

I drove a car just as soon as I was legally able to.  Couldn’t wait.  A learner’s permit was obtainable in Alberta at age 14 back then, so within days of my 14th birthday I was happily out on the road, behind the wheel of a freedom machine.  I owned my first car, a light blue Volkswagen Fastback, by the time I was 18.

epSos.de photo

My own son, who is now 24, has never owned a car, and professes no interest in doing so.  It was my suggestion, not his, that he obtain a driver’s license, since I believed, perhaps naively, that it enhanced his chances for gainful employment.  My son’s cousin, same age, similarly has no interest in driving.  His friend Mendel, a year younger, has never bothered with the driver’s license.

They all own mobile devices of course, and if they ever had to choose between a car and a smart phone it would not be a difficult choice, and the auto industry would not be the beneficiary.

Times change.  And yet, more than ever, Canada is a suburban, car-dependent nation.  Two-thirds of us live in suburban neighbourhoods and three-quarters of us still drive to work, most of the time alone.  Vancouver, where I spend most of my time, now has the worst traffic congestion in all of North America, this year finally overtaking perennial frontrunner Los Angeles.

If ever a technology is in need of a revolution, it has to be cars.  As uber venture capitalist (and Netscape co-founder) Marc Andreessen has been pointing out of late, most cars sit idle most of the time, like 90% of the time.  And the actual figure for occupancy on car trips is 1.2 persons per journey.

Car co-ops, and car-sharing companies like Zipcar or Car2Go, point the way.  Many people have begun sharing, rather than owning, a car.  But if you take the numbers mentioned above and add in the coming phenomenon of the Google robot car, the potential transportation picture becomes truly intriguing.

Driverless cars are now legal on public roads in Nevada, California and Florida.  Since 2011, there have been two collisions involving Google’s robot cars.  In one incident, the car was under human control at the time; in the other the robotic car was rear-ended while stopped at a traffic light.  We might assume that a human was driving the car that rear-ended the robot.

What if no one owned a car?  What if you could simply order up a driverless car ride on your smart phone any time, anywhere?  Your robot car would arrive at your door, it might stop to pick someone else up en route, but it would then drop you off at the entranceway to wherever you’re going.  You would pay a fee for this service of course, but it would be minor in comparison to what you now pay to own and regularly drive a car.

And of course the need for cars would nosedive, because these robotic cars would be in use nearly all of the time, say 90% of the time.  Car numbers would plummet, meaning traffic congestion would be a thing of the past.  And it keeps going: garages, driveways, parking lots would begin to disappear.  Our urban landscape, which has essentially been designed to accommodate cars, would begin to transform.  A lot more green space would become available.
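Here is that back-of-the-envelope logic as a tiny Python sketch; the utilization figures are simply the ones floated above (cars idle about 90% of the time today, robot taxis busy about 90% of the time), and total driving demand is assumed to stay constant, so treat it as an illustration rather than a forecast.

# Back-of-the-envelope: if the same total driving demand is met by cars
# that are busy ~90% of the time instead of ~10%, how big a fleet remains?
current_utilization = 0.10   # today's cars sit idle roughly 90% of the time
shared_utilization = 0.90    # assumed utilization of a shared robot-taxi fleet
fleet_fraction = current_utilization / shared_utilization
print(format(fleet_fraction, ".0%"))  # about 11% of today's fleet

Even allowing for induced demand and peak-hour bunching, which would push the number back up, the direction is the point: far fewer cars doing far more of the work.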

And I haven’t even mentioned the reduction in carbon pollution that would ensue with the reduction in cars, carbon pollution being a problem which just may threaten the stability of civilization in the coming years.

Cars have been with us for about 100 years now.  Our relationship with them over that period has at times been tender, at times belligerent, at times top-down, sun-in-your-face, wind-in-your-hair fabulous, at times utterly savage.  For those people who love cars, who fuss over them, restore them, take them out for careful drives only on sunny Sunday afternoons, I hope they keep their cars, as an expensive hobby.  For the rest of us, those of us who use cars simply to get from A to B, for whom cars are just a form of convenient transport, the days when we need to own a car are disappearing.  For my money, the sooner the better.

The Eggs in Google’s Basket

Back in the third century BC, the largest and most significant library in the world was in Alexandria, Egypt.  Its mission was to hold all the written knowledge of the known world, and so scribes from this library were regularly sent out to every other identified information repository to borrow, copy and return (well, most of the time) every ‘book’ (mostly papyrus scrolls) in existence.  We don’t know the precise extent of the collection, but there is no doubt that its value was essentially priceless.

And then the library was destroyed.  The circumstances of the destruction are unclear—fire and religious rivalries had a lot to do with it—but by the birth of Jesus Christ, the library was no more.  The irretrievable loss of public knowledge was incalculable.

In those days, access to knowledge was limited and expensive; today such access is ubiquitous and free, via the World Wide Web.

MicroAssist photo

Except that there’s this singular middleman.  A corporate entity actually, called Google, that acts as a nearly monopolistic conduit for all of today’s abundant information.  As Siva Vaidhyanathan has pointed out in his recent book, The Googlization of Everything (And Why We Should Worry), the extent of Google’s domination of the Web is such that, “Google is on the verge of becoming indistinguishable from the Web itself.”

The corporation operates in excess of one million servers in various data centers spread across the planet, processing more than a billion search requests every day.  And then there are Google’s other services: email (Gmail), office software (Google Drive), blog hosting (Blogger), social networking (Google+ and Orkut), VoIP (Google Voice), online payment (Google Checkout), Web browsing (Chrome), ebooks (Google Books), mobile (Android), online video (YouTube), and real world navigation (Google Maps, Street View, Google Earth).  There’s more.

It’s a unique and amazing situation.  As Vaidhyanathan adds, “The scope of Google’s mission sets it apart from any company that has ever existed in any medium.”  Its leaders blithely assure us that Google will always operate consistent with its unofficial slogan of “Don’t be evil,” but it’s difficult to imagine how we should accept this assurance without some degree of caution.  The company is only about 17 years old, and every entity in this world, big and small, is subject to constant change.

Google is less dominant in Asia and Russia, with about 40% of the search market, but in places like Europe, North America and much of South America, Google controls fully 90 to 95% of Web search traffic.  For most of us, this gigantic private utility has taken over the most powerful communication, commercial and information medium in the world, and is now telling us, ‘Not to worry; we’re in control but we’re friendly.’  Well, maybe, but it behooves all of us to ask, ‘Who exactly appointed you Czar?’

For one thing, as many of us know, Google is not neutral in its search function; a search for “globalization” in Argentina does not deliver the same results as a search for the same term in the U.K.  Google is now making decisions on our behalf as to what search results we actually want to see.  It does this based upon its ability to mine our online data.  Why does Google do this?  Because that data is also valuable to advertisers.  Perhaps the most important point Vaidhyanathan makes in his book is that Google is not at its core a search engine company; its core business is advertising.

You might argue this is win-win.  Google makes money (lots and lots of it); we get a remarkably effective, personalized service.  At least we usually don’t have to wait for the advertising to conclude, as we do on TV, before we can continue our use of the medium.

Vaidhyanathan argues in his book for creation of what he calls the Human Knowledge Project (akin to the Human Genome Project).  This would deliver an “information ecosystem” that would supplant and outlive Google—essentially a global electronic network of public libraries that would be universally accessible and forever within the communal domain.

It’s an idea worthy of consideration, because, once again, we seem to be vulnerable to the loss or change of a single, monopolistic source of information.  As with the Alexandria library, there are too many eggs in Google’s basket.

Requiem for a Cinema Pioneer

The great Quebec filmmaker Michel Brault died last month, and while he and his career were fully appreciated in his home province—Premier Pauline Marois attended his funeral on October 4, and the flag at the Quebec City Parliament building flew at half-mast for the occasion—we in English-speaking North America know too little of the profound contribution this film artist made to cinema.

Especially in the realm of documentary, Brault’s influence can hardly be overstated.  He was among the very first to take up the new lightweight film cameras that began appearing in the late 1950s, and when he co-shot and co-directed the short film Les Raquetteurs (The Snowshoers) for The National Film Board of Canada in 1958, documentary filmmaking was forever changed.  The 15-minute film focused on a convention of cheery snowshoers in rural Quebec, employing a fluid, hand-held shooting style, synchronous sound, and no voice-over narration whatsoever.  The dominant documentary visual style in previous years had been the ponderous look made necessary by the bulk of 35 mm cameras, a style frequently accompanied by somber ‘voice of God’ narration.  Subject matter was often ‘exotic’ and distant: Inuit people in the Canadian Arctic, say, or dark-skinned Natives in Papua New Guinea.  Reenactment was, almost of necessity, the preferred manner of recording events.

In 1960, the French anthropologist-filmmakers Jean Rouch and Edgar Morin were shooting Chronique d’un été (Chronicle of a Summer) in Paris, turning their cameras for the first time upon their own ‘tribe.’  When they saw Les Raquetteurs, they immediately fired their cameraman and brought Brault in to complete the work.  Rouch went on to label Chronique “cinéma vérité” (literally ‘truth cinema’), and an entire new genre of documentary film began to appear everywhere in the West.

Robert Drew and his Associates (chief among them D.A. Pennebaker, Richard Leacock and Albert Maysles) took up the cause in the United States, labeling their work ‘direct cinema,’ and delivering films like Primary, about the 1960 Wisconsin primary election between Hubert Humphrey and the largely unknown John F. Kennedy, and Don’t Look Back, about a young folksinger named Bob Dylan on his 1965 tour of the United Kingdom.  Both films would have a marked impact upon the subsequent rise of these two pivotal political/cultural figures.

Brault himself was slightly less grandiose in describing the filmic techniques he pioneered, saying, “I don’t know what truth is.  We can’t think we’re creating truth with a camera.  But what we can do is reveal something to viewers that allows them to discover their own truth.”

He would later turn to fictional filmmaking, writing and directing, among other works, Les Ordres in 1974, a smoldering indictment of the abuse of power which transpired during the ‘October Crisis’ of 1970 in Quebec.  Les Ordres was scripted, but the script was based upon a series of interviews done with a number of people who were in fact arrested and imprisoned during the crisis.  As such, it was considered ‘docudrama,’ another area where Brault’s influence was seminal.  Brault won the Best Director Award at the Cannes Film Festival in 1975 for Les Ordres, and he remains the only Canadian to have ever done so.

These days, with video cameras in every smart phone and tablet, the idea that we should turn our cinematic attention to our own people is taken for granted, as every police department now teaches its members.  But in Brault’s early career, that we should observe, at close quarters, those immediately around us, and do so in an unobtrusive but sustained way, then make that prolonged cinematic observation available to the public, well, that was an almost revolutionary notion.  We could stay close to home, and let the camera reveal what it would.  The process may not have unavoidably presented ‘the truth,’ certainly not in any genuinely objective way, but observational documentary filmmaking granted us new understanding, new insight into people both with and without power.  And we were the better for it.

If the goal is to leave a lasting impression, to press a permanent handprint onto the wall of the cave where we live, Michel Brault can rest in peace.  He made his mark.

The End of the Movies

I grew up without television.  It didn’t arrive in the small town where I lived until I was about ten.  So I grew up watching the movies, initially Saturday afternoon matinees, which my older brother begrudgingly escorted me to under firm orders from my mother, who was looking for brief respite from the burden of three disorderly boys.  Admission was ten cents, popcorn five cents.  (If these prices seem unbelievable to you, all I can say is… me too.)

Movies were it, a prime cultural (and for me eventually professional) mover, right through my adolescence and early adulthood.  For me, TV has tended to be a kind of entertainment sideline, something to be watched when a new show came around with some serious buzz, but more often just a passive filler of downtime, material to unwind with at the end of a busy day.

That has of course all changed in recent years, and not just for me.  I don’t go to the movies much anymore—that is, I don’t go to the movie houses—and, what’s more, movies don’t seem to matter much anymore.  These days movies are mostly noisy spectacle, big, flashy events, but events with very little to offer other than raucous entertainment.  Comic book movies are the dominant genre of today, and, no matter how I slice it, those comic book characters don’t really connect with life as I’m living it, day to day.  And, as I say, it’s not just me, as someone from an older demographic.  Today, unfortunately, the audience for the movies is smaller and narrower than it’s ever been.

Movie audiences peaked in 1946, the year The Best Years of Our Lives, The Big Sleep, and It’s A Wonderful Life were released, and 100 million tickets were sold every week.  By 1955—when Guys and Dolls, Rebel Without A Cause, and The Seven Year Itch were released—with the advent of television, that audience had dropped to less than half that number.

But the movies survived television and found a ‘silver’ age (‘gold’ being the studio-dominated 40s) in the decade from 1965 to 1975, when we watched movies like The Godfather I and II, Midnight Cowboy and Chinatown, and the works of Ingmar Bergman, Federico Fellini and Francois Truffaut enjoyed theatrical release right across North America.  It was a time when movies did seem to have something to say; they spoke to me about the changing world I was in direct daily contact with.

Then came the blockbusters—Jaws and Star Wars—and the realization that Hollywood could spend hundreds of millions of dollars on a movie, rather than tens, and see the returns grow accordingly.  Movies have never been the same.

Today fewer than 40 million people in North America go to see a movie even once a month.  In a 2012 poll done by Harris Interactive, 61% of respondents said they rarely or never go to the movies.  Why would you, when you have that wide screen at home, ad-free, with the pause button at your disposal?  The most you’ll likely pay to watch in private is half of what you would at the movie house.

And then, this year, we had a summer of blockbuster flops.  The worst was The Lone Ranger, made for $225 million and about to cost Walt Disney at least $100 million.  Both Steven Spielberg and George Lucas have said that the industry is set to “implode,” with the distribution side morphing into something closer to a Broadway model where fewer movies are released; they stay in theatres longer, but with much higher ticket prices.  Movies as spectacle.

(If you’re interested in reading more, an elegant, elegiac tribute to the run of the movies is The Big Screen, published last year and written by David Thomson, a critic born in 1941 who has thus been around for a good-sized chunk of film history.)

It may well be that movies, as the shared public experience that I’ve known, are coming to the end of a roughly 100-year run.  It was rapid, glamorous, often tawdry, sometimes brilliant, once in a while even significant, but technology is quickly trampling the movies.  If you were there for even a part of it, you might feel blessed.

Text vs. Talk

Nobody answers the phone anymore.  Not unless the call is from a member of that small, select group who qualify for as much in your life, and not unless call display tells you it’s them.  And maybe not even then.

Do you remember when you would unhesitatingly pick up the phone, not knowing who was calling, but confident that you would then be able to talk with someone you cared to?

Not if you’re among the technologically ‘native’ generations you don’t.  For anyone born from the late 80s on, texting rules.  A phone call is intrusive, burdensome to manage, and difficult to exit.  For my generation, on the other hand, texting seems like a throwback in technology, slow and cumbersome, like going back to the typewriter after enjoying the benefits of word processing.

Texting is all about control of course, carefully crafting, on your own time, a message that may appear casual but is in fact considered, strategic, probably revised.  A phone call is unpredictable, volatile even, and it calls for that skill so prized by the Victorians—conversation.

But texting also directly reflects the hermetic quality of digital technology, that quality which allows us, in Sherry Turkle’s words, to be “alone together.”  Simply put, it’s more isolating.  The contact you make with another human being via texting is removed, with very real—not virtual—time and space set between sender and receiver.

As I’ve said elsewhere in this blog, isolation can be a good thing, something that most of us don’t get enough of.  Isolation where it leads to the opportunity for quiet contemplation, for thought, for listening to near silence; this sort of isolation is therapeutic, spiritual shoring up which should be a recurring part of our lives.  I spent a couple of summers alone on a very isolated fire tower when I was young, watching for smoke rising from the wilderness forests surrounding me, but really just being with myself, being with myself until I had no choice but to accept myself, and then move on.  The experience changed my life forever; it may be the single smartest thing I’ve ever done.

But that isolation, alone with the wondrous beauty of the Canadian wilds, also regularly made otherwise fully sane, well-grounded individuals slip off the edge of sanity.  “Bushed” it was called then, and probably still is.  There were scads of stories, but the best one I heard personally was from a forest ranger who, driving past a towerman’s hilltop station one evening, decided to pay a visit.  As he drove the winding road up to the tower, he glimpsed the lighted windows of the cabin a few times, but, when he pulled into the yard, those lights were off.  Surmising that the man had just gone to bed, he turned around, headed back down.  But from the road now twisting down the hill, he saw that the lights were back on.

His suspicions aroused, he returned to the cabin, again found the lights off, but this time knocked on the door.  With no response, he let himself in to find the man hiding timorously under his bed.

We are social animals who need regular human contact, and the more social contacts we have, the more likely it is that we are happy.  A little time spent in the company of others was all that was needed by those afflicted like the towerman above, in order to return to a healthy mental state.  Because here’s the thing: too much isolation leads the isolated to avoid, rather than seek out, human contact.  I can attest to as much.  After enough days spent alone, you no longer wish to associate with other people.  It’s too much effort, requiring skills that are too corroded.

Now there is obviously a vast, vast degree of difference between mountaintop and telephone isolation, but that’s my point; it is only a difference in degree.  The direction of the impact is the same, toward insecurity and deteriorated social skills.

Like most digital technology, there’s no going back.  Innovation has again created a need we didn’t even know we had.  I won’t be picking up my phone any more often in future, but engagement is the price I’m paying, and engagement is precisely what makes us more alive.

 

Marx Was Right

Those politicos who chant the competition-as-salvation mantra, especially those in America, may find it hard to believe, but not so long ago many prominent U.S. businessmen and politicians were singing the praises of corporate monopoly.  Incredibly, given America’s current climate of opinion—where the word government, never mind socialism, seems a dirty word—just 100 years ago, it was widely believed that there were four basic industries with “public callings”—telecommunications, transportation, banking and energy—that were best instituted as government-sanctioned monopolies.  The most successful of the corporate entities to occupy this place of economic privilege was the American Telephone and Telegraph Company (AT&T), and here’s what its then president, Theodore Vail, had to say about the social value of competition: “In the long run… the public as a whole has never benefited by destructive competition.”

Groucho’s older brother Karl (kidding)

Karl Marx may have been wrong about many things, including what best motivates the average human being, but he was certainly not wrong when he suggested that capitalism tends directly toward monopoly.  How could it not, when the most durable means of defeating the competition will always be to simply eliminate it?  In 1913, AT&T had been remarkably successful in doing just that, and its monopoly would survive undiminished until 1982, when the Reagan administration oversaw the breakup of AT&T into the seven so-called ‘Baby Bells.’

(Before you conclude that it’s only right-thinking, right-leaning governments, like Reagan’s, that can properly control corporate America, know that it was also a Republican administration, under President Taft, that condoned AT&T’s ascendancy to monopoly in 1913.)

Tim Wu, in his book The Master Switch (cited last week in this blog), has postulated “the cycle” as continuously operative in the communications industries (all the way from telegraph to TV), whereby technical innovation gives birth to an initially wide-open trade, but where soon enough corporate consolidation leads to singular business empires.  It’s worth noting that by 2006, AT&T had, via some truly brutal business practices, essentially reunited its pre-breakup empire, leaving only two of the Baby Bells, Verizon and Qwest, still intact and independent.

The latest example of the tendency toward monopoly in Canada can be seen readily at play in the federal government’s efforts to boost competition among the oligopoly of this country’s big three telephone providers, Telus, Bell and Rogers.  Evidence suggests that, prior to the government’s most recent intervention—in 2008 reserving wireless spectrum for new companies like Mobilicity, Wind and Public Mobile—Canadians paid some of the highest mobile phone charges in the world.  Since their entry into the marketplace, these three rookie players have—what a surprise—struggled to prosper, even survive, in the face of fierce competition from the triad of telecom veterans.  All three ‘Canadian babies’ are now said to be up for sale, and the feds, to their credit, stepped in earlier this year to block a takeover of Wind Mobile by Telus Corp.

Former Baby Bell Verizon—now referred to in comparison to Canadian telecoms as “giant” or “huge”—is reported to be circling Canada’s wireless market, rumoured to be considering a bid on either of Wind Mobile or Mobilicity.  Facilitating this move—and setting off alarm bells (no pun intended) near the Canadian cultural core—is a recent legislative relaxation of formerly stringent foreign ownership rules to allow foreign takeovers of telecoms with less than 10 per cent of the market.

Wu’s book asks if the internet will succumb to the same cycle of amalgamation that so many other electronic media have.  His answer: too soon to tell, but history teaches us to keep a wary eye.  And if you consider Apple’s cozy relationship with AT&T over the iPhone, or the fact that Google and Verizon have courted, you’d have to agree with his concern.  Wu concludes his book with an advocacy of what he terms “The Separations Principle,” an enforced separation of “those who develop information, those who control the network infrastructure on which it travels, and those who control the tools or venues of access” to that information.

The internet, given its decentralized construction, is not easy to consolidate, but no one should feel confident that today’s corporate titans won’t try.  Nor should we underestimate their ability to succeed in that effort.