Tag Archives: digital revolution

The Age of Surveillance

“Today’s world would have disturbed and astonished George Orwell.” —David Lyon, Director, Surveillance Studies Centre, Queen’s University

When Orwell wrote 1984, he imagined a world where pervasive surveillance was visual, achieved by camera. Today’s surveillance is of course much more about gathering information, but it is every bit as all-encompassing as that depicted by Orwell in his dystopian novel. Whereas individual monitoring in 1984 was at the behest of a superstate personified as ‘Big Brother,’ today’s omnipresent watching comes via an unholy alliance of business and the state.

Most of it occurs when we are online. In 2011, Max Schrems, an Austrian studying law in Silicon Valley, asked Facebook to send him all the data the company had collected on him. (Facebook was by no means keen to meet his request; as a European, Schrems was able to take advantage of the fact that Facebook’s European headquarters are in Dublin, and Ireland has far stricter privacy laws than we have on this side of the Atlantic.) He was shocked to receive a CD containing more than 1200 individual PDFs. The information tracked every login, chat message, ‘poke’ and post Schrems had ever made on Facebook, including those he had deleted. Additionally, a map showed the precise locations of all the photos tagging Schrems that a friend had posted from her iPhone while they were on vacation together.

Facebook accumulates this dossier of information in order to sell your digital persona to advertisers, as do Google, Skype, YouTube, Yahoo! and just about every other major corporate entity operating online. If ever there was a time when we wondered how and if the web would become monetized, we now know the answer. The web is an advertising medium, just as television and radio are; it’s just that the advertising is ‘targeted’ at you via a comprehensive individual profile that these companies have collected and happily offered to their advertising clients, in exchange for their money.

How did our governments become involved? Well, the 9/11 terrorist attacks kicked off their participation most definitively. Those horrific events provided the rationale for governments everywhere to begin monitoring online communication, and to pass laws making it legal wherever necessary. And now it seems they routinely ask the Googles and Facebooks of the world to hand over the information they’re interested in, and the Googles and Facebooks comply, without ever telling us they have. In one infamous incident, Yahoo! complied with a Chinese government request to provide information on two dissidents, Wang Xiaoning and Shi Tao, and this complicity led directly to the imprisonment of both men. Sprint has actually automated a system to handle requests from government agencies for information, one that charges a fee, of course!

It’s all quite incredible, and we consent to it every time we toggle that “I agree” box under the “terms and conditions” of privacy policies we will never read. The terms of service you agree to on Skype, for instance, allow Skype to change those terms any time it wishes, without notifying you or asking your permission.

And here’s the real rub on today’s ‘culture of surveillance’: we have no choice in the matter. Use of the internet is, for almost all of us, no longer a matter of socializing, or of seeking entertainment; it is where we work, where we carry out the myriad tasks necessary to maintain the functioning of our daily lives. The choice not to create an online profile that can then be sold by the corporations which happen to own the sites we operate within is about as realistic as the choice to never leave home. Because here’s the other truly disturbing thing about surveillance in the coming days: it’s not going to remain within the digital domain.

Coming to a tree near you? BlackyShimSham photo

In May of this year Canadian Federal authorities used facial recognition software to bust a phony passport scheme being operated out of Quebec and BC by organized crime figures. It seems Passport Canada has been using the software since 2009, but it’s only become truly effective in the last few years. It’s not at all difficult to imagine that further advances in this software will soon have security cameras everywhere able to recognize you wherever you go. Already such cameras can read your car’s license plate number as you speed over a bridge, enabling the toll to be sent to your residence, for payment at your convenience. Thousands of these cameras continue to be installed in urban, suburban and yes, even rural areas every year.

Soon enough, evading surveillance will be nearly impossible, whether you’re online or walking in the woods. Big Brother meets Big Data.

What We Put Up With

The sky was new. It was a thick, uniform, misty grey, but I was told there were no clouds up there. I’d never seen this before, and was skeptical. How could this be? It was the humidity, I was told. It got like that around here on hot summer days.

The year was 1970; I was 17, part of a high school exchange program that had taken me and a fair number of my friends to the Trenton-Belleville area of southern Ontario. We’d been squired about in buses for days, shuffling through various museums and historical sites, sometimes bored, sometimes behaving badly (my buddy Ken, blowing a spliff in the washroom cubicle at the back of the bus, would surely be considered bad form), sometimes, not often, left to our own devices. On this day we’d been driven to the sandy shores of Lake Ontario, where what was shockingly, appallingly new, much newer than the leaden sky, was out there in the shallow water.

Small signs were attached to stakes standing in the water, just offshore. They read, “Fish for Fun.”

I couldn’t believe it. How could this be allowed to happen? How could people put up with this? As a kid from a small town in northern Alberta, I’d never seen anything like it.

It was a kind of accelerated future shock, as if I had been suddenly propelled forward in time to a new, meta-industrialized world where this was the accepted reality. In this cowardly new world, lakes would be so polluted that eating fish caught in them was unsafe (at 17, I’d caught my share of fish, and always eaten them), and this was how people dealt with the problem. With a lame attempt at cheery acquiescence.

When I think about it, my 17-year-old self would have had a great deal of trouble believing many of the realities that we live with today. Setting aside all the literally incredible changes wrought by the digital revolution—where we walk around with tiny computers in our hands, able to instantly send and/or receive information from anywhere in the world—here are a few more mundane examples of contemporary realities that would have had me shaking my teenage head in utter disbelief:

  • Americans buy more than 200 bottles of water per person every year, spending more than $20 billion in the process.
  • People everywhere scoop up their dog’s excrement, deposit it into small plastic bags that they then carry with them to the nearest garbage receptacle. (Here’s a related—and very telling—factoid, first pointed out to me in a top-drawer piece by New York Times Columnist David Brooks: there are now more American homes with dogs than there are homes with children.)
  • On any given night in Canada, some 30,000 people are homeless. One in 50 of them is a child.

There are more examples I could give of current actualities my teen incarnation would scarcely have believed, but, to backtrack for a moment in the interests of fairness, pollution levels in Lake Ontario are in fact lower today than they were in 1970, although the lake can hardly be considered pristine. As the redoubtable Elizabeth May, head of Canada’s Green Party, points out in a recent statement, many of the worst environmental problems of the 70s have been effectively dealt with—toxic pesticides, acid rain, depletion of the ozone layer—but only because worthy activists like her fought long and hard for those solutions.

jronaldlee photo

The fact is that we are a remarkably adaptable species, able to adjust to all manner of hardships, injustice and environmental degradation, so long as those changes come about slowly, and we are given to believe there’s not much we as individuals can do about it. Never has the metaphor of the frog in the slowly heating pot of water been more apropos than it is to the prospect of man-made climate change, for instance.

It’s not the cataclysmic changes that are going to get us. It’s the incremental ones.

 

Storytelling 3.0 – Part 2

We tend to forget—at least I do—that, in the history of storytelling, movies came before radio. By about 15 years. The first theatre devoted exclusively to showing motion picture entertainment opened in Pittsburgh in 1905. It was called The Nickelodeon. The name became generic, and by 1910, about 26 million Americans visited a nickelodeon every week. It was a veritable techno-entertainment explosion.

The thing is, anyone at all—if they could either buy or create the product—could rent a hall, then charge admission to see a movie. To this very day, you are free to do this.

When radio rolled around—about 1920—this arrangement was obviously not on. It’s a challenge to charge admission to a radio broadcast. In fact, the first radio broadcasts were intended to sell radios; this was their original economic raison d’être.

Sadly, it very quickly became illegal to broadcast without a government-granted license. (Oddly enough, the first licensed radio broadcast again originated from Pittsburgh.) And almost as quickly, sponsorship became a part of radio broadcasting. The price of admission was the passive audio receipt of an advertisement for a product or service.

An exhibit in the Henry Ford Museum, furnished as a 1930s living room, commemorating the radio broadcast by Orson Welles of H. G. Wells’ The War of the Worlds. Maia C photo

Radio shows were much easier and cheaper to produce than movies, and they weren’t always communal in the way movies were; that is, they were not always a shared experience. (Although they could be—many a family sat around the radio in the mid part of the 20th century, engrossed in stories about Superman or The Black Museum.)

More importantly, as with book publishing, the gatekeepers were back with radio, and they were both public and private. No one could operate a radio station without a government license, and no one could gain access to a radio studio without permission from the station owner.

Then came television with the same deal in place, only more so. TV shows were more expensive to produce, but like radio, they lent themselves to a more private viewing, and access to the medium for storytellers was fully restricted, from the outset. As with radio, and until recently, TV was ‘free;’ the only charge was willing exposure to an interruptive ‘commercial.’

With the advent of each of these storytelling mediums, the experience has changed, for both storyteller and audience member. Live theatre has retained some of the immediate connection with an audience that began back in the caves (For my purposes, the storyteller in theatre is the playwright.), and radio too has kept some of that immediacy, given that so much of it is still produced live. But the true face-to-face storytelling connection is gone with electronic media, and whenever the audience member is alone as opposed to in a group, the experience is qualitatively different. The kind of community that is engendered by electronic media—say fans of a particular TV show—is inevitably more isolated, more disparate than that spawned within a theatre.

The first commercial internet providers came into being in the late 1980s, and we have since lived through a revolution as profound as Gutenberg’s. Like reading, the internet consumer experience is almost always private, but like movies, access to the medium is essentially unrestricted, for both storyteller and story receiver.

And that, in the end, is surprising and wonderful. Economics aside for a moment, I think it’s undeniably true that never, in all our history, has the storyteller been in a more favorable position than today.

What does this mean for you and me? Well, many things, but let me climb onto an advocacy box for a minute to stress what I think is the most significant benefit for all of us. Anyone can now be a storyteller, in the true sense of the word; that is, a person with a story to tell and an audience set to receive it. For today’s storyteller, because of the internet, the world is your oyster, ready to shuck.

Everyone has a story to tell, that much is certain. If you’ve been alive long enough to gain control of grunt and gesture, you have a story to tell. If you have learned to set down words, you’re good to go on the internet. And I’m suggesting that all of us should. Specifically what I’m advocating is that you write a blog, a real, regular blog like this one, or something as marvelously simple as my friend Rafi’s. Sure, tweeting or updating your Facebook page is mini-blogging, but no, you can do better than that.

Start a real blog—lots of sites offer free hosting—then keep it up. Tell the stories of your life, past and present; tell them for yourself, your family, your friends. Your family for one will be grateful, later if not right away. If you gain an audience beyond yourself, your family and friends, great, but it doesn’t matter a hoot. Blog because you now can; it’s free and essentially forever. Celebrate the nature of the new storytelling medium by telling a story, your story.

Facetime

Last month the city of Nelson, BC, said no to drive-thrus. There’s only one in the town anyway, but city councillors voted to prevent any more appearing. Councillor Deb Kozak described it as “a very Nelson” thing to do.

Nelson may be slightly off the mean when it comes to small towns—many a draft dodger settled there back in the Vietnam War era, and pot-growing allowed Nelson to better weather the downturn of the forest industry that occurred back in the 80s—but at the same time, dumping on drive-thrus is something that could only happen in a smaller urban centre.

The move is in support of controlling carbon pollution, of course: no more idling cars lined up down the block (hello, Fort McMurray?!). But what I like about it is that the new by-law obliges people to get out of their cars, to enjoy a little facetime with another human being, instead of leaning out their car window, shouting into a tinny speaker mounted on a plastic sign.

For all the degree of change being generated by the digital revolution, and for all the noise I’ve made about that change in this blog, there are two revolutions of recent decades that have probably had greater effect: the revolution in settlement patterns that we call urbanization, and the revolution in economic scale that we call globalization. Both are probably more evident in smaller cities and towns than anywhere else.

Grain elevators, Milestone, Saskatchewan, about 1928

Both of my parents grew up in truly small prairie towns: my mother in Gilbert Plains, Manitoba, present population about 750; my father in Sedgewick, Alberta, present population about 850. Sedgewick’s population has dropped some 4% in recent years, despite concurrent overall growth in Alberta of some 20%. Both these towns were among the hundreds arranged across the Canadian prairies, marked off by rust-coloured grain elevators rising above the horizon, set roughly every seven miles along the rail lines, that spacing chosen because a trip of half that distance was gauged doable by horse and wagon for all the surrounding farmers.

I grew up in Grande Prairie, Alberta, a town which officially became a city while I still lived there. The three blocks of Main Street that I knew were anchored at one end by the Co-op Store, where all the farmers shopped, and at the other by the pool hall, where all the young assholes like me hung out. In between were Lilge Hardware, operated by the Lilge brothers, Wilf and Clem, Joe’s Corner Coffee Shop, and Ludbrooks, which offered “variety” as “the spice of life,” and where we as kids would shop for board games, after saving our allowance money for months at a time.

Grande Prairie is virtually unrecognizable to me now; that is, it looks much like every other small and large city across the continent: the same ‘big box’ stores surround it as surround Prince George, Regina and Billings, Montana, I’m willing to bet. Instead of Lilge Hardware, Joe’s Corner Coffee Shop and Ludbrooks, we have Walmart, Starbucks and Costco. This is what globalization looks like, when it arrives in your own backyard.

80% of Canadians live in urban centres now, as opposed to less than 30% at the beginning of the 20th century. And those urban centres now look pretty much the same wherever you go, once the geography is removed. It’s a degree of change that snuck up on us far more stealthily than has the digital revolution, with its dizzying pace, but it’s a no less disruptive transformation.

I couldn’t wait to get out of Grande Prairie when I was a teenager. The big city beckoned with diversity, anonymity, and vigour. Maybe if I was young in Grande Prairie now I wouldn’t feel the same need, given that I could now access anything there that I could in the big city. A good thing? Bad thing?

There’s no saying. Certain opportunities still exist only in the truly big centres of course, cities like Tokyo, New York or London. If you want to make movies it’s still true that you better get yourself to Los Angeles. But they’re not about to ban drive-thrus in Los Angeles. And that’s too bad.

Handprints in the Digital Cave

There are now more than 150 million blogs on the internet. 150 million! That’s as if every second American were writing a blog; or, in this imaginary measure, as if every single Russian were blogging, plus about another seven million people.
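The arithmetic behind those comparisons checks out, assuming rough early-2010s population figures; the specific numbers below are my illustrative estimates, not figures from the post:

```python
# Putting 150 million blogs in population terms.
# Population figures are rough early-2010s estimates, for illustration only.
blogs = 150_000_000
us_population = 311_000_000      # United States, ~2011
russia_population = 143_000_000  # Russia, ~2011

# About every second American:
print(blogs / us_population)      # ~0.48

# Every single Russian, plus about seven million more:
print(blogs - russia_population)  # 7,000,000
```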

The explosion seems to have come back in 2003, when, according to Technorati, there were just 100,000 “web-logs.” Six months later there were a million. A year later there were more than four million. And on it has gone. Today, according to Blogging.com, more than half a million new blog posts go up every day.

doozle photo

Why do bloggers blog? Well, it’s not for the money. I’ve written on numerous occasions in this blog about how the digital revolution has undermined the monetization of all manner of modern practices, whether medicine, music or car mechanics. And writing, as we all know, is no different. Over the last year or so, I slightly revised several of my blog posts to submit them to Digital Journal, a Toronto-based online news service which prides itself on being “a pioneer” in revenue-sharing with its contributors. I’ve submitted six articles thus far and seen them all published. My earnings to date: $4.14.

It ain’t a living. In fact, Blogging.com tells us that only eight per cent of bloggers make enough from their blogs to feed their family, and that more than 80% of bloggers never make as much as $100 from their blogging.

Lawrence Lessig, Harvard Law Professor and regular blogger since 2002, writes in his book Remix: Making Art and Commerce Thrive in a Hybrid Economy that, “much of the time, I have no idea why I [blog].” He goes on to suggest that, when he blogs, it has to do with an ‘RW’ (Read/Write) ethic made possible by the internet, as opposed to the ‘RO’ (Read/Only) media ethic predating the internet. For Lessig, the introduction of the capacity to ‘comment’ was a critical juncture in the historical development of blogs, enabling an exchange between bloggers and their blog readers that, to this day, Lessig finds both captivating and “insanely difficult.”

I’d agree with Lessig that the interactive nature of blog writing is new and important and critical to the growth of blogging, but I’d also narrow the rationale down some. The final click in posting to a blog comes atop the ‘publish’ button. Now some may view that term as slightly pretentious, even a bit of braggadocio, but here’s the thing. It isn’t. That act of posting is very much an act of publishing, now that we live in a digital age. That post goes public, globally so, and likely forever. How often could that be said about a bit of writing ‘published’ in the traditional sense, on paper?

Sure that post is delivered into a sea of online content that likely and immediately floods it with unread information, but nevertheless that post now has a potential readership of billions, and its existence is essentially permanent. If that isn’t publishing, I don’t know what is.

I really don’t care much if anyone reads my blog. As many of my friends and family members like to remind me, I suck at promoting my blog, and that’s because, like too many writers, I find the act of self-promotion uncomfortable. Neither do I expect to ever make any amount of money from this blog. I blog as a creative outlet, and in order to press my blackened hand against the wall of the digital cave. And I take comfort in knowing that the chances of my handprint surviving through the ages are far greater than those of our ancestors who had to employ an actual cave wall, gritty and very soon again enveloped in darkness.

I suspect that there are now more people writing—and a good many of them writing well, if not brilliantly—than at any time in our history. And that is because of the opportunity to publish on the web. No more hidebound gatekeepers to circumvent. No more expensive and difficult distribution systems to navigate. Direct access to a broad audience, at basically no cost, and in a medium that in effect will never deteriorate.

More people writing—expressing themselves in a fully creative manner—than ever before. That’s a flipping wonderful thing.

Exponential End

Computers are now more than a million times faster than they were when the first hand calculator appeared back in the 1960s. (Jack Kilby, an engineer working at Texas Instruments, invented the first integrated circuit in 1958.) This incredible, exponential increase was predicted by ‘Moore’s Law,’ first formulated in 1965: the number of transistors on an integrated circuit doubles approximately every two years.

Another way to state this Law (which is not a natural ‘law’ at all, but an observational prediction) is to say that each generation of transistors will be half the size of the last. This is obviously a finite process, with an end in sight.  Well, in our imaginations at least.
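The compounding behind that “million times faster” claim is easy to sketch: forty years of doubling every two years is twenty doublings, or a factor of just over a million. The Intel 4004’s roughly 2,300 transistors (1971) serve below only as an illustrative baseline of my own choosing, not a figure from the post:

```python
# Moore's Law as compounding growth: transistor counts double about every
# two years, i.e. each doubling is one 'generation' of smaller transistors.

def moores_law_estimate(base_year, base_count, target_year, doubling_period=2):
    """Projected transistor count (target_year - base_year) years after a baseline."""
    doublings = (target_year - base_year) / doubling_period
    return base_count * 2 ** doublings

# Forty years of doubling every two years is 20 doublings:
print(2 ** ((2011 - 1971) / 2))   # 1048576.0 -- the 'more than a million' of the text

# Illustrative baseline: Intel 4004 (1971), ~2,300 transistors.
print(round(moores_law_estimate(1971, 2300, 2011)))  # ~2.4 billion
```

Running the projection backward also shows why the process is finite: halve a 2011-era transistor twenty more times and you are at atomic scale, which is exactly the barrier discussed below.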

The implications of this end are not so small. As we all know, rapidly evolving digital technology has hugely impacted nearly every sector of our economy, and with those changes has come disruptive social change, but also rapid economic growth. The two largest arenas of economic growth in the U.S. in recent years have been Wall Street and Silicon Valley; Wall Street has prospered on the manipulation of money, via computers, while Silicon Valley (silicon is the substrate upon which most semiconductor chips are built) has prospered upon the growing ubiquity of computers themselves.

Intel has predicted that the end of this exponential innovation will come anywhere between 2013 and 2018. Moore’s Law itself predicts the end at 2020. Gordon Moore himself—he who formulated the Law—said in a 2005 interview that, “In terms of size [of transistors] you can see that we’re approaching the size of atoms, which is a fundamental barrier.” Well, in 2012 a team working at the University of New South Wales announced the development of the first working transistor consisting of a single atom. That sounds a lot like the end of the line.

In November of last year, a group of eminent semiconductor experts met in Washington to discuss the current state of semiconductor innovation, as well as its worrisome future. These men (alas, yes, all men) are worried about the future of semiconductor innovation because it seems that there are a number of basic ideas about how innovation can continue past the coming ‘end,’ but none of these ideas has emerged as more promising than the others, and any one of them is going to be very expensive. We’re talking a kind of paradigm shift, from microelectronics to nanoelectronics, and, as is often the case, the early stages of a fundamentally new technology are much more costly than the later stages, when the new technology has been scaled up.

And of course research dollars are more difficult to secure these days than they have been in the past. Thus the additional worry that the U.S., which has for decades led the world in digital innovation, is going to be eclipsed by countries like China and Korea that are now investing more in R&D than is the U.S. The 2013 budget sequestration cuts have, for instance, directly impacted certain university research budgets, causing programs to be cancelled and researchers to be laid off.

Bell Labs 1934

One of the ironies of the situation, for all those of us who consider corporate monopoly to be abhorrent, became evident when a speaker at the conference mentioned working at Bell Labs back in the day when Ma Bell (AT&T) operated as a monopoly and funds at the Labs were virtually unlimited. Among the technologies originating at Bell Labs are the transistor, the laser, and the UNIX operating system.

It’s going to be interesting, because the need is not going away. The runaway train that is broadband appetite, for instance, is not slowing down; by 2015 it’s estimated that there will be 16 times as much video clamoring to get online as there is today.

It’s worth noting that predictions about Moore’s law lasting only about another decade have been made for the last 30 years. And futurists like Ray Kurzweil and Bruce Sterling believe that exponential innovation will continue on past the end of its current course due in large part to a ‘Law of Accelerating Technical Returns,’ leading ultimately to ‘The Singularity,’ where computers surpass human intelligence.

Someone should tell those anxious computer scientists who convened last November in Washington not to worry. Computers will solve this problem for us.

Clicktivism

I first joined Amnesty International back in the early 80s.  I still have a thickish file containing carbon copies of the letters I wrote and sent back then, thwacked out over the hum of my portable electric typewriter.  Despite my efforts to keep them informed, A.I. didn’t do a particularly good job of tracking me as I moved about from place to place in the following years, but, nevertheless, on and off, I’ve been sending protest messages under their aegis for some 30 years now.

Scott Schrantz photo
Scott Schrantz photo

But these days it’s a whole lot easier.  These days I receive an email from them, outlining another outrage by an oppressive government somewhere, and I’m asked to simply ‘sign’ a petition.  They have my details on hand already, so all I need do is click to the petition page and click one more time.  Done.

It’s called ‘clicktivism,’ and, quite rightly, its comparative value is questionable.  In the 2011 book, The Googlization of Everything (And Why We Should Worry), Siva Vaidhyanathan took this somewhat indirect swipe at the practice: “… instead of organizing, lobbying and campaigning… we rely on expressions of disgruntlement as a weak proxy for real political action.  Starting or joining a Facebook protest group suffices for many as political action.”

Writing in The Guardian a year earlier, Micah White made a much more direct attack: “In promoting the illusion that surfing the web can change the world, clicktivism is to activism as McDonalds is to a slow-cooked meal.  It may look like food, but the life-giving nutrients are long gone.”  White points out that clicktivism is largely activism co-opted by the techniques of online marketing.  The greater the emphasis on click-rates, bloated petition numbers and other marketing metrics, the cheaper the value of the message, according to White, resulting in “a race to the bottom of political engagement.”

One thing that hasn’t changed is that organizations like Amnesty pass their contact lists to other like-minded organizations, presumably for compensation, without soliciting consent. I did sign on with Avaaz, but I’ve never asked to receive emails from SumOfUs, Care2 Action Alerts, the World Society for the Protection of Animals, Plan Canada, the Council of Canadians, All Out, Change.org or Care Canada, but I do. I will readily admit that many of those emails go unopened.

It’s a difficult phenomenon to come to terms with ethically.  These organizations are undoubtedly staffed by well-meaning people who genuinely believe they are making a difference.  And I’m sure that sometimes they do.  Yet there is also no doubt that the greatly facilitated process that clicktivism represents degrades more on-the-ground forms of political protest, and allows people like myself to make essentially meaningless contributions to worthy causes.  ‘Facilitate’ may be the operative word here, as in facile, meaning, according to Merriam Webster, “too simple; not showing enough thought or effort.”

December 10 is International Human Rights Day, as first proclaimed by the United Nations General Assembly in 1950. Last year Amnesty International organized the sending of more than 1.8 million messages to governments everywhere on that date, asking them to respect the rights of people and communities under threat of persecution. To their credit, in addition to urging their members to send messages, Amnesty is encouraging its members to organize or attend an event in their community or workplace on December 10. They have targeted seven different cases of human rights abuse from around the globe for action. These include Dr. Tun Aung, who was given a 17-year jail sentence in Myanmar last year after attempting to keep the peace between Rakhine Buddhists and Rohingya Muslims; Ihar Tsikhanyuk, who has faced threats, intimidation and beatings in Belarus for attempting to register a group in support of LGBTI rights; and the residents of Badia East, a community in Lagos, Nigeria, many of whom were left destitute and without compensation after authorities destroyed their homes last February.

The problem with my problem with clicktivism is that it pales in comparison to the problems faced by these brave people on a daily basis.  And like so many other new processes made possible by digital technology, the change represented by online activism is not about to reverse itself.  We keep our eyes forward, think critically, and do what we can.  I’ll try to write a letter on December 10.

The Eggs in Google’s Basket

Back in the third century BC, the largest and most significant library in the world was in Alexandria, Egypt. Its mission was to hold all the written knowledge of the known world, and so scribes from this library were regularly sent out to every other identified information repository to borrow, copy and return (well, most of the time) every ‘book’ (mostly papyrus scrolls) in existence. We don’t know the precise extent of the collection, but there is no doubt that its value was essentially priceless.

And then the library was destroyed.  The circumstances of the destruction are unclear—fire and religious rivalries had a lot to do with it—but by the birth of Jesus Christ, the library was no more.  The irretrievable loss of public knowledge was incalculable.

In those days, access to knowledge was limited and expensive; today such access is ubiquitous and free, via the World Wide Web.

MicroAssist photo

Except that there’s this singular middleman.  A corporate entity actually, called Google, that acts as a nearly monopolistic conduit for all today’s abundant information.  As Siva Vaidhyanathan has pointed out in his recent book, The Googlization of Everything (And Why We Should Worry), the extent of Google’s domination of the Web is such that, “Google is on the verge of becoming indistinguishable from the Web itself.”

The corporation operates in excess of one million servers in various data centers spread across the planet, processing more than a billion search requests every day.  And then there are Google’s other services: email (Gmail), office software (Google Drive), blog hosting (Blogger), social networking (Google+ and Orkut), VoIP (Google Voice), online payment (Google Checkout), Web browsing (Chrome), ebooks (Google Books), mobile (Android), online video (YouTube), and real-world navigation (Google Maps, Street View, Google Earth).  There’s more.

It’s a unique and amazing situation.  As Vaidhyanathan adds, “The scope of Google’s mission sets it apart from any company that has ever existed in any medium.”  Its leaders blithely assure us that Google will always operate in keeping with its unofficial slogan, “Don’t be evil,” but it’s difficult to accept this assurance without some degree of caution.  The company is only about 17 years old, and every entity in this world, big and small, is subject to constant change.

Google is less dominant in Asia and Russia, with about 40% of the search market, but in places like Europe, North America and much of South America, Google controls fully 90 to 95% of Web search traffic.  For most of us, this gigantic private utility has taken over the most powerful communication, commercial and information medium in the world, and is now telling us, ‘Not to worry; we’re in control but we’re friendly.’  Well, maybe, but it behooves all of us to ask, ‘Who exactly appointed you Czar?’

For one thing, as many of us know, Google is not neutral in its search function; a search for “globalization” in Argentina does not deliver the same results as a search for the same term in the U.K.  Google is now making decisions on our behalf as to what search results we actually want to see.  It does this based upon its ability to mine our online data.  Why does Google do this?  Because that data is also valuable to advertisers.  Perhaps the most important point Vaidhyanathan makes in his book is that Google is not at its core a search engine company; its core business is advertising.

You might argue this is win-win.  Google makes money (lots and lots of it); we get a remarkably effective, personalized service.  At least we usually don’t have to wait for the advertising to conclude, as we do on TV, before we can continue our use of the medium.

Vaidhyanathan argues in his book for creation of what he calls the Human Knowledge Project (akin to the Human Genome Project).  This would deliver an “information ecosystem” that would supplant and outlive Google—essentially a global electronic network of public libraries that would be universally accessible and forever within the communal domain.

It’s an idea worthy of consideration, because, once again, we seem to be vulnerable to the loss or change of a single, monopolistic source of information.  As with the Alexandria library, there are too many eggs in Google’s basket.

Requiem for a Cinema Pioneer

The great Quebec filmmaker Michel Brault died last month, and while he and his career were fully appreciated in his home province—Premier Pauline Marois attended his funeral on October 4, and the flag at the Quebec City Parliament building flew at half-mast for the occasion—we in English-speaking North America know too little of the profound contribution this film artist made to cinema.

Especially in the realm of documentary, Brault’s influence can hardly be overstated.  He was among the very first to take up the new lightweight film cameras that began appearing in the late 1950s, and when he co-shot and co-directed the short film Les Raquetteurs (The Snowshoers) for The National Film Board of Canada in 1958, documentary filmmaking was forever changed.  The 15-minute film focused on a convention of cheery snowshoers in rural Quebec, employing a fluid, hand-held shooting style, synchronous sound, and no voice-over narration whatsoever.  The dominant documentary visual style in previous years had been the ponderous look made necessary by the bulk of 35 mm cameras, a style frequently accompanied by somber ‘voice of God’ narration.  Subject matter was often ‘exotic’ and distant—say, Inuit people in the Canadian Arctic, or dark-skinned Natives in Papua New Guinea.  Reenactment was, almost of necessity, the preferred manner of recording events.

In 1960, the French anthropologist-filmmakers Jean Rouch and Edgar Morin were shooting Chronique d’un été (Chronicle of a Summer) in Paris, turning their cameras for the first time upon their own ‘tribe.’  When they saw Les Raquetteurs, they immediately fired their cameraman and brought Brault in to complete the work.  Rouch went on to label Chronique “cinéma vérité” (literally ‘truth cinema’), and an entire new genre of documentary film began to appear everywhere in the West.

Robert Drew and his Associates (chief among them D.A. Pennebaker, Richard Leacock and Albert Maysles) took up the cause in the United States, labeling their work ‘direct cinema,’ and delivering films like Primary, about the 1960 Wisconsin primary election between Hubert Humphrey and the largely unknown John F. Kennedy, and Don’t Look Back, about a young folksinger named Bob Dylan on his 1965 tour of the United Kingdom.  Both films would have a marked impact upon the subsequent rise of these two pivotal political/cultural figures.

Brault himself was slightly less grandiose in describing the filmic techniques he pioneered, saying, “I don’t know what truth is.  We can’t think we’re creating truth with a camera.  But what we can do is reveal something to viewers that allows them to discover their own truth.”

He would later turn to fictional filmmaking, writing and directing, among other works, Les Ordres in 1974, a smoldering indictment of the abuse of power which transpired during the ‘October Crisis’ of 1970 in Quebec.  Les Ordres was scripted, but the script was based upon a series of interviews done with a number of people who were in fact arrested and imprisoned during the crisis.  As such, it was considered ‘docudrama,’ another area where Brault’s influence was seminal.  Brault won the Best Director Award at the Cannes Film Festival in 1975 for Les Ordres, and he remains the only Canadian to have ever done so.

These days, with video cameras in every smart phone and tablet, the idea that we should turn our cinematic attention to our own people is taken for granted, as every police department now teaches its members.  But in Brault’s early career, that we should observe, at close quarters, those immediately around us, and do so in an unobtrusive but sustained way, then make that prolonged cinematic observation available to the public, well, that was an almost revolutionary notion.  We could stay close to home, and let the camera reveal what it would.  The process may not have invariably presented ‘the truth,’ certainly not in any genuinely objective way, but observational documentary filmmaking granted us new understanding, new insight into people both with and without power.  And we were the better for it.

If the goal is to leave a lasting impression, to press a permanent handprint onto the wall of the cave where we live, Michel Brault can rest in peace.  He made his mark.

Text vs. Talk

Nobody answers the phone anymore.  Not unless the call is from a member of that small, select group who qualify for as much in your life, and not unless call display tells you it’s them.  And maybe not even then.

Do you remember when you would unhesitatingly pick up the phone, not knowing who was calling, but confident that you would then be able to talk with someone you cared to?

Not if you’re among the technologically ‘native’ generations you don’t.  For anyone born from the late 80s on, texting rules.  A phone call is intrusive, burdensome to manage, and difficult to exit.  For my generation, on the other hand, texting seems like a throwback in technology, slow and cumbersome, like going back to the typewriter after enjoying the benefits of word processing.

Texting is all about control of course, carefully crafting, on your own time, a message that may appear casual but is in fact considered, strategic, probably revised.  A phone call is unpredictable, volatile even, and calling for that skill so prized by the Victorians—conversation.

But texting also directly reflects the hermetic quality of digital technology, that quality which allows us, in Sherry Turkle’s words, to be “alone together.”  Simply put, it’s more isolating.  The contact you make with another human being via texting is removed, with very real—not virtual—time and space set between sender and receiver.

As I’ve said elsewhere in this blog, isolation can be a good thing, something that most of us don’t get enough of.  Isolation where it leads to the opportunity for quiet contemplation, for thought, for listening to near silence; this sort of isolation is therapeutic, spiritual shoring up which should be a recurring part of our lives.  I spent a couple of summers alone on a very isolated fire tower when I was young, watching for smoke rising from the wilderness forests surrounding me, but really just being with myself, being with myself until I had no choice but to accept myself, and then move on.  The experience changed my life forever; it may be the single smartest thing I’ve ever done.

But that isolation, alone with the wondrous beauty of the Canadian wilds, also regularly made otherwise fully sane, well-grounded individuals slip off the edge of sanity.  “Bushed” it was called then, and probably still is.  There were scads of stories, but the best one I heard personally was from a forest ranger who, driving past a towerman’s hilltop station one evening, decided to pay a visit.  As he drove the winding road up to the tower, he glimpsed the lighted windows of the cabin a few times, but, when he pulled into the yard, those lights were off.  Surmising that the man had just gone to bed, he turned around, headed back down.  But from the road now twisting down the hill, he saw that the lights were back on.

His suspicions aroused, he returned to the cabin, again found the lights off, but this time knocked on the door.  Receiving no response, he let himself in to find the man hiding timorously under his bed.

We are social animals who need regular human contact, and the more social contacts we have, the more likely it is that we are happy.  A little time spent in the company of others was all that was needed by those afflicted like the towerman above in order to return to a healthy mental state.  Because here’s the thing: too much isolation leads the isolated to avoid, not seek out, human contact.  I can attest to as much.  After enough days spent alone, you no longer wish to associate with other people.  It’s too much effort, requiring skills that are too corroded.

Now there is obviously a vast, vast degree of difference between mountaintop and telephone isolation, but that’s my point: it is only a difference in degree.  The direction of the impact is the same, toward insecurity and deteriorated social skills.

As with most digital technology, there’s no going back.  Innovation has again created a need we didn’t even know we had.  I won’t be picking up my phone any more often in future, but engagement is the price I’m paying, and engagement is precisely what makes us more alive.