Tag Archives: digital revolution

MOOC Learning

Academia is said to be one of the societal institutions that have best resisted the changes that have come with the e-revolution.  MOOCs may be changing all that.  MOOC is the acronym for Massive Open Online Course—a post-secondary education course where anyone with an internet connection can sign up, complete the coursework online, receive feedback, and ultimately earn recognition for finishing the course.  Class sizes can be huge, over 150,000 in some instances, and a MOOC is sometimes combined with regular bums-in-seats-in-the-lecture-hall students.

MOOCs (pronounce it the same way a cow pronounces everything, then add the hard ‘C’) made their first appearance in 2008, and institutions as venerable as Stanford and Harvard have since introduced them.  Why not, since, like so many web-based operations, they’re cheap to set up and ready-made for promotion.  Fees are usually not charged to MOOC students, but I suspect that will soon begin to change.

But think of it.  You too can enroll in a course from Harvard, presented by an eminent Harvard professor, if only virtually.  It’s more of the greater democratization so often brought about by the internet.  All good thus far.

As is so frequently the case with digital innovation, however, the picture is not a straightforward one.   There is little genuine accreditation that comes with completion of a MOOC.  You may receive some sort of ‘certificate of completion’ for a single course, but there’s no degree forthcoming from passing a set number of MOOCs.  Sorry folks; no Harvard degree available via your laptop set upon the kitchen table.

The attrition rate is also high for MOOCs.  Many students who have eagerly signed up find it difficult to stay with and succeed at an online course from the unstructured isolation of the kitchen table.  The potential for cheating is another obvious issue.

Back to the upside for a moment though.  With MOOCs, learners are engaged in an interactive form of schooling which, research tells us, is considerably better than the traditional bums-in-seats model.  MOOCs are typically constructed via modules, shorter lessons which require the passing of a concluding quiz to demonstrate that the student has grasped the modular content and is thus ready to move on.  If not, then the material is easily reviewed, and the quiz retaken.  It’s a form of individualized learning which has obvious advantages over the scenario where a student, having failed to comprehend the message being delivered orally by the sage professor at her lectern, is obliged to raise his hand and make his failure known to the entire student assemblage.
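(For the technically curious, the gating logic is simple enough to sketch in a few lines of Python.  The module names, the passing mark and the quiz scores below are invented for illustration; they are not drawn from any actual MOOC platform.)

    # A toy sketch of MOOC-style module gating: a student advances only
    # once the module's concluding quiz is passed; otherwise the material
    # is reviewed and the quiz retaken.  All names and numbers here are
    # illustrative assumptions, not any real platform's rules.

    PASSING_SCORE = 0.8  # assumed passing mark

    def run_course(modules, take_quiz):
        """Work through modules in order; repeat each until its quiz is passed."""
        for module in modules:
            while True:
                score = take_quiz(module)
                if score >= PASSING_SCORE:
                    print(f"Passed '{module}' with {score:.0%}; moving on.")
                    break
                print(f"Scored {score:.0%} on '{module}'; review and retake.")

    # Example: the student fails the first quiz once, then passes everything.
    attempts = iter([0.6, 0.9, 0.85])
    run_course(["Module 1: Basics", "Module 2: Applications"],
               take_quiz=lambda module: next(attempts))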

One of the most interesting aspects of the MOOC phenomenon emerged with one of the early Stanford MOOCs, when the regular, in-class students began staying away from lectures, preferring to do the online modules along with their MOOC brethren.

But online learning is also, as we all know, hardly restricted to delivery by august institutions of higher formal education.  Anyone who has ever typed “How to…” in the YouTube search box knows that much.  The Khan Academy, started by MIT and Harvard grad Salman Khan in 2006, now has more than 3600 mini-lessons available via YouTube.  A website like Skillshare offers lessons on everything from how to make better meatballs, to how to create an Android app.  At Skillshare you can sign up as either a teacher or a student, although as a teacher your course proposal must be vetted by the Skillshare powers-that-be.  Nevertheless, Skillshare courses are a bargain.  For one course I recently looked at, the fee was just $15 for the first 250 students to sign up.

But here’s the real kicker, from a Skillshare course on how to become a Skillshare teacher.  The course is presented in just two video modules over three weeks, including office hours, and the instructor advises that you’ll need to set aside an hour per week to complete the class.  “By the end of this workshop,” gushes the young woman offering this golden opportunity, “you will be able to teach an excellent class.”  Well, to employ a pre-revolutionary term, what utter codswallop.  No one, neither Gandhi nor Einstein, should be guaranteed the ability to teach an excellent class after taking a part-time, three-week workshop.  With the internet, especially when it comes to start-ups, you’ll always want to watch your wallet.

The most significant downside to online learning is of course that it lends itself far better to certain kinds of subject matter than it does to others.  It works best with subjects where there is one and only one right answer, or where there is a very well-defined skill to be mastered, say the proper way to prune a fruit tree.  Any subject where individual interpretation, subtle analysis, critique, or indeed genuine creativity is required is not so easily adapted to a MOOC template.  Whether a computer will ever write something so sublime as King Lear is one thing; whether a computer will ever be able to legitimately grade hundreds of essays on the same work is another.

Quite simply, and despite all those from C. P. Snow on down who have argued so persuasively for the melding of arts and sciences, there are certain studies—those where the goal is insight into and appreciation of the ineffable—that will never lend themselves well to the MOOC model.  Praise be.


The Apostate

Jaron Lanier is an interesting case.  He’s an author, musician, composer and computer scientist who, back in the 80s, was a pioneer in virtual reality software and hardware.  (The company he founded after leaving Atari in 1985 was the first to sell VR goggles and gloves.)

Most interestingly, these days Lanier is a sharp critic of what he terms “digital Maoism,” the open-source groupthink which says that not only all information, but all creative content should be free and subject to appropriation (for mashups, for instance).  As you might expect, he’s been subject to considerable blowback from the online community for taking this position, and Lanier freely admits that he was once a card-carrying member of this same club, believing that any musicians or journalists who were going broke because of the digital revolution were simply dinosaurs, unable to adapt to a new environment that would soon provide them other means of financial support, if only they were a little patient and properly innovative.  The problem is, as Lanier writes in his 2010 book You Are Not a Gadget, “None of us was ever able to give the dinosaurs any constructive advice about how to survive.”

And so, currently, we have a world where creators—be they artists, musicians, writers or filmmakers—face massive competition and constant downward pressure on what they can charge for their product.  This while a few of what Lanier labels the “Lords of the Clouds”—those very able but still very lucky entrepreneurs who were at the right place at the right time with the right idea (think the owners of YouTube and Google)—have amassed huge fortunes.

These conditions have delivered a new feudal world where, according to Lanier, we again have starving peasants and rich lords, where formerly middle-class creators struggle to survive in the face of competition from what he adroitly describes as those people ‘living in their van,’ or those who are mere hobbyists, creating art as an after-work pastime, or perhaps because they can pay their monthly bills with an inheritance.  Important artists find themselves, like Renaissance artists of old, looking to rely on the beneficence of wealthy patrons.  “Patrons gave us Bach and Michelangelo,” rues Lanier, “but it’s unlikely patrons would have given us Vladimir Nabokov, the Beatles, or Stanley Kubrick.”

There’s little doubt that the digital revolution has been fairly disastrous for the creative community, at least once you combine it with the global economic tanking that took place in 2008-09.  (See last week’s post: ‘DEP.’)  As is so often the case, however, the picture is not so simple.  Another huge factor in the plethora of creative product out there at rock bottom prices is the advent of new production technology.  It’s now a whole lot easier than it was back in the 1970s to make a movie, or record some music, or publish your book.  The means of production have evolved to where just about anyone can get their hands on those means and begin creating, then distributing.  More supply, less [or the same] demand means lower prices; the invisible, emotionally indiscriminate hand of capitalism at work.  The former gatekeepers—the major record labels, publishing houses and movie studios—have lost their decisive positions at the entryway, and this in the end has to be a good thing.  It’s just that the change has not come without a flip side, one Lanier does a nice job of illuminating.

Back in 1990, Francis Coppola was interviewed by his wife for the making of Hearts of Darkness: A Filmmaker’s Apocalypse, a documentary she was shooting about the extraordinary travails Coppola had faced in completing his career-defining opus Apocalypse Now.  Coppola had this to say about the future of filmmaking: “My great hope is that … one day some little fat girl in Ohio is gonna be the new Mozart and make a beautiful film with her father’s camera, and for once the so-called ‘professionalism’ about movies will be destroyed forever, and it will become an art form.”

Be careful what you wish for.


Referendum Politics


An old friend once said to me that she thought voting should be a privilege, rather than a right.  She felt citizens should be educated on the issues before they would qualify to vote.  With that, presumably, would come the government requirement to take a course, complete a quiz, or somehow prove that you as a potential voter were sufficiently informed to be eligible to step into the voting booth.

It’s a bit much for me, involving a bit too much faith in the benevolence of government, but, on the other hand, it’s not hard to empathize with the sentiment.  Anyone who has made any sort of sustained investigation into the illegality of soft drugs, for instance, will soon come to the conclusion that the U.S. ‘war on drugs’ is a colossal waste of police and legal resources, a policy which pitchforks money to organized crime, fills up jails with non-violent offenders, and delivers scant results in terms of decreased drug use.

And yet, until very recently—maybe—a majority of American voters favored retaining laws prohibiting marijuana use.  Why?  Well, two reasons, I think.  First of all, emotion: the historical residue of the hysteria generated by ridiculous government campaigns of the past touting the dangers of “reefer madness!”  Secondly, the simple fact that these people aren’t well informed about the issue.  They haven’t studied the facts.  They haven’t seen how much money is spent eradicating marijuana fields, taking down grow ops, busting teenagers, jailing small-time dealers.  They haven’t considered how much money flows to gangs, when it could be flowing in taxes to depleted government coffers.  They may be vaguely aware that the prohibition of alcohol back in the 1920s didn’t work out that well, giving rise to the American Mafia, but they haven’t really had to examine the parallels between those events and the prohibition against marijuana.  Why have the majority of Americans viewed marijuana prohibition as a good thing?  They don’t know any better.

It’s just one example which raises the question of whether ‘direct democracy’ is a good thing.  The digital revolution is fast delivering us the means to hold a referendum on every issue, voting from our smart phones, tablets and laptops.  Should we go there?  If we do we could probably eliminate the need for those noxious politicians squabbling in cantankerous legislatures.  Then we could institute, just as my friend suggested, online courses which a prospective voter would be obligated to complete, before casting her vote on any particular proposed law.  Tempted?

The more germane question, here and now, is whether an elected official is compelled to vote ‘the will of the people.’  Setting aside for a second the reality of a ‘party whip’ dictating to said official how he will vote, should our rep be free to vote according to his own personal assessment of the proposition, or should he be obliged to vote in line with what polls show is the view of the majority of his constituents?

Personally, I’m a believer in representative democracy, where we send our best and brightest to debate, study and confer on the issues of the day, and then vote according to their soundest judgment.  Referendums are a mug’s game.  If we are to see progressive change in our society, we’re better off avoiding them.  Why?  For one specific reason: voting ‘no’ empowers; voting ‘yes’ does not.  We can frame the referendum question as carefully as we like, crafting it like obsessed ad men, but the fact is that the number of voters out there who feel at least mild resentment toward politicians dwarfs the number who may be uninformed about any particular issue.  These folks are generally not terribly happy with their lives, and the easiest place to direct the blame is toward the government.

Thus, when the opportunity arises to ‘stick one’ to the government, they’re going to take it; they’re going to vote no to change.  Voting no means that the power still resides with you—maybe I’ll vote yes next time, if you’re nicer to me in the meantime—but voting yes means you no longer hold any leverage.  The power has been passed on to people who may never care to seek your input again.

As I keep saying, change is constant; new problems will always arise, so we need change to contend with those problems—new solutions for new problems.  And referendums will always make that difficult.  They’re a political cop-out.  They amount to politicians dodging their responsibility.


The Singularity

It’s the ultimate sci-fi concept.  Those infernal machines keep getting steadily smarter and smarter until, one day, shazaam, they surpass human intelligence and we arrive at “the singularity”—a point in time beyond which, almost by definition, the future is unknowable.

The idea has been popularized by science-fiction writer Vernor Vinge and futurist Ray Kurzweil, who rightly point out that such an event would be more than a little disruptive to existing social and economic conditions.  Certainly we’ve seen that kind of disruption already with the effects of the digital revolution on nearly every industry out there.  It may have begun with music, but can you think of any industry which has not by now been at least bent out of its former shape, if not turned on its proverbial ear, by the advent of digital technology, whether it be publishing, journalism, travel, entertainment or war?

Scott McIntyre, the CEO of Douglas & McIntyre, the largest independent Canadian publishing house, tried to put the pressures on his industry into perspective during an interview broadcast by the Canadian Broadcasting Corporation on July 21 of 2012.  He repeated the publishing bromide which states that the first book Johannes Gutenberg published, after inventing the printing press, was The Bible; the second book he published was “a screed on the death of the publishing industry.”  A little perspective on any problem is always a good thing.  Sadly, however, proper perspective or not, Douglas & McIntyre filed for bankruptcy on October 22 of 2012.

Kurzweil suggests, in his 2005 book The Singularity Is Near, a scenario of “accelerating returns” on computer technology, whereby computers progressively design new and better computers along an exponential growth curve.  Like humans, computers become self-replicating.  It’s an evolutionary path which, Kurzweil believes, is inevitable.

It all relates back to “Moore’s Law,” the oft-cited axiom which states that the number of transistors on a computer chip (and with it, roughly, the chip’s processing power) doubles every two years.  Intel co-founder Gordon E. Moore provided the basis for the Law back in 1965, and his prediction has proven to be almost supernaturally accurate to date.  It’s interesting to note, however, that Intel itself has predicted that the trajectory may finally end as soon as 2013.  Moore has added that, “It can’t continue forever.  The nature of exponentials is that you push them out and eventually disaster happens.”
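(To get a feel for what that doubling amounts to, here is a minimal back-of-the-envelope sketch in Python.  The 1971 starting point, the Intel 4004’s roughly 2,300 transistors, is the usual textbook reference; the projected figures are purely illustrative, not Intel’s actual roadmap.)

    # Moore's Law as simple arithmetic: a doubling every two years.
    def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
        """Projected transistor count per chip, assuming a doubling every two years."""
        return base_count * 2 ** ((year - base_year) / doubling_years)

    for year in (1971, 1985, 2000, 2013):
        print(year, f"{transistors(year):,.0f}")

    # Forty-two years of doublings is 2**21, a factor of roughly two million,
    # which is why Moore himself says the exponential can't run on forever.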

Computers, I am told, are very near reaching the human brain’s capacity for language recognition.  Can we safely predict from there that, as many have suggested, a computer will nonetheless never be capable of writing, say, King Lear?  I recall a university professor of mine, back in the day, who cast withering aspersions on the prediction that, by the day he was speaking to us, the chess champion of the world would be a computer, reminding us that Boris Spassky currently occupied that seat.  As we all know, the good Professor would be in no position for such easy defamation today: not only have computers long since surpassed the best human chess players, IBM’s ‘Watson’ triumphed over the very best players of the TV quiz show Jeopardy in 2011.

Ray Kurzweil

Kurzweil’s version of the Singularity is more than the ultimate sci-fi premise; it also represents the ultimate faith in technology.  Kurzweil believes that we will soon be able to achieve immortality via an upload of our bio-techno-enhanced consciousness, that we will be able to revive the dead, so long as we have stored enough information about them before they physically disappeared.  Optimistic seems an inadequate descriptor for this view.  Others have suggested that as computers exceed our intelligence and go on, at the same exponential rate, to become super beings far eclipsing our powers in every capacity, they will come to regard humans as utterly inconsequential, much the way we regard mosquitos—periodically irritating, but a problem easily remedied with a decisive swat.

1) Change is the only constant, and 2) prognosticators of the future are like baseball players at the plate: the very best of them get it right only about a third of the time.  These are the only two axioms that occur to me as reliable when it comes to considering the future.

The digital revolution has far more in common with the industrial revolution than it does with the Gutenberg revolution.  Like the industrial revolution, it has a profound upside, and a profound downside.  It remains for us to collectively attempt to benefit from its upside, and protect ourselves from its downside.  (The demise of independent Canadian publishing is no small loss.)  On an individual level, it’s the very same challenge.


Cybernetics

Chop Wood, Carry Water, Write Blog

I was ten years old when my father got his first calculator.  It was 1963, and a few years earlier he had left his position as City Engineer in the just-barely-big-enough-to-be-called-a-city I grew up in to start his own engineering and surveying business.  The calculator was the size of, say, a slightly elongated 300-page hardcover book, but the thing was, it could do trigonometry, instantly calculating the sines, cosines and tangents (and therefore distances) that my father had been laboriously calculating ‘by hand’ until then.

This was significant for my family because my father worked long hours in those days.  He left early, came home for lunch, kicked back in his recliner for a brief nap, returned to work, showed up again just ahead of dinner, and then usually headed back to the office after dinner for a few more hours of mental toil.  It was particularly hard on my mother, who was left to contain me and my two brothers.  The arrival of the calculator, my mother announced, meant that we would all see more of my father.

It didn’t happen.  Any let-up in my father’s work schedule came only years later as the result of a growing business, and the hiring of staff.

The term I remember from the 70s for this same naive hope was ‘cybernetics.’  It’s a word that seems to have meant many different things to many different people over the years, but the meaning I recall touted was one which suggested that, given the incredible speed and efficiency evolving via modern science and technology, we would all soon be enjoying far greater amounts of leisure time.

It didn’t happen.  You may have noticed.

What a gloriously well-intentioned crock it all was, and another lesson in how poorly we predict the future direction and impact of new technology.  (No one, for instance, foresaw the rise of social media ten years ago.)  We know now that technology—especially digital technology—doesn’t save us time; it simply accelerates our lives.  It closes the gap between what we can do now and what would previously have taken us longer to get to.  With an instant calculation, or instant information, or instant communication, the task we would formerly have had overnight, or maybe two weeks, to anticipate and ponder is immediately upon us, demanding to be done.

The fact is, if you’d like more time on your hands, get off the grid.  Escape electronic technology altogether.  Chop wood, carry water, save time.

Until recently, the place on Galiano Island where I write this was off the communications grid.  No phone (no cell phone coverage), no TV, no internet.  Electricity and radio, that was it.  To come here was to immerse yourself in the pre-digital age.  That experience became the inspiration for this blog.

Thus, this is the inaugural post for a blog intended to be about our changing times, about the accelerated change we are all living with on a daily basis.  Ken Auletta, who writes a column on communications technology for The New Yorker, has said that we are now experiencing a greater degree of change than at any time in our entire history, and I suspect he’s dead right about that.  (It took radio 38 years to reach an audience of 50 million; the internet just four.)  And I further think that this condition isn’t given quite enough attention these days.

Make no mistake, the digital revolution has many, many positives.  The often-mentioned benefits of social media for the Arab Spring protestors should suffice as an example.  But so too do we have the catch phrase ‘digital Darwinism’ accompanying the ‘e-revolution,’ as if to say that any who are not gleefully on board the digital train are doomed to an embarrassed extinction.  These purveyors of doom employ the word ‘exciting’ more often than any other in discussing the future of the revolution underway (Jeff Bezos, founder of Amazon, tells us, grinning, that we are still in “day one” of the revolution), but let us make no mistake about this fact either: the burgeoning ubiquity of digital technology is driven by money.  Those who wax so enthusiastic about all of us participating in the social media game are themselves coaching from the sidelines.  They want to win all right, but the winner’s payoff for them is exactly as Aaron Sorkin’s brilliant screenplay for The Social Network would have us note: “Baby, you’re a rich man too.”  Those of us out there battling on the field are playing for very different stakes.  And for those on the field, the game is not without injury.

It’s not fashionable to say so, but the digital revolution warrants our skepticism, a critical rather than an eager or unthinking reception, and this blog is intended to facilitate exactly that.

The blogger’s woodpile.