The Brave New (Wired) World of Online Education

For all our modern advances, the jury is still out regarding the most effective ways to teach online.

It is a brave new world, indeed, in which milk, cars, and spouses can all be acquired via the Internet. But for all our advances, the jury is still out regarding the most effective ways to teach online.

Many online learning platforms consist of passive video lectures and podcasts, or universities repackaging classes for the web. To illustrate, imagine you have students who have never seen a pizza before and want to learn how to make one. With current online teaching methods, they’d likely never throw the dough, choose the toppings, or get feedback on their work; instead, they’d sit quietly through written descriptions and video lectures.

The prevalence of this passive approach demonstrates a key challenge in the pursuit of engaging, effective web-based education: the issue of interactivity. While more studies are showing that interactivity breeds engagement and information retention, instructors and platforms are still struggling to employ effective levels and modes of interactivity.

Researchers at Columbia University’s Community College Research Center examined 23 entry-level online courses at two separate community colleges and made some interesting discoveries about this phenomenon. Their assessment was that most of the course material was “text-heavy” and that it “generally consisted of readings and lecture notes. Few courses incorporated auditory or visual stimuli and well-designed instructional software.” While technology that supported feelings of interpersonal interaction was found to be helpful, mere incorporation of technology was insufficient—and recognized as such by the students. The research noted that “Simply incorporating technology into a course does not necessarily improve interpersonal connections or student learning outcomes.”

The research specifically called out message boards (where instructor presence and guidance were minimal) as insufficiently interactive to engage students in a way they found clear and useful. The consensus of the research was that “effective integration of interactive technologies is difficult to achieve, and as a result, few online courses use technology to its fullest potential.”

Another interesting look at web-based learning and interactivity is a 2013 study conducted by Dr. Kenneth J. Longmuir of UC Irvine. Motivated by the fact that most “computerized resources for medical education are passive learning activities,” Professor Longmuir created his own online modules designed for the iPad (and other mobile devices). These three online modules replaced three of his classroom lectures on acid-base physiology for first-year medical students. Using a Department of Defense handbook as his guide for incorporating different levels of activity, Longmuir presented text and images side by side and embedded a question-and-answer format. Summarizing student comments, he wrote, “The most frequent statement was that students appreciated the interactive nature of the online instruction.” In fact, 97% of surveyed students said it improved the learning experience. They reported not only that the online material took less time to master than in-person lectures, but that the interactivity of the modules was the “most important aspect of the presentation.”

While Dr. Longmuir was reluctant to draw hard conclusions about this particular online course’s efficacy (due to variables in student procrastination, students skipping important material, etc.), there are a few clear points to be taken from both studies. For one, engaging, interactive content is the exception, not the rule, in today’s online learning environment. Both studies suggest the importance of interactivity in online learning—if not definitively in test results (though that’s a possibility), certainly in how students feel about their engagement with the material. This isn’t surprising since research is showing that lack of interactivity in traditional classrooms is detrimental, as well.

While the science behind producing effective online learning courses is still in development, the need for meaningful interactivity in new educational technology seems like a no-brainer. If we hope to teach our students to make that pizza, the most effective way is not to drown them in video clips and PDF files; we should create online learning experiences that mimic—or even improve upon—the interactivity and satisfaction that pounding the dough themselves would provide.


Pedago Announces Partnership with Top Business School INSEAD

Smartly partners with the top business school INSEAD to prepare incoming students for classes through business fundamentals courses.

Today we’re very excited to announce a new partnership with INSEAD, one of the leading business schools globally.

INSEAD, which pioneered the MBA in Europe over fifty years ago, earns top rankings from Forbes, the Financial Times, and Business Insider, and is ranked number one in Europe and Asia-Pacific by the QS Global 200 Business Schools Report (registration required to view), which ranks institutions according to the preferences of over 4,000 actively hiring MBA employers worldwide. INSEAD faculty created Blue Ocean Strategy, a revolutionary and highly celebrated approach to business strategy, and the school’s founder, Georges Doriot, is dubbed the “father of venture capitalism.” In short, they’re kind of a big deal, and we’re honored to be working with them!

INSEAD holds cutting-edge research and innovation in teaching as foundational pillars of their institution, and in line with these core values, they’ve offered us the opportunity to work closely with them and their incoming students to explore the ever-expanding and changing world of online education and educational technology. At Pedago, we believe that technology can accelerate learning outcomes by enabling education wherever the learner may be. We strive to create a more fulfilling and effective online experience.

We’d like to take this opportunity to welcome INSEAD students of the class of 2015 to our program. We thank in advance all participants for being a part of this milestone in our development.

Build and Deploy with Grunt, Bamboo, and Elastic Beanstalk

In response to Twitter feedback on our recent post “Goodbye, Sprockets! A Grunt-based Rails Asset Pipeline,” we at Pedago would like to share an overview of our current build and deploy process.


It goes a little something like this:

Local development environment

We currently have a single git-managed project containing our Rails server at the top level and our Angular project in a subdirectory of vendor. Bower components are checked in to our repo to speed up builds and deploys. The contents of our gruntfile and the organization of our asset pipeline are described here.

We can start up our server via grunt server (which we have configured to shell out to rails server) or directly with rails server for Ruby debugging.
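
A minimal sketch of the kind of wrapper task involved might look like the following. The task and option names here are illustrative, not our actual Gruntfile:

```javascript
// Sketch of a Grunt task that shells out to `rails server`, so `grunt server`
// can launch Rails alongside our watches. Names are illustrative only.
module.exports = function (grunt) {
  grunt.registerTask('server', 'Run the Rails server under Grunt', function () {
    var done = this.async();
    var child = require('child_process').spawn('rails', ['server'], {
      stdio: 'inherit' // pipe Rails output straight to the console
    });
    child.on('close', function (code) { done(code === 0); });
  });
};
```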

Even though both the client and server apps are checked into the same project and share an asset pipeline, we restrict our Angular code to only communicate to the backend Rails server over APIs. This enforces a clean separation between client and server.

Bamboo build project

When Angular and Rails code is checked in to master, our Bamboo build process runs. We always push through master to production, à la the GitHub flow process. The build process comprises two stages:

Stage 1: Create Artifacts:

  • Rails: bundle install and freeze gems.
  • Angular: npm install, grunt build. No bower install is needed because we check in our bower_components. The grunt build step compiles, concatenates, and minifies code and assets. It also takes the unusual step of cache-busting the asset filenames and rewriting any references in view files to point to the new filenames.
  • The resulting artifact is saved in Bamboo and passed to Stage 2.

Stage 2: Run Tests:

  • Rails: run rspec model and controller tests, and then cucumber integration tests. It was a bit tricky to get headless cucumber tests running on Bamboo’s default Amazon AMI; see details in our previous blog post.
  • Angular: grunt test.

If the artifact creation succeeds, and the tests run on that artifact all pass, Bamboo triggers its associated deploy project. Otherwise, our team receives failure notifications in HipChat.

Bamboo deploy project

After every successful build, Bamboo is configured to automatically deploy the latest build to our staging environment.

The Bamboo deployment project runs the following tasks to kick off an Elastic Beanstalk deployment:

  1. Write out an aws_credentials file to the build machine. We don’t store any credentials on our custom AMIs. Instead, we keep them in Bamboo as configuration variables and write them out to the build machine at deploy time.
  2. Run Amazon’s script to add aws.push to the set of available git tasks on the build machine.
  3. Kick off the deployment to our Elastic Beanstalk staging environment with a call to git aws.push from the build machine’s project root directory.

Since our project is configured to use Elastic Beanstalk, the remaining deployment-related configuration (like which Elastic Beanstalk project and stage to push the update to) is checked in to the .elasticbeanstalk and .ebextensions directories in our project and made available to the git aws.push command. If there is interest in sharing the contents of these config files, please let us know on Twitter.

Elastic Beanstalk staging environment

After the staging deployment has been kicked off by Bamboo, we can head over to our EB console and monitor the deployment while it completes. The git aws.push command from the previous step does the majority of the work behind the scenes. For staging, we use Amazon’s default Rails template and “Environment type: Single instance.” The default Rails template manages the Rails processes on each server box with a Passenger + nginx proxy.

When we first decided to go to a grunt-based asset pipeline, we worried this might impact the way we deployed our servers. In fact, it does not. Our git code bundle containing our Rails app, Angular front-end, and shared assets is deployed to Elastic Beanstalk via git aws.push, exactly as it was prior to our grunt-based asset pipeline switch.

We then do smoke testing on our staging environment.

Elastic Beanstalk production environment

After we have determined the staging release is ready for production, we promote the current code bundle from staging to production:

  1. Load the EB console for the production stage of our project.
  2. Click “Upload and Deploy” from the Dashboard.
  3. Click “All Versions” in the popup.
  4. Select the git version currently deployed to staging.

For production, we use Amazon’s default Rails template, and “Environment type: Load balanced, auto scaling.” Elastic Beanstalk takes care of rolling updates with configured delays, aka no-downtime deployments.

Wrap up

The above system, combined with the grunt-based asset pipeline described in our previous post, allows us to iterate and deploy with confidence. Future work will focus on improving deploy times, perhaps by baking AMIs or exploring splitting our monolithic deployment artifact into multiple pieces, e.g., code and assets, npm packages, etc.

Curious about Pedago online education? Enter your email address to be added to our beta list.

Questions, comments? You should follow Pedago on Twitter.

Goodbye, Sprockets! A Grunt-based Rails Asset Pipeline

How to replace the Rails asset pipeline with a Grunt-based system: Part 1 of our build and deploy process.

This is the first in a two-part series. See Part 2 of our build and deploy process

Like any good startup, we try to leverage off-the-shelf tools to save time in our development process. Sounds simple enough, but the devil is in the details, and sometimes a custom solution is worth the effort. In this post, I’ll describe how and why we replaced the Rails asset pipeline with a Grunt-based system.

In the Beginning…

Early on, we embraced AngularJS as the foundation of our core application. We started prototyping using the Yeoman project and never looked back. If you’ve never used this project before, I highly recommend checking it out. It will save you time and tedium in setting up a development ecosystem. We fell in love with the Bower and Grunt utilities as a way to manage project dependencies and build pipelines, and we found the array of active development on the various supporting toolsets impressive. We were knee deep in NodeJS land at this point.

After we stubbed out a good portion of the UI on mock data, we had to start looking toward building out an API that could take us into further iteration. Ruby on Rails was proven and familiar, and we knew how to carve out a reliable backend in no time flat. Additionally, we wanted to take advantage of some proven RubyGems to handle common tasks for which the NodeJS web ecosystem hadn’t fully established itself. Some of these tasks included handling view responsibility, which in turn relied on Sprockets for asset compilation.

At this point, we had an AngularJS project, built and managed with Grunt, contained within a Rails project, built and managed with Rake and Sprockets.

Trouble in Paradise

We quickly found ourselves hitting a wall trying to manage these two paradigms. As have several others.

Our hybrid Grunt + Sprockets asset pipeline included multiple build processes and methods of shuffling assets. The more we tried to get these two jealous lovers to play nice, the more they fought. The final straw was minification-induced runtime errors and the lack of sourcemap compilation support in Sprockets (while somewhat supported in an ongoing feature branch, sourcemaps hadn’t made it into master and required dependency changes we weren’t ready to make quite yet).

At this point it became apparent that we were wasting precious cycles dealing with things outside our core competency, and that we needed to unify these pipelines once and for all.


Our solution: say goodbye to Sprockets! We have completely disabled the traditional Rails asset pipeline, and now rely on GruntJS for all things assets-related. The deciding factors for us were the community activity and the flexibility the project provided. Here’s a Gist of our (slightly sanitized) Gruntfile.js powering the whole pipeline.

How we currently work:

  • We don’t use the Rails asset helpers…at all. We use vanilla HTML for our views as much as possible. Attempts to use the Rails asset helpers ended up being overly complex and ultimately felt like trying to work a square peg into a round hole.
  • We reference the compiled scripts and styles (common.js, app.js, main.css, etc) directly in our Rails layouts.
  • Grunt build and watch tasks handle the pipeline actively and passively. In development, we use the wrapper task grunt server to launch Rails along with our watches. Source and styles are compiled and published directly to Rails as they are saved. Likewise, unit tests run continually with output to console and OS X reporters.
  • LiveReload refreshes the browser or injects CSS whenever published assets are updated or otherwise modified.
  • We no longer require our Rails servers to perform any sort of asset compilation at launch, as they’re now built by CI with the command grunt build prior to deployment. Nothing structural in our build deployment process has changed (in our case, using Bamboo to deploy to Elastic Beanstalk).
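
The watch-driven half of that workflow can be sketched roughly as follows. The paths and task names are illustrative rather than our exact Gruntfile (linked above):

```javascript
// Rough sketch of the watch wiring described above: recompile and publish
// on save, with LiveReload notified after each publish.
module.exports = function (grunt) {
  grunt.initConfig({
    watch: {
      scripts: {
        files: ['app/scripts/**/*.js'],
        tasks: ['jshint', 'concat'],   // recompile sources as they are saved
        options: { livereload: true }  // refresh the browser on publish
      },
      styles: {
        files: ['app/styles/**/*.scss'],
        tasks: ['compass'],            // recompile SCSS through Compass
        options: { livereload: true }  // or inject CSS without a full refresh
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-watch');
};
```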

With the above, we are now constantly testing using the assets that actually make it into a production environment, with sourcemap support to handle browser debugging sessions. Upon deployment, Rails instances do not need to pre-process static assets, reducing warm-up time.

Ultimately, the modular nature of the Grunt task system ensures we have a huge array of tools to work with, and as such, we’ve been able to incorporate all the nice little things that Sprockets does for us (including cache-busting and gzip compression) and the things it doesn’t (sourcemaps).


Feel free to steal our Gruntfile.js if you’re looking to adopt this system. We’ve also cobbled together a list of Grunt tasks that we’ve found helpful:

  • grunt-contrib-watch – the glue that binds automated asset compilation together.
  • grunt-angular-templates – allows us to embed our AngularJS directive templates into our javascript amalgamation. Also useful for testing.
  • grunt-contrib-uglify – handles all JS concatenation, minification, and obfuscation. Although we adhere to AngularJS minification rules, we’ve found issues with the mangle parameter and must disable that flag when handling Angular code. UglifyJS2 also provides our sourcemaps.
  • grunt-contrib-compass – we only author SCSS and rely on Compass to handle everything concerning our styles, including compilation and minification as well as spritesheet and sourcemap generation.
  • grunt-autoprefixer – …except we don’t bother writing browser-specific prefixes. Instead we use autoprefixer to automatically insert them. The recent version supports sourcemap rewrites.
  • grunt-cache-bust – renames assets to CDN friendly cache-busted filenames during distribution.
  • grunt-contrib-jshint + grunt-jsbeautifier – keeps our code clean and pretty.
  • grunt-karma – is constantly making sure we write code that works as intended.
  • grunt-todos – reminds us not to litter.  =]
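
As an example of the uglify caveat above, the relevant config fragment looks something like this (illustrative, not our exact Gruntfile):

```javascript
// Illustrative uglify config: disable name mangling so AngularJS
// dependency injection, which relies on function argument names,
// survives minification, and let UglifyJS2 emit sourcemaps.
module.exports = function (grunt) {
  grunt.initConfig({
    uglify: {
      options: {
        mangle: false,   // Angular DI breaks if argument names are rewritten
        sourceMap: true  // UglifyJS2 generates our sourcemaps
      },
      dist: {
        files: { 'dist/app.js': ['tmp/concatenated.js'] }
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-uglify');
};
```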

Learn more about our build and deploy process in Part 2 of this series.

We hope this guide helps others trying to marry these two technologies. Please feel free to contribute with suggestions for future improvements via GitHub or Twitter!

We just launched our first product! Learn more about Smartly at

Questions, comments? Follow us on Facebook or Twitter.

Headless integration testing using capybara-webkit

We use Cucumber for integration testing our Rails servers, and by default all Cucumber scenarios tagged with “@javascript” pop up a browser. We needed to get this running headless so we could run these tests on our build machine. We use the Atlassian suite, and Bamboo for CI, running on EC2.

This post is for developers or sysadmins setting up Rails integration testing on a CI system like Travis, Hudson, or Bamboo.


The de facto way of running headless tests in Rails is to use capybara-webkit, which is easy to install and run locally following the guides here.

Capybara-webkit relies on Qt, which is straightforward (though slow) to install on OS X, which we use for development. Our build box however is Amazon Linux, which is supposedly a distant cousin of CentOS. We’re using Amazon Linux because Bamboo OnDemand provides a set of stock Amazon Linux AMIs for builds that we have extended and customized.

We started out following the CentOS 6.3 installation guide from the capybara-webkit wiki above but quickly encountered problems because Amazon Linux doesn’t ship with a lot of libraries you might expect from Redhat or CentOS, like gcc and x11.

Here are the steps we followed to get Qt installed and our headless Cucumber tests running on our Bamboo build machine. This installation process was tested on ec2 AMI ami-51792c38 (i686).

# First, install dependencies listed on the capybara-webkit wiki that do not
# ship with Amazon Linux AMIs.
# If you don't do this, ./configure below will fail with errors like
# "Basic XLib functionality test failed!"

yum install -y gcc-c++
yum install -y libX11-devel
yum install -y fontconfig-devel
yum install -y libXcursor-devel
yum install -y libXext-devel
yum install -y libXfixes
yum install -y libXft-devel
yum install -y libXi-devel
yum install -y libXrandr-devel
yum install -y libXrender-devel
# extract, configure, and install qt from source
# (after downloading the 4.8.5 source tarball)
tar xzvf qt-everywhere-opensource-src-4.8.5.tar.gz
cd qt-everywhere-opensource-src-4.8.5
./configure --platform=linux-g++-32
# caution: this will take a long time, 5 hrs on an m1.small!
gmake install
# add qmake location to path
export PATH=$PATH:/usr/local/Trolltech/Qt-4.8.5/bin/
# now finally gem install will work!
gem install capybara-webkit

Curious about Pedago online education? Enter your email address to be added to our beta list.

Questions, comments? You should follow Pedago on Twitter.

Pedago releases 3 AngularJS projects to the open source community

In the past week, Pedago has released 3 open source projects on our github page.



Iguana is an Object-Document Mapper for use in AngularJS applications. It gives you a way to instantiate instances of your classes from the data you pull down over an API. It’s similar to Ruby tools like ActiveRecord or MongoMapper.
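
To illustrate the general pattern (this is not Iguana’s actual API, just a toy sketch of what an Object-Document Mapper does):

```javascript
// Toy illustration of the ODM pattern: turn raw API documents into
// instances of a class with behavior attached. NOT Iguana's real API.
function defineModel(methods) {
  function Model(attrs) {
    for (var key in attrs) { this[key] = attrs[key]; }
  }
  for (var name in methods) { Model.prototype[name] = methods[name]; }
  // "Deserialize" a raw document (e.g. parsed JSON from an API) into an instance
  Model.fromDocument = function (doc) { return new Model(doc); };
  return Model;
}

var User = defineModel({
  fullName: function () { return this.first + ' ' + this.last; }
});

var user = User.fromDocument({ first: 'Ada', last: 'Lovelace' });
// user.fullName() -> 'Ada Lovelace'
```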


Iguana depends on super-model, which should someday include much of the functionality that ActiveModel provides for Ruby users. For now, however, it only provides callbacks.


Both iguana and super-model depend on a-class-above, which provides basic object-oriented programming (OOP) functionality. A-class-above is based on Prototype’s class implementation, and also provides inheritable class methods and some convenient helpers for dealing with enumerables that are shared among classes in an inheritance hierarchy.
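
The “inheritable class methods” idea can be sketched in plain JavaScript like so (hypothetical code for illustration, not a-class-above’s actual API):

```javascript
// Hypothetical sketch of inheritable class methods: a subclass receives
// both prototype methods and class-level (static) methods from its parent.
function extend(Parent, protoProps) {
  function Child() { Parent.apply(this, arguments); }
  Child.prototype = Object.create(Parent.prototype);
  Child.prototype.constructor = Child;
  for (var name in protoProps) { Child.prototype[name] = protoProps[name]; }
  for (var key in Parent) {              // copy class methods down the hierarchy
    if (Parent.hasOwnProperty(key)) { Child[key] = Parent[key]; }
  }
  return Child;
}

function Animal() {}
Animal.create = function () { return new this(); }; // a class method

var Dog = extend(Animal, {
  speak: function () { return 'woof'; }
});

var dog = Dog.create(); // the class method was inherited by the subclass
// dog.speak() -> 'woof'; dog is an instance of both Dog and Animal
```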

This is our first foray into the management of open-source projects, so we’ll be learning as we go along.  We’re trying hard to make these useful to the community, so we have packaged them up as bower components and spent time writing what we hope is useful documentation.  We used groc for the documentation and focused on documenting our specs in order to provide lots of useful examples, rather than documenting each method in the API.  We hope that this will be more helpful than more traditional API documentation would have been, and would love to hear comments on how it’s working for folks.

We hope that other AngularJS users will find iguana, super-model, and a-class-above to be useful and decide to contribute.


Curious about Pedago’s interactive education? Enter your email address to be added to our beta list.

Questions, comments? You should follow Pedago on Twitter.

Learning Game Trees and Forgetting Wrong Paths

This is the second of two blog posts delineating the pedagogical approach of Herb Simon, the man credited with inventing the field of artificial intelligence, for which he won a Turing Award in 1975. (Read the first post here.) Simon was a polyglot social scientist, computer scientist, and economics professor at Carnegie Mellon University. He later won the Nobel Prize in economics in 1978 for his work in organizational decision-making.

Tic Tac Toe Game Tree, Gdr from Wikimedia Commons

Dr. Simon would often tell his students that he liked to think about human learning as a game tree: when you start out learning about a new topic, you begin at the root of the tree with what you already know, and follow connections to related topics, discovering new “nodes” in the tree. You employ a variety of search strategies to follow connections both broadly and deeply through related topics, loading as much of the explorable tree into memory as possible. As you discover and master each “node” on the tree, you learn which branches of the tree are fruitful and which are fruitless.

During and after exploration though, the entire game tree remains in your working memory, slowing you down. When you take breaks, not only are you relaxing, but you are also forgetting wrong paths – pruning those fruitless branches from your working memory. When you next return to the task at hand, you resume exploring connections and mastering concepts not at the very top of the tree, but in the most fruitful subtrees where you left off, making better use of your working memory.
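
Simon’s metaphor can be made concrete with a toy sketch: represent topics as a tree, mark nodes fruitful or fruitless as you explore, and prune the fruitless branches between sessions (purely illustrative code):

```javascript
// Toy model of "forgetting wrong paths": drop subtrees that contain
// nothing fruitful, so the next session resumes from a smaller tree.
function prune(node) {
  var children = (node.children || []).map(prune).filter(Boolean);
  if (children.length === 0 && !node.fruitful) {
    return null; // a dead branch: forget it entirely
  }
  return { topic: node.topic, fruitful: node.fruitful, children: children };
}

var tree = {
  topic: 'new topic', fruitful: true,
  children: [
    { topic: 'wrong path', fruitful: false },
    { topic: 'useful subtopic', fruitful: true }
  ]
};
// prune(tree) keeps 'useful subtopic' and drops 'wrong path'
```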

At Pedago, we believe in learning by doing, and we want to break complex topics and concepts down into what Seymour Papert in the book Mindstorms calls “mind-sized bites.” One of the benefits of breaking complicated topics into “bites” is that it is easier to build learning content that learners can work through when they only have a few minutes free, on whatever device they have on hand.

As we build our database of short concepts and lessons, we find ourselves also building a rich tree structure of topic relation metadata that in structure is not unlike Simon’s game tree of learning. A nice side-effect of a learning solution with rich, encapsulated, short lessons is that you don’t have to commit to a thirty-minute video – you can learn in bits and pieces throughout your day. And by doing this, you are unintentionally building and then pruning your learning game tree in an efficient way, forgetting wrong paths and making the best use of your working memory each time you return to your lessons.


Herb Simon on Learning and Satisficing

This is the first of two posts delineating the pedagogical approach of Herb Simon, credited with inventing the field of AI, for which he won a Turing award in 1975.

This is the first of two blog posts delineating the pedagogical approach of Herb Simon, the man credited with inventing the field of artificial intelligence, for which he won a Turing award in 1975. Simon was a polyglot social scientist, computer scientist and economics professor at Carnegie Mellon University. He later won the Nobel Prize in 1978 in economics for his work in organizational decision-making.

Herbert Simon, Pittsburgh Post-Gazette Archives


“Learning results from what the student does and thinks and only from what the student does and thinks. The teacher can advance learning only by influencing what the student does to learn.” –Herb Simon

Among his many accomplishments, Herb Simon was a pioneer in the field of adaptive production systems. He also identified the decision-making strategy “satisficing,” which describes the goal of finding a solution that is “good enough” and which meets an acceptability threshold, as opposed to “optimizing,” which aims to find an ideal solution.

Simon believed that human beings lack the cognitive resources to optimize, and are usually operating under imperfect information or inaccurate probabilities of outcomes. In both computer algorithm optimization and human decision-making, satisficing can save significant resources, as the cost of collecting the additional information needed to make the optimal decision can often exceed the total benefit of the current decision.
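
In code, the difference between the two strategies is easy to see (an illustrative sketch, not Simon’s own formulation):

```javascript
// Satisficing: accept the first option that clears a "good enough"
// threshold, stopping early. Optimizing: score every option, take the best.
function satisfice(options, score, threshold) {
  for (var i = 0; i < options.length; i++) {
    if (score(options[i]) >= threshold) { return options[i]; } // stop here
  }
  return null; // nothing was acceptable
}

function optimize(options, score) {
  return options.reduce(function (best, option) {
    return score(option) > score(best) ? option : best;
  });
}

var score = function (x) { return x; };
satisfice([3, 7, 9], score, 5); // -> 7: good enough, and 9 was never scored
optimize([3, 7, 9], score);     // -> 9: the best, but every option was scored
```

The satisficer spends less on evaluation, which is exactly the trade Simon described: the cost of scoring every option can exceed the benefit of finding the very best one.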

We live in a world where overwhelming amounts of information are at our fingertips. Every month, new educational software offerings hit the market. You can find tutorials to fix anything in your house, learn a new language for free, find lessons that teach you to dance, and watch video lectures from top universities on the topics of your choice.

I like to think of myself as a polyglot learner: I would love nothing better than to just take a year, or two, or ten, and learn as much as I can about everything. But unfortunately, I have limited time. How do I know which tutorials, lessons, and classes are worth the commitment of my time? How can I find a satisficing solution to the problem of becoming a more well-rounded learner and human being?

In Simon’s words, “information is not the scarce resource; what is scarce is the time for us humans to attend to it.” At Pedago we’ve been inspired by thinkers such as Simon to build a learning solution that makes the most of the scarce resource of your time, by employing curated streams of bite-sized lessons; rich, explorable connections between topics; interactive learn-by-doing experiences; and just the right amount of gamification. We want to enable you to craft your own learning experience, so that you can, as Simon would say, positively influence what you do and what you think.

Stay tuned for the second post in this series as we examine Simon’s modeling of human learning.

Tinkering Toward Learning

Given how useful the tinkering approach is for keeping learners motivated, how do we apply a similar approach to a subject like Finance?

By Artaxerxes (Own work) [CC-BY-SA-3.0], via Wikimedia Commons
My friend Alfredo builds bikes as a hobby. He started by replacing a broken chain on his own bike. Then he upgraded his brakes. After a few more repairs, he understood the whole bike system well enough that he could gather all the parts and build one from scratch.

Experienced programmers generally learn new languages in a similar way. We get assigned to a new project for which there is an existing codebase that needs to be maintained or extended. Everything is mostly working, but something needs to be tweaked or added. So we tweak it. After working on five or ten features, we know the new language well enough that we could start a new project ourselves.

In more traditional educational environments, however, we tend to learn things the other way around. We start with simple, contrived building blocks and slowly work our way up to the point where we can comfortably manipulate a more complex and realistic system.

For example, a course that teaches the principle of the “Time Value of Money” is likely to start with a question like “if someone offered you $90 today or $100 a year from now, which one would you take?” This is, to say the least, an unrealistic scenario. But it is an introduction into the concept. After working through a number of similar examples in order to allow the student to master the math, the course will hopefully move on to a more reasonable explanation of how this concept is used in practice.
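
The arithmetic behind that question is a one-liner: discount the future amount back to the present at some assumed interest rate (the 10% here is purely illustrative):

```javascript
// Time value of money: what is a future payment worth today,
// given an assumed annual rate of return?
function presentValue(futureAmount, rate, years) {
  return futureAmount / Math.pow(1 + rate, years);
}

presentValue(100, 0.10, 1); // ≈ 90.91: at a 10% rate, $100 next year beats
                            // $90 today; at 12% (≈ 89.29), it does not.
```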

By Anna reg (Own work) [GFDL or CC-BY-SA-3.0-at], via Wikimedia Commons
Not that it was a bad course. I actually quite liked it. But this would be as if Alfredo had first worked on pedals, then wheels, then built himself a unicycle before moving on to gears and brakes. It would have been years before he had anything he could ride. Knowing Alfredo, he would have had no hope of staying motivated for so long with no bike to show for it.

Given how useful the tinkering approach is for keeping learners motivated, how do we apply a similar approach to Finance? It turns out this is difficult to do because it often involves risking real money and waiting years to see any results. What a learner really needs is a safe environment to develop intuition around the long-term consequences of her decisions and to discover for herself the places where she needs to dig deeper.

At Pedago, developing alternative approaches to teaching tough topics is what we’re passionate about. Stay tuned over the coming months to see us tackle similar problems.


This post has been updated to include a clearer example. Thanks to Earthling for the feedback!

Teaching with Time-Lapse

How do you convince a skeptic that climate change is real? The documentary Chasing Ice takes on this challenge to awe-inspiring effect.


There’s no obvious connection between the melting of glaciers and online learning, so you might be wondering why this would be relevant to Pedago, an educational technology company. But bear with me.

The hero of the film, James Balog, turned to photography after finishing his master’s degree in Geology because he felt science was becoming too focused on numbers and statistics for him to enjoy. He believed he could make a greater impact through documenting Nature rather than dissecting it.

Thus, when faced with his own dawning realization that climate change was real, and human-influenced, he understood that facts, statistics, and lectures were ill-suited to sway the minds of a disbelieving public. He explored how best to use the tools of his trade, camera and ice axe, to make a difference.

His solution embodies the writer’s maxim to show, not tell. For three years, he and his team captured time-lapse images from Iceland, Alaska, Greenland, and Montana, then stitched them together. The resulting videos provide indisputable evidence that glaciers are receding ever more quickly. They are at once alarming, awesome, and visceral: attributes the standard “facts-and-graphs” discussions of climate change typically lack.

After watching this documentary, it really struck me that Balog was able to transform the conversation around a topic that is so frequently debated in the public space. Global warming is disputed more in American popular media than by scientists, its facts often treated as fictions promoted by activists. It’s difficult to convince people who are determined not to be convinced, even with the dramatic (but indirect) evidence of recent natural disasters. What can we learn from Balog’s feat?

I believe the key lesson is artful choice of data representation. The time-lapse images Balog’s team produced form physical evidence that is easily consumed. The viewer can wrap her mind around them, consider them as evidence, believe them or not with her own eyes. If the best way to understand the effects of global warming is to travel to a glacier and watch it calve icebergs or shrink into itself season after season, then bringing the key moments of this experience to a wider audience is certain to make a greater impact than presenting yet another statistic. A time lapse is worth a thousand graphs.

Balog’s accomplishment serves as a reminder to educators of the power in choosing novel representations for the material being presented. At the intersection of art, science, and technology, there is the potential for greater educational impact.

See the Chasing Ice trailer here: