An Interview with Michael Horn on the future of EdTech

Michael Horn, author of Blended: Using Disruptive Innovation to Improve Schools, talks about disruption in the EdTech space.

We are so excited to welcome Michael Horn, author of Blended: Using Disruptive Innovation to Improve Schools, and a force for positive and innovative change in the world of education, as an advisor for Pedago. We met with Michael a few weeks ago to talk about disruption in the EdTech space. Here’s what he had to say.

In your book, Blended, you explain how in-classroom learning can be melded with technology to create effective learning experiences. Why do you think no one was doing this until recently?

MH: Until just recently, education had been essentially the same since the printing press. There were the traditional teaching methods for the general populace, mixed with tutoring systems reserved for the elite and for those who had enough social capital.

Finally, disruptive technology—online learning—started to appear. When MOOCs arrived, people conceptualized the online learning movement as video tutorials: filmed, staged lessons. Disruptive innovation theory gave us a way to talk about this new movement more broadly, though, and to see where it was going. It allowed us to realize that online learning represented a bigger moral opportunity, a chance to think about education in a truly novel way that could benefit all students. The theory gives us a framework to understand that we have the potential to use online learning to transform education in a massive way, beyond these filmed lessons, and to create a personalized learning solution for every student at a cost we can afford.

What do you see in the near-term future for EdTech?

Video is just a small part of my vision for what the EdTech world has the potential to become. We need to move toward creating different modalities for different kinds of learning. Learning through games, virtual reality—these are great ideas, but they don’t work for every subject. We need solutions that can be customized based on the subject matter to facilitate active learning.

You talk a lot about disruption—how do you qualify disruption, and how do you see it playing out in the EdTech space?

One of the ways that we measure disruption is through asking the question: does your technology have a low-cost value proposition you can bring to market now, while still improving it over time to tackle more complex problems? There aren’t a ton of these on the market yet in the EdTech space.

Some might suggest that MOOCs are disruptive, but I would disagree. There’s a limit to the amount of dynamic education you can provide through MOOCs and video content because interaction between learners and educators is so limited.

Disruption starts by tackling simple problems, then moves up-market to tackle more difficult problems. That’s why there are so many companies tackling math right now—because it’s rules-based. It’s harder to address higher-end education. I’m excited to see what starts coming out of the EdTech space to tackle these harder concepts.

Last question—what’s one of your best learning experiences?

In all seriousness, my first time trying Smartly blew me away. But, if I have to choose something else, I’d have to say my class with Clayton Christensen at Harvard Business School because he combined theory lessons with real-life applications using case studies, so the learning was very concrete.

Want to hear more from Michael? Stay tuned to Smartly’s blog or find Michael Horn on Twitter (@michaelbhorn)! Visit Smartly at https://smart.ly.

The Blue Ocean Mind

In my third year of running Rosetta Stone as CEO, I opened the book Blue Ocean Strategy, and it was a pivotal moment.

Ten years ago, I picked up a copy of Blue Ocean Strategy at an airport bookstore.

At the time, I was in my third year of running Rosetta Stone as CEO, and we were enjoying annual growth rates of close to 100%. While still operating out of a converted seed warehouse in rural Harrisonburg, Virginia, I engaged consultants and advisers who invariably asked me: “Who’s your competition? What are your competitors’ strengths? How can you stay ahead or catch up?” or “Do you know how big the language learning industry is today? How fast is it growing? How can you gain market share?” Those weren’t the things we were focused on! We were thinking about how to build an interesting, enduring, and delightful company.

We were an emerging company and were…well…a bit odd. Our price point was tenuous (twenty times the cost of rival language learning software). Our marketing spend also seemed unsustainably dominant (with sprawling kiosks and crazy-high ad spending), all managed without an integrated media plan. And we were unfocused in terms of our end markets, offering the same curriculum in 25+ languages to the US Army, Fortune 500 companies, school districts, homeschoolers, and individual consumers. We were, we were told, a legacy of seemingly illogical decisions in need of a strategy to become more competitive.

And yet, we had just become the #1 company in the US language learning industry by revenues, overtaking the long-established, brick-and-mortar-based Berlitz. We were profitable and one of the fastest-growing companies in the nation. We certainly did not feel like we were all wrong—even if we didn’t have it together in all sorts of ways. We were doing well, and enjoying the ride.

Discovering Blue Ocean Strategy was a pivotal moment for me. It provided a framework for what we had been doing and explained why our independent and unusual approach was working. It spurred me to hone our strategic approach as we evolved new, innovative offerings and business models. And as colleagues became familiar with Blue Ocean Strategy, its powerful concepts became part of our common parlance across the company, inspiring product designers to rethink English language training in Asia and helping us onboard new collaborators, who typically arrived wanting to teach us their more “reasonable” way of focusing on beating the competition.

While not every blue ocean strategy we pursued turned to gold, it was the right framework for designing solutions to age-old problems. Like anything in life, it will not work every time, and reality is unpredictable. But it is a wonderful way to approach work and life in general—a license to do what you think is right, and to stop wasting time on things you don’t think are required. It is what explains successes such as Tesla, Cirque du Soleil, and IKEA—and how they escape the traditional competitive mindset that is so limiting and even exhausting. If you haven’t yet used the Blue Ocean Strategy framework to think about your company and your life, please do so! You’ll be happier for it.

As one of our first courses at Pedago, we’ve developed a quick intro to Blue Ocean Strategy via our new platform Smartly. Whether it is your first contact with the framework or more of a refresher, with Smartly, you’ll breeze through it!

And in the process, you’ll get to see what Alexie, Ori and I, and the rest of the Pedago team, have been up to over the past couple of years. We think we’ve come up with a powerful new way to teach using technology, and we hope that it works for you. In the future, we’ll develop many more courses using this platform and technology.

Our solution is designed for the smartphone but works great on desktop and tablet, too. And there aren’t any plans for a CD-ROM or any bright box packaging! So get going and escape the red ocean by going to https://smart.ly/blue-ocean-strategy.

May Blue Ocean Strategy become your team’s strategic lingua franca!

Git Bisect Debugging with Feature Branches

Inspectocat, courtesy of GitHub

At Pedago, we follow the GitHub Flow model of software development. Changes to our app are made in feature branches, which are discussed, tested, code reviewed, and merged into master before deploying to staging and production. This approach has become pretty common, and in most cases does a good job of balancing our desire to ship quickly with the need to control code quality.
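
If you’re less familiar with the flow, the life of a change looks roughly like this (a minimal sketch; the branch name and commit message are hypothetical):

git checkout master && git pull        # start from the latest master
git checkout -b add-user-avatars       # do the work on a feature branch
git commit -am "Add avatar uploads"    # commit as you go
git push -u origin add-user-avatars    # push and open a pull request
git checkout master                    # once reviewed and approved...
git merge --no-ff add-user-avatars     # ...merge with a merge commit
git push origin master                 # master is now ready to deploy

Merging a pull request in GitHub’s UI records a merge commit in the same way, and those merge commits are what make the bisect trick later in this post possible.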

But, what happens when a bug inevitably creeps in, and you need to determine when it was introduced? This article describes how to apply git bisect in the presence of numerous feature branches to quickly detect when things went awry in your codebase.

Enter Git Bisect

git bisect is a tool for automatically finding where in your source history a bug was introduced. It saves you the pain of manually checking out each revision yourself and keeping a scratchpad of which ones were good and bad.

Here’s how you get started:

# start up git bisect with a bad and good revision
git bisect start BAD_REVISION GOOD_REVISION

At this point, git will check out revisions one at a time and ask you whether each commit is good or bad. You give git this information by typing git bisect good or git bisect bad. Git then uses binary search (bisecting the history) to quickly home in on the errant commit.
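
An interactive session looks something like this (output abbreviated, revisions hypothetical):

$ git bisect start BAD_REVISION GOOD_REVISION
Bisecting: 12 revisions left to test after this (roughly 4 steps)
[a1b2c3d] some commit message
$ git bisect good
Bisecting: 6 revisions left to test after this (roughly 3 steps)
[e4f5a6b] another commit message
$ git bisect bad
...

…and so on, until git announces the first bad commit.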

You can also further automate things by giving git a script to execute against each revision with git bisect run. This lets git take over the entire debugging process, flagging revisions as good or bad based on the script’s exit code: 0 marks a revision good, 125 tells git to skip it, and any other code from 1 to 127 marks it bad. More on this below!

Example

Imagine you go back to work from a vacation and discover that the Rails specs are running much more slowly than you remember before you left. You know that the tests were fast at revision 75369f4a4c026772242368d870872562a3b693cb, your last commit before leaving the office.

Being a master of git, you reach for git bisect. You type:

git bisect start master 75369f4a4c026772242368d870872562a3b693cb

…and then for each revision git bisect gives you, you manually run rake spec with a stopwatch. If it’s too slow, you type git bisect bad, and if it’s fast you type git bisect good.

That’s kind of monotonous, though, and didn’t we mention something about automating things with a script above? Let’s do that.

Here’s a script that returns a non-zero error code if rake spec takes longer than 90 seconds:

#!/bin/bash

# record wall-clock time (in seconds) around the spec run
start=$(date +%s)
rake spec
end=$(date +%s)

runtime=$((end-start))

# anything over 90 seconds is too slow: exit non-zero so that
# git bisect run marks this revision as bad
if [ "$runtime" -gt 90 ]
then
    echo TOO SLOW
    exit 1
fi

echo FAST ENOUGH
exit 0

Let’s say you save this script to /tmp/timeit.sh and make it executable (chmod +x /tmp/timeit.sh). You could use it instead of your stopwatch and keep manually marking commits as good and bad, but let’s go further and have git bisect do the marking for us:

git bisect run /tmp/timeit.sh

Now we’re talking! After waiting for a bit, git tells us that the errant revision is:

31c60257c790e5ab005d51d703bf4211f43b6539 is the first bad commit
commit 31c60257c790e5ab005d51d703bf4211f43b6539
Author: John Smith <john@example.com>
Date:   Wed Jan 21 12:02:38 2015 -0500

    removing defunct jasmine-hacks.js

:040000 040000 94ff367b586ec62bacb3438e0bc36ae62f90da22 bd3b447e7fc8ce782a7a4c01d11d97383bf06309 M karma
bisect run success

OK, so that sounds good. But wait, that’s a commit that only affected JavaScript unit tests! How could that have caused a problem with the Ruby specs?

Damn You, Feature Branches

The problem is that git bisect is not confining itself to only the merge commits in master. When it narrows down the point in time when things got slow, it isn’t taking into account the fact that most revisions are confined to our feature branches and should be ignored when searching the history of changes to master.

What we really want is to only test the commits that were done directly in master, such as feature branch merges, and the various one-off changes we commit directly from time to time.
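
Incidentally, you can view exactly this slice of history with git’s --first-parent flag, which follows only the first parent of each merge:

git log --oneline --first-parent master

This prints master as a sequence of merge commits and direct commits, hiding everything that happened inside the feature branches.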

git rev-list

Here’s a new strategy: using some git rev-list magic, we’ll find the commits that only exist in feature branches and preemptively instruct git bisect to skip them:

for rev in $(git rev-list 75369f4a4c026772242368d870872562a3b693cb..master --merges --first-parent); do
  git rev-list $rev^2 --not $rev^
done | xargs git bisect skip

In short, the above chunk of bash script:

  1. Gets all revisions between the known-good revision and master, filtering only those that are merges and following only the first parent commit, and then for each commit
  2. Gets the list of revisions that only exist within the merged branch, and then
  3. Feeds these branch-only revisions to git bisect skip.
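
The ^ suffixes are git’s parent-selection syntax: for a merge commit $rev, $rev^ is its first parent (the previous tip of master) and $rev^2 is its second parent (the tip of the feature branch being merged in). For example, using the merge commit that turns up later in this post:

# list the commits reachable from the branch tip (086e45^2) but not
# from the old master tip (086e45^), i.e. the branch-only commits
git rev-list 086e45^2 --not 086e45^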

Pulling It Together

Here’s the complete list of commands we’re going to run:

$ git bisect start master 75369f4a4c026772242368d870872562a3b693cb

$ for rev in $(git rev-list 75369f4a4c026772242368d870872562a3b693cb..master --merges --first-parent); do
>   git rev-list $rev^2 --not $rev^
> done | xargs git bisect skip

$ git bisect run /tmp/timeit.sh

This runs for a while, and completes with the following chunk of output:

Bisecting: 14 revisions left to test after this (roughly 4 steps)
[086e45] Merged in update_rails_4_2 (pull request #903)
running /tmp/timeit.sh
....................................................................
....................................................................
....................................................................
....................................................................
....................................................................
....................................................................
....................................................................
....................................................................
....................................................................
....................................................................
....................................
Finished in 1 minute 21.79 seconds (files took 6.63 seconds to load)
719 examples, 0 failures
Randomized with seed 54869

TOO SLOW

There are only 'skip'ped commits left to test.
The first bad commit could be any of:
342f9c65434bdeead74c25a038c5364512d6b67e
9b5395a9e1c225f8460f8dbb4922f52f9f1f5f1d
dcb1063e60dbcb352e9b284ace7c83e15faa93df
027ec5e59ca4c380adbd352b6e0b629e7b407270
1587aea093dffaac2cd655b3352f8739d7d482dc
2ff4dee35fd68b744f8f2fcd5451e05cb52bff87
73773eae4f6d283c3487d0a5aea0a605e25a8d3f
1cf615c6fa69e103aea3761feaf87e52f1565335
26d43d2060880cb2dbe07932fe4d073e3ccb7d44
293190779e33e26b9ceabfcff48021507591e9d1
77d504ee4b52b0869a543670cd9eb2fb42613301
3f25514f793e87549c9d64ddcfe87f580b29f37e
d43d1845b9fd6983ff323145f8e820e3aea52ebd
32a9e3c879546d202c27e85ab847ca9325977d5c
ea3e3760fb06e3141e5d12f054c1153e55b5cc67
9665813264a5e0d7489c43db871b87e319143220
b8f5106a8901d56621e72ba6b8bd44d4d5471dd2
086e45a2c0a2ed2cd26eeb48960c60048af87d0a
We cannot bisect more!
bisect run cannot continue any more

Hooray! We’ve found our offending commit: Merged in update_rails_4_2 (pull request #903). That makes sense—we upgraded RSpec and made a bunch of testing-related changes in that branch.

Furthermore, we see a list of skipped commits that git bisect didn’t test. This also makes sense—those commits are all within the update_rails_4_2 branch.

Conclusion

With a bit of git magic and some scripting, we’ve completely automated what could have been a very tedious exercise. Furthermore, thanks to the judicious use of git rev-list and git bisect skip, we’ve been able to cajole git into giving an answer that takes our branching strategy into account. Happy hacking!

Fixturies: The speed of fixtures and the maintainability of factories

We had a Rails app. We used factories in our tests, and it took ten minutes to run them all. That was too slow. (Spoiler alert: by the end of this blog post, they will run in one minute.)

We suspected that we could speed up the test run time by using fixtures instead, but worried that fixtures would be much more difficult to maintain than our factories.

As it happens, we are not the first developers to deal with the issue that factories are slow and fixtures are hard to maintain.  I cannot explain the issue any better than the following folks do, so I’ll just give you some quotes:

“In a large system, calling one factory may silently create many associated records, which accumulates to make the whole test suite slow …”

“Maintaining fixtures of more complex records can be tedious. I recall working on an app where there was a record with dozens of attributes. Whenever a column would be added or changed in the schema, all fixtures needed to be changed by hand. Of course I only recalled this after a few test failures.”

“Factories can be used to create database records anywhere in your test suite. This makes them pretty flexible and allows you to keep your test data local to your tests. The drawback is that it makes them almost impossible to speed up in any significant way.”

In our case, 99% of our tests were using identical records.  For example, we were calling FactoryGirl.create(:user) hundreds of times, and every time, it was creating the exact same user.  That seemed silly.  It was great to use the factory, because it ensured that the user would always be up-to-date with the current state of our code and our database, but there was no reason for us to use it over and over in one test run.

So we wrote the fixturies gem to solve the problem this way: each time we run the tests, just once at the beginning, we execute a bunch of factories to create many records in the database. The fixturies gem then dumps the state of the database to fixture files, and our tests run blazingly fast using those fixtures.

We saw a 10x improvement in run times, from ten minutes down to one.  We still use factories here and there in our tests when we need a record with specific attributes or when we want to clear out a whole table and see how something behaves with a certain set of records in the database.  But in the vast majority of cases, the general records set up in that single run at the beginning are good enough.

If you are using factories in your tests to re-create the same records over and over again, and your tests are running too slowly, give fixturies a try and let us know how it goes.  It only took us about half a day to refactor 700 tests to use fixturies instead of traditional factories, so there is a good chance it will be worth your time.