
Netflix Prize: learning how to fix algorithms when everything goes haywire.

After I described last week how machines learn by accepting “training sets”—real-world data they use to construct their algorithms—I got an email from Slate patron saint R.M. For the past few weeks, the intro machine-learning course I’m taking online via Stanford has been teaching us students to build increasingly complex models that identify handwritten numerals, to the point that our models are now more than 97 percent accurate. However, Auros raises an excellent point: If you test your algorithm on the same data you used to train it, how can you possibly know whether it’s any good?

When the Netflix Prize was first announced—the company offered $1 million to whoever could beat its original prediction algorithm by 10 percent—I assumed the winner would be the person with a groundbreaking insight into the vagaries of human preference.
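The standard answer to that reader's question is to hold some of the data back as a “test set” the model never sees while it learns, and to judge the algorithm only on that held-out data. Here is a minimal sketch of the idea in Python; the scikit-learn digits dataset and the logistic-regression model are stand-ins of my choosing, not the course's actual data or algorithm.

```python
# A sketch of evaluating a digit classifier on held-out data rather than
# on the data it was trained on. Assumes scikit-learn is installed.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # 8x8 images of handwritten numerals, 0 through 9

# Hold back 25 percent of the examples; the model never trains on them.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)  # learn only from the training set

# Accuracy on the training set flatters the model; accuracy on the
# held-out test set shows whether it generalizes to data it hasn't seen.
print("training accuracy:", model.score(X_train, y_train))
print("test accuracy:    ", model.score(X_test, y_test))
```

If the training-set score is high but the test-set score is much lower, the model has mostly memorized its training examples rather than learned anything that carries over to new handwriting.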
