Companies, nonprofit organizations, and governments design algorithms to learn and predict user preferences. They embed these algorithms in recommendation systems that help consumers make choices about everything from which products and services to buy, to which movies to see, to which jobs to pursue. Because these algorithms infer users' preferences from users' behavior, human biases are baked into their design. To build algorithms that predict user preferences more accurately and do more to enhance consumer well-being and social welfare, organizations need to measure user preferences in ways that account for these biases.
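To make the point concrete, here is a minimal sketch, in Python with hypothetical data, of how a behavioral signal such as clicks can bake position bias into a preference estimate, and how a standard correction, inverse-propensity weighting, can account for it. The examination probabilities, item names, and appeal values are all invented for illustration, not drawn from any real system.

```python
# Sketch: clicks conflate what users like with what they were shown.
# All propensities and appeal values below are hypothetical.
import random

random.seed(0)

# Assumed probability that a user even looks at each rank position.
EXAM_PROB = {1: 0.9, 2: 0.5, 3: 0.2}
TRUE_APPEAL = {"A": 0.6, "B": 0.6}  # two items users like equally well


def simulate_logs(n=100_000):
    """Log clicks where item A is always ranked above item B."""
    logs = []
    for _ in range(n):
        for item, rank in (("A", 1), ("B", 3)):
            examined = random.random() < EXAM_PROB[rank]
            clicked = examined and random.random() < TRUE_APPEAL[item]
            logs.append((item, rank, clicked))
    return logs


def naive_ctr(logs, item):
    """Raw click-through rate: mixes appeal with exposure."""
    clicks = [c for i, _, c in logs if i == item]
    return sum(clicks) / len(clicks)


def ips_estimate(logs, item):
    """Weight each click by 1 / examination propensity to debias."""
    rows = [(r, c) for i, r, c in logs if i == item]
    return sum(c / EXAM_PROB[r] for r, c in rows) / len(rows)


logs = simulate_logs()
for item in ("A", "B"):
    print(item,
          f"naive CTR={naive_ctr(logs, item):.2f}",
          f"IPS estimate={ips_estimate(logs, item):.2f}")
# Naive CTRs differ sharply (about 0.54 vs. 0.12) even though the items
# are equally appealing; the IPS estimates both recover roughly 0.60.
```

In this sketch, a system that measures preference by raw click rate would conclude that users strongly prefer item A, when the gap reflects only where the items were placed; reweighting by how likely each position was to be seen recovers the equal underlying appeal.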