Social Sciences

The Inside Story Of Facebook’s Pivot To Social Responsibility (Part 2)

Written by Mamie M. Arndt

This is Part 2 of an aspirational future history of Facebook’s awakening to its social responsibilities. Read Part 1 here.

SAN FRANCISCO—February 3, 2050—Few could have imagined back in 2021, when federal and state regulators were suing to break it up, that Facebook would evolve to be the force for social good that it is today. 

“Mark Zuckerberg was such a cold fish and seemed so motivated by profit,” observed one analyst. “I had a very hard time believing that he had a real aha moment that put him on this socially responsible path.” She assumed that his professed enlightenment was just another PR stunt.

Multiple sources, who refused to be identified discussing family matters, told of how the pivot was real and how the impetus came from events closer to home rather than from outside pressures. Mark Zuckerberg was deeply shaken when Max, his daughter, came home from preschool sobbing earlier that year. “What’s a Doomsday Machine?” she asked her dad, “and why do all the kids say that your company is responsible for the attack on Capitol Hill?”

That’s when Zuckerberg realized that the Chan Zuckerberg Initiative and his Giving Pledge commitment would not be enough to fulfill his birth letter promises to Max. His actions at Facebook were the true measure. That led the elder Zuckerberg to change three core assumptions about how he ran Facebook.

First, Zuckerberg took overt responsibility for the power and consequences of his social network. As Bran Ferren, chief creative officer of Applied Minds and former president of Walt Disney Imagineering, commented at the time: “Just as you can’t host a party of a few hundred people and absolve yourself of what happens during it, Zuckerberg can’t host two billion users on Facebook and disown responsibility for what is said and done. Accepting responsibility is necessary for Facebook to have any chance of reaching some high-order, good-for-the-world potential.”

Second, Zuckerberg became willing to sacrifice Facebook profits to fix the negative consequences of its use, rather than just talking about doing so. This was perhaps the easiest part. Zuckerberg was a multi-billionaire by the age of 23. Making more money wasn’t an issue. As Facebook chairman and CEO, he did have a responsibility to keep the publicly traded company vibrant. But, as the controlling shareholder, he had ample flexibility to tweak Facebook’s immensely profitable business model and, as the influential Business Roundtable urged all CEOs in those days, invest to also serve other stakeholders, not just investors.

Third, he abandoned the claim of Facebook being an “open and neutral platform.” Instead, he began to exercise explicit control over the content on and usage of his platform. In this, some claimed, Zuckerberg merely bowed to a reality that was quickly being forced on him. As Mark Philipzcuk, a technology industry marketing veteran observed, “Whether [Zuckerberg] liked it or not, Facebook was not just a platform. It was a publisher because that’s how the users used the product.”

By embracing the very changes that he had been resisting for years, Zuckerberg became free to unleash the power and reach of his platform to pursue what he had promised Max in that 2015 letter:  advancing human potential and promoting equality.

Alan Kay has observed that every technology is an amplifier of human propensities and inborn drives. This is a double-edged sword. It can yield beneficial changes, such as how the printing press sparked the Enlightenment and how computing revolutionized science. But technology can also have the opposite effect. To Kay’s astute eye, the Facebook of 2020 fed negative human universals, such as susceptibility to stories, status and kinship, in a narcissistic way. This, in turn, magnified parochialism and tribalism, which were the root causes of the problems and upheavals for which Facebook was being condemned.

To shift from being a force for “disastrous and possible final danger,” as Alan Kay deemed it, to a force for “advancing human potential and promoting equality,” as Zuckerberg promised, Facebook embraced a recommendation from Katherine Milkman, a behavioral economist at the Wharton School, to take on the goal of improving its users’ lives—and prioritized this over Facebook’s own profits.

Milkman pointed out that independent research, as well as Facebook’s own, showed how its newsfeed could influence its users’ emotions both positively and negatively. Other studies showed that social-network-enabled experiences could increase patient engagement, help prevent chronic diseases and boost voter turnout. What if this power was used to enhance the positive while avoiding the negative?

Many researchers, including Milkman and her colleagues at the Center for Health Incentives and Behavioral Economics, were trying to understand how social networks could drive positive behavior change. None could match the access, tools and resources available to Facebook, however.

Zuckerberg built a Xerox-PARC-like research center and unleashed it to advance the best behavioral science research on how to help users make better decisions and improve personal outcomes. Facebook used rigorous scientific methods, like A/B testing, to evaluate what truly created benefits for Facebook users and what didn’t.
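The core of such A/B evaluation is simple in principle: randomly assign users to a control or variant experience, measure an outcome for each group, and test whether the observed difference is larger than chance would explain. As a minimal sketch—with entirely hypothetical numbers and a made-up metric, not Facebook’s actual methodology—a two-proportion z-test can compare the rate at which users in each arm report a positive outcome:

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference between two proportions."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: did the variant feed raise a well-being metric?
z, p = two_proportion_ztest(success_a=4_210, n_a=50_000,   # control arm
                            success_b=4_530, n_b=50_000)   # variant arm
print(f"z = {z:.2f}, p = {p:.4f}")
```

At the scale the article describes—billions of users—even tiny effects reach statistical significance, which is why the harder question is always whether an effect is practically meaningful, not merely detectable.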

What’s more, Zuckerberg was transparent about such efforts. Facebook helped the whole world learn how social media could change behavior and in what ways. It generated new scientific insights. This allowed others to follow successful strategies Facebook found for improving outcomes. While such transparency did not maximize profits, it optimized for the greater good of users drawn to Facebook’s mission.

With such a massive experimental testbed and transparent research overseen by an empowered independent review board, Facebook made significant progress. Rather than following the “move fast and break things” mantra of its startup days, Facebook’s mantra became “Think big, start small and learn fast.” In the context of Facebook’s massive scale and committed resources, however, “small” was very big and “learning” was phenomenally quick. Rather than preying on negative human universals that amplified tribalism and parochialism, Facebook learned to foster human universals to connect people in positive ways. 

As Facebook’s recent quarterly results continue to affirm, being good for the world is also good for business.

The ultimate marker of success: an employee survey showed that 86% of Facebook executives and employees felt good about how their own children used Facebook.

Read Part 1: Facebook 2050: New Facebook CEO Affirms ‘Moral Responsibility’ To Make A ‘Dramatically Better’ World
