On Good Friday, Facebook blocked an ad sponsored by Franciscan University of Steubenville because it violated user guidelines that bar “shocking, sensational or excessively violent content.”

It turns out that Facebook censors were disturbed by a particular image in the ad: the San Damiano Cross — a beloved icon of the Franciscan order that is also ubiquitous on the university’s Ohio campus.

Facebook later apologized for its decision to pull the ad, and Tom Crowe, the university’s web communications director, noted that previous ads with the image had been posted on the platform without incident. During an interview with Fox News, Crowe speculated that “a low-level Facebook employee with a grudge against Christianity” might be responsible for this “one-off mistake.”

Just a week later, Mark Zuckerberg, Facebook’s 33-year-old founder and chief executive, also apologized for the decision to take down the ad and other controversial moves, like the blocking of 20 Catholic Facebook pages last year.

Conservative groups have also complained about political bias on the platform, and Sen. Ben Sasse, R-Neb., raised this issue when Zuckerberg appeared before the Senate.

“Can you imagine a world where you might decide that pro-lifers are prohibited from speaking about their abortion views on your platform?” Sasse asked.

Zuckerberg said he did not think pro-life speech would fit any of Facebook’s definitions for hate speech. But he acknowledged that the tech industry, based in California’s Silicon Valley, “is an extremely left-leaning place.” Thus, it was important to make sure, he said, that “we do not have any bias in the work that we do, and I think it is a fair concern that people would at least wonder about.”

Nevertheless, he sought to convince Congress that Facebook could be trusted to monitor content posted on its global platform and stated that critics should not “extrapolate from a few examples to assume that the overall system is biased.”

Likewise, Zuckerberg sought to convince lawmakers that Facebook could still be trusted with Americans’ personal data, though the tech giant has been assailed for allowing the political consulting firm Cambridge Analytica to access the data of an estimated 87 million Facebook users without their knowledge.

Then again, he also admitted that Facebook had not taken “a broad enough view of our responsibility” to protect data privacy and develop a more effective response to threats posed by “fake news,” foreign interference in elections and “hate speech.”

It was surely a moment of reckoning for the tech leader, who has sold his social-networking platform as a kind of virtual sacred space that builds human understanding and relationships across the globe.

Americans who followed his congressional testimony and related news coverage of his company’s missteps are grappling with the fact that Facebook has no magic algorithm for excluding political bias from routine evaluations of sensitive content or preventing the unlawful manipulation of personal data. And while many of us may continue to depend on Facebook to catch up with family and friends, we have been ambushed by a painful truth: This “free,” addictive social network has been built to harvest our personal data for hungry advertisers who are not interested in promoting world peace and understanding.

Many of Facebook’s most popular features, including the “like” button users click at the end of a post, help advertisers identify users’ interests, character traits and politics, while other data points help pinpoint their friends’ birthdates and ages — all helpful tidbits for marketers.

Third-party data brokers also tap this trove of information. In 2013, Cambridge Analytica developed a personality quiz that allowed the consulting firm to secure the data of users who downloaded the quiz, as well as information about their friends. The data were marketed as a helpful election campaign tool to target would-be voters.

At one time, “Facebook insisted they’re a technology company, not a media company, and thus they are no more responsible for what gets written on Facebook than the people who build bathroom stall walls are for someone writing ‘for a good time call Jenny at 867-5309,’” said Jim Geraghty in an analysis of Zuckerberg’s appearance on Capitol Hill for National Review.

But now, Facebook has assumed a greater measure of responsibility as it continues to remove content it deems inappropriate. What’s missing, said Geraghty, are “clear criteria for what [is] … acceptable on Facebook and what isn’t.” He noted that Facebook has been criticized for developing an algorithm for advertising categories that could not identify and filter out extremist speech. Thus, it was possible for marketers to use the platform to reach hate groups through labels like “Jew hater.” After a watchdog group alerted Facebook, the company scrambled to remove the ad categories, “which were created by an algorithm rather than people” — meaning that no one at Facebook in a position of authority knew the tool was available to paying customers.

The issue underscores the difficulty of controlling activity on a platform that now attracts 2 billion users.

Now, Congress is expected to pass legislation designed to beef up the platform’s privacy and security standards, clean up political bias and quash extremist groups that traffic in explicit hate speech.

This further drives home the point that Americans must be informed about the issues, and parents will have to exercise greater responsibility over their children’s online presence.

Last year, Zuckerberg signaled his intent to tap into the deep hunger for personal connection as more people stop attending church and drift away from civic engagement.

“[A] lot of people … need to find a sense of purpose and support somewhere else,” Zuckerberg opined during a 2017 speech that framed Facebook’s large community-support groups as a worthy destination for people who no longer participated in religious services but missed the chance to bond with friends. Facebook would change its “whole mission to take this on,” he vowed.

But sharing posts on Zuckerberg’s platform is no substitute for communal worship and Christian fellowship, and hitting the “like” button will never match the soul-transforming effect of hands-on service to the needy or a quiet chat with an elderly relative.

Zuckerberg says he wants Facebook to foster a more connected and peaceful world. Instead, the company’s scandals and its real bottom-line priorities remind us that danger lurks in every human enterprise, including this billionaire philanthropist’s would-be utopia.

The most powerful algorithms are only tools. They do not make personal responsibility obsolete at tech companies. Nor do they have the power to transform Facebook’s money machine into something that has never existed in this world: a place where hate and self-dealing have been vanquished, and toleration and altruism rule. “The world feels anxious and divided, and Facebook has a lot of work to do,” said Zuckerberg in January, “whether it’s protecting our community from abuse and hate … or making sure that time spent on Facebook is time well spent.”

Time well spent? That judgment is best left to us, not Facebook.