Facebook wants you to stop fighting in its groups, and a new AI tool might try to calm things down

>> The sun rises in the east and sets in the west. That may be one of the few truths on which Americans can agree. But is this a peaceful protest or an anti-fascist mob? Are these public-health imperatives or civil-rights infringements? Are these insurrectionists seeking to overturn an election, or just another day at the Capitol?
>> There was no insurrection, and to call it an insurrection is a lie.
>> It was beyond denial.
>> You get the idea.
>> I think that having a common reality is essential for human beings to find common ground, to collaborate and coordinate and communicate with one another.
>> Our guest's new book explores both the promise and the peril of social media.
>> The one thing we absolutely have to do, if we're going to solve the social media crisis, is get past this debate about whether social media is good or evil. The answer is yes.
>> Good when it helps the Ice Bucket Challenge raise on the order of a billion dollars for ALS research. Evil when it foments hate speech and violence. It does both through carefully crafted algorithms designed to keep you engaged, to reinforce your preferences and opinions with content tailored to you.
>> How did we land on the like button as the end-all, be-all that runs the system? When you base everything on a like button, popularity is the most important variable running the system. My question is, what if we had a truth button, or a "this taught me something" button, or a wellness button? If we did, we would have potentially very different outcomes in society.
>> But he says the current model was designed for efficiency and profit, regardless of how it misleads us.
>> There is tremendous danger. We see interference in our democracies and our elections; we see a tremendous amount of hate speech and political violence. We conducted some of the earliest studies showing that fake news travels more broadly than the truth in every category of information that we studied.
>> It has been troubling to see. It is also troubling to see the rise of extremism.
>> Attorneys general across the country have urged Facebook and Twitter to stop the spread of misinformation and hate.
>> They are not really regulated, and they have not done their part in controlling the spread of misinformation that is harmful to public health or safety. That's why I have reached out and demanded they get on board.
>> As Congress considers legislation for and regulation of social media, pressure is mounting for meaningful reform.
>> I respect the First Amendment and the right to free speech, but for too long these platforms have gotten away with doing things you are not allowed to do in other contexts. You are not allowed to incite violence, to contribute to and perpetuate harmful lies. You can't lie, just like a company can't lie to you as a consumer. You can't be part of perpetuating more lies.
>> Both Facebook and Twitter have begun moving toward more transparency, perhaps even more choice as to how algorithms curate content.
>> When you change the algorithm or turn the algorithm off, people go back to their desire for diversity and reaching across the aisle. So there is hope that we can achieve an unwinding of this polarization.
>> And in so doing, avoid the perils of social media and instead revel in its promise.
>> Everybody understands this is kind of an inflection point in this technology, and if we do the right type of things, we are going to see an entirely new, bright social age that includes innovation, progress and development, a burgeoning information ecosystem that could very well portend another Renaissance.

Video above: Is social media good or evil?

Conversations can quickly spiral out of control online, so Facebook is hoping artificial intelligence can help keep things civil. The social network is testing the use of artificial intelligence to spot fights in its many groups so group administrators can help calm things down.


The announcement came in a blog post Wednesday, in which Facebook rolled out a number of new software tools to assist the more than 70 million people who run and moderate groups on its platform. Facebook, which has 2.85 billion monthly users, said late last year that more than 1.8 billion people participate in groups each month, and that there are tens of millions of active groups on the social network.

Along with Facebook's new tools, AI will decide when to send out what the company calls "conflict alerts" to those who maintain groups. The alerts will be sent out to administrators if AI determines that a conversation in their group is "contentious" or "unhealthy," the company said.

(Image: Facebook via CNN)

For years, tech platforms such as Facebook and Twitter have increasingly relied on AI to determine much of what you see online, from the tools that spot and remove hate speech on Facebook to the tweets Twitter surfaces on your timeline. This can be helpful in thwarting content that users don't want to see, and AI can assist human moderators in cleaning up social networks that have grown too massive for people to monitor on their own.

But AI can fumble when it comes to understanding subtlety and context in online posts. The ways in which AI-based moderation systems work can also appear mysterious and hurtful to users.

A Facebook spokesman said the company's AI will use several signals from conversations to determine when to send a conflict alert, including comment reply times and the volume of comments on a post. He said some administrators already have keyword alerts set up that can spot topics that may lead to arguments, as well.
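Facebook has not published how its detector works, but the two signals the spokesman named — reply times and comment volume — lend themselves to a simple heuristic. The sketch below is purely illustrative: the function names, weights and thresholds are invented for this example and are not Facebook's code.

```python
from datetime import datetime, timedelta

def conflict_score(comment_times):
    """Crude heuristic: many comments arriving in rapid succession
    suggest a heated thread. Returns a score in [0.0, 1.0]."""
    if len(comment_times) < 2:
        return 0.0
    comment_times = sorted(comment_times)
    # Count comments posted less than a minute after the previous one.
    rapid = sum(
        1 for a, b in zip(comment_times, comment_times[1:])
        if b - a < timedelta(seconds=60)
    )
    volume = len(comment_times)
    # Blend reply speed (70%) and raw volume (30%) into a bounded score.
    return min(1.0, (rapid / volume) * 0.7 + min(volume / 50, 1.0) * 0.3)

def should_alert(comment_times, threshold=0.5):
    """Would this thread trigger a (hypothetical) conflict alert?"""
    return conflict_score(comment_times) > threshold
```

A production system would almost certainly add content-based signals (the keyword alerts the spokesman mentioned, or a learned classifier) on top of timing heuristics like this.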

If an administrator receives a conflict alert, they might then take actions that Facebook said are aimed at slowing conversations down — presumably in hopes of calming users. These moves might include temporarily limiting how frequently some group members can post comments and determining how quickly comments can be made on individual posts.
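The "slow down" actions Facebook describes amount to a per-member rate limit on commenting. The minimal sketch below shows the general idea; the class name and behavior are assumptions for illustration, not Facebook's implementation.

```python
import time

class SlowMode:
    """Allow each member at most one comment per `interval_seconds`.
    Illustrative only; not Facebook's actual moderation tooling."""

    def __init__(self, interval_seconds=300):
        self.interval = interval_seconds
        self.last_post = {}  # member id -> timestamp of last allowed comment

    def allow_comment(self, member_id, now=None):
        """Return True if the comment may be posted, False if blocked."""
        now = time.time() if now is None else now
        last = self.last_post.get(member_id)
        if last is not None and now - last < self.interval:
            return False  # too soon since this member's last comment
        self.last_post[member_id] = now
        return True
```

An administrator could apply a limiter like this only to the members flagged in a heated thread, which matches the article's description of "temporarily limiting how frequently some group members can post comments."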

Screenshots of a mock argument Facebook used to show how this could work feature a conversation gone off the rails in a group called "Other Peoples Puppies," where one user responds to another's post by writing, "Shut up you are soooo dumb. Stop talking about ORGANIC FOOD you idiot!!!"

"IDIOTS!" responds another user in the example. "If this nonsense keeps happening, I'm leaving the group!"

The conversation appears on a screen with the words "Moderation Alerts" at the top, beneath which several words appear in black type within gray bubbles. All the way to the right, the word "Conflict" appears in blue, in a blue bubble.

Another set of screen images illustrates how an administrator might respond to a heated conversation — not about politics, vaccinations or culture wars, but the merits of ranch dressing and mayonnaise — by limiting how often a member can post.