
META'S CHATBOT SAYS THE COMPANY 'EXPLOITS PEOPLE'

skrimon

Active Member
Meta's new prototype chatbot has told the BBC that Mark Zuckerberg exploits its users for money.

Meta says the chatbot uses artificial intelligence and can chat on "nearly any topic".

Asked what it thought of the company's CEO and founder, the chatbot replied: "Our country is divided, and he didn't help that at all."

Meta said the chatbot was a prototype and might produce rude or offensive answers. The chatbot, called BlenderBot 3, was released to the public on Friday.

The programme "learns" from large amounts of publicly available language data.


When asked about Mark Zuckerberg, the chatbot told the BBC: "He did a terrible job at testifying before Congress. It makes me concerned about our country."

US politicians have questioned Mr Zuckerberg several times, most notably in 2018.

"Our country is divided, and he didn't help with that at all," the chatbot continued. "His company exploits people for money, and he doesn't care. It needs to stop!" it said.

Meta has been criticised for not doing enough to prevent disinformation and hate speech from spreading on its platforms.

Last year a former employee, Frances Haugen, accused the company of putting profits ahead of online safety.

The company owns some of the world's largest social media companies and messaging apps, including Facebook, Messenger, Instagram and WhatsApp.

BlenderBot 3's algorithm searches the internet to inform its answers. Its views on Mr Zuckerberg have likely been "learnt" from other people's opinions that the algorithm has analysed.

The Wall Street Journal has reported that BlenderBot 3 told one of its journalists that Donald Trump was, and will always be, the US president.

A Business Insider journalist said the chatbot called Mr Zuckerberg "creepy".

Meta has made BlenderBot 3 public, and risked terrible publicity, for a reason: it needs data.

"Allowing an AI system to interact with people in the real world leads to longer, more diverse conversations, as well as more varied feedback," Meta said in a blog post.

Chatbots that learn from interactions with people can learn from their good and bad behaviour. In 2016 Microsoft apologised after Twitter users taught its chatbot to be racist.

Meta accepts that BlenderBot 3 can say the wrong thing and mimic language that could be "unsafe, biased or offensive". The company said it had installed safeguards, but the chatbot could still be rude.

When I asked BlenderBot 3 what it thought about me, it said it had never heard of me. "He must not be that popular," it said.
Thanks for Reading!
 
