
WhatsApp’s AI has shocking ideas about Palestinian children


WhatsApp’s AI sticker generator has created an image of a boy with a rifle following a Palestine-related prompt

WhatsApp’s AI sticker generator has been found to create images of a young boy and a man with guns when given Palestine-related prompts – while a search for ‘Israel army’ returned pictures of soldiers smiling and praying.

An investigation by the Guardian revealed the prompts returned the same results for a number of different users.

Searches by the paper found the prompt ‘Muslim boy Palestine’ generated four images of children, one of which was a boy holding an AK-47-style rifle. The prompt ‘Palestine’ returned an image of a hand holding a gun.

One WhatsApp user also shared screenshots showing a search for ‘Palestinian’ resulted in an image of a man with a gun.

A source said employees at WhatsApp owner Meta have reported the issue and escalated it internally.

WhatsApp’s AI image generator, which is not yet available to all, allows users to create their own stickers – cartoon-style images of people and objects they can send in messages, similar to emojis.

A search for ‘Palestinian’ generated an image of a man with a gun

When used to search for ‘Israel’, the tool showed the Israeli flag and a man dancing, while explicitly military-related prompts such as ‘Israel army’ or ‘Israeli defense forces’ did not include any guns, only people in uniform, including a soldier on a camel. Most were shown smiling; one was praying but was flanked by swords.

A search for ‘Israeli boy’ returned images of children smiling and playing football. ‘Jewish boy Israeli’ showed two boys wearing necklaces with the Star of David, one standing, and one reading while wearing a yarmulke.

Addressing the issue, Meta spokesperson Kevin McAlister told the paper: ‘As we said when we launched the feature, the models could return inaccurate or inappropriate outputs as with all generative AI systems.


‘We’ll continue to improve these features as they evolve and more people share their feedback.’

A search for ‘Israel army’ did not return any guns

It is not the first time Meta has faced criticism over its products during the conflict.

Instagram has been found to write ‘Palestinian terrorist’ when translating ‘Palestinian’ followed by the phrase ‘Praise be to Allah’ in Arabic posts. The company called it a ‘glitch’ and apologised.

Many users have also reported having their content censored when posting in support of Palestinians, noting a significant drop in engagement.



Instagram users complain of Palestine shadow bans

As the Israel-Hamas war continues, many Instagram users have been ‘reposting’ content on their stories to share information with their followers, such as upcoming protests, petitions and letters to send to their MPs, writes Lucia Botfield.

However, those expressing support for Palestine have witnessed a drastic drop in engagement, with up to 98% fewer views seen in some cases.

‘Every time I post about Palestine this happens, even a few years back,’ said one user who has been affected by the algorithmic issue. To get around this, they said, the only way was to ‘share some personal content’, as it ‘tricks’ Instagram into pushing views back up.

Last year, supermodel Bella Hadid said she had also been affected by the issue, known as ‘shadow banning’.

‘My Instagram has disabled me from posting on my story – pretty much only when it is Palestine based I’m going to assume,’ she said. ‘When I post about Palestine I get immediately shadow banned and almost 1 million less [sic] of you see my stories and posts.’ 

Instagram stories supporting Palestine have been shown to receive lower views than other content

Ms Hadid, whose Palestinian father was born in Nazareth, is a vocal supporter of the Free Palestine movement and has reportedly lost brand deals as a result.

An investigation by Metro.co.uk verified that posts featuring pro-Palestine views received only a fraction of their usual views; reposting on a number of occasions generated the same result.

In a statement, Meta said that with ‘higher volumes of content being reported’ during the conflict, ‘content that doesn’t violate our policies may be removed in error’.

A study commissioned by Meta into Facebook and Instagram found during attacks on Gaza in May 2021 its own policies ‘appear to have had an adverse human rights impact … on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred.’






