WhatsApp just blocked two million accounts for breaking tough new rules – don’t be next

Two million WhatsApp accounts have been blocked for violating strict new rules; don’t let yours be one of them.

WHATSAPP has blocked two million users from sending messages after they fell foul of its harsh new rules.

WhatsApp has gone on a blocking spree, removing almost two million of its two billion users in a single month for violating a little-known rule intended to prevent the spread of hoaxes on the service. According to the company, it targeted users found to be sending a “high and anomalous rate of communications”.

What counts as an excessive number of messages? Fortunately, breaking the rules would take a lot of effort. Over 95% of the bans, according to the company, were imposed “due to the improper use of automated or mass messaging”, not normal texting.

Almost all of the blocked accounts are in India, which has over 400 million WhatsApp users. Bans were triggered when users forwarded messages too many times, breaching the app’s strict new limit on how often a message can be forwarded to other people or groups.

The limit was put in place in April 2020 to combat spam and the spread of viral rumors, pictures, and hoaxes. In India, where many people rely on the app for news, this is a major issue.

Messages carrying “false news” have been blamed for lynchings and other acts of violence around the country.

The app has mass-banned users before. WhatsApp says it blocks about eight million accounts around the world every month using AI technology.

It decides whom to ban based on information such as profile and group photos and descriptions, as well as “behavioural signals” from accounts.

Reports from other users also helped catch the rogue accounts.

“We are particularly focused on prevention because we believe it is much better to prevent harmful behaviour from occurring in the first place than to detect it after harm has occurred,” WhatsApp said in a statement. “Abuse detection occurs at three points in an account’s life cycle: at registration, during messaging, and in response to negative feedback received in the form of user reports and blocks.”

It also said that, because of the app’s end-to-end encryption, it had not seen any of the spam messages sent through it.

WhatsApp filed a lawsuit against the Indian government earlier this year in an attempt to block the new rules.
