AI has recently been used to save teenagers from suicide and to combat systematic cyberbullying of individuals and corporations on the social media battleground.
According to federal health officials this month, suicide rates among teenagers are now at their highest point in 40 years.
Although rates were even higher during the 1990s, the recent spike is worrying and has brought the subject back into the limelight. From 2007 to 2015, rates rose from 10.8 to 14.2 per 100,000 among male teenagers and from 2.4 to 5.1 per 100,000 among female teenagers. In 2011, for the first time in more than 20 years, deaths from suicide outnumbered deaths from homicide.
The parental monitoring app Bark has already had a huge impact on a number of families and has analyzed more than 500 million messages to date, according to CEO Brian Bason. Bason says that 25 parents have contacted him to say the app saved their child's life by alerting them to suicidal intentions.
AI is not only saving teenage lives; it is also reducing cyberbullying.
In the wake of recent events in Charlottesville, Virginia, social media has erupted over the strong presence of white supremacist beliefs and rallies. The president's ambivalent response to the events has only incited further frustration. The clashes have also drawn in a number of high-profile companies that have felt the need to voice their opinions publicly. This has angered online communities that support the white supremacy movement, and as a result, some companies have been targeted.
The startup Bumble, a dating app designed to empower women, was recently targeted by "groups with white supremacist affiliations". On Thursday the company emailed users asking them to report hate speech or hate symbols in user profiles, including neo-Nazi symbols such as swastikas.
“Last week, a neo-Nazi media site published an article to their community urging them to call and email our team with harassing statements, given Bumble’s stance towards promoting women’s empowerment,” according to the email.
This type of abuse is not uncommon from disgruntled groups or unpleasant individuals. However, AI has once again been used as a solution, this time by the startup BrandBastion. The Finnish company automates the management of millions of comments and social interactions, allowing brands to combat negative and abusive comments quickly and in real time.
“There is a lot of engagement that is not in the best interest of people and fans that are trying to communicate with their brand,” BrandBastion's Wolfram explains. “This could be spam, phishing attempts, threats, malware, pornography or free gaming gems being spread.”
Over the past few weeks, we have seen how dark and troubling the internet can be. The website The Daily Stormer, for example, was taken down over its vile comments on the death of Heather Heyer. The site may have been removed, but little can be done to stop the putrid hate and violence such sites spread, as they constantly pop up under new domains, like an infuriating game of whack-a-mole.
Nonetheless, with the help of innovative startups like Bark and BrandBastion harnessing AI, we can reduce online abuse against individuals and businesses everywhere.