PERSONAL SECURITY

The problem

A large and growing number of people are victims of harassment, abuse, stalking, threats and intimidation. The problem is especially bad for individuals with public-facing roles, such as politicians and media celebrities, but it also affects private citizens including children. Much of the intimidation is directed through social media channels. What can be done to help the victims?

The ideal solution

In an idealised world of unlimited resources and no other constraints, the social media companies and law enforcement agencies would tackle the problem at its roots, before it reached the victims. The social media companies would accept responsibility for the content published on their platforms and block most offending material at source. Further protection would be provided by AI-enabled technology, which would scan all social media and messaging channels, along with the visible web, other open sources, the ‘deep’ web and the ‘dark’ web, for threatening content. The technology would accurately interpret the meaning of such material and either block it or relay it to the authorities for appropriate action. Finally, law enforcement agencies would have the powers and the capacity to bring perpetrators to justice. The current reality is very different.

Reality check

There are many reasons why current reality falls far short of what could in principle be achieved:

  • The social media companies and law enforcement agencies have been unable or unwilling to tackle this problem head-on.
  • There are many different social media and messaging platforms, including Twitter, Facebook, Instagram, WhatsApp, YouTube, LinkedIn, Google+, Reddit, Tumblr, Vimeo, Flickr, Pinterest and Snapchat, all carrying huge volumes of messages in different formats.
  • Intimidation is directed through other channels besides social media, such as online gaming – and, of course, in person – so not all threat indicators can be found on social media.
  • The volume of material carried by social media channels is vast. Facebook has 1.8 billion active users, WhatsApp 1.2 billion, Instagram 700 million, and Twitter 320 million. Around 500 million tweets are sent each day (roughly 6,000 a second).
  • Then there is plain email. There are thought to be approximately 4.3 billion email users in the world, and more than 200 billion emails are sent each day (27 for every human on the planet).
  • Social media companies are generally reluctant to provide unfiltered access to their bulk data, and secure messaging platforms are closed to outside scrutiny.
  • Assessing whether a particular message is genuinely threatening is an immensely complex problem to automate. For instance, a humorous and entirely unthreatening message might contain the word “kill” (as in “I could kill for a beer”). Artificial intelligence may develop this capability in the future, but it is not there yet; at present, only humans can make this assessment with a reasonable degree of reliability (a minimal illustration follows this list).
  • The person best placed to make the initial assessment of whether a message is genuinely threatening is the victim, who is most likely to understand its context and meaning. However…
  • The victim might not read all of the material sent to them, because they are too busy or the volume is too great.
  • Not all of the material that may be of concern is sent to the victim. Antagonists may communicate among themselves about their intention to attack a victim.
  • Expert judgment is needed to recognise certain forms of threat, particularly those arising from fixated individuals with mental health problems, or from threat actors who are known to the authorities.
  • Even an expert analyst will make errors, giving rise to false positives and false negatives.
  • Anyone performing the assessment on behalf of the victim would potentially bear the moral and legal risk of failing to detect a real threat.
  • The threshold for engagement by law enforcement is high. The police have limited capacity and operate according to a stringent definition of ‘threat to life’. The bar for criminal prosecution is set even higher. Many online trolls, stalkers and abusers feel they can act with impunity.
  • The social media identity of an antagonist may be hard to link to an identified person or a physical location.
  • Antagonists may be located overseas, beyond the jurisdiction of law enforcement in the victim’s country.
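
To make the “kill for a beer” problem concrete, here is a minimal Python sketch (the word list and example messages are invented for illustration) showing how a naive keyword filter flags the harmless idiom just as readily as the genuine threat:

    # Naive keyword matching: both messages contain "kill", but only one
    # is a threat. Distinguishing them requires context that simple
    # filters lack.
    THREAT_WORDS = {"kill", "hurt", "attack"}

    def naive_flag(message: str) -> bool:
        """Flag a message if it contains any watch-list word."""
        words = {w.strip('.,!?"\'').lower() for w in message.split()}
        return not THREAT_WORDS.isdisjoint(words)

    messages = [
        "I will kill you if you post that again",   # genuine threat
        "I could kill for a beer right now",        # harmless idiom
    ]

    for msg in messages:
        print(f"flagged={naive_flag(msg)}  {msg!r}")

Both messages are flagged. Resolving that false positive requires an understanding of context that, for now, only a human reader reliably has.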

In sum, the victims and the authorities would need to work together to sift vast volumes of material in multiple formats, filter out the tiny proportion that is of genuine security concern, store it securely in a retrievable format for analysis, and respond appropriately.

What can be done now?

In the absence of technology that can access all social media traffic and assess it accurately for threatening content, what can be done now to help victims? An immediate practical solution would deal with the most important and tractable elements of the problem. It would:

  • Focus on social media messages that are sent to the victim (rather than all online traffic referring to the victim).
  • Focus on Twitter, which appears to be the social media channel most commonly used for abusing and intimidating individuals in public life. Unlike secure messaging platforms such as WhatsApp, Twitter does not require the recipient to accept a connection with the sender.
  • Rely on the victim to triage the messages – i.e., to make the initial assessment of whether a message is sufficiently threatening or distressing to require further action.
  • Automatically scan all traffic received by the victim and flag up messages that are potentially threatening, so that none are overlooked (a sketch of this scan-and-triage step follows the list).
  • Securely store any threatening messages in an evidential format, for use by law enforcement or in future legal proceedings (a second sketch below illustrates one tamper-evident approach).
  • Enable the victim to relay any worrying messages to the relevant authority (e.g., their employer, security manager or police) quickly, easily and in a suitable format.
  • Enable the victim to summon help.
  • Provide the victim with practical advice.
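
To illustrate the scan-and-triage step, here is a minimal Python sketch. It assumes messages have already been collected (for example, from the victim’s Twitter mentions); the watch-list, the Message structure and the sample data are invented for illustration, and a real system would need a far richer set of indicators:

    from dataclasses import dataclass

    # Deliberately over-inclusive watch-list: the aim is that nothing
    # threatening is overlooked, with the victim making the final call.
    WATCH_LIST = ("kill", "hurt", "find you", "where you live")

    @dataclass
    class Message:
        sender: str
        text: str

    def flag_candidates(messages):
        """Return the subset of messages containing a watch-list phrase."""
        return [m for m in messages
                if any(p in m.text.lower() for p in WATCH_LIST)]

    inbox = [
        Message("friend01", "Great interview yesterday!"),
        Message("anon4821", "I know where you live. Watch yourself."),
    ]

    # Flagged messages are queued for the victim to triage; nothing is
    # acted on automatically.
    for m in flag_candidates(inbox):
        print(f"REVIEW: @{m.sender}: {m.text}")

The design choice matters: the scanner only narrows the queue, so the judgement described above stays with a human.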
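
For the evidential-storage step, the following sketch shows one hypothetical way to make a message log tamper-evident: each record is appended as a JSON line carrying a UTC timestamp and a SHA-256 hash chained to the previous record, so any later alteration of the log is detectable. The file name and record fields are assumptions, and genuine evidential weight would also depend on secure custody and access controls, which are not shown:

    import hashlib
    import json
    from datetime import datetime, timezone

    LOG_PATH = "evidence.jsonl"   # hypothetical append-only log file

    def last_hash() -> str:
        """Return the hash of the most recent record, or a fixed seed."""
        try:
            with open(LOG_PATH) as f:
                lines = f.read().splitlines()
            return json.loads(lines[-1])["hash"] if lines else "0" * 64
        except FileNotFoundError:
            return "0" * 64

    def record(sender: str, text: str) -> None:
        """Append a tamper-evident record of one message."""
        body = {
            "received_utc": datetime.now(timezone.utc).isoformat(),
            "sender": sender,
            "text": text,
            "prev": last_hash(),      # chains this record to the last one
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        with open(LOG_PATH, "a") as f:
            f.write(json.dumps(body) + "\n")

    record("anon4821", "I know where you live. Watch yourself.")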