Every summer, Rob Holmes helps Hollywood track down bad actors. Not the ones who show up in box‑office flops, but a more malicious breed—hackers, rogue employees and others who try to breach major Hollywood studios.
Holmes, CEO of the cybersecurity investigation firm IPCybercrime, leads a team that hunts for signs of massive hacks, leaks or other impending security problems for top studios during Comic‑Con, the huge entertainment and gaming fest in San Diego that attracts thousands of fans each July—a period of peak demand for sneak previews.
Holmes’ team monitors message boards and online channels like Reddit and 4Chan to detect signs that overzealous fans are about to leak a studio’s teaser of its next blockbuster, or that a disgruntled studio employee might be planning something equally damaging.
“If a teaser trailer gets leaked ahead of its scheduled release on, say, YouTube, that’s a disaster we have to get taken down now,” Holmes says. “We are constantly in ‘prevent and protect’ mode.”
Holmes says predicting an attack by monitoring communication is an undervalued but increasingly important niche, especially when it comes to stopping leaks from the inside.
Analysts estimate that 80 percent of corporate data is unstructured, with much of it in the form of employee communications. Meanwhile, employees—both intentionally and unintentionally—account for 60 percent of all “inside” workplace breaches, according to a 2016 IBM report.
Entertainment firms and financial services companies are among the most vulnerable to employee breaches, but these breaches are a growing challenge in every sector. Experts say one key to stopping them is AI‑enabled tools that can sift through millions of messages for warning signs, along with human hunters who know where to look.
There aren’t enough hours in a day to track every email or business chat, nor is it really necessary, machine learning expert Uday Kamath argued in a recent Gigaom.com piece. But, he said, “within every e‑communication lies unique insights that could lead businesses to uncover some harsh truths about employee activity.”
Most employees don’t start out as security risks. But some can turn into “insider threats” when their environment, relationships and other factors at work change.
Getting passed over for a promotion, not receiving enough recognition, being overworked or otherwise mistreated can all contribute to an employee considering a breach or an attack, says Jason Morgan, a behavioral intelligence executive at Wiretap, a startup that identifies security risks by analyzing behavior on corporate collaboration platforms.
A typical company faces at least three types of insider threats, says Morgan: Those who commit breaches when they’re about to leave the company; independent contractors who have access to workplace networks; and employees who unwittingly release proprietary information over public networks.
Most of the early‑warning signs bubble up on internal messaging platforms like Slack and Yammer, Morgan says, where employees communicate in a more casual, more revealing manner than they do over email.
Machine‑learning applications can help companies spot possible internal threats by looking for statements with negative sentiment about the employer. Still, machines alone can’t anticipate a leak.
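The article doesn't describe how any vendor's model actually works, but the basic idea of flagging negative sentiment for human review can be sketched in a few lines. The lexicon, weights and threshold below are purely illustrative assumptions, not taken from Wiretap or any real product—production systems use trained models, not keyword lists.

```python
# Minimal sketch of sentiment-based message flagging.
# The lexicon and threshold here are illustrative, not from any real tool.

NEGATIVE_TERMS = {"overworked": 2, "unfair": 2, "quit": 3, "passed over": 3, "leak": 4}

def risk_score(message: str) -> int:
    """Sum the weights of negative terms found in a message (case-insensitive)."""
    text = message.lower()
    return sum(w for term, w in NEGATIVE_TERMS.items() if term in text)

def flag_messages(messages, threshold=3):
    """Return (message, score) pairs that meet the threshold, for human review."""
    return [(m, risk_score(m)) for m in messages if risk_score(m) >= threshold]

msgs = [
    "Lunch at noon?",
    "I was passed over again. So unfair.",
]
print(flag_messages(msgs))  # only the second message is flagged
```

The point of the threshold is exactly the caveat the article raises: a model can surface candidates, but a human still decides whether a frustrated message signals a real threat.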
“We can all point to people who are unhappy in their jobs and stay,” says Greg Moran, Wiretap’s chief operating officer. “And there are other cases where, the first time someone becomes frustrated, they may act maliciously.”
It might sound invasive, but employers are legally within their rights to know what employees are saying on the job. Nearly 80 percent of large U.S. companies monitor employees’ use of email, internet and phone calls, according to a survey released last year by the American Management Association. About one‑fourth of those firms have fired employees for misusing those channels.
Companies are fighting the inside battle on two fronts: against malicious employees, and against others who unknowingly trigger security disasters. Not surprisingly, 90 percent of companies today report that they feel vulnerable to inside security threats, whether intentional or not, according to a survey from CA Technologies.
Training employees to recognize phishing and fake emails can help address inadvertent breaches. But relying on humans alone to sift through millions of messages for what is essentially a needle in a haystack isn’t feasible.
That’s where technology comes in. Holmes, the Hollywood consultant, says his firm uses X1 Social Discovery, which collects and makes searchable data from social networks and the internet, to help follow social media mentions of studios and certain A‑list stars. His team also created special Python scripts to track buzz involving its clients.
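Holmes doesn’t share his scripts, so the following is only a hypothetical sketch of what keyword-based buzz tracking might look like: counting watchlist mentions across already-collected forum posts and surfacing terms that spike. The watchlist, threshold and "Studio X" client name are all invented for illustration; a real pipeline would pull posts from forum and social APIs rather than a hardcoded list.

```python
import re
from collections import Counter

# Hypothetical client watchlist -- illustrative only, not Holmes' actual terms.
WATCHLIST = ["Studio X", "trailer", "teaser", "leak"]

def mention_counts(posts, watchlist=WATCHLIST):
    """Count case-insensitive mentions of each watchlist term across posts."""
    counts = Counter()
    for post in posts:
        for term in watchlist:
            counts[term] += len(re.findall(re.escape(term), post, re.IGNORECASE))
    return counts

def hot_terms(posts, threshold=2):
    """Terms mentioned at least `threshold` times -- candidates for human review."""
    return {t: n for t, n in mention_counts(posts).items() if n >= threshold}

posts = [
    "Anyone seen the new Studio X teaser?",
    "Heard a teaser leak is coming before Comic-Con...",
]
print(hot_terms(posts))  # -> {'teaser': 2}
```

As with the sentiment example, the script only narrows the haystack; an investigator still reads the flagged chatter and decides whether it points to a real leak.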
This year, Holmes’ team squashed two minor leaks involving unreleased trailers. Declining to go into specifics, Holmes said the leaks involved a couple of young, lesser‑known actors apparently “trying to impress their friends” with what they were working on.
“In each of those cases we talked to them, using a soft‑glove approach, and we kindly asked them to take their posts down,” Holmes says. “They did.”
A couple of years ago, Holmes tracked down a man who had posted, on an online entertainment forum, the package art for a blockbuster movie that was about to go to DVD.
“We noticed a little bit of chatter, found out who the guy was,” he says. “Turns out he worked at the marketing company working for the studio. So we stopped him from doing it, and the marketing company got in big trouble.”
He said the incident was not well‑publicized because they caught the leak early. “The key is to look for everything,” says Holmes. “If you think you know what you’re looking for, you’re definitely going to miss something.”