A security researcher talks about the rise in critical data breaches—and what organizations can do to defend their networks
A spate of large-scale ransomware attacks has spurred the U.S. to ramp up its cybersecurity efforts. The Biden administration has also called on dozens of countries to partner with American intelligence agencies in thwarting elusive cybercriminals who operate around the globe.
“Security is a weakest-link kind of game,” said Daniel Votipka, Lin Family Assistant Professor of Computer Science and director of the Tufts Security and Privacy Lab. “Defenders need to defend everything. If you miss one thing—if you have a password that’s easy to guess, you click on a phishing email, you forget to update one application—that leads to a way in for an attacker.”
To help untangle the truly dark web of cybercrime, Tufts Now recently spoke with Votipka, who specializes in the behavioral component of security tasks. He discussed the routine ways in which data breaches may be avoided, how companies and organizations can fortify their human-run systems from within, and the advantages of collaborating with “outsiders.”
Tufts Now: Security breaches of computer networks are a well-known threat, yet we still see regular reports of companies and institutions experiencing serious data leaks. One example is the live-streaming site Twitch, where content creators’ earnings, among other details, were posted online. Why are these companies getting caught flat-footed?
Daniel Votipka: In some cases, it’s a lack of awareness of the threats that companies might face. Either the people running these sites—especially those who are less technologically savvy—don’t know the threats exist, or securing everything that needs to be secured is a secondary priority.
It’s not what the teams are built to do, and if you don’t have a security staff that is well-trained, that knows what to look for, and that is given the authority to spend a lot of time working on this, that can cause problems.
Plus, data security requires a lot of effort and forethought; without a systematic and carefully architected approach, stuff falls through the cracks. If you don’t have a process from the beginning to actively monitor all the threats that are occurring, you’re going to miss things.
Security is a weakest-link kind of game. Defenders need to defend everything. If you miss one thing—if you have a password that’s easy to guess, you click on a phishing email, you forget to update one application—that leads to a way in for an attacker. And all of these requirements compound the already-large burden on the system administrators. So, in some cases, it’s understandable that the breach occurs.
From a security researcher perspective, though, the way we want to think about this is: How do we make these security roles easier? How do we reduce what administrators need to know? How do we make the tools for securing these systems more usable?
When it comes to corporate platforms, what is considered good cyber hygiene?
There are some fundamentals common to securing any system, including your own personal accounts. These include two-factor authentication, strong passwords, and making sure you update your systems.
For companies with large networks, added considerations include threat modeling, or identifying probable threats along with the corresponding safeguards. They also need to architect their network so that they know where potential vulnerabilities are.
All the training in the world, however, is not going to prevent one of your employees from accidentally clicking on a phishing email. So, organizations need to think about those kinds of weak points, too, and what impact they can have.
What else can businesses and organizations do?
Companies also need to be careful to add extra layers of security around the files that really need protection. At my previous university, my lab worked with New York City Cyber Command, an organization charged with protecting every website, every municipal portal, and all the data collected by New York City.
We found that the planning for additional protection helped security professionals prioritize what they should be focusing on. It gave them a shared language for—and understanding of the need for—this kind of prioritization. It gave them a structure that helped them push for improvements.
Normally, city organizations might silo younger technical staff—who have security expertise but less understanding of how government works—from other staff who understand government but not the technology. In our study, we saw the power of enabling communication across these different groups.
Going through and understanding the way your network works, establishing a shared understanding of how all the pieces fit together, having an idea of how you’re going to defend each of those pieces, and then practicing how you’re going to actually respond to an incident—all those steps are very beneficial for organizations.
Talk about the recent surge of ransomware attacks, in which cybercriminals hold computer systems hostage until a demanded sum is paid out.
Attackers tend to use something like Bitcoin, which offers an effective mechanism for extorting funds from organizations. They’re targeting companies or organizations that are not necessarily the most protected—hospitals, for example.
My wife’s a nurse practitioner, and I know from experience that those organizations don’t necessarily have the best security staff, because it’s not the primary focus of their organization. They’re there to help people get better, to alleviate illness.
So, in addition to their not necessarily having the most secure networks, you have organizations where, more and more, critically important work is being done digitally. Hospitals, for example, are filled with Internet-connected devices that rely on these networks to perform life-saving procedures. Add to that not necessarily having the highest security controls in place, and that creates a target-rich environment for an attacker. That is, I think, part of where the rise is coming from.
What is being done to address these so-called cybergangs and their malicious activities? And how effective do you think these measures are or will be?
It was recently announced that the National Security Agency is going to work more closely with the private sector, declassifying more quickly some of what they’ve gleaned from their intelligence collection. The government has a lot of experience in this space; they’ve been defending their own networks for a long time.
In many cases, hackers operate in countries with which we don’t have extradition treaties, so we can’t arrest them even when we can identify them. So, I don’t know that we’re going to get to a position of being able to stop the attacks altogether.
But providing support to organizations that don’t necessarily have the needed expertise, so they can more readily mitigate security issues, will be a start.
What is the difference between malicious hackers and, say, journalists investigating security flaws and alerting the appropriate authorities, as in the case of the St. Louis Post-Dispatch reporter who found that a Missouri state website inadvertently revealed teachers’ Social Security numbers via its source code?
There are two kinds of hackers. There are attackers, who are trying to break into a system for the purpose of stealing data, for ransomware or other destructive purposes. And there are ethical hackers, whose intent is not to do harm, but to go in and identify problems that they can report back to someone else. They can say, “Here’s a bug. You should go and fix this so that someone doesn’t come along and take advantage.”
The people who operate on the ethical side would generally call themselves “security researchers.” They work under a responsible-disclosure ethos, where an issue is made public only after it has been patched.
This has usually been organized under the umbrella of bug bounty programs, where companies will offer a small payout to incentivize people to come and look at a specific component of a network. Whatever vulnerabilities participants find can then be addressed.
There is a push for legal safe harbor for people who are trying to operate within these ethical settings, and that can help guide people into these bug bounty programs. There has also been a directive from the White House recommending that all organizations, especially government organizations, begin to share breach information and create more of these responsible-disclosure programs. If one had existed in the St. Louis case, the journalist could’ve reported the bug safely, within this formal process.
The fact is, you need people who have the security expertise and are willing to provide their time and energy to help identify vulnerabilities. It’s really important to have lots of eyes, not just those of the people who built the system, but other people from the outside. That’s the only way you’re going to find some of these really deep, critical flaws that need fixing.
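A leak like the one in the St. Louis case—Social Security numbers sitting in a page’s HTML source—is exactly the kind of flaw an outside reviewer can spot with even a trivial scan. As a minimal sketch (the pattern and function names are illustrative; a real audit would be far more thorough):

```python
import re

# SSN-shaped pattern: three digits, two digits, four digits, dash-separated.
# Illustrative only — it will also match false positives that merely look
# like Social Security numbers.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def find_ssn_like(html_source: str) -> list[str]:
    """Return any SSN-shaped strings found in a page's raw source."""
    return SSN_PATTERN.findall(html_source)

# Hypothetical page source resembling the kind of leak described above.
sample = '<div data-teacher-id="123-45-6789">Jane Doe</div>'
print(find_ssn_like(sample))  # → ['123-45-6789']
```

The point is not the specific pattern but the practice: fresh eyes running even simple checks against a deployed system can surface flaws the original builders never looked for.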