Editor’s note: Aaron Turner has investigated and researched security incidents on the ground in more than 100 countries during his career. Turner shared several lessons learned from that extensive body of work in May at ISACA Conference North America: Digital Trust World in his session, “Reacting to Partner Breaches: Making Rational Move-Forward Decisions.” Turner, a member of the IANS Faculty, recently visited with @ISACA to reflect on his career journey and analyze the current security landscape. See the Q&A below, and find out more about upcoming ISACA Digital Trust World events, virtually and in Dublin, Ireland, here.
You have researched security incidents in over 100 countries. Wow! Do you see more commonalities or more differences in the way incident response plays out in different parts of the world?
I’ve been fortunate to help private companies perform investigations, and to work with local law enforcement, in many places around the world. I would group those experiences into two categories: working in countries with mature legal frameworks and working in countries with immature ones. For example, I was working on an investigation in Haiti, where there really is no functioning government, let alone law enforcement. Working there was very stressful, as we had to worry about our personal safety while trying to get to the bottom of a significant incident targeting Haiti’s mobile networks.
In some places you would expect to have very mature legal systems, like Germany, there are good legal frameworks for cyber investigations, but it surprised me just how much bureaucracy was involved in simple tasks like obtaining the subscriber information associated with a residential cable modem’s IP address. When I asked the local expert why the process was so drawn out compared to the US, she explained that Germany’s legal system was built to deal with the aftermath of East Germany’s state-run surveillance programs. (A great movie to watch for insights into that world is “The Lives of Others.”)
There are cultural differences in how each country wants to run an investigation, shaped by legal precedent and by the relative awareness of cybersecurity attacks within its legal and incident response communities. What I’ve always enjoyed in my travels is following a long work day with a great meal of local food, and I’ve had some crazy eating experiences working in far-flung locations.
Your recent session at ISACA’s Digital Trust World conference in Boston dealt with how to make rational decisions when bad things happen. What are some irrational temptations that security professionals and their organizations should try to avoid?
Whenever a cyber incident makes the front page of the Wall Street Journal, CEOs and CIOs begin to ask questions about the organization’s exposure to whatever has been reported. The easiest answer to those questions would be, “Oh, we had that service, but we just terminated it, so we are no longer exposed to that risk.” Unfortunately, rash decisions like that get made, and they carry all sorts of unintended consequences. I shared a framework that security leaders can use to make rational decisions instead (a rough sketch of the logic in code follows the list):
- If the product/service involved in the incident is used only on an ancillary basis and is not fully deployed, and if cutting ties has minimal operational impact, then cut ties and use that decision as political capital with business leadership to demonstrate the security team’s responsiveness.
- If the product/service involved is integrated into multiple business processes and is fully deployed across the organization, then perform an investigation into the actual risks of the deployed solution. Where the product/service has been properly deployed, exposure should be reduced and manageable. Where it was improperly deployed and exposure is significant, weigh the costs and benefits of re-deploying the same solution against switching to a different one to address leadership’s concerns.
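As a rough illustration of that framework, the decision logic might look like the sketch below. The class and field names are hypothetical, invented for this example rather than taken from any published tool.

```python
from dataclasses import dataclass

@dataclass
class PartnerService:
    fully_deployed: bool       # integrated into multiple business processes?
    low_cutover_impact: bool   # can we cut ties with minimal operational impact?
    properly_deployed: bool    # deployed per the vendor's hardening guidance?

def move_forward_decision(svc: PartnerService) -> str:
    """Hypothetical encoding of the move-forward framework described above."""
    if not svc.fully_deployed and svc.low_cutover_impact:
        # Ancillary use, cheap to drop: cut ties and bank the political capital.
        return "cut ties"
    if svc.properly_deployed:
        # Fully integrated but hardened: exposure should be reduced and manageable.
        return "stay the course and monitor exposure"
    # Fully integrated and poorly deployed: significant exposure either way.
    return "weigh re-deploying the same solution against switching"
```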
What was your role in analyzing the LastPass incident and what type of lessons learned do you see for the industry?
Over the past decade, I could have been classified as a LastPass fanboy. I helped hundreds of IANS Research clients discover the LastPass solution and evaluate using it to increase the complexity of user credentials and decrease the likelihood that password fatigue would lead to a credential replay attack against the organization. When I recommended that LastPass integrate strong MFA like YubiKey tokens, they listened and got the feature deployed. Because I was satisfied using the system personally, I helped drive deployment of LastPass within my own businesses and among my consulting clients.
When the incident was first reported, many of the organizations that had listened to my recommendations approached me and asked, “Should we stick with LastPass?” I wrote an IANS Research report that walked organizations through the decision-making process of evaluating whether moving from LastPass would be a good thing for them or whether they should stay the course. In addition to evaluating the impact that LastPass users suffered, I also derived some key lessons that I shared with IANS clients: how privileged cloud users’ identities should be better managed, what hygiene requirements those users should meet, and why they should probably segment their home networks to better protect themselves from IoT and consumer software vulnerabilities.
ChatGPT is top of mind for a lot of people these days – how do you see it most impacting the security landscape in the foreseeable future?
Let’s run ChatGPT through the classic CIA triad of security: Confidentiality, Integrity and Availability.
Start with confidentiality. As we’ve seen in the recent vulnerabilities that OpenAI has disclosed, anything a user sends to a large language model as input could be retrieved by an unauthorized user. Also, the terms of use allow large language model operators to harvest significant portions of user input to improve the model. We need to advise users that there are inherent confidentiality problems with ChatGPT input.
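As a minimal sketch of how that advice might be operationalized, the filter below redacts obviously sensitive strings from prompts before they leave the organization. The patterns are illustrative placeholders, not a complete data loss prevention ruleset.

```python
import re

# Illustrative patterns only; real-world coverage would need to be far broader.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),
    (re.compile(r"(?i)(api[_-]?key|secret|password)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
]

def scrub_prompt(text: str) -> str:
    """Redact obviously sensitive strings before sending a prompt to an LLM."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

print(scrub_prompt("Contact bob@example.com, api_key=abc123"))
# -> Contact [EMAIL], api_key=[REDACTED]
```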
Integrity is next. Whenever I see a new technology break into the industry the way ChatGPT has, I first try to use the platform for something constructive. I built an interesting photo display platform for LG WebOS TVs using ChatGPT: a smart TV app that turns a TV into a large digital photo frame. It was an incredibly efficient way to go from concept to prototype. But when I went to submit my project to the LG WebOS App store, I was asked to guarantee that the code I was submitting was not subject to others’ copyrights or protections. I hired a WebOS expert to help me answer that question and discovered that many of the modules ChatGPT had given me as output had origins that made them other people’s protected intellectual property. Luckily, we could fix that problem fairly easily; for others, the fix may not be as easy. I believe organizations whose developers use ChatGPT-delivered code in their projects need expertise and processes to evaluate how that affects the integrity of their intellectual property. I’ve also seen situations where the code ChatGPT delivers contains security vulnerabilities.
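As a crude first-pass tripwire for that intellectual property concern, assuming a WebOS-style JavaScript project, generated modules could be scanned for license markers that suggest the code is someone else’s protected work. This is only a starting point, not a substitute for expert legal review.

```python
from pathlib import Path

# Strings that suggest a module may carry someone else's license terms.
LICENSE_MARKERS = (
    "Copyright (c)",
    "SPDX-License-Identifier",
    "GNU General Public License",
    "All rights reserved",
)

def flag_possible_third_party_code(src_dir: str) -> list[tuple[str, str]]:
    """Return (file, marker) pairs for generated modules that warrant review."""
    hits = []
    for path in Path(src_dir).rglob("*.js"):
        text = path.read_text(errors="ignore")
        for marker in LICENSE_MARKERS:
            if marker in text:
                hits.append((str(path), marker))
    return hits
```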
ChatGPT also suffers from availability problems, and not just availability of the service itself. When ChatGPT delivers code as output under one model and OpenAI later upgrades that model, I’ve seen situations where the new model’s code breaks what the old version of ChatGPT delivered. Security teams should carefully evaluate how relying on large language models could impact the availability of their services, especially when an organization like OpenAI upgrades or changes its model.
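One mitigation, sketched here against the OpenAI chat completions REST API: pin a dated model snapshot rather than a floating alias, so an upstream model upgrade cannot silently change the output a pipeline depends on. The snapshot name below is illustrative.

```python
import os
import requests

# A dated snapshot (illustrative name) instead of a floating alias like
# "gpt-3.5-turbo" keeps behavior stable across upstream model upgrades.
PINNED_MODEL = "gpt-3.5-turbo-0613"

def ask(prompt: str) -> str:
    """Call the chat completions endpoint with an explicitly pinned model."""
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": PINNED_MODEL,
              "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```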
Let’s close by returning to the concept of digital trust: what do you consider to be the most effective ways that companies can take the good things they’re doing with their security programs and turn that into sustained customer trust that can become a competitive advantage?
I believe that transparency is key to trust. Looking at the LastPass incident, trust was lost due to delays in disclosing material details of the attack. Over the two decades I’ve been in this business, the expectations of business leaders and customers have ranged from everyone wanting to sweep an incident under the rug and never talk about it, in the late 1990s, to customers and business partners now expecting near real-time updates on what is going on in an incident response process. That is real whiplash for old-timers, and for those new to the industry it can create some unhealthy expectations about how to communicate material details.
Should organizations err on the side of speed and set the expectation that accuracy may be low? Or drive for nearly perfect accuracy and accept delays in informing impacted customers and partners? There is no perfect answer for every situation, but I learned some powerful lessons very early in my career working on global incidents like the SQL Slammer and Blaster worm responses as part of Microsoft’s security teams. The perennial standard for responding to a business crisis is the response that Johnson & Johnson’s Tylenol team mounted during the poisoning attacks of the early 1980s. If security teams hold themselves to that Tylenol-crisis standard, then they should be ready to respond appropriately.