Artificial intelligence (AI) is transforming cybersecurity, fundamentally changing both cyberattacks and cyber defense. From AI-generated phishing emails and deepfakes to sophisticated malware, AI tools make it faster and easier for cybercriminals to gain access to sensitive data undetected.
To better understand emerging AI risks in the cybersecurity space, Harvard Extension School gathered a panel of chief information security officers (CISOs) and cybersecurity leaders to discuss the evolving threat landscape.
In this new era, CISOs must know how to use AI as a defensive tool — and how cybercriminals are using it to launch attacks. By building awareness and developing an AI skillset, cybersecurity leaders can keep their organization’s critical information secure.
Meet Our Experts
Naveen Balakrishnan
David Cass
Jennifer Gold
How AI Enables the Next Generation of Cyber Attacks
AI has democratized cybercrime. Today, the barrier to entry is lower than ever. Anyone with access to an AI tool and a desire to commit cybercrimes can do so. For example, would-be phishers no longer need to learn English and develop a polished, believable message to gain entry into a data system; they can simply have AI do it for them.
Naveen Balakrishnan, managing director at TD Securities, explains the new developments in deepfake phishing.
“Attackers now have access to incredible tools that allow them to search your public data, your personal information, and do very personalized deep phishing tactics. And it’s incredible how much work is already done for them with very little effort.”
Not only has it become easier for hackers to launch targeted cyberattacks, but AI also enables them to scale their attempts at an unprecedented pace. Hackers can generate malware code en masse and automate attacks.
For companies, the speed of AI cyberattacks presents a unique problem, and the stakes are high. David Cass — a cybersecurity instructor at Harvard Extension School, CISO at GSR, and president of CISOs Connect — drew upon his personal consulting experience to highlight this point.
“I’ve had to work as an expert with numerous companies where literally, in under 30 minutes, they’ve lost north of $25 million,” Cass said. “When you look at 30 minutes to lose more than $25 million — that’s not a lot of time to react to things.”
Hackers are always looking for new entry points to access a company’s secure data. For CISOs, managing security for endpoints, supply chains, and third-party vendors remains a constant challenge, as bad actors exploit processes that use people, AI, and other technology to breach sensitive information.
How Organizations and CISOs Can Prepare for AI-Driven Cyber Risks
Evaluate third-party vendors
“I think 70 percent of those attacks make it into our environment through our vendors,” Balakrishnan said. “We buy a lot of different software, plug a lot of things in. And the threat actors know exactly how to leverage the vendors we buy AI-like things from to open up our surface.”
Understandably, many organizations are concerned about how AI could affect their cybersecurity posture. To keep private data secure, organizations need to understand how their vendors are using AI.
CISOs should assess each vendor’s approach to AI governance and safeguards. Balakrishnan explained what he, as a cybersecurity professional, would ask.
“I would want to know what kind of governance policies they have,” he said. “I would want to know: how do you monitor? How do you check the drift? What do you do in those cases? I would want to put some SLAs and KPIs if we were to buy the model and adjust the model.”
Additionally, CISOs should thoroughly review master service agreements. “A lot of these AI vendor products are still very young in their governance and their policies and their framework,” said Balakrishnan.
Lastly, CISOs must ensure that vendors communicate proactively, especially when introducing new features or system updates. Staying in touch with vendors and monitoring implementation can help keep companies’ data secure.
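The vendor questions above lend themselves to a repeatable checklist. The sketch below is a hypothetical illustration (the question wording and helper names are our own, not from any panelist) of how a security team might track open items per vendor:

```python
# Hypothetical sketch of the vendor AI-review questions discussed above,
# kept as structured data so reviews are repeatable across vendors.

VENDOR_AI_CHECKLIST = [
    "Does the vendor have documented AI governance policies?",
    "How is the model monitored, and how is drift detected and handled?",
    "Are SLAs and KPIs defined for model performance and incident response?",
    "Does the master service agreement cover AI-specific risks?",
    "Does the vendor proactively communicate new features and updates?",
]

def review_vendor(answers):
    """Return the unresolved checklist questions for a vendor.

    `answers` maps a checklist question to True (satisfied) or False.
    Unanswered questions count as unresolved.
    """
    return [q for q in VENDOR_AI_CHECKLIST if not answers.get(q, False)]

# A vendor with only governance policies in place still has four open items.
open_items = review_vendor({VENDOR_AI_CHECKLIST[0]: True})
print(len(open_items))  # 4
```

Keeping the questions as data rather than prose makes it easy to compare vendors side by side and to extend the list as governance expectations mature.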
Secure internal systems
It’s not only third-party vendors that can expose a company to AI cyberattacks. Businesses building their own models are vulnerable to attacks. Securing internal AI systems requires strong governance, clear transparency, proactive defense, and the right cybersecurity talent to manage it all.
Implement strong internal AI governance
Governance structures are essential for protecting models from manipulation and ensuring they function as intended.
Cass explained how an internal AI model could be infected by a hacker. “The models learn from the data,” he said. “So if they can be poisoned by an attacker, they’ll be pretty much useless or create a new means of attack because you’ll be so focused on the misinformation that the model is presenting you with that you may miss critical spots.”
Since AI is evolving rapidly with little regulation, internal governance guardrails are critical — not only to protect systems but also to provide insight to boards and stakeholders. Jennifer Gold, chief information security officer at Risk Aperture, explains.
“We need to understand that people are going to use these technologies regardless,” she says. “How do we enable people to innovate and use these technologies — and support them as security practitioners? How do we do so in a way that has the right guardrails in place and provides that visibility to boards? How do we ensure we have the right tools to quantify the risk and the need for the guardrails?”
One important method for mitigating risk is to keep humans in the AI loop. Human analysts can catch model hallucinations and spot manipulation attempts by attackers before they cause harm.
The European Union recently released guidance on AI usage, and experts predict that similar guidance is coming in the United States, too.
In the meantime, CISOs can adopt trusted frameworks. The National Institute of Standards and Technology’s AI Risk Management Framework helps companies govern, map, measure, and manage their AI risk.
Leverage AI as a defensive tool
AI isn’t only a powerful tool for bad actors. Cybersecurity professionals are also using it in their defensive arsenal.
Security teams get numerous alerts each day, making it difficult to pinpoint which ones are urgent. Sometimes, it can take hours for a critical alert to be handed off to an expert, who can then investigate entry points or affected systems.
AI makes this process faster.
“The AI tools right now are very effective, where now you can get a skilled responder to investigate within minutes,” Balakrishnan says.
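The triage process described here can be sketched in a few lines. The snippet below is a minimal illustration (the scoring weights, threshold, and field names are assumptions, not any vendor's actual tooling) of how AI-assigned risk scores help surface urgent alerts for a human responder:

```python
# Minimal sketch of AI-assisted alert triage: score each alert so the
# most urgent ones reach a skilled human responder within minutes.
# Field names and weights are illustrative assumptions.

def triage_score(alert):
    """Combine model-assigned risk with asset criticality (each 0-1)."""
    return 0.6 * alert["model_risk"] + 0.4 * alert["asset_criticality"]

def prioritize(alerts, threshold=0.7):
    """Return alerts above the escalation threshold, most urgent first."""
    urgent = [a for a in alerts if triage_score(a) >= threshold]
    return sorted(urgent, key=triage_score, reverse=True)

alerts = [
    {"id": "A1", "model_risk": 0.95, "asset_criticality": 0.9},  # likely breach
    {"id": "A2", "model_risk": 0.20, "asset_criticality": 0.1},  # routine noise
    {"id": "A3", "model_risk": 0.70, "asset_criticality": 0.8},
]
print([a["id"] for a in prioritize(alerts)])  # ['A1', 'A3']
```

The key point is the human handoff: the model only ranks and filters, while the decision to act stays with an analyst, which matches the human-in-the-loop guidance above.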
Build transparency into AI systems
As AI systems become deeply embedded in cybersecurity operations, organizations must ensure these tools are transparent and understandable — not only to security teams, but also to boards and stakeholders.
“I think the explainability and the transparency have to be at a level where the general consumer can understand it,” Cass says. “What actions is this taking on our corporate systems, and exactly how is this working? And what have we permitted it to do? It [AI] can’t operate just as a black box where it just makes decisions and you have to accept the decision for what it’s worth.”
“You can never outsource your accountability,” Cass continues. “So if you decide to place reliance on these AI models, whether it’s something you built or something you’re using and something goes terribly wrong, the accountability is still going to fall on the organization that adopted the use of those models.”
Invest in cybersecurity talent
Cybersecurity talent is more important than ever. Businesses need skilled professionals who can successfully combat AI-driven attacks, and they must invest in upskilling their existing security teams to face increasingly complex threats. Unlike some fields, cybersecurity itself is unlikely to see its workforce displaced by AI.
“AI is solving some of our lower-level problems for our security staff, but is it a replacement for them? Not really. Because again, you still need human intervention and you still need people that understand how your organization works,” Cass says. “Everybody might be using the same enterprise technology stack, but we all configure it differently. So we all create different vulnerabilities there.”
But current team members will likely need upskilling to remain competitive in a rapidly shifting threat landscape.
“I think we really need to start upskilling and training people, and education is so incredibly important. We can’t just assume that we’re rolling out all of this new technology without training people in the best way to use it,” Gold says.
Balakrishnan agrees:
“From a security lens, talent, talent, talent. We’ve got to find the right talent. We’ve got to train the right talent.”
What’s Next for AI in Cybersecurity
AI is generating new opportunities for operational efficiency for cybersecurity teams. But it’s also generating a host of new threats.
“I see novel ways of attack, new vectors, and new ways that we really need to take a step back and take a look at how we’re utilizing these technologies and be more thoughtful in our approach,” Gold says. “Shadow AI is very problematic right now, and I see that continuing to create a larger threat landscape.”
Threats are also emerging through asset inventories and compliance blind spots, highlighting the need for solid security fundamentals when adopting new technology. In asset management, historically, teams that could account for 90 percent of their assets were considered successful. But AI presents new challenges, even for companies that are high-performers in asset management security.
“If you’re a large organization and you have 300,000 plus assets, then you have 30,000 things that are unaccounted for, which is pretty scary,” Cass said. “And I’d say that next emerging spot is really your asset inventory — what cloud and SaaS products are you using? And how many of those actually have AI embedded into them?”
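Cass’s figures can be checked directly: at the historical 90 percent coverage benchmark, an organization with 300,000 assets is missing 30,000 of them. A one-line worked example:

```python
# Worked check of the asset-inventory figures quoted above.

def unaccounted(total_assets, coverage):
    """Assets missing from inventory, given a coverage ratio between 0 and 1."""
    return round(total_assets * (1 - coverage))

print(unaccounted(300_000, 0.90))  # 30000 -- the gap Cass describes
```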
Balakrishnan summed up the threat and opportunity landscape: “There’s going to be crossroads between security, business, and AI. If the risk is not balanced in the lens of commercial, then the companies that figure it out will have a competitive advantage,” he said. “I think the next 18 to 24 months is going to be very focused on learning the fundamentals of AI as organizations, building detailed playbooks, having defined risk appetites, potentially more regulatory intervention. We’re all going to have to spend money building capabilities.”
Advice for Cybersecurity Professionals
Given the changing environment, the panel had a surprising piece of advice for cybersecurity practitioners — be brilliant at the basics.
“AI is always going to evolve. But you have to understand the core fundamentals, making sure you understand the tools to protect, whether it’s your emails, firewalls,” Balakrishnan says. “And then I would say elevate it by continuously investing in your knowledge education.”
Cybersecurity is a field that’s always evolving. CISOs and other practitioners need to embrace lifelong learning if they’re going to compete with the speed and complexity of AI-driven threats.
A strong community of like-minded professionals is also essential.
“Create a great network of security professionals. I find that helps a lot. And a lot of the things we battle are similar. And so having a great network helps,” Balakrishnan said.