Generative AI Is Changing the Cybersecurity Landscape, Panelists at Verizon Event Say

At an October meeting at Verizon’s Basking Ridge headquarters, invited guests learned what the communications giant thinks about the future of cybersecurity as the artificial intelligence (AI) landscape changes. Much of the panel discussion focused on generative AI, especially ChatGPT, and how it has changed the nature of threats.

The panel was moderated by Chris Novak, managing director of cybersecurity consulting at Verizon. Also appearing were Sean Atkinson, chief information security officer at the nonprofit Center for Internet Security (East Greenbush, N.Y.), who helps design and implement strategies, operations and policies to protect enterprise and government information assets; and Krista Shea Valenzuela, bureau chief, cyber threat outreach & partnerships, at the New Jersey Cybersecurity & Communications Integration Cell (NJCCIC), a state government agency based in Ewing Township.

Novak said he has been involved with cybersecurity for 20 years, helping customers in the U.S. and globally with security services. Another group at Verizon works to keep the network safe. He pointed out that the company’s focus is shifting from protecting against data loss caused by bad actors to protecting against attacks launched by machines.

“The role of the human is more about how we code the algorithm that mines the data, but the algorithm itself may come up with those insights, and they may be very different than what a human may come up with,” he said.

After a discussion on how data privacy is changing, the panelists talked about how generative AI is helping to create spear-phishing attacks that are almost undetectable to those who are targeted. “I think I just saw a report from Trend Micro,” Atkinson said. “There is approximately a 70 percent increase in terms of the ability to target individuals, versus [conventional] phishing. They [the hackers] are no longer using the big pond, trying [to] shoot anything in their sights. They are now spear phishing the individual and are able to build those [detailed] profiles. This creates a narrative that is a lot more clickable than just the general attack.” Also, you used to be able to identify a phishing attack by the grammar in those emails, he said, but not anymore. “Generative AI can clean that all up and provide a succinct approach to getting clicks.”

As companies give their employees permission to use generative AI tools, the dangers to businesses are rising. “One of the things that we’ve got to understand is assessing AI governance and building a framework in terms of implementation. I don’t think it should be a free-for-all,” Atkinson said.

For example, “I believe a lot of electronics manufacturers would use ChatGPT and actually put information that was proprietary into that system. That system aggregates that data and uses it as part of its model building and training. And with that, [any data given to them] has been lost. And, so, we’ve got to be very conscious, and also provide awareness to our end users, in terms of how they can use these tools. Ultimately, there is a benefit. We’ve seen it, but we’ve got to be able to assess the risk and control it.”

According to Valenzuela, the state government doesn’t prohibit the use of generative AI tools, but it has put out guidelines that describe best practices. These include “making sure we’re not putting anything proprietary or sensitive into those machines. And if we’re going to use that [generative AI], we have to verify that the information is true, because we’ve seen time and again that you put information in there or you put a question in there, and the information you get back is not actually fully accurate. And it’s not a one-off. This happens often.” The state understands, however, that this tool is a big time-saver and helps agencies do more with less, she added.

The panel also discussed where the next AI-driven threats will come from, concurring that countries with the resources to implement sophisticated AI programs will be involved. By contrast, ordinary hackers might not have the sophistication to master AI, so they will probably stick to conventional phishing.

Novak pointed out that some people are looking at this situation as an AI arms race, but any such race will likely be limited to a few participants. “It requires a significant amount of resources. It’s not just the technology resources, it’s the financial resources to make sure an AI attack works.” As for the defense against such attacks, it will take more than “just proofreading your phishing email.” There is hope, however. Organized crime groups may have resources, but they “are more loosely affiliated, and I think it will be a higher climb for them, which I’m hoping gives us [the good guys] an advantage.”
