Google CEO Sundar Pichai speaks in conversation with Emily Chang during the APEC CEO Summit at Moscone West on November 16, 2023 in San Francisco, California. The APEC summit is being held in San Francisco and runs through November 17.
Justin Sullivan | Getty Images News | Getty Images
MUNICH — Rapid developments in artificial intelligence could help strengthen defenses against security threats in cyberspace, according to Google CEO Sundar Pichai.
Amid growing concerns about the potentially nefarious uses of AI, Pichai said the intelligence tools could help governments and companies speed up the detection of — and response to — threats from hostile actors.
“We’re right to be worried about the impact on cybersecurity. But AI, I think, actually, counterintuitively, strengthens our defense on cybersecurity,” Pichai told delegates at the Munich Security Conference at the end of last week.
Cybersecurity attacks have been growing in volume and sophistication as malicious actors increasingly use them as a way to exert power and extort money.
Cyberattacks cost the global economy an estimated $8 trillion in 2023 — a sum that is set to rise to $10.5 trillion by 2025, according to cyber research firm Cybersecurity Ventures.
A January report from Britain’s National Cyber Security Centre — part of GCHQ, the country’s intelligence agency — said that AI would only increase those threats, lowering the barriers to entry for cyber hackers and enabling more malicious cyberactivity, including ransomware attacks.
“AI disproportionately helps the people defending because you’re getting a tool which can impact it at scale.”
Sundar Pichai
CEO of Google
However, Pichai said AI was also lowering the time needed for defenders to detect attacks and react against them. He said this could reduce what is known as the defenders’ dilemma, whereby hackers need to succeed only once to attack a system, whereas a defender must succeed every time in order to protect it.
“AI disproportionately helps the people defending because you’re getting a tool which can impact it at scale versus the people who are trying to exploit it,” he said.
“So, in some ways, we’re winning the race,” he added.
Google last week announced a new initiative offering AI tools and infrastructure investments designed to boost online security. A free, open-source tool dubbed Magika aims to help users detect malware — malicious software — the company said in a statement, while a white paper proposes measures and research and creates guardrails around AI.
Pichai said the tools were already being put to use in the company’s products, such as Google Chrome and Gmail, as well as its internal systems.
“AI is at a definitive crossroads — one where policymakers, security professionals and civil society have the possibility to finally tilt the cybersecurity balance from attackers to cyber defenders.”
The release coincided with the signing of a pact by major companies at the MSC to take “reasonable precautions” to prevent AI tools from being used to disrupt democratic votes in 2024’s bumper election year and beyond.
Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI, TikTok and X were among the signatories of the new agreement, which includes a framework for how companies must respond to AI-generated “deepfakes” designed to deceive voters.
It comes as the internet becomes an increasingly important sphere of influence for both individual and state-backed malicious actors.
Former U.S. Secretary of State Hillary Clinton on Saturday described cyberspace as “a new battlefield.”
“The technology arms race has just gone up another notch with generative AI,” she said in Munich.
“If you can run a little bit faster than your adversary, you’re going to do better. That’s what AI is really giving us defensively.”
Mark Hughes
president of security at DXC
A report published last week by Microsoft found that state-backed hackers from Russia, China and Iran have been using large language models (LLMs) from its partner OpenAI to refine their efforts to trick targets.
Russian military intelligence, Iran’s Revolutionary Guard, and the Chinese and North Korean governments were all said to have relied on the tools.
Mark Hughes, president of security at IT services and consulting firm DXC Technology, told CNBC that bad actors were increasingly relying on a ChatGPT-inspired hacking tool called WormGPT to conduct tasks such as reverse engineering code.
However, he said that he was also seeing “significant gains” from similar tools that help engineers detect and reverse engineer attacks at speed.
“It gives us the ability to speed up,” Hughes said last week. “Most of the time in cyber, what you have is the time that the attackers have in advantage against you. That’s often the case in any conflict situation.
“If you can run a little bit faster than your adversary, you’re going to do better. That’s what AI is really giving us defensively at the moment,” he added.