Peril vs. Promise: Companies, Developers Worry Over Generative AI Risk

Published: 23/11/2024 | Category: security




Executives and developers believe AI can help businesses thrive, but worry that reliance on generative AI brings significant risks.



The vast majority of developers believe that generative AI systems will be necessary to increase productivity and keep pace with the demands of software development, but intellectual property issues and security concerns continue to hold back adoption.
Some 83% of developers believe they must adopt AI or risk falling behind, but 32% are concerned about introducing AI into their processes. Of those, nearly half (48%) worry that AI could undermine the intellectual property protections of their code, and 39% cite concerns that AI-generated code will have more security vulnerabilities, according to a survey published by development services firm GitLab this week. More than a third of developers also worry that AI systems could replace them or eliminate their jobs.
Overall, developers see that generative AI systems could make them more efficient, but worry about the eventual impact, says Josh Lemos, CISO at GitLab (no relation to the author).
"The privacy and data security concerns over [large language models] are still a barrier for entry, [as well as] the quality of code suggestions," he says. "Understanding how to best leverage generative AI features, whether it's code or other functions in your work stream, is going to change the way in which people work, and they have to consciously adopt a new approach to interacting with their codebase."
Developers are not the only ones concerned about the dual nature of generative AI. More than half of corporate board members (59%) have concerns about generative AI, especially leaks of confidential information uploaded by employees to services such as ChatGPT, according to the report "Cybersecurity: The 2023 Board Perspective," published by Proofpoint this week. In addition, attackers' adoption of generative AI systems to improve their phishing attacks and other techniques has become a concern.
Boards are calling on CISOs to shore up their defenses, says Ryan Witt, resident CISO at Proofpoint.
"As a tool for defenders, generative AI is critical to work behind the scenes, especially in cases where you are employing LLMs — large language models," he says. "For bad actors, formulating well-written phishing and business email campaigns just became much easier and scalable. Gone are the days of advising end users to look for obvious grammatical, context, and syntax errors."
Companies have quickly moved to explore generative AI as a way to speed knowledge workers in their daily tasks. A number of companies, such as Microsoft and Kaspersky, have created services based on LLMs to resell or use internally as a way to augment security analysts. GitHub, GitLab, and other providers of developer services have released similar systems aimed at assisting programmers in producing code more efficiently.
Overall, developers have seen, or hope to see, efficiency gains (55%) and faster development cycles (44%) because of AI, according to GitLab's recent survey. Yet 40% also expect more secure code to come from their adoption of AI, while 39% expect more security vulnerabilities in AI-generated code.
Overall, developers will become more granular about their adoption of AI, readily accepting certain applications of generative AI while resisting others. GitLab's Lemos, for example, finds the ability of generative AI to create a concise summary of a code update or merge request most compelling, especially when the notes on the update run to dozens or hundreds of comments.
"I get a concise summary of everything that's going on," he says. "I can get up to date in a few seconds on what's happening with that issue without reading through the entire thread."
One widespread concern is that AI systems will replace developers: 36% of those surveyed worry about exactly that. Yet the GitLab survey also lent weight to the argument that disruptive technologies create more work for people: Nearly two-thirds of companies have hired employees to help manage AI implementations.
Part of the concern seems to be generational. More experienced developers tend not to accept the code suggestions made by AI systems, while more junior developers are more likely to accept them, Lemos says. Yet both look to AI to handle the most boring work, such as writing documentation and creating unit tests.
"I'm seeing a lot more developers raising the idea of having their documentation written by AI, or having test coverage written by AI, because they care less about the quality of that code, but just that the test works," he says. "There's both a security and a development benefit in having better test coverage, and it's something that they don't have to spend time on."
While AI may be helping developers with the most mundane tasks, attackers are learning as well, Proofpoint's Witt says. Companies should not expect AI to clearly benefit one side of the cybersecurity equation or the other, he stresses.
"This may devolve into a cat-and-mouse game, where AI-enhanced defenses are persistently challenged by AI-improved threats, and vice versa," he says. "All of this will require continued investment in AI technology so that cybersecurity defenders can match their aggressors on the virtual battlefield."
