
How AI-Assisted Coding Complicates Application Security

According to a recent survey by Stack Overflow, a popular coding forum, 76% of developers use or plan to use AI tools in their work. With tools like ChatGPT and GitHub Copilot, developers can write code faster, automate repetitive tasks, and even scaffold entire applications from scratch.

However, there is one risk that many coders underestimate as their reliance on AI grows: security. Application security hinges on writing secure code and using carefully vetted libraries – things that AI tools often forego in the name of speed and functionality, or simply because they haven’t yet been optimized to take these considerations into account.

This article will shed light on these risks and explain how developers can use application security testing and other methods to ensure that AI doesn’t compromise the security of their applications.

The Main Security Risks Posed by AI-Assisted Coding

The main problem with blindly following AI outputs is that these outputs don’t originate from any in-depth understanding or knowledge of coding. They are simply patterns generated from analyzing vast amounts of data.

While it’s impressive what AI tools can generate and the insights they can provide, they lack the contextual awareness and critical judgment required to produce secure, context-specific code. This often results in code that inadvertently introduces common vulnerabilities like SQL injection, cross-site scripting (XSS), and session hijacking.

For developers who copy and paste AI-generated code without a thorough review, there is a very high probability of embedding these flaws into their applications.
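To make this concrete, here is a minimal Python/sqlite3 sketch (the table and function names are hypothetical) contrasting the string-built SQL query an assistant will often produce with the parameterized version a careful review should insist on:

```python
import sqlite3


def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Pattern often seen in generated code: the input is interpolated
    # directly into the SQL string, so an input like "alice' OR '1'='1"
    # rewrites the query and dumps every row.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()


def find_user_secure(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver passes the value separately from
    # the SQL text, so it is treated purely as data, never as SQL.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

Both functions return the same results for well-behaved input, which is exactly why the insecure version tends to slip through when only functionality is checked.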

Copying insecure code isn’t the only risk. Merely asking the AI for suggestions can introduce vulnerabilities. For example, AI may recommend libraries or frameworks that haven’t been updated in years. Since the recommendations are mainly based on historical data, the AI might overlook whether a library has been abandoned, lacks recent security patches, or contains known vulnerabilities.

So, developers should not only verify the functionality of the suggested libraries, but also aspects like patch history and active community support.
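One lightweight way to do that is to check how recently a suggested package has shipped a release. The sketch below queries PyPI’s public JSON endpoint for the most recent upload date; the one-year staleness threshold is just an illustrative cut-off, not an industry standard:

```python
from datetime import datetime, timedelta, timezone

import requests  # pip install requests


def latest_release_age(package: str) -> timedelta:
    """Return how long ago the package last published a release on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=10)
    resp.raise_for_status()
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in resp.json()["releases"].values()
        for f in files
    ]
    return datetime.now(timezone.utc) - max(uploads)


if __name__ == "__main__":
    age = latest_release_age("requests")
    print(f"Last release was {age.days} days ago")
    if age > timedelta(days=365):  # arbitrary staleness threshold
        print("Warning: no release in over a year -- check maintenance status")
```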

How to Manage These Risks Effectively

While the risks are significant, completely banning AI use would probably be a net loss for developers and organizations. The goal is to figure out how to use AI in a way that doesn’t compromise security. Addressing the issue early in the AI adoption phase is critical to ensuring long-term application resilience and developer confidence.

There are a few strategies to do so. Perhaps most importantly, there has to be a process for testing the application’s code, which is best done through a combination of static and dynamic application security testing (SAST and DAST). SAST analyzes the source code early in the development process, detecting vulnerabilities before the application goes live. DAST, on the other hand, probes the running application from the outside, much as an attacker would, to confirm it behaves securely in a live or production-like environment.
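Much of the SAST side can be automated in the build pipeline. The following sketch assumes a Python codebase and Bandit, an open-source Python SAST scanner, and simply fails the build if any finding is reported (real teams would tune severity thresholds rather than block on everything):

```python
import json
import subprocess
import sys


def count_bandit_findings(source_dir: str = "src") -> int:
    """Run Bandit recursively over source_dir and return the number of findings."""
    # -r: recurse into the directory, -f json: machine-readable report.
    result = subprocess.run(
        ["bandit", "-r", source_dir, "-f", "json"],
        capture_output=True,
        text=True,
    )
    report = json.loads(result.stdout)
    return len(report.get("results", []))


if __name__ == "__main__":
    issues = count_bandit_findings()
    if issues:
        print(f"Bandit reported {issues} potential issue(s); failing the build.")
        sys.exit(1)
    print("No SAST findings.")
```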

Software Composition Analysis (SCA) is also useful for vetting third-party libraries and dependencies, which may have been selected based on AI recommendations.
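As a rough illustration of what SCA tooling does under the hood, the sketch below asks the public OSV.dev vulnerability database whether a pinned PyPI package version has known advisories; dedicated tools such as pip-audit or commercial SCA products wrap this kind of lookup with far broader coverage:

```python
import requests  # pip install requests

OSV_QUERY_URL = "https://api.osv.dev/v1/query"


def known_vulnerabilities(package: str, version: str) -> list[str]:
    """Return IDs of known advisories for a pinned PyPI package version."""
    payload = {
        "version": version,
        "package": {"name": package, "ecosystem": "PyPI"},
    }
    resp = requests.post(OSV_QUERY_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return [vuln["id"] for vuln in resp.json().get("vulns", [])]


if __name__ == "__main__":
    # Example: an old Flask release; swap in whatever the AI suggested.
    for advisory in known_vulnerabilities("flask", "0.12.2"):
        print("Known advisory:", advisory)
```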


On top of these technical measures, organizations should also consider implementing rules and guidelines for AI-assisted coding. Forbidding AI use will likely result in resistance and a loss of productivity, so the focus should shift to establishing clear boundaries and best practices for responsible use.

The first step could be creating a list of approved AI tools developers can use, each vetted for security. Generally, it’s best to avoid brand-new tools or tools that aren’t backed by established, well-resourced vendors.

Additionally, developers should know what data can and can’t be inserted into the approved tools. For instance, sensitive or proprietary code should never be input into public AI platforms to avoid potential data leaks.
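Part of that boundary can even be enforced automatically. The sketch below is a deliberately minimal example that checks a snippet against two well-known secret patterns (an AWS access key ID and a hard-coded API key assignment) before it is shared with an external assistant; in practice, teams would use a dedicated secret scanner rather than a couple of regexes:

```python
import re

# Illustrative patterns only: an AWS access key ID and a hard-coded API key.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]+['\"]"),
]


def contains_secret(snippet: str) -> bool:
    """Return True if the snippet matches any of the known secret patterns."""
    return any(pattern.search(snippet) for pattern in SECRET_PATTERNS)


snippet = 'client = Client(api_key="sk-live-do-not-share")'
if contains_secret(snippet):
    print("Refusing to send this snippet to an external AI tool.")
```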

Will AI Ever Be Fully Secure for Coding?

With the rapid advancements in generative AI and our increasing reliance on these tools, the question arises of whether they can ever fully grasp the security principles and nuances required to produce truly secure and reliable code.

According to CISA’s “2023-2024 CISA Roadmap for Artificial Intelligence,” for that to happen, security must be integrated as a core component of the AI’s development cycle. The roadmap suggests the adoption of secure-by-design principles early in the AI system’s lifecycle before it’s deployed for widespread use.

What we’ve seen instead with a lot of new AI tools is that speed and functionality are prioritized over security, leaving the burden on application developers to manually vet, test, and secure AI-generated code.

Only when that approach changes will we get to see AI tools that not only enhance productivity but help developers produce secure and resilient code.

Conclusion

Generative AI has been a great addition to the developer’s toolkit, significantly boosting productivity and helping developers problem-solve faster. However, relying on AI outputs without review is a sure way to introduce vulnerabilities into your applications.

With AI becoming so popular among developers, application security testing is more important than ever. SAST and DAST, combined with clear guidelines for AI use and human oversight, ensure that all code, whether written by a developer or suggested by AI, is safe and reliable to use.
