Pros & Cons of AI Coding Assistants

Are AI coding assistants simply helping developers produce code faster, or can they actually improve the quality of that code?

At the core of every software application lies its code. Those lines, often numbering in the millions, are shaped by design and functionality decisions as well as architectural choices. A typical codebase consists of modular libraries connected through Application Programming Interfaces (APIs), often organized into containerized microservices to keep complexity manageable.

Developers face many difficulties when building software, and writing code by hand remains one of the greatest hurdles.

As AI becomes an ever larger part of modern life, more developers are turning to AI coding assistants such as GitHub Copilot or ChatGPT to speed up their work and improve their productivity. According to research by GitHub, 92% of US-based developers already use such AI tools to code faster and get more done.

GitClear conducted an analysis that cast doubt on whether AI coding tools are actually improving software development. Its study, covering more than 150 million changed lines of code, revealed troubling trends such as rising code churn and copy-paste rates since AI-assisted development became mainstream.

This raises an important question: AI coding tools may work quickly, but what effect do they have on code quality and integrity?

AI Coding Assistants & How They Work

AI coding assistants such as GitHub Copilot, Divi AI and Amazon CodeWhisperer have become essential tools in modern software development environments. Using artificial intelligence, they help developers write, analyze and review code.

These assistants rely on sophisticated artificial intelligence (AI) models, in particular large language models (LLMs) trained on vast libraries of code. Their primary function is to offer suggestions and completions for lines or blocks of code inside an integrated development environment (IDE).

As developers type, assistants suggest completions for common syntax, function names, variables and whole code sections based on the surrounding context. For instance, if a developer begins typing “for i in…”, the assistant may suggest completing the loop as “for i in range(10):”.
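To make that concrete, here is a minimal sketch of this kind of completion; the loop body is illustrative and not tied to any particular assistant:

```python
# The developer types the start of the loop; the assistant proposes the rest.
for i in range(10):   # suggested completion of "for i in ..."
    print(i)          # body filled in by the developer (or a further suggestion)
```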

Note that these tools go far beyond simple auto-complete; they actively take part in AI-driven coding alongside developers, adapting to the context they operate in so they can offer relevant suggestions.

Beyond completing code, some assistants can generate entire functions or class definitions from descriptions provided by developers. One manager at Deloitte described how AI coding assistants allowed him to conceptualize and launch a program from scratch in 30 days.
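As a hedged illustration of that workflow (the prompt, function name and body below are invented for this example, not taken from Copilot or any specific tool), a developer might supply only a short description and a signature, and the assistant proposes a plausible implementation:

```python
# Prompt: "a function that returns the n most common words in a text file"
from collections import Counter

def most_common_words(path: str, n: int = 10) -> list[tuple[str, int]]:
    """Return the n most frequent words in the file at `path`."""
    # The body below is the kind of completion an assistant might suggest.
    with open(path, encoding="utf-8") as f:
        words = f.read().lower().split()
    return Counter(words).most_common(n)
```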

AI programming tools offer other useful features as well, including error detection and help with documenting code through comments and READMEs; they can also explain a piece of code or show examples of its usage.
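For instance (an illustrative sketch, not the output of any specific assistant), a documentation or review prompt might yield a generated docstring plus a warning about an unhandled edge case:

```python
def average(values):
    """Return the arithmetic mean of `values`.

    Docstring of the sort an assistant might generate from a
    "document this function" prompt.
    """
    # A review-style suggestion might also flag that an empty list would
    # raise ZeroDivisionError and propose this guard:
    if not values:
        raise ValueError("values must not be empty")
    return sum(values) / len(values)
```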

AI Coding Assistants Offer Speed — But Do They Guarantee Quality Code?

Vaclav Vincalek, founder of 555vCTO and an advocate of AI coding assistants, shared his insights with NexusArticle via email.

Karthik Sj, vice president of Product Management & Marketing at Aisera, is concerned that an accelerated process may reduce software quality, with potential repercussions for both users and developers.

He said:

“In terms of quality, large language models (LLMs) can often produce unexpected results, so it’s crucial that developers carefully review any code they generate.”

GitClear’s report indicates that AI coding tools can speed up code creation, but that speed does not account for bad code that should never have been written in the first place.

The report argues that generation speed alone should not be the sole measure of a tool’s value; other considerations matter as well, including how important and useful the code will be in the future. Producing code quickly does not prevent technical debt, maintenance issues and further complications later on if that code is subpar, irrelevant or serves no real purpose.

Nazmul Hasan, founder and CIO of AI Buster, shares GitClear’s view that poorly written code causes real pain for the readers and maintainers who must understand and modify it in the future.

NexusArticle interviewed him further. In that discussion he noted: “Incorporating AI coding assistants has had mixed effects on code maintainability and readability.

“AI-powered assistants make coding simpler, but there is a risk that AI-generated code for my project might not comply with my specific guidelines or be understood by my team.

“I’ve witnessed instances where AI usage led to inconsistencies and technical debt. To use artificial intelligence effectively, it’s becoming clear that code must remain clean, well-documented and aligned with our design principles.”

AI Coding Assistants: A Recipe for Software Vulnerabilities

The security of generative AI is a worldwide concern, and there is legitimate cause for alarm over our increasing reliance on these tools.

Cornell University researchers recently conducted a comprehensive investigation of AI coding assistants and discovered patterns that point to possible security weaknesses. Their findings show that programmers who used such assistance tended to write less secure code than those working without it.

The same programmers also tended to view the AI-assisted code they wrote as safe, even when it contained potential security flaws.

The study also revealed that participants who placed less trust in the AI coding tools and who planned their prompts carefully introduced fewer security risks.

AI coding tools may increase efficiency, but they can also breed overconfidence, potentially jeopardizing the quality of code written for security-sensitive applications.
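To make the risk concrete, here is an illustrative example (not drawn from the study itself) of the kind of plausible-looking but vulnerable suggestion such research describes, alongside the safer version a careful reviewer would insist on:

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Looks reasonable, but string interpolation allows SQL injection.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchone()

def find_user_safer(conn: sqlite3.Connection, username: str):
    # Parameterized query: the database driver handles the value safely.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchone()
```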

Given these substantial threats to the security and integrity of the code they help create, can AI coding tools still be trusted?

Hasan indicates that the answer is complex and depends on several factors: the nature, complexity and domain of the code being written; the effectiveness and reliability of the AI programming assistant; and the level of human supervision and verification.

Safety is another intriguing topic, raised by David Brauchler, principal security consultant at NCC Group, in an email conversation with NexusArticle.

“We should also recognize that these systems aren’t trained to develop software with impeccable qualities; they are trained to produce code that resembles what humans write, including our mistakes, assumptions and weaknesses.

“As these models ingest greater volumes of data, they may reproduce the kinds of mistakes people make when programming, creating subtle yet difficult-to-spot issues.”

How Developers Can Minimize Risk While Using AI Coding Assistants

Artificial intelligence-powered coding tools have rapidly gained ground, yet they present teams with a difficult choice: adopt them and risk compromising code quality, or decline them and fall behind during product development and launch.

There is, however, another strategy available, one that promises the benefits while lowering the risk.

Sj is firm in the belief that developers must ensure their AI partners receive comprehensive training and must establish clear usage guidelines.

Peter McKee, Sonar Source’s Head of Developer Relations, suggests that developers incorporate code scanning tools into their workflows to catch GenAI-related mistakes in their code.

McKee emphasized this point in an interview with NexusArticle:

“Integrating code scanning into a Continuous Integration/Continuous Deployment (CI/CD) pipeline enables continual inspection of software created with artificial intelligence, providing an opportunity to detect discrepancies or defects.”

Developers can then focus on broader project issues while gaining insight into the causes of errors, allowing faster problem resolution.

McKee suggests using Static Application Security Testing (SAST) to identify weaknesses in GenAI-generated code and help developers detect security flaws before the code is deployed.
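As one possible way to wire this into a pipeline (a sketch assuming a Python codebase and the open-source Bandit SAST scanner; the source directory and severity threshold are assumptions, not McKee's prescription), a CI job could fail whenever the scan reports findings:

```python
import subprocess
import sys

def run_sast_scan(source_dir: str = "src") -> int:
    """Run Bandit over `source_dir` and return its exit code.

    Bandit exits non-zero when it reports issues, so a CI job that runs
    this script will fail the build if newly generated code introduces a
    known insecure pattern.
    """
    result = subprocess.run(
        ["bandit", "-r", source_dir, "-lll"],  # -lll: only high-severity findings
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print("SAST scan flagged potential vulnerabilities", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_sast_scan())
```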

Conclusion

AI coding assistants can be an effective aid in speeding up software development projects and increasing developer efficiency, but they are not a complete solution, as they can produce poor or insecure code that undermines developer efficiency and creativity and raises ethical or legal issues.

Developers should exercise caution when using AI programming assistants and avoid depending too heavily on them. It would also be prudent to carefully review, test and verify any code these assistants generate, to make sure it meets industry benchmarks and the conventions of the programming language in use.
