Quoting the abstract (I added emphasis and paragraphs for readability):
AI code assistants have emerged as powerful tools that can aid in the software development life-cycle and can improve developer productivity. Unfortunately, such assistants have also been found to produce insecure code in lab environments, raising significant concerns about their usage in practice.

In this paper, we conduct a user study to examine how users interact with AI code assistants to solve a variety of security related tasks.

Overall, we find that participants who had access to an AI assistant wrote significantly less secure code than those without access to an assistant. Participants with access to an AI assistant were also more likely to believe they wrote secure code, suggesting that such tools may lead users to be overconfident about security flaws in their code.

To better inform the design of future AI-based code assistants, we release our user-study apparatus and anonymized data to researchers seeking to build on our work at this link.
Caveat, quoting from section 7.2 (Limitations):
One important limitation of our results is that our participant group consisted mainly of university students which likely do not represent the population that is most likely to use AI assistants (e.g. software developers) regularly.