A survey of developers by coding Q&A site Stack Overflow has found that AI tools are becoming commonplace in the industry even as coders remain skeptical about their accuracy. The survey comes at an interesting time for the site, which is trying to work out how to benefit from AI while dealing with a strike by moderators over AI-generated content.
The survey found that 77 percent of respondents felt favorably about using AI in their workflow and that 70 percent are already using or plan to use AI coding tools this year.
Only 3 percent of respondents said they “highly trust” AI coding tools
Respondents cited benefits like increased productivity (33 percent) and faster learning (25 percent) but said they were wary about the accuracy of these systems. Only 3 percent of respondents said they “highly trust” AI coding tools, with 39 percent saying they “somewhat trust” them. Another 31 percent were undecided, with the rest describing themselves as somewhat distrustful (22 percent) or highly distrustful (5 percent).
The annual survey received 90,000 responses from 185 countries, according to Stack Overflow. Other highlights regarding AI usage include:
- ChatGPT is the most popular AI search tool, used by 83 percent of respondents, followed by Bing AI (20 percent), WolframAlpha (13 percent), and Google Bard AI (10 percent).
- GitHub Copilot is the most popular AI developer tool, used by 55 percent of respondents, followed by Tabnine (13 percent) and AWS CodeWhisperer (5 percent).
- Respondents to the survey in India, Brazil, and Poland were more likely to embrace AI tools than developers in the US, UK, and Germany.
- Respondents who were “learning to code” were more likely to use AI tools than those who said they were “professional developers” (82 percent versus 70 percent).
Joy Liuzzo, Stack Overflow’s vice president of product marketing, told The Verge that the company would use these responses to shape its own approach to AI.
“We are investing in AI right now, and we needed to understand how developers were perceiving the technology and incorporating it as part of their developer workflow,” said Liuzzo. She said that AI would “democratize” coding, allowing more people to learn the profession without access to formal education. “That’s why we really believe we can play that crucial role in how AI accelerates, focusing on the quality of the AI offerings.”
Stack Overflow’s CEO, Prashanth Chandrasekar, recently described AI as a “big opportunity” for the site. Chandrasekar said the company would start building generative AI tools into its platform while exploring ways to charge companies for access to its data.
Community knowledge sites like Stack Overflow are incredibly useful resources for companies training AI language models and AI coding tools. Companies generally scrape their data without permission, but sites are beginning to object to this, especially as AI tools become more lucrative and threaten the very data sources to which they owe their existence.
Some of Stack Overflow’s moderators are on strike over its policies allowing AI-generated content
In Stack Overflow’s case, the company is also trying to work out how to stop AI-generated content from polluting its own community-created database of knowledge. The company temporarily banned the submission of AI-generated content last December but essentially reversed this decision in May, asking moderators to “apply a stringent standard of evidence to determining whether a post is AI-authored when deciding to suspend a user.” In response, a number of moderators have gone on strike, saying the policy will allow too many low-quality AI-generated answers to remain on the site and “poses a major threat to the integrity and trustworthiness of the platform and its content.”
When asked by The Verge about the contrast between Stack Overflow’s embrace of AI and the dissatisfaction expressed by its moderators, Liuzzo declined to answer. Later, Stack Overflow sent The Verge a press statement from its VP of community, Philippe Beaudette, criticizing moderators for levying “unnecessary suspensions” on users. One of the strike’s elected representatives, Mithical, told The Verge that the company’s characterization was incorrect and that it had failed to provide any actual examples of incorrect suspensions.
Arguably, this tension between Stack Overflow’s management and its most dedicated users reflects some of the same fault lines found in the survey. Users are increasingly turning to AI coding tools, even if they don’t always trust the results. Now, the profession needs to work out how to deal with this new reality.