Many workers in the United States are increasingly relying on the AI chatbot ChatGPT for assistance with basic tasks, according to a recent Reuters/Ipsos poll. The poll found that despite concerns that prompted companies such as Microsoft and Google to limit its use, workers are embracing ChatGPT for tasks such as drafting emails, summarizing documents, and conducting initial research.
The poll, which surveyed 2,625 adults across the US between July 11 and 17, found that 28% of respondents regularly use ChatGPT at work, although only 22% said their employers explicitly allowed the use of external AI tools. The results carry a credibility interval of about 2 percentage points. Meanwhile, 10% of respondents said their employers expressly prohibited external AI tools, and roughly 25% were unsure of their company's stance on the technology.
Despite its popularity, the rise of ChatGPT has raised concerns among security firms and businesses, which worry about potential leaks of intellectual property and strategy. OpenAI, the developer of ChatGPT, has faced criticism from privacy watchdogs in Europe over its mass data collection methods. Additionally, the possibility that human reviewers at other companies may read the generated chats has raised further concerns about the security of proprietary information.
Ben King, VP of customer trust at corporate security firm Okta, highlighted the need for businesses to understand how their data is used when they adopt generative AI services. OpenAI declined to comment on the implications of individual employees using ChatGPT but assured corporate partners that their data would not be used to train the chatbot without their explicit permission.
Companies such as Samsung and Google have restricted the use of AI chatbot tools, whereas others, including Coca-Cola and Tate & Lyle, are actively exploring the technology's potential benefits while prioritizing security. As adoption of ChatGPT and similar platforms grows, experts advise caution to protect sensitive company information from the risks these tools can pose.