Companies Concerned About Potential Risks Posed by AI Tools at Work

By: S.J. Steinhardt
Published Date: Jul 10, 2023

As generative AI tools become increasingly integrated into workplace software, companies have banned or restricted their use by employees, fearful of the potential risks of disclosing confidential information, The Washington Post reported.

A worst-case scenario, in the eyes of several corporate leaders interviewed by The Post, is one in which an employee uploads proprietary computer code or sensitive board discussions into a chatbot while seeking help at work. That information could be hacked or accessed by competitors, though computer science experts interviewed by The Post said that the validity of such concerns remains unclear.

The potential of AI is causing companies to experience both “a fear of missing out and a fear of messing up,” Danielle Benecke, the global head of the machine learning practice at the law firm Baker McKenzie, told The Post. “You want to be a fast follower, but you don’t want to make any missteps,” she said.

OpenAI CEO Sam Altman privately told some developers that the company wants to create a ChatGPT “supersmart personal assistant for work” that has built-in knowledge about employees and their workplace. This tool could draft emails or documents in a person’s communication style with up-to-date information about the company, The Information reported in June.

Google, which is developing its own rival to ChatGPT, Bard, has “always told employees not to share confidential information and have strict internal policies in place to safeguard this information,” Robert Ferrara, the communications manager at Google, said in a statement to The Post. Verizon executives have also warned their employees not to use ChatGPT at work.

Joseph B. Fuller, a professor at Harvard Business School and co-leader of its future of work initiative, predicted that companies eventually will integrate generative AI into their operations because they soon will be competing with start-ups built directly on these tools. If they wait too long, they may lose business to nascent competitors, he told The Post.

Companies are taking a range of approaches to generative AI. Defense company Northrop Grumman, media company iHeartMedia, and financial services firms Deutsche Bank and JPMorgan Chase have banned such tools outright, arguing that the risk is too great to allow employees to experiment.

Among law firms, Steptoe & Johnson has not banned ChatGPT, but its employees are not allowed to use generative AI tools in client work. Baker McKenzie has sanctioned the use of ChatGPT for certain employee tasks, but any work produced with AI assistance must be subject to thorough human oversight, given the technology’s tendency to produce convincing-sounding yet false responses.

Yoon Kim, a machine-learning expert and assistant professor at MIT, said companies’ concerns are valid, but that they may be inflating fears that ChatGPT will divulge corporate secrets. He told The Post that it is technically possible for the chatbot to use sensitive prompts entered into it as training data, but also said that OpenAI has built guardrails to prevent that.

“It’s unclear if [proprietary information] is entered once, that it can be extracted by simply asking,” he said.
