Alphabet Inc. Warns Employees About Chatbots

As technology advances in the digital age, companies increasingly rely on artificial intelligence tools, including chatbots, to streamline their operations. However, Alphabet Inc., the parent company of Google, has cautioned its employees against using chatbots, including its own program, Bard.

The company is concerned about the potential misuse of chatbots and the risk of leaking sensitive information. Citing its long-standing policy on safeguarding information, it has advised workers not to enter any confidential material into AI chatbots.

Read on to discover the risks of these technologies and how to protect yourself.

Who Is Using Chatbots?

The use of publicly available chat programs has raised concerns for companies, and Alphabet's warning reflects the need for guardrails on AI chatbots. Samsung, Deutsche Bank, and Apple have set up similar guidelines.

In a survey of nearly 12,000 respondents, 43% of professionals reported using AI tools, often without telling their bosses.

While such technology can draft emails, documents, and even software itself, it can also include misinformation, sensitive data, or even copyrighted passages. Some companies have developed software to address these concerns.

What Are The Companies' Concerns?

Cloudflare, for example, is marketing a capability for businesses to tag and restrict some data from flowing externally. Google and Microsoft are also offering conversational tools to business customers that come with a higher price tag but refrain from absorbing data into public AI models.

Microsoft's consumer chief marketing officer, Yusuf Mehdi, said it makes sense that companies would not want their staff to use public chatbots for work. Notably, Microsoft's free Bing chatbot operates under much stricter policies than its enterprise software.

And What About Bard And ChatGPT?

Bard is a human-sounding program that uses generative artificial intelligence to hold conversations with users, and human reviewers may read those chats. Alphabet has also alerted its engineers to avoid directly using computer code generated by chatbots, which can create a leak risk.

By default, both Bard and ChatGPT save users' conversation history, though users can delete it. Alphabet Inc.'s warning underscores the importance of safeguarding confidential information when using chatbots.

While AI tools are becoming increasingly popular, it is important to be aware of the potential risks and to take steps to address them. Companies must prioritize data security and ensure their employees follow guidelines to protect sensitive information. Do you agree? Share this article with others.