Have you ever used an application called Chat & Ask AI? If so, there’s a good chance your messages were exposed last month. In January, an independent researcher was able to easily access some 300 million messages on the service, according to 404 Media’s Emanuel Maiberg. The data included chat logs related to all kinds of sensitive topics, from drug use to suicide.
Chat & Ask AI, an app offered by the Istanbul-based company Codeway that is available on both the Apple and Google app stores, claims to have around 50 million users. The application essentially resells access to large language models from other companies, including OpenAI, Anthropic, and Google, providing limited free access to its users.
The problem that led to the data leak was an insecure Google Firebase configuration, a relatively common vulnerability. The researcher was easily able to make himself an “authenticated” user, at which point he could read messages from 25 million of the app’s users. He reportedly extracted and analyzed around 60,000 messages before reporting the issue to Codeway.
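To illustrate the class of misconfiguration involved (the article doesn’t publish Codeway’s actual rules, so this is a hypothetical sketch): Firebase access is governed by a security-rules file, and a rule that only checks whether a user is signed in grants every authenticated account read access to every document.

```
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Insecure pattern: ANY signed-in user can read every document,
    // including other users' chat logs.
    match /{document=**} {
      allow read: if request.auth != null;
    }

    // Safer pattern: a user can only read chats they own.
    // "ownerId" is a hypothetical field name for illustration.
    match /chats/{chatId} {
      allow read: if request.auth != null
                  && request.auth.uid == resource.data.ownerId;
    }
  }
}
```

Because anyone can create an account (and thus become “authenticated”), the first rule is effectively public access, which matches how the researcher describes getting in.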
The good news: The issue was quickly patched. More good news: there have been no reports of these messages leaking to the broader internet. Still, this is yet one more reason to carefully consider the kinds of messages you send AI chatbots. Remember, conversations with AI chatbots aren’t private—by their nature, these systems often save your conversations to “remember” them later. In the case of a data breach, that could potentially lead to embarrassment, or worse. And using a reseller like Chat & Ask AI to access large language models adds another layer of potential security risk, as this recent leak demonstrates.
