Chatbots are making life more convenient for people everywhere, but they are also enabling fraudsters to steal money and personal information at an unprecedented rate. New research claims that over 1 lakh ChatGPT accounts have been compromised, with India among the worst-affected countries.
According to Group-IB, a Singapore-based cybersecurity firm, hackers have breached 1,01,134 devices with saved ChatGPT credentials, which are now being sold on dark web marketplaces. The stolen data includes passwords, phone numbers, and email addresses, which criminals could use to steal users’ money or to break into their accounts on other online services.
Traditional fraud detection methods rely on pre-defined rules and patterns to identify suspicious activity, but fraudsters constantly change their tactics. These systems are struggling to keep up, and fraudsters can use chatbots like ChatGPT to create more convincing and authentic text for phishing attacks and other forms of fraud.
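To see why static rules break down, consider this minimal Python sketch of a rule-based filter; the patterns and example messages are invented for illustration and do not come from any real detection system:

```python
# A hypothetical rule-based filter: a fixed list of phishing tell-tales
# (misspellings, crude urgency phrases). Polished, AI-generated text
# simply avoids triggering any of the hard-coded patterns.
import re

SUSPICIOUS_PATTERNS = [
    r"\bverify you account\b",          # a classic phishing misspelling
    r"\burgent action required\b",
    r"\bwire transfer immediately\b",
]

def looks_suspicious(message: str) -> bool:
    return any(re.search(p, message, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS)

crude = "URGENT ACTION REQUIRED: verify you account now!"
polished = "Hi Priya, following up on yesterday's call, could you confirm the payment details?"

print(looks_suspicious(crude))     # True: the message trips the hard-coded rules
print(looks_suspicious(polished))  # False: fluent, chatbot-quality text sails through
```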
ChatGPT’s underlying neural network was trained on a huge corpus of real-world text (news articles, books, web pages, and more). Given a prompt, it estimates, for every token in its vocabulary, how probable that token is as the next word, then appends a likely choice and repeats. Because the output is produced one token at a time, generating longer pieces of text takes proportionally longer.
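As a rough illustration of this token-by-token loop, here is a minimal Python sketch that uses the open GPT-2 model (via Hugging Face’s transformers library) as a stand-in, since ChatGPT’s own weights are not public:

```python
# A sketch of autoregressive generation: score every vocabulary token,
# sample a likely one, append it, and repeat.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tokenizer.encode("The weather today is", return_tensors="pt")

with torch.no_grad():
    for _ in range(20):                            # generate 20 tokens, one at a time
        logits = model(ids).logits[0, -1]          # a score for every token in the vocabulary
        probs = torch.softmax(logits, dim=-1)      # turn scores into probabilities
        next_id = torch.multinomial(probs, 1)      # sample one probable continuation
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

Each extra token requires a fresh pass through the network, which is why long outputs take noticeably longer to produce.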
The comparison with real text happens during training rather than after generation: the model’s predictions are scored against the actual next words in the training corpus, and whenever they miss, the network’s weights are adjusted so that the correct word becomes more likely. This process is repeated over billions of examples until the model reliably produces fluent, convincing text.
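That training-time comparison can be made concrete with a small sketch of the loss computation; the tensors below are random stand-ins, not real model output:

```python
# The model's predicted next-token distribution is scored against the token
# that actually came next in the corpus; gradients from this loss tell the
# optimizer how to adjust the weights.
import torch
import torch.nn.functional as F

vocab_size = 50257                       # GPT-2's vocabulary size, for illustration
logits = torch.randn(1, vocab_size, requires_grad=True)  # stand-in for model output
target = torch.tensor([1234])            # the token that actually followed in the corpus

loss = F.cross_entropy(logits, target)   # large when the true token got low probability
loss.backward()                          # gradients flow back to update the weights
print(loss.item())
```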
As the artificial intelligence in chatbots continues to improve, so does its potential for misuse. For example, ChatGPT’s ability to impersonate a person’s writing style can increase the likelihood of successful whaling scams. These scams typically involve impersonating a senior member of an organization and asking the recipient to send money or login credentials, often under the pretext of issuing a “refund.”
Fraudsters could also use chatbots like ChatGPT to speed up the production of spam emails, sending messages at a much higher volume and with better quality while still appearing legitimate to recipients. That makes the content harder for security systems to flag as fake or malicious, since those systems tend to look for duplicated content, as the sketch below illustrates.
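A short Python sketch shows why duplicate-matching fails against reworded messages; the fingerprint scheme and example messages are hypothetical:

```python
# Many filters fingerprint message bodies with a hash, so even a small
# AI-paraphrased variation produces an entirely different fingerprint.
import hashlib

def fingerprint(body: str) -> str:
    return hashlib.sha256(body.strip().lower().encode()).hexdigest()

original = "Your account has been suspended. Click here to verify."
variant = "We had to suspend your account; please verify it here."  # same scam, reworded

print(fingerprint(original) == fingerprint(original))  # True: exact copies are caught
print(fingerprint(original) == fingerprint(variant))   # False: the paraphrase slips past
```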
In the future, the proliferation of artificial intelligence will only increase the risks for consumers and businesses. However, some steps can be taken to mitigate these risks. One of the most important is to follow best practices for securing your online accounts and devices, including using two-factor authentication and following proper password safety guidelines.
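As a concrete example of that second factor, here is a minimal sketch of time-based one-time passwords (TOTP) using the third-party pyotp library; the secret is generated on the fly purely for illustration:

```python
# A stolen password alone is useless without the short-lived second factor.
import pyotp

secret = pyotp.random_base32()  # in practice, stored once in the user's authenticator app
totp = pyotp.TOTP(secret)

code = totp.now()               # the six-digit code the app would display right now
print("Current code:", code)

# The server, holding the same secret, independently verifies the code,
# which is only valid within a short (typically 30-second) window.
print("Verified:", totp.verify(code))
```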