The Other “AI”: Managing Artificial Identity Amidst the AI Boom

With the recent release of generative AI tools like ChatGPT and Generative Fill, the AI craze is in full swing. But in 2023, our obsession with AI is hardly breaking news. In fact, one of the earliest recorded imaginings of artificial intelligence appears in Homer's Iliad (c. 762 BC), meaning that by the time the first successful AI system was built in 1951, we'd already been dreaming about its potential for at least 2,700 years.

So why is AI more popular than ever in 2023? The introduction of widely available generative AI tools has created a reality in which, for the first time, ordinary people interact with AI in a meaningful way every day. AI is finally beginning to look like history's (and Hollywood's) imaginings. This is very exciting; it's also our cue to proceed with caution.

The Threat Artificial Identity Poses to Enterprises 

As a cybersecurity expert, my top concern for enterprises amidst the AI boom is what I like to call "the other AI": artificial identity. Digital identity is built on user data; when that data is compromised, artificial identity is born. Hackers know that targeting enterprises is the most efficient way to steal large amounts of personal data at once, and with ever-advancing AI-powered attacks, data is only becoming harder to protect and easier to steal.

Artificial identities built on stolen data can be used to: 

  1. Access your organization’s digital environment, or 
  2. Pretend to be your employee/customer anytime, anywhere on the internet to
    • say anything,
    • do anything, or
    • buy anything

…as someone they are not: as an employee or customer whose digital identity is your responsibility to protect. Statistics show that approximately 780,000 records are lost to hacking each day and that the average cost of a breach exceeds $7 million, making organizational security breaches a matter not of "if" but "when".

To better understand the technology at play, let's examine some benefits and challenges of modern AI for enterprises and identify how artificial intelligence might become "the other AI".

AI-powered passwordless authentication uses biometrics to create an incredibly seamless digital experience for employee and customer users navigating an organization’s digital environment.  

That same biometric recognition technology can be used to generate hyper-realistic deepfakes from personal data stolen from an enterprise. Deepfakes are convincing visual or auditory digital representations of a victim, most often used for catfishing, impersonation, blackmail, extortion, theft, or pornography.

Recent research conducted by an Amazon-funded firm exposed a CAPTCHA vulnerability to AI, putting many companies at risk of brute-force attacks.

In the retail sector, the omnichannel experience perfects the buyer's journey by using AI to pinpoint where customers are in their consumer journey and engage them there. Creating a comprehensive customer identity is key to this approach; it allows retailers to create personalized experiences online and in person, makes checkout quick and easy, and keeps customers engaged with retailers outside the traditional shopping experience. Customers' banking information is often tied to their digital identity within a retailer's system.

For organizations storing and managing profiles like these, a data breach could yield comprehensive artificial identities complete with sensitive banking information. Current AI tools can mimic a real user and drain a customer's funds without their knowledge.

In the healthcare sector, AI optimizes clinical workflows, detects and diagnoses diseases faster than ever, automates drug discovery, expertly analyzes X-rays, and enables patients to engage with their healthcare journey wherever they are.

But healthcare organizations store our most sensitive, valuable data, making them the enterprises most targeted by cybercriminals. In 2022, 51.9 million digital healthcare records were stolen in breach events, many of them AI-assisted. With stolen data of this nature, hackers can sell artificial patient identities used to fraudulently obtain expensive medical services and benefits, access prescription medication, and even blackmail patients.

What can we do about it?  

How can organizations share in the excitement surrounding positive AI use cases while protecting themselves from malicious attacks?  

AI for enterprises falls into two categories: experiential and protective. Businesses that invest in the first and not the second not only leave themselves vulnerable to cyberattacks but also inadvertently optimize their data for the creation of artificial identities; these are the businesses that suffer the most frequent and damaging breaches. Enterprises should always couple experiential AI with the strongest available cybersecurity.

Organizations can keep their digital environment safe from AI-equipped bad actors by bringing identity to the center of their cybersecurity strategy with zero trust. By never trusting and always verifying, systems won’t be fooled by artificial intelligence turned artificial identity.  
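The "never trust, always verify" principle can be sketched as a per-request credential check: every call re-validates a short-lived, signed token instead of trusting an established session. The sketch below is a minimal illustration using only Python's standard library; the secret key, claim names, and expiry window are hypothetical, and a real zero-trust deployment would rely on a vetted identity provider and hardware-backed keys rather than a shared HMAC secret.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical shared secret for illustration only; a production system
# would use a managed, rotated key from an identity provider.
SECRET = b"demo-secret"

def sign_token(claims: dict) -> str:
    """Issue a short-lived, HMAC-signed token encoding the given claims."""
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    signature = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + signature

def verify_request(token: str, now: float) -> bool:
    """Zero trust: re-verify every request; no session is trusted implicitly."""
    try:
        payload, signature = token.split(".")
    except ValueError:
        return False  # malformed token: reject
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False  # forged or tampered token: reject
    claims = json.loads(base64.urlsafe_b64decode(payload))
    # Short expiry limits how long a stolen token (an "artificial identity")
    # remains usable.
    return claims.get("exp", 0) > now
```

Because verification happens on every request rather than once at login, a stolen or fabricated credential fails the moment its signature or expiry doesn't check out, which is exactly the property that blunts artificial-identity attacks.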

Don’t be afraid to get excited about AI. Learn more about protecting your enterprise with a zero-trust cybersecurity strategy.

Sasi Kelam
