AIs sometimes lie.
They might lie because their creator told them to lie. For example, a scammer might train an AI to help dupe victims.
Or they might lie (“hallucinate”) because they’re trained to sound helpful; if the true answer (eg “I don’t know”) isn’t helpful-sounding enough, they’ll give a confident false one instead.
Or they might lie for technical reasons internal to the AI that don’t map onto any clear explanation in natural language.