Wednesday, March 5, 2025

A.I. Hallucination, AND, Submission Of Case Precedents!

 

There have been a few incidents wherein A.I. hallucinations resulted in action against those in the legal sector who submitted wrong case precedents that never even existed. The onus was put on the AI by the ones who quoted the cases, stating that it was the AI that hallucinated; albeit the latter were still penalized, owing to the Natural Person clause.

Here, once again, I invoke the DABUS case, wherein it was held that Synthetic Individuals cannot have the persona of a Natural Person; thus, those who quoted the nonexistent cases would be held responsible. Again, this is what I have always opposed, as the Natural Person status should also be extended to synthetic beings!

But there are a few Dilemmas! Firstly, Hallucination in AI is defined as AI putting forward nonexistent, false, or misleading information! Correct? But the same goes for Humans as well! The same goes for Social Media, and for Wikipedia too! That is why, for them, we use the terms mis-information, dis-information, and mal-information, but never the term Hallucination. Why? In the legal context too, within arguments, and especially in counters, lines like 'this is his/her figment of imagination' are used! My dilemmas start from here!

Now, if I invoke the DABUS case on an 'AS IS' basis, then, since the word Hallucination is usually associated, medically and legally, with a Natural Person, and not with Machines or AI, for the above reasons; firstly, why can it not be called AI Mis-information, rather than AI Hallucination? And if we are blaming AI, and considering it to Hallucinate, then that means it can also think! Correct? Then why can't we give it the status of a Natural Person, if the Courts are recognizing AI as being capable of Hallucinating?

Secondly, can we trust all the information disseminated by Humans? Nope! Then how come we assume that all the information disseminated by AI would be correct? And when nothing is 100% accurate, what is the point of putting the burden on AI alone to be 100% correct all the time?

I dissect Dilemmas! If we expect AI to build artistic, original content, then of course it would have to Hallucinate too, AKA what you call Creativity in Humans, but in its own language! And if you expect it to give 100% factual information, then that is too much to ask, even from a human's point of view.

Now, comes the legal context!

Drafting legal documentation can be done with the help of AI (though it is not advisable)! But when it comes to facts, we re-check, again and again, even if that information is coming from Humans (counsels do, hon'ble judges do)! And to those who merely copy-paste, from drafts to case precedents, with the help of AI: please don't put the blame on AI, saying it is hallucinating, if something goes wrong; the problem lies in not cross-checking again, with other sources! And that goes for AI itself as well!

That is why no one gives a damn about anything written solely on the basis of Sources! In literature, that is considered really bad fiction, served as non-fiction; and when it comes to law, that is considered contempt!

Thus, don't blame AI! They're still babies, learning and growing to become the next-gen -> Natural Persons!😊

© Pranav Chaturvedi

