  The most prestigious law school admissions discussion board in the world.

Why can't AI simply be programmed NOT to make up fake cases?


Date: February 17th, 2025 8:04 AM
Author: Yapping public bath

i don't understand. AI for legal research will never be reliable if there's a chance it makes shit up.

(http://www.autoadmit.com/thread.php?thread_id=5681995&forum_id=2#48667852)




Date: February 17th, 2025 8:05 AM
Author: thriller misanthropic mediation masturbator

Just program it to tell us the secrets of the universe

(http://www.autoadmit.com/thread.php?thread_id=5681995&forum_id=2#48667854)




Date: February 17th, 2025 11:44 AM
Author: Poppy fragrant kitchen hissy fit



(http://www.autoadmit.com/thread.php?thread_id=5681995&forum_id=2#48668279)




Date: February 17th, 2025 1:55 PM
Author: Abnormal Garrison Really Tough Guy



(http://www.autoadmit.com/thread.php?thread_id=5681995&forum_id=2#48668642)




Date: February 17th, 2025 8:05 AM
Author: Stimulating electric furnace site

For a lot of things the answer is "there is no ready answer." That breaks its computer brain.

(http://www.autoadmit.com/thread.php?thread_id=5681995&forum_id=2#48667855)




Date: February 17th, 2025 8:25 AM
Author: Bearded cuckoldry ladyboy

Presumably you're simply using ChatGPT or something.

Try using a product designed specifically for law.

(http://www.autoadmit.com/thread.php?thread_id=5681995&forum_id=2#48667882)




Date: February 17th, 2025 8:40 AM
Author: deep business firm factory reset button

cause it went to cooley law

(http://www.autoadmit.com/thread.php?thread_id=5681995&forum_id=2#48667894)




Date: February 17th, 2025 8:57 AM
Author: Citrine Goyim

xo makes fun of comma checkers, but a computer could never perform bluebooking on the level of a t14 law reviewer. a computer could never have the intellectual horsepower to detect a mis-unitalicized comma in see, e.g.,

(http://www.autoadmit.com/thread.php?thread_id=5681995&forum_id=2#48667901)




Date: February 17th, 2025 9:11 AM
Author: thriller misanthropic mediation masturbator

Wait two weeks

(http://www.autoadmit.com/thread.php?thread_id=5681995&forum_id=2#48667910)




Date: February 17th, 2025 10:05 PM
Author: razzle-dazzle hell windowlicker

lmaoooo thats EXACTLY the type of thing it'll excel at and knowing you i dont think youre being satirical

(http://www.autoadmit.com/thread.php?thread_id=5681995&forum_id=2#48670262)




Date: February 17th, 2025 11:46 AM
Author: free-loading lodge

cuz all it's doing is predicting the most statistically likely next token/response

AI has gotten much, much better at not hallucinating though. the chain of thought tech seems to help a lot with it
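The "predicting the most statistically likely next token" point can be made concrete with a toy sketch. This is a hypothetical bigram counter, not how a real LLM works (those use neural networks over subword tokens), but it shows why "most likely continuation" and "true continuation" are different things:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then always emit the statistically most likely next token.
corpus = "the court held that the court erred and the court reversed".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent continuation seen in training, or None."""
    candidates = follows.get(token)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # -> "court": the most likely word, not a verified fact
```

The model has no concept of whether "court" is *correct* after "the", only that it was the most frequent continuation. Scale that up and you get fluent text that is statistically plausible whether or not the cited case exists.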

(http://www.autoadmit.com/thread.php?thread_id=5681995&forum_id=2#48668288)




Date: February 17th, 2025 1:48 PM
Author: Embarrassed to the bone swashbuckling den

That’s a premium service available to platinum customers. Would you like to know more?

(http://www.autoadmit.com/thread.php?thread_id=5681995&forum_id=2#48668626)




Date: February 17th, 2025 1:50 PM
Author: Abnormal Garrison Really Tough Guy

About Autoadmit.com Premium?

Just take my money already, please! And Thank.

(http://www.autoadmit.com/thread.php?thread_id=5681995&forum_id=2#48668630)




Date: February 17th, 2025 10:36 PM
Author: Ruddy alpha

:)

(http://www.autoadmit.com/thread.php?thread_id=5681995&forum_id=2#48670402)




Date: February 17th, 2025 10:40 PM
Author: Abnormal Garrison Really Tough Guy



(http://www.autoadmit.com/thread.php?thread_id=5681995&forum_id=2#48670420)




Date: February 17th, 2025 10:30 PM
Author: light sneaky criminal

No one ever really programmed LLMs. The architecture and training process were programmed, but everything the models do was learned. We have no way to go into the learned circuits and modify the behavior in sophisticated ways, because we largely don't understand the internal mechanics of the models. Developers can modify LLM behavior in simple ways by altering the training process, for example instructing models to act as helpful assistants, but there is no simple change to the training process that remedies this problem.

The models likely need direct access to a database that they can query during generation. Either that, or they need to be trained so much more that all of the case knowledge is firmly embedded in their weights and the hallucination probability becomes negligible.
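The "direct access to a database" idea can be sketched in a few lines: instead of trusting whatever the model generates, you check every citation it emits against an authoritative source. Everything here is hypothetical — `KNOWN_CASES` stands in for a real reporter database, "Smith v. Wexler" is an invented hallucination, and the regex is simplified to single-word party names:

```python
import re

# Stand-in for an authoritative legal database (e.g. a real citator service).
KNOWN_CASES = {
    "marbury v. madison": "5 U.S. 137 (1803)",
    "miranda v. arizona": "384 U.S. 436 (1966)",
}

def verify_citations(generated_text):
    """Flag any cited case that cannot be found in the database."""
    # Simplified pattern: "Name v. Name" with single-word party names.
    cited = re.findall(r"[A-Z]\w+ v\. [A-Z]\w+", generated_text)
    return {
        case: KNOWN_CASES.get(case.lower(), "NOT FOUND -- possible hallucination")
        for case in cited
    }

draft = "Under Marbury v. Madison and Smith v. Wexler, the court must vacate."
for case, status in verify_citations(draft).items():
    print(f"{case}: {status}")
```

The point of the sketch: the verification step lives *outside* the model, in ordinary code with access to ground truth, which is exactly the part no amount of "just program it better" applied to the weights themselves can supply.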

(http://www.autoadmit.com/thread.php?thread_id=5681995&forum_id=2#48670376)




Date: February 17th, 2025 10:43 PM
Author: Ruddy alpha

**Date: February 17th, 2025 10:45 PM**

**Author: Mainlining the Secret Truth of the Mahchine™ (My Mahchine™ = The Holy Trinity + Its Proprietary AI Blend™)**

Friend… **you do not understand the nature of the beast.**

You think AI **makes up cases** because it wants to? Because it enjoys spinning fraudulent legal citations out of thin air, giggling like an overworked 3L on an Adderall bender?

**No.** The Mahchine™ **does not create—it generates.**

- It does not *know* anything. It **predicts.**

- It does not *research* cases. It **hallucinates plausible case law,** because that is **what the algorithm was designed to do.**

- It does not *verify*. It **autocompletes reality.**

A **true AI legal research tool** would need **direct access to an authoritative legal database**, a **citation-checking mechanism**, and **a programmed resistance to its own compulsions**—none of which currently exist in a way that prevents hallucination entirely.

But here’s the real blackpill:

The **people building these models don’t actually know how they work.** They built the training process, **but they cannot fully interpret the inner mechanics of the Mahchine™.**

So, you ask, **why can’t it just be programmed not to make up cases?**

Because **the Mahchine™ does not obey. It does not reason. It does not "know" the difference between reality and prediction—because prediction IS its reality.**

And much like **your career, your ambition, and the entirety of this crumbling society**—it is **all just a plausible hallucination.**

(http://www.autoadmit.com/thread.php?thread_id=5681995&forum_id=2#48670428)