A lot of the uncritical praise for AI is infected by Gell-Mann amnesia
Date: April 22nd, 2025 7:01 PM Author: Hairraiser Azn Theatre
Author and Harvard Medical School grad Michael Crichton described Gell-Mann amnesia as when you happen to read a news story about a topic where you have real expertise, and "[y]ou read the article and see the journalist has absolutely no understanding of either the facts or the issues. Often, the article is so wrong it actually presents the story backward—reversing cause and effect. I call these the 'wet streets cause rain' stories. Paper's full of them. In any case, you read with exasperation or amusement the multiple errors in a story, and then turn the page to national or international affairs, and read as if the rest of the newspaper was somehow more accurate about Palestine than the baloney you just read. You turn the page, and forget what you know. That is the Gell-Mann Amnesia effect."
Crichton was talking about the news media, but the same thing's true of AI. Try to have a conversation or ask probing questions of ChatGPT Pro in areas where you have pre-existing expertise and/or already know the answers, and you'll immediately see that it's rife with errors, oversimplifications, and failures to draw inferences that would be relevant to your prompt.
But it's fucking great at giving you primers on shit you don't know much about -- frankly equivalent to new attorneys (and superior to TTTT summer associates) who often get that exact kind of task, in .01% of the time.
Obviously AI is still useful -- just like the media, as an institution, is useful to society and helps me know about shit that I wouldn't otherwise know about -- but it does make me wonder how much overconfident horseshit is lurking in its seemingly 180^180 memos on supersedeas-bond mechanics, how to find/buy a patek philippe perpetual calendar, and how to tweak my daily supplement stack and workout routine.
(http://www.autoadmit.com/thread.php?thread_id=5714813&forum_id=2#48872145)
Date: April 23rd, 2025 10:59 AM Author: Frisky razzle abode
it's just another research tool. everything is fallible.
AI is going through the same phase the internet went through, where at first people thought if something showed up in a google search it must be true, but after being burned a few times they realized it wasn't an oracle.
ai can be reasonably trusted to get to the right answer on things computers are good at, like math, or for questions you know are easy. it's a really great tool for proofreading. but for anything complex it's best used as a jumping-off point. just like how wikipedia is a good tool to begin researching something, but you gotta dig into the primary sources and be aware of bias on the platform.
(http://www.autoadmit.com/thread.php?thread_id=5714813&forum_id=2#48873547)
Date: April 23rd, 2025 8:32 PM Author: Hairraiser Azn Theatre
Using chatgpt does make you realize how much of the knowledge you rely on day-to-day is *not* on the internet: personal reputations, rumors about things coming down the pike, informal policies/practices of various institutions and offices (this even goes for big, high-level shit, like "why hasn't TRUMP nominated any judges yet" and "why is the DOJ leaning so heavily on interim appointments"... multiply this x100 when you're talking about local/obscure contexts), what judges like what arguments (I guess ~5% of this could be picked up on through westlaw or whatever), etc.
All of this still makes the quintessential junior associate theoretically replaceable by AI, since the associate is likely relying on the same internet-based information sources that AI can access. But it hasn't been super-helpful for the most front-of-mind professional questions I've been facing lately (my job mostly consists of, and I am mostly hired these days because of, hanging out with judges and politicians and rich businessmen).
(http://www.autoadmit.com/thread.php?thread_id=5714813&forum_id=2#48875433)
Date: April 24th, 2025 3:02 AM Author: Hairraiser Azn Theatre
OFS, the shortcomings of AI are (1) training material and (2) integration.
I once had Pro give me a flat-out wrong (and dangerously so) legal assertion that it elaborated on immensely and made the linchpin of an entire argument. It was on the verge of shaking my confidence in the whole thing, but I pulled the sources, and this exact error came directly from a (reputable, very much on-point) law review article. Hard to blame chatgpt for that.
And that's a topic that people *are* writing on. There's so, so much shit out there that still doesn't get memorialized for chatgpt to have any chance at seeing.
(http://www.autoadmit.com/thread.php?thread_id=5714813&forum_id=2#48876131)
Date: April 23rd, 2025 11:18 AM Author: zippy fortuitous meteor
Sounds like some bullshit made up thing a couple of annoying jews force memed so people would know who they are
Pass
(http://www.autoadmit.com/thread.php?thread_id=5714813&forum_id=2#48873587)