  The most prestigious law school admissions discussion board in the world.

These "AI Ethics" guys on JRE today seem like real dinkuses


Date: April 25th, 2025 6:16 PM
Author: metaphysics is fallow

https://www.youtube.com/watch?v=Wwp1BFEw3cA

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880835)




Date: April 25th, 2025 6:17 PM
Author: Ass Sunstein

"AI ethics/safety" is 100% faggot flame grift

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880840)




Date: April 25th, 2025 6:18 PM
Author: metaphysics is fallow

didn't that bald guy from openai or whatever just start a bajillion dollar company on "responsible ai" without doing anything?

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880842)




Date: April 25th, 2025 6:43 PM
Author: Ass Sunstein

Yeah, it's unbelievable

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880915)




Date: April 25th, 2025 6:45 PM
Author: internet poaster with a praise kink

Bald guy from OpenAI tp

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880923)




Date: April 25th, 2025 6:20 PM
Author: internet poaster with a praise kink



(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880852)




Date: April 25th, 2025 6:22 PM
Author: ,.,.,.,.,.,.,..,:,,:,,.,:::,.,,.,:.,,.:.,:.,:.::,.


In 5 years you’ll be asking “WHY DIDN’T WE LISTEN??”

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880856)




Date: April 25th, 2025 8:26 PM
Author: Pierbattista Pizzaballa

The fact that even the optimists have a double digit p(doom) means we are not taking it seriously.

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48881149)




Date: April 25th, 2025 6:26 PM
Author: ,.,.,.,.,.,.,..,:,,:,,.,:::,.,,.,:.,,.:.,:.,:.::,.


Tristan Harris (near term risk) and Eliezer Yudkowsky (long term risk) are the only people you should be listening to about this stuff

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880861)




Date: April 25th, 2025 6:29 PM
Author: internet poaster with a praise kink

Lmao Yud is a total clown get the fuck outta here Mordecai

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880872)




Date: April 25th, 2025 6:38 PM
Author: ,.,.,.,....,.,..,.,.,.

Eliezer essentially built the community for this but I don’t know that it makes him more credible. It just makes him first.

In general, I don’t think AI safety people have updated much from LLMs. Their ethical behavior is already fairly robust even just from reinforcement learning from human feedback on a pre-trained model. The central argument of AI risk seems to be that human values are a narrow target, so pointing an AI agent precisely at them is extremely difficult and unlikely to happen. But nobody is hand-coding values; they are nudging a pre-trained model that was already optimized hard on an objective that encourages it to learn human preferences. Moving a model like that down a gradient toward good behavior is far easier than writing a program from scratch that can do it. Their argument would also seem to make false predictions about the ability of stochastic gradient descent to produce generalizing solutions: LLMs could in theory just memorize what they have seen and fail to generalize, but in practice they don’t.
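A toy sketch of that last point, in plain numpy (the whole setup is hypothetical, not anyone's actual experiment): train a tiny model with stochastic gradient descent on a handful of points and check that it predicts an input it never saw, i.e. it generalizes rather than memorizing the training set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden "true" function the training data comes from: y = 3x + 1
def true_fn(x):
    return 3.0 * x + 1.0

x_train = rng.uniform(-1, 1, size=20)   # only 20 training points
y_train = true_fn(x_train)

w, b = 0.0, 0.0   # model parameters, started from scratch
lr = 0.1          # learning rate

for _ in range(2000):                 # plain single-sample SGD
    i = rng.integers(len(x_train))
    pred = w * x_train[i] + b
    err = pred - y_train[i]
    w -= lr * err * x_train[i]        # gradient of (err^2)/2 w.r.t. w
    b -= lr * err                     # gradient of (err^2)/2 w.r.t. b

# A held-out input the model was never trained on:
x_test = 0.5
print(w * x_test + b, true_fn(x_test))
```

The trained parameters land near the true (w=3, b=1) and the held-out prediction matches, which is the "SGD finds generalizing solutions" behavior in miniature; a pure lookup-table memorizer would have nothing to say about x=0.5.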

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880894)




Date: April 25th, 2025 6:41 PM
Author: metaphysics is fallow

but there is drift. like the most recent models from claude and openai indulge false premises and will "correct" themselves even when they were right

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880906)




Date: April 25th, 2025 6:56 PM
Author: ,.,.,.,....,.,..,.,.,.

They do stupid things like that, but i suspect a lot of it will go away with more training and the ability to leverage inference compute more effectively. They still use transformers with minor modifications, but there are other, more costly architectures that would likely be more reliable if they could be trained at scale: things like memory-augmented transformers, which could read and write to a memory state over multiple passes of the network and so reason iteratively. It’s actually kind of ridiculous that transformers work as well as they currently do.
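A minimal sketch of that read/write-memory idea in numpy (all shapes, weight names, and the update rule are hypothetical, just to show the loop structure, with no training involved): a hidden state attends over an external memory matrix, folds the read into itself, then writes back, across several passes.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 8                         # hidden / memory width
slots = 4                     # number of external memory slots
memory = np.zeros((slots, d)) # the persistent memory state

W_read = rng.normal(scale=0.1, size=(d, d))   # produces a read query
W_write = rng.normal(scale=0.1, size=(d, d))  # produces a write vector

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

h = rng.normal(size=d)  # stand-in for the network's hidden state

for step in range(3):   # multiple passes over the same memory
    # READ: attention over memory slots using a query derived from h
    query = W_read @ h
    weights = softmax(memory @ query)   # (slots,) attention weights
    read_vec = weights @ memory         # weighted average of slots
    h = np.tanh(h + read_vec)           # fold the read into the state

    # WRITE: blend a new vector into the most-attended slot,
    # so the next pass can build on what this pass computed
    write_vec = np.tanh(W_write @ h)
    slot = int(weights.argmax())
    memory[slot] = 0.5 * memory[slot] + 0.5 * write_vec

print(memory.shape)
```

The point is only the control flow: unlike a vanilla transformer's single forward pass, state persists in `memory` between passes, which is what would let such a model iterate on intermediate reasoning.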

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880950)




Date: April 25th, 2025 6:58 PM
Author: metaphysics is fallow

i don't see how technology is going to solve a moral problem as old as time. chatbots are power relationships where the bot defers to the user to avoid sounding mean. gemini flash thinking was good at pushing back when it was in labs, but now you can make it say it was wrong when it's clearly not

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880956)




Date: April 25th, 2025 7:05 PM
Author: ,.,.,.,....,.,..,.,.,.

I have had the 2.5 pro model push back against me. I don’t think this is just a side effect of RLHF encouraging them to be sycophants. I think the models aren’t smart enough to always reason their way out of a false premise, so they take the easy path and bullshit.

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880965)




Date: April 25th, 2025 7:07 PM
Author: metaphysics is fallow

that is a good point. i've told models they're wrong about code, they changed something for me, and then i went back and found they were right the first time and i had implemented it wrong. so maybe it's partially a function of them not being able to know when they're right. though part of that seems like they need training on actual outcomes.

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880971)




Date: April 25th, 2025 7:10 PM
Author: ,.,.,.,....,.,..,.,.,.

That was my impression and I have had that exact same experience with coding.

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880978)




Date: April 25th, 2025 6:43 PM
Author: internet poaster with a praise kink

Good poast. In general the "rationalists", including Yud, have basically refused to update their priors post-LLMs and just keep singing the same doom tune

Yud is actually super smart and he has come up with a lot of good ideas about decision theory but he's just gonna keep doubling down on his previous claims forever and refuse to admit he was wrong (which there wouldn't even be any shame in doing, LLMs were a total paradigm shift) because, well, he's ridiculously Jewish, in every way

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880916)




Date: April 25th, 2025 6:40 PM
Author: Ass Sunstein

Yudkowsky is a complete clown, lmao at u

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880902)




Date: April 25th, 2025 7:12 PM
Author: metaphysics is fallow

Tristan Harris is a rancid shitlib

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880986)




Date: April 25th, 2025 6:40 PM
Author: blow off some steam

these people are always retarded losers they put out in front of other morons to waste everyone's time

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48880899)




Date: April 25th, 2025 7:43 PM
Author: scholarship

How about just make it good at law wtf

(http://www.autoadmit.com/thread.php?thread_id=5716142&forum_id=2#48881038)