Kurzweil FAR UNDERESTIMATED AI advancement
Date: November 20th, 2025 12:52 PM Author: metaepistemology is trans
By a wide margin. In The Singularity Is Near, Kurzweil predicted that by 2025 we would have human-like dialogue, decent reasoning, basic general knowledge, and maybe high-level problem solving, but nothing near superhuman, and definitely not systems replacing top-tier engineers, mathematicians, or physicists. On behavior he was correct: it can engage in coherent dialogue and pass weak Turing tests. But his predictions for its ABILITIES in problem solving and reasoning were MASSIVE UNDERESTIMATES.
He assumed the developmental ladder would go something like: pattern recognition, hierarchical modeling, language, consciousness/self-modeling, general reasoning/expertise, superintelligence.
What we actually got was:
1. next token prediction
2. large scale pattern extraction
3. emergent multimodal generality
4. superhuman reasoning in narrow and semi-general tasks
5. tool based meta-cognition
6. still no convincing consciousness
(http://www.autoadmit.com/thread.php?thread_id=5800302&forum_id=2Elisa#49446656)
Date: November 20th, 2025 2:26 PM Author: ,.,...,.,.,...,.,,.
Regardless, he had the correct intuition that connectionism was true and that we would only start building intelligent computers once hardware became fast enough to train large-scale connectionist machines. A lot of AI researchers and cognitive science people thought their own pet theories were necessary to develop AI. These theories strayed far from the biological reality but were intellectually satisfying. Sometimes expertise doesn't mean much.
(http://www.autoadmit.com/thread.php?thread_id=5800302&forum_id=2Elisa#49447089)
Date: November 20th, 2025 2:28 PM Author: metaepistemology is trans
If you are denying those capabilities in any reasonable empirical sense, you are wrong. The only way you are "right" is if you define those phrases so strongly that nothing could ever satisfy them. I'll leave it at that.
Emergent multimodal generality, for instance: OpenAI's own evals have the GPT-5 base model as state of the art across math, real-world coding, and multimodal understanding. Within multimodal benchmarks it is a single system that handles text, code, and visual reasoning with shared parameters. If you mean "omniscient, perfectly robust multimodal intelligence across all domains," then sure, nobody has that, but that's not the meaning anyone uses in research.
Superhuman reasoning in narrow tasks is not debatable. There are dozens of formal, controlled benchmarks where frontier models exceed normal human performance, often exceeding top-expert-level performance. Examples include ARC-AGI-style abstract reasoning, real-world codebase patches, and spatial, logical, and scientific reasoning tasks.
(http://www.autoadmit.com/thread.php?thread_id=5800302&forum_id=2Elisa#49447090)