Opus 4.5 is disgustingly good
Date: December 21st, 2025 12:59 PM
Author: .,.,...,..,.,.,:,,:,...,:::,...,:,.,.:..:.
Opus 4.5 is up to 4 hours and 49 minutes on the METR time-horizon benchmark, which measures the length of SWE/AI-research-type tasks (in terms of human work time) that models can complete at a 50% success rate. That's a big jump over 5.1 Max, which sat at 2 hours 53 minutes, and faster than the overall trend of doubling every 7 months. With any luck, models will be capable of substantially automating AI research before 2030 and will set off an intelligence explosion.
https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/
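For scale, here's a rough Python back-of-the-envelope on what that trend implies. The one-work-month (~167 hour) threshold as a proxy for "substantially automating AI research" and the gap between model releases are my own illustrative assumptions, not METR numbers:

# Rough extrapolation of the METR time-horizon trend.
# Assumptions (not from the post): the 7-month doubling holds going forward,
# the clock starts at the Opus 4.5 data point, and "substantially automating
# AI research" is proxied by a ~167-hour (one work-month) task horizon.
import math

current_horizon_hours = 4 + 49 / 60    # Opus 4.5: 4 h 49 min (50% time horizon)
previous_horizon_hours = 2 + 53 / 60   # prior figure cited in the post: 2 h 53 min
doubling_time_months = 7               # METR's reported overall trend

def implied_doubling_time(months_between_models: float) -> float:
    """Doubling time implied by this jump, given the gap (in months) between
    the two models' releases -- the post doesn't state it, so it's a parameter."""
    growth = current_horizon_hours / previous_horizon_hours
    return months_between_models * math.log(2) / math.log(growth)

# Months until a ~167-hour horizon if the 7-month doubling trend continues.
target_hours = 167
months_to_target = doubling_time_months * math.log2(target_hours / current_horizon_hours)
print(f"~{months_to_target:.0f} months to a {target_hours}-hour horizon at the trend rate")
print(f"implied doubling time for an illustrative 3-month release gap: "
      f"{implied_doubling_time(3):.1f} months")

Run as-is it prints roughly 36 months, i.e. a ~167-hour horizon around late 2028 if the 7-month doubling holds from the December 2025 data point, which is what the "before 2030" claim is leaning on.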
(http://www.autoadmit.com/thread.php?thread_id=5804093&forum_id=2#49527643)
Date: December 21st, 2025 1:53 PM
Author: ,.,....,..,.,.,,,,..,..,.,..,.,.,.,...
They are always claiming this for new model releases and no objective evidence ever materializes for it. It’s almost certainly a psychological bias rather than reality. There is substantial variation in how models respond to a particular problem just based on how they are prompted. It’s hard for a user to reliably measure model capabilities over time based on intuition alone.
(http://www.autoadmit.com/thread.php?thread_id=5804093&forum_id=2#49527743)