reports of 60% yield for TSMC's 2nm node
Date: June 20th, 2025 1:57 PM Author: Oh, you travel?
we should see 2nm devices in the latter half of 2025. the latest NVIDIA GPUs are on a 5nm-class process, for reference.
middle eastern shitholes lobbing bombs at their useless buildings while Taiwan manufactures the Future.
get ready for faster Screens.
(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2#49035154)
Date: June 20th, 2025 2:04 PM Author: Taylor Swift is not a hobby she is a lifestyle (🇺🇸 🇵🇱)
180
Should I wait, then? I'm about to buy a Threadripper PRO 7985WX
(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2#49035184)
Date: June 20th, 2025 2:29 PM
Author: ,.,....,...,,,..,..,.,..,.,.,.,.
there are several other improvements coming in the near future: higher memory bandwidth, more memory on the chips, better cooling. the time to train frontier-scale language models is likely to go from months to weeks by the late 2020s. we won't have to deal with the current slow pace of AI development where we only see meaningful model improvements every 2-3 months.
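a rough back-of-envelope sketch of that months-to-weeks claim. every number here is hypothetical (total training FLOP, cluster throughput, and utilization are all made up); the point is just that wall-clock time scales inversely with sustained throughput:

```python
# back-of-envelope sketch (all figures hypothetical) of how a hardware
# throughput multiplier compresses wall-clock training time.

def training_days(total_flop: float, flop_per_sec: float, utilization: float) -> float:
    """Wall-clock days for a training run, given total compute and sustained throughput."""
    return total_flop / (flop_per_sec * utilization) / 86_400

TOTAL_FLOP = 1e26     # hypothetical frontier-scale training run
CLUSTER_TODAY = 4e19  # hypothetical cluster peak, FLOP/s
UTILIZATION = 0.4     # assumed sustained fraction of peak

print(training_days(TOTAL_FLOP, CLUSTER_TODAY, UTILIZATION))      # ~72 days: months
print(training_days(TOTAL_FLOP, 4 * CLUSTER_TODAY, UTILIZATION))  # ~18 days: weeks
```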
(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2#49035265)
Date: June 20th, 2025 2:37 PM
Author: ,.,....,...,,,..,..,.,..,.,.,.,.
i don't really buy the story that progress has slowed much. GPT-4.5 sucked, but the last 6 months have seen fairly significant model improvements. even comparing the first generation of Gemini 2.5 Pro from late March to the current version shows pretty large improvements on most benchmarks.
(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2#49035281)
Date: June 20th, 2025 2:55 PM
Author: ,.,....,...,,,..,..,.,..,.,.,.,.
Well, if you completely dismiss the benchmarks, then there is really nowhere to go with the argument. It turns into a feelings-based argument about model performance. People are notoriously bad judges of this: look at how many people still insist, a couple of weeks after a model is released, that it became dumber.
The benchmark-fitting argument does look strained, though, because 1) model developers know that training specifically on benchmarks and releasing weak models that don't actually perform would destroy their credibility, and 2) the benchmarks are increasingly broad, and model performance across different domains shows very similar rank orderings. When people release new benchmarks that the models haven't seen, this remains true. Performance would be much more uneven if they were benchmark-fitting.
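A quick sketch of that rank-ordering check, with hypothetical placeholder scores (not real results): compute the Spearman rank correlation between model scores on a familiar benchmark and on a new, held-out one. Benchmark-fitting predicts the correlation should be weak; in practice it stays high.

```python
# sketch of the rank-ordering check: if developers were benchmark-fitting,
# model rankings should diverge on benchmarks they haven't seen.
# the scores below are hypothetical placeholders, not real results.
from scipy.stats import spearmanr

bench_familiar = [78.2, 71.5, 69.9, 64.0, 58.3]  # hypothetical scores, familiar benchmark
bench_held_out = [81.0, 74.8, 73.1, 65.5, 61.2]  # hypothetical scores, new held-out benchmark

rho, p = spearmanr(bench_familiar, bench_held_out)
print(f"Spearman rho = {rho:.2f}")  # rho near 1.0 means the rank ordering is preserved
```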
(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2#49035341)
Date: June 20th, 2025 4:51 PM Author: ,.,.,.,....,.,..,.,.,.
(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2#49035612)
Date: June 20th, 2025 5:09 PM
Author: ,.,....,...,,,..,..,.,..,.,.,.,.
better hardware and more compute have been the primary way models have improved over the last decade. it's not just about being able to train on more data, with more parameters, with more aggressive regularization, etc.; it also lets researchers try out more ideas, and at larger scale. algorithmic improvements depend heavily on being able to iterate rapidly. agent-based coders and approaches like AlphaEvolve make compute even more important for algorithmic improvement.
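a toy illustration of the ideas-per-unit-compute point, with all figures hypothetical: the number of candidate experiments a lab can run per month scales linearly with cluster throughput.

```python
# toy illustration (all figures hypothetical): experiments a lab can run
# per month scale linearly with cluster throughput.

def experiments_per_month(cluster_flop_per_sec: float, flop_per_experiment: float) -> float:
    seconds_per_month = 30 * 86_400
    return cluster_flop_per_sec * seconds_per_month / flop_per_experiment

COST_PER_RUN = 1e23  # hypothetical FLOP for one ablation-scale experiment

print(experiments_per_month(1e19, COST_PER_RUN))  # ~259 runs/month today
print(experiments_per_month(4e19, COST_PER_RUN))  # ~1037 runs/month at 4x throughput
```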
(http://www.autoadmit.com/thread.php?thread_id=5740903&forum_id=2#49035668)