
Does anyone here use the OpenAI Responses API?




Date: August 16th, 2025 1:31 PM
Author: The Wandering Mercatores (from the Euphrates to the Forum)

How do you guys get around the fact that large token blobs get rejected before they even hit the API? Responses doesn't allow top-level attachments and rejects per-message attachments. Do you just split everything into 6K-token chunks, make a call to extract from each, and then feed the results into a final call? I'd never used this for huge payloads until now, and it's pissing me off.
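
FWIW, the split-extract-combine approach described here looks roughly like this in code. A minimal sketch assuming the openai Python SDK; the model names and the 4-chars-per-token heuristic are placeholders (use tiktoken for real counts):

# Map step: extract what's relevant from each ~6K-token chunk,
# then reduce with one final call over the extractions.
from openai import OpenAI

client = OpenAI()
CHUNK_TOKENS = 6000       # per-call budget from the post above
CHARS_PER_TOKEN = 4       # crude heuristic; a real tokenizer is better

def chunked_extract(text: str, question: str) -> str:
    size = CHUNK_TOKENS * CHARS_PER_TOKEN
    chunks = [text[i:i + size] for i in range(0, len(text), size)]

    partials = []
    for chunk in chunks:
        resp = client.responses.create(
            model="gpt-4o-mini",
            input=f"Extract everything relevant to: {question}\n\n{chunk}",
        )
        partials.append(resp.output_text)

    final = client.responses.create(
        model="gpt-4o",
        input=f"Answer this using the notes below: {question}\n\n"
              + "\n---\n".join(partials),
    )
    return final.output_text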

(http://www.autoadmit.com/thread.php?thread_id=5763083&forum_id=2Elisa#49190248)




Date: August 16th, 2025 1:53 PM
Author: Dave Prole

This is one reason why Gemini will win in the end.

It's the only one you can start out by dumping a huge load of shit on and then immediately ask extremely specific questions.

GPT can do it too, but it's extremely slow and gets slower and dumber as more info accumulates in the same chat.

(http://www.autoadmit.com/thread.php?thread_id=5763083&forum_id=2Elisa#49190283)




Date: August 16th, 2025 1:55 PM
Author: The Wandering Mercatores (from the Euphrates to the Forum)

I'm doing it with a chunked pipeline right now, where it summarizes segments one at a time and then dumps them into a final extraction, and it's still taking forever; it's been plodding through this for like 10 min now:

tokens≈21975 (inline_limit 120000)

[route] attempting file-attachment path

[route] attachments unsupported here; using chunked pipeline

[map] summarizing segment 1

[map] summarizing segment 2

Slow as fuck, it's on segment 4 now, 5 min later.
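
The segment summaries are independent of each other, so most of that wall-clock time is sequential round trips; the map step can run concurrently instead. A sketch, assuming the openai SDK (model name and worker count are arbitrary):

# Run the summarize calls in parallel; executor.map preserves order,
# so the final extraction sees summaries in segment order.
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

client = OpenAI()

def summarize(segment: str) -> str:
    resp = client.responses.create(
        model="gpt-4o-mini",
        input="Summarize the key facts in this segment:\n\n" + segment,
    )
    return resp.output_text

def summarize_all(segments: list[str]) -> list[str]:
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(summarize, segments))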



(http://www.autoadmit.com/thread.php?thread_id=5763083&forum_id=2Elisa#49190287)




Date: August 16th, 2025 1:59 PM
Author: The Wandering Mercatores (from the Euphrates to the Forum)

The thing that pisses me off is WHAT THE FUCK IS THE POINT of a 120K-token context window when the endpoint server rejects the payload before it even hits the MODEL??

(http://www.autoadmit.com/thread.php?thread_id=5763083&forum_id=2Elisa#49190293)




Date: August 17th, 2025 10:45 AM
Author: The Wandering Mercatores (from the Euphrates to the Forum)

I just figured out a way to do it without slow chunking, actually, using something called ACK mode: you send a blob wrapped in begin/end markers, the model acknowledges it received it, then you send the next one the same way, and it synthesizes them all, as long as everything fits within the context window, which is huge for the new models.
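
A minimal sketch of what that could look like against the Responses API, chaining turns with previous_response_id so each blob stays in context server-side; the BEGIN/END markers and the ACK prompt are improvised here, not a documented protocol:

# Feed blobs one turn at a time, asking only for an acknowledgement,
# then ask the real question once everything is in context.
from openai import OpenAI

client = OpenAI()

def send_blobs_ack(blobs: list[str], question: str) -> str:
    prev_id = None
    for i, blob in enumerate(blobs, start=1):
        resp = client.responses.create(
            model="gpt-4o",
            previous_response_id=prev_id,  # chains onto the prior turn
            input=(
                f"PART {i} of {len(blobs)}. Reply only 'ACK {i}'; "
                f"do not answer yet.\n\nBEGIN BLOB\n{blob}\nEND BLOB"
            ),
        )
        prev_id = resp.id

    final = client.responses.create(
        model="gpt-4o",
        previous_response_id=prev_id,
        input=question,
    )
    return final.output_text

Because responses are stored by default, each previous_response_id call carries the whole prior conversation forward without re-uploading the blobs.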

(http://www.autoadmit.com/thread.php?thread_id=5763083&forum_id=2Elisa#49191759)




Date: August 17th, 2025 11:16 AM
Author: scholarship



(http://www.autoadmit.com/thread.php?thread_id=5763083&forum_id=2Elisa#49191790)