I've been trying to use the OpenAI API for the last two weeks or so (GPT-4 mostly). This article rubs me the wrong way. "GPT Best Practices" indeed.
Most of my calls end in a timeout (on their side) after 10 minutes. I get 524 and 502 errors, sometimes 429, and sometimes a mildly amusing 404 Model not found. The only way I can get reasonable responses is to limit my requests to less than 1400 tokens, which is too little for my application.
And on top of that they actually charge me for every request. Yes, including those 524s, 502s and 429s, where I haven't seen a single byte of a response. That's fraudulent. I reported this to support twice; a week later I still haven't heard back.
Their status page happily states that everything is just fine.
From the forums it seems I'm not the only one experiencing these kinds of problems.
I'd argue "GPT Best Practices" should include having working APIs, support that responds, and not charging customers for responses that are never delivered.
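For now, the best I can do is wrap every call in a retry loop so the transient failures don't kill the whole job. A minimal stdlib sketch (treating 429/502/524 as retryable is my own assumption, not documented OpenAI behavior, and `call` stands in for whatever actually performs the request):

```python
import random
import time

# Status codes treated as transient; 404 etc. fail immediately.
TRANSIENT = {429, 502, 524}

class HTTPError(Exception):
    """Carries the HTTP status code of a failed request."""
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

def with_retries(call, attempts=4, base_delay=1.0):
    """Call `call()`, retrying transient HTTP failures with backoff."""
    for attempt in range(attempts):
        try:
            return call()
        except HTTPError as err:
            # Give up on non-transient errors or on the last attempt.
            if err.status not in TRANSIENT or attempt == attempts - 1:
                raise
            # Exponential backoff with jitter before the next try.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
```

It doesn't fix the billing problem, of course; you still pay for every failed attempt.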
That's odd. I have been heavily using the GPT-4 API (nearly 100 requests a day) and didn't notice any errors like that. Maybe one or two errors with a really long chat history.
Are your requests above 1400 tokens in size? Requests, not replies.
Small requests (like what most people need) are just fine. It's the larger ones that begin to slow down quickly and then break down completely once you get above 1400 tokens.