
Have you tried tweaking parameters like temperature, top_p, or seed value when sending the API request?
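For reference, a minimal sketch of setting those parameters explicitly on a Chat Completions request, assuming the official `openai` Python client (v1+); the model name and values are illustrative, not recommendations:

```python
# Sampling parameters spelled out explicitly. If you omit them, the API
# uses its own defaults, which need not match the ChatGPT web interface.
request = {
    "model": "gpt-4o",  # illustrative model choice
    "messages": [{"role": "user", "content": "Summarize this article."}],
    "temperature": 0.2,  # lower -> less random sampling
    "top_p": 1.0,        # nucleus-sampling cutoff
    "seed": 42,          # best-effort reproducibility (documented as beta)
}

# To actually send it:
# from openai import OpenAI
# client = OpenAI()
# response = client.chat.completions.create(**request)
```

Note that even with a fixed `seed`, OpenAI documents reproducibility as best-effort only (the `system_fingerprint` field can tell you when the backend changed).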

Beyond that, and given the probabilistic nature of LLM responses, I'm not sure how a reproducible "match" between the chat interface and the API could be achieved.

I work primarily with the API through my own wrapper, and I've noticed that I tend to give less detailed instructions than when I use the OpenAI chat interface, which often results in less accurate responses.



Thanks, I will. I must confess I haven't tried the top_p or seed values. My naive assumption was that the defaults in the API matched the OOTB experience of the web interface. Shame on me.



