An experiment comparing information-retrieval performance across the built-in RAG of OpenAI's Assistants API, GPT-4 Turbo with context-window stuffing, and LlamaIndex with GPT-4.
Pretty striking results, especially how the Assistants API beats LlamaIndex and how useless context-window stuffing turns out to be.
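For anyone wanting to reproduce something like this, here's a rough sketch of the three setups under comparison. It assumes the v1 OpenAI Python SDK and the original Assistants API (the "retrieval" tool, later renamed file_search), plus a pre-0.10 LlamaIndex install; the ./docs corpus, file path, and question are stand-ins, not the experiment's actual data:

```python
import time
from openai import OpenAI
from llama_index import SimpleDirectoryReader, VectorStoreIndex

client = OpenAI()
question = "What does the report conclude about latency?"  # placeholder query
docs = SimpleDirectoryReader("./docs").load_data()         # placeholder corpus

# 1) Context-window stuffing: dump the whole corpus into one GPT-4 Turbo prompt
#    and let the 128k window absorb it.
stuffed = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[
        {"role": "system",
         "content": "Answer from this material:\n" + "\n\n".join(d.text for d in docs)},
        {"role": "user", "content": question},
    ],
)
print("stuffing:", stuffed.choices[0].message.content)

# 2) Assistants API RAG: upload a file and let the built-in retrieval tool
#    chunk, embed, and fetch relevant passages behind the scenes.
file = client.files.create(file=open("./docs/report.pdf", "rb"), purpose="assistants")
assistant = client.beta.assistants.create(
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[file.id],
)
thread = client.beta.threads.create(messages=[{"role": "user", "content": question}])
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status not in ("completed", "failed", "expired"):
    time.sleep(1)  # runs are async; poll until this one settles
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
reply = client.beta.threads.messages.list(thread_id=thread.id).data[0]  # newest first
print("assistants:", reply.content[0].text.value)

# 3) LlamaIndex RAG: embed the docs locally, retrieve the top-k chunks,
#    then answer with GPT-4 over that much smaller context.
index = VectorStoreIndex.from_documents(docs)
print("llamaindex:", index.as_query_engine(similarity_top_k=4).query(question))
```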
The post-AI way to engage with the podcast ecosystem. The AI listens to every episode that comes out and finds the best coherent segments for every user (based on their textual self-description, and soon their Twitter graph as well).
It sends these segments to any podcast app via a personalized RSS feed.
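The post doesn't include implementation details, but since podcast apps simply play whatever enclosures a feed lists, delivering segments this way needs nothing beyond standard RSS 2.0 with the usual iTunes extensions. A hypothetical sketch of one user's feed (helper name, URLs, and clip metadata all made up for illustration):

```python
from xml.sax.saxutils import escape

def segment_item(title, summary, audio_url, length_bytes, duration):
    # Each <item> with an <enclosure> looks like a normal episode to the app,
    # so a clipped segment plays exactly like any other podcast download.
    return (
        "  <item>\n"
        f"    <title>{escape(title)}</title>\n"
        f"    <description>{escape(summary)}</description>\n"
        f'    <enclosure url="{escape(audio_url)}" length="{length_bytes}" type="audio/mpeg"/>\n'
        f"    <itunes:duration>{duration}</itunes:duration>\n"
        "  </item>"
    )

item = segment_item(
    title="Vector DBs vs. brute force (12 min)",
    summary="Clipped from episode 214 of ExamplePod.",
    audio_url="https://example.com/clips/user-123/ep214-seg2.mp3",
    length_bytes=11_520_000,
    duration="00:12:00",
)
feed = f"""<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd">
  <channel>
    <title>Personalized segments for user-123</title>
    <link>https://example.com/feeds/user-123</link>
    <description>The best podcast segments, picked for you</description>
{item}
  </channel>
</rss>"""
print(feed)
```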
The podcast episodes become the source material for a newsfeed of personalized segments, which means you can get value out of pod episodes without having to listen to the full thing.
Segments also jump straight to the point, skipping the long intros, ads & chit-chat we somehow tolerate in pods (but you'd never watch a YouTube video that took 15 minutes to get going).
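How segments get matched to a user's textual self-description isn't specified; a plausible minimal version embeds the segment transcripts and the profile, then ranks by cosine similarity. The embedding model choice and the sample texts below are assumptions, not the product's actual pipeline:

```python
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    # Embed a batch of strings; model choice here is an assumption.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

user_profile = "Startup founder into ML infrastructure and podcast tech."  # made-up profile
segments = [
    "Segment transcript: the hosts debate vector databases vs. brute force search...",
    "Segment transcript: an interview about training for a first marathon...",
]

u = embed([user_profile])[0]
S = embed(segments)
# Cosine similarity between each segment and the profile, highest first.
scores = S @ u / (np.linalg.norm(S, axis=1) * np.linalg.norm(u))
for score, text in sorted(zip(scores, segments), reverse=True):
    print(f"{score:.3f}  {text[:60]}")
```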
Is there an option for using other LLMs with it?