Okay, so how do we actually get the hardest thing on our list done? Let's say everything else is resolved (I genuinely agree it is). I just want a nice, complete framework for remembering this article, and what's missing is how to get the top item, the hardest thing, done.
We are indeed iterating fast and solving issues, and we are aware of some problems.
It would help to understand more precisely what you mean by the last line, particularly whether you've experimented with this graph style and found it seriously inhibiting. Our assumption, in fact, isn't that the visual graph style is optimal for a LOT of depth; it is useful for some 'unknown' period of exploration. We also have a tree representation of the same information, which we find quickly becomes the better one to use once you have a lay of the topic and are more familiar with the top-level ideas. Then for specifi
I'm not sure what you mean. A local app that I run myself would be the right level of security/privacy for me. Otherwise I have to trust your ability to write secure software, which is hard to prove.
Yes, that's true. I think it is analogous to paying more to Cursor for their no-data-retention policy, so it's definitely trusting the provider further. It is of course possible that that's also not acceptable, which is what I was wondering about.
I love Obsidian too! It is unclear to me whether I want to manage this kind of service on my laptop, or whether I want cross-device functionality so I can consume on my phone whenever I want. For now, web is our platform of choice.
I was doing this for epubs in Cursor too, which is what led me to build this product! Cursor just isn't engineered to support the above across native file formats (pdfs/epubs).
NotebookLM is, in my mind, an introductory product for deep understanding: it does RAG and long podcasts well, but strong understanding requires much more. We are still building out a coherent experience, but the key ingredients are a strong exploration artifact (our map aims to be that), rich source control, context engineering (of the nature you pick up on above), and switching between modalities. We also have a powerful reader that lets you switch between various levels of summary and the original source at the chapter level, and jump between them easily. There's also friction in NotebookLM, like having to find web sources manually; our agent aims to do that from a simple text request. Going forward, we aim to add a strong note-taking experience (hierarchical, AI-assisted notes), and the agent should nudge you toward completing what you set out to achieve in a space.
In the chat agent, there's an option to turn off AI knowledge, and we are adding chapter-level context control. I think 'from the start up to this point' (and 'from this point to the end of the text') are great additions. Will add.
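To make the idea concrete, here is a minimal sketch of what chapter-level context control could look like. All names here are hypothetical, not from the actual product; it only illustrates restricting the chat agent's context to a chapter range.

```python
# Hypothetical sketch: limit the chat agent's context to a chapter range.
# `select_context`, `book`, and the chapter strings are illustrative only.

def select_context(chapters, start=0, end=None):
    """Return chapters[start:end], e.g. 'from the start up to this point'
    (set end) or 'from this point to the end of the text' (set start)."""
    end = len(chapters) if end is None else end
    return chapters[start:end]

book = ["ch1 text", "ch2 text", "ch3 text", "ch4 text"]

# Reader is at chapter 3; include only what they've read so far:
print(select_context(book, end=2))    # ['ch1 text', 'ch2 text']

# Avoid spoilers in reverse: only the remainder of the book:
print(select_context(book, start=2))  # ['ch3 text', 'ch4 text']
```

In practice the selected chapters would then be packed into the model's prompt, but the slicing above is the whole feature from the user's point of view.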
Regarding the generalization: with strong context engineering and powerful reasoning, I hope this is achievable, but building a bespoke feature for it feels hard.
I thought of this too: I would try to make ChatGPT pretend to be a human to test customer interest in a potential idea, or to conduct user interviews. If you can really build a high-fidelity human simulator, even for a limited context, I'd bet it has immense value; the value should be in the fidelity to a human in depth, not in breadth, IMO. Good luck! There was a related economics paper on this a while ago (and there seem to be more than a few papers on it now); I can't find it, or I would link it.
Is this Google ad campaign linked to the Play Store directly? How does the campaign figure out whether a 'download' or conversion happened? If it's a click/Google Search campaign, won't it stop at redirecting the user to the app's download page?