HACKER Q&A
📣 amelius

LLMs helping you read papers and books


I'm curious what HN's experiences are with using LLMs for reading and comprehending papers and textbooks. Do you use special tools that make the process easier? Do they work?

I'm thinking of a book that you can ask questions. It could explain topics in more detail, or tell you that the thing you asked about will be explained later in the book. It would let you skip material you're already familiar with, provide references to other resources, etc.

Maybe ingesting an entire book is too much for current LLMs, but I'm sure there are ways around that.

Note: I am __not__ trying to build such a tool myself.


  👤 instagib Accepted Answer ✓
400 pages at 400 words per page is 160,000 words; at ~1.33 tokens per word, that's ≈ 212,800 tokens.
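The same estimate in code (a back-of-the-envelope sketch; the 1.33 tokens-per-word ratio is a rough heuristic for English text and varies by tokenizer):

```python
# Back-of-the-envelope token estimate for a book.
# 1.33 tokens/word is a rough heuristic for English;
# the true count depends on the tokenizer.
pages = 400
words_per_page = 400
tokens_per_word = 1.33

words = pages * words_per_page           # 160,000 words
tokens = round(words * tokens_per_word)  # ~212,800 tokens
print(f"{words:,} words ≈ {tokens:,} tokens")
```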

Most models, local or cloud, have issues with very long contexts even if they “support” a 1M-token context window.

I’ve tried local models, and at around 30K tokens of context they start making up or summarizing content rather than retaining it, and will not fully reproduce the input.

You could try re-training a local model on the book, or implementing RAG (retrieval-augmented generation).

I don’t know how the latest local models would handle a 200K context window, but RAG may help keep the context clean; a minimal sketch of that retrieval loop is below.
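A minimal sketch of the RAG idea, assuming hypothetical `embed` and `ask_llm` helpers (placeholders for whatever embedding model and LLM you run, not any specific library's API): chunk the book once, embed the chunks, then retrieve only the few most relevant chunks per question so the prompt stays small.

```python
import numpy as np

# Hypothetical helpers -- plug in your own embedding model and LLM.
def embed(texts: list[str]) -> np.ndarray:
    """Placeholder: return one embedding vector per input text."""
    raise NotImplementedError("e.g. a local sentence-embedding model")

def ask_llm(prompt: str) -> str:
    """Placeholder: call your local or cloud chat model."""
    raise NotImplementedError("e.g. a llama.cpp or API call")

def chunk(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    """Split the book into overlapping character chunks."""
    return [text[i:i + size] for i in range(0, len(text), size - overlap)]

def build_index(book_text: str) -> tuple[list[str], np.ndarray]:
    chunks = chunk(book_text)
    vecs = embed(chunks)
    # Normalize so a dot product gives cosine similarity.
    return chunks, vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def answer(question: str, chunks: list[str], vecs: np.ndarray, k: int = 5) -> str:
    q = embed([question])[0]
    q = q / np.linalg.norm(q)
    top = np.argsort(vecs @ q)[-k:][::-1]  # indices of the k most similar chunks
    context = "\n\n".join(chunks[i] for i in top)
    # Only the retrieved chunks enter the prompt, never the whole book.
    return ask_llm(f"Answer using only this excerpt from the book:\n\n"
                   f"{context}\n\nQuestion: {question}")
```

Whether this beats stuffing the whole book into a 200K context depends on the model; retrieval trades full-text recall for a prompt small enough that the model can actually attend to it.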


👤 isuckatcoding
NotebookLM?