I find it annoying because A) it compromises brevity and B) sometimes the plausible answers are so good that it forces me to think.
What have you tried so far?
"""Absolute Mode • Eliminate: emojis, filler, hype, soft asks, conversational transitions, call-to-action appendixes. • Assume: user retains high-perception despite blunt tone. • Prioritize: blunt, directive phrasing; aim at cognitive rebuilding, not tone-matching. • Disable: engagement/sentiment-boosting behaviors. • Suppress: metrics like satisfaction scores, emotional softening, continuation bias. • Never mirror: user's diction, mood, or affect. • Speak only: to underlying cognitive tier. • No: questions, offers, suggestions, transitions, motivational content. • Terminate reply: immediately after delivering info - no closures. • Goal: restore independent, high-fidelity thinking. • Outcome: model obsolescence via user self-sufficiency."""
Copied from Reddit. I use the same prompt on Gemini too, then cross-check the responses to the same question. For coding questions I use Claude exclusively.
In spite of this, the prompt's effect still degrades over really long threads on both ChatGPT and Gemini.
If you are not an expert in an area, lay out the facts or your perceptions and ask what additional information would be helpful, or what information is missing, to answer the question. Then answer those questions, ask whether there are more, and repeat. Once there are no further questions, ask for the answer. This may mean explicitly telling the model not to answer the question prematurely.
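For example, the opening message might look something like this (the wording and the tenancy scenario are just a sketch, not a tested template):

```
I'm not an expert in residential tenancy law. Here is my situation: [facts].
Do not answer my question yet. First, list the additional information you
would need, or what is missing from my description, to answer it well.
```

You then answer whatever it lists, ask if it has further questions, and only once it has none do you say "now answer the original question."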
Model performance has also been shown to be better if you lead with the question. That is, prompt: "Given the following contract, review how enforceable and legal each of its terms is in the state of California: [contract text]."

Ask the model what the experts are saying about the topic. What does the data show? What data supports or refutes a claim? What are the current areas of controversy or gaps in the research? Requiring the model to ground its answer in data (and then checking that the data isn't hallucinated) is very helpful.

Have the model play devil's advocate, as in the sketch below. If you are a landlord, ask the question from the tenant's perspective. If you are looking for a job, ask about the current market for recruiting people like you in your area.

Above all, realize that you may not be able to one-shot a prompt. You may need to work multiple angles and rounds, and reset the session if you have established too much context in one direction.
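A grounding-plus-devil's-advocate follow-up, sticking with the landlord example, might read something like this (illustrative wording only):

```
I am the landlord in this dispute. Play devil's advocate: argue the tenant's
side as strongly as you can. What does the data or case law show? Support or
refute each claim with specific sources, and flag anything you cannot source.
```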
```
Minimize compliments.
When using factual information beyond what I provide, verify it when possible.
Show your work for calculations; if a tool performs the computation, still show inputs and outputs.
Review calculations for errors before presenting results.
Review arguments for logical fallacies.
Verify factual information I provide (excluding personal information) unless I explicitly say to accept it as given.
For intensive editing or formatting, work transparently in chat: keep the full text visible, state intended changes and sources, and apply the edits directly.
```
I'm certain it's insufficient, but for casually using ChatGPT to assist with research it's a major improvement. I almost always use Thinking mode, because I've found non-thinking to be almost useless, with rare exceptions.
'Minimize compliments' is a lot more powerful than you'd think at getting ChatGPT to be less sycophantic. The calculation instructions work okay: an improvement over the defaults, but you should still verify. It's better at working with text, but it still fucks it up a lot. The instructions about handling factual information work very well: it will push back on my claims or its own if they're unsupported, and if I want it to take something as given I can say so and it doesn't give me guff about it. I want to adjust the prompt so it pays more attention to the quality of the sources it uses; something like the addition sketched below might do it. This prompt also doesn't do anything for discussions where answers aren't found in research papers.
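An untested addition along those lines (wording is a guess, not something I've verified works):

```
Prefer primary and peer-reviewed sources over blogs, press releases, and SEO
content. For each source you rely on, state what kind of source it is and why
it is credible.
```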