# OpenClaw Context Upgrade

## Check current state first

- Run `session_status` and record:
  - the current model
  - the current `Context` value
  - whether the session already shows `1.0m`
- If the user is only asking "why isn't it 1M yet?", diagnose first. Do not change config blindly.
## Inspect the exact config path before editing

- Inspect `agents.defaults.contextTokens` with `gateway config.schema.lookup`.
- If model selection matters, also inspect `agents.defaults.model`.
- Use `gateway config.patch` for changes.
- Do not tell the user to run the `openclaw` CLI for this workflow.
## Upgrade workflow

1. Make sure the target session is using a high-context model.
   - Prefer the model alias `GPT-5.4` for this workflow.
   - During verification, avoid relying on fallback behavior.
   - If needed, use `session_status` with a session model override to pin the current session to `GPT-5.4`.
2. Raise the default context budget.
   - Set `agents.defaults.contextTokens = 1000000`.
   - If the user's goal is future sessions, patch defaults.
   - If the user only wants diagnosis, explain the change without applying it.
3. Let the restart happen.
   - `gateway config.patch` restarts OpenClaw automatically.
   - Always include a clear `note` so the user gets a useful completion message after the restart.
4. Force a fresh chat session.
   - Tell the user to run `/new` or `/reset`.
   - Old chats often keep the earlier session budget and mislead verification.
5. Verify after the fresh session starts.
   - Run `session_status` again.
   - Success looks like `Context: .../1.0m` or similar.
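As a sketch only: assuming the gateway config is JSON-shaped (the file layout and the `"model"` value here are assumptions; only the `agents.defaults.contextTokens` and `agents.defaults.model` key paths come from this workflow), the patched defaults would look roughly like:

```json
{
  "agents": {
    "defaults": {
      "model": "GPT-5.4",
      "contextTokens": 1000000
    }
  }
}
```

Apply the change through `gateway config.patch` rather than editing any file by hand, since the patch path also triggers the automatic restart described above.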
## Diagnose common failure modes

### Still shows `.../272k`

Check these in order:

1. The chat was not restarted with `/new` or `/reset`.
2. The session is still on the wrong model or a fallback model.
3. The provider/model pair is returning a lower real cap.
4. The config patch did not target `agents.defaults.contextTokens` correctly.
## Tell the truth about limits

- Do not promise 1M if the provider is actually returning a smaller cap.
- OpenClaw can request a larger budget, but it cannot exceed the real model/provider limit.
- If verification still shows `272k`, say so plainly and explain whether the blocker is session reuse, a model mismatch, or a provider cap.
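The "provider limits still win" rule can be stated as a one-line floor, useful when explaining the outcome to the user (the function name is illustrative; the `272k` figure is taken from the failure mode above):

```python
def effective_context(requested_tokens: int, provider_cap_tokens: int) -> int:
    """OpenClaw can request any budget, but the provider cap is a hard ceiling."""
    return min(requested_tokens, provider_cap_tokens)
```

So a configured `contextTokens = 1000000` against a provider that actually serves 272k still yields a 272k session, and that is what the user should be told.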
## Default reply pattern

Use this sequence when answering:

1. State the current observed limit.
2. Give the exact fix order:
   1. Pin the model to `GPT-5.4`.
   2. Set `agents.defaults.contextTokens = 1000000`.
   3. Start a fresh session with `/new` or `/reset`.
   4. Verify with `session_status`.
3. End with the caveat that provider limits still win.