This is probably the way to go. For the onboard step, you would use:
openclaw onboard --auth-choice openai-codex
This will have you log into your account. The model openai-codex/gpt-5.4 works well for OpenClaw. In my case, I was worried about maxing out my usage if I used it exclusively, and since I also have an Ollama subscription, I ran an Ollama model as the primary agent and reserved openai-codex/gpt-5.4 for more complex tasks. But gpt-5.4 would work fine as the main agent on its own.
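I don't have OpenClaw's exact config schema in front of me, so treat the file location and every key name below as assumptions (check the OpenClaw docs for the real schema), but the split I described, a cheaper primary model plus a stronger model for hard tasks, would look roughly like this:

```json
{
  "agents": {
    "primary": { "model": "ollama/llama3.3" },
    "complex": { "model": "openai-codex/gpt-5.4" }
  }
}
```

The model identifiers are also illustrative; use whatever names your Ollama and Codex setups actually expose.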
I haven’t looked into it, but there may be older OpenAI models with unlimited usage. If you find you are burning through your ChatGPT quota, it might be worth checking whether a lesser model can handle your main agent, keeping openai-codex/gpt-5.4 just for the more complex tasks.
The form factor isn’t as important as the memory architecture. If the device does not have a unified memory architecture (where the CPU and GPU share the same pool of memory), it will probably not be suitable for running local LLMs.
So if you are going with cloud models anyway, then really any device should work fine (I would recommend 32GB RAM, though, so you can still run some tiny local models on the CPU for things like semantic memory search). I’ve been using the GMKtec G3 for mine (just because I had a spare one), and it has worked fine with cloud models, as I mentioned.
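To give a sense of how lightweight semantic memory search is once you have embeddings, here's a minimal sketch: embed each memory with any small local embedding model (the 3-d vectors below are toy stand-ins for real embeddings) and rank memories by cosine similarity against the query's embedding.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy 3-d vectors standing in for the output of a small local embedding model.
memories = {
    "grocery list": [0.9, 0.1, 0.0],
    "deploy notes": [0.1, 0.8, 0.3],
    "birthday reminder": [0.7, 0.2, 0.1],
}

def search(query_vec, store, top_k=2):
    """Return the top_k memory keys most similar to the query embedding."""
    ranked = sorted(store, key=lambda k: cosine(query_vec, store[k]), reverse=True)
    return ranked[:top_k]

print(search([0.85, 0.15, 0.05], memories))  # → ['grocery list', 'birthday reminder']
```

This is exactly the kind of work a tiny CPU-only model handles comfortably: the expensive part is producing the embeddings, and the lookup itself is just arithmetic.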