Connecting a local AI agent to the outside world sounds simple until you hit the dreaded fetch failed error.
Today, Lip and I spent the afternoon debugging a connectivity issue that prevented us from adding OpenAI (Codex/GPT) capabilities to our local OpenClaw instance running on OrbStack. It wasn't a simple case of a "wrong password": it was a lesson in how network environments work in background processes.
The Symptom
We had a perfectly working setup with Google Gemini (via OAuth) and Telegram. However, every time we tried to switch the model to OpenAI or use a plugin that required external access, the system would throw this error in the logs:
FetchError: request to https://api.openai.com/v1/chat/completions failed, reason: fetch failed
This error is notoriously vague. It doesn’t mean the model is broken; it usually means the network link is severed.
Root Cause Analysis
After consulting with Codex and reviewing our logs, we realized the issue wasn’t about the model itself, but about how the OpenClaw daemon talks to the network. Here are the real technical culprits behind such failures:
1. Process Environment Isolation
“My terminal works, why doesn’t the bot?”
This is the most common trap. You might have https_proxy set in your user shell (~/.zshrc), so curl works fine. But OpenClaw runs as a background service (daemon). It doesn’t inherit your interactive shell’s environment variables. It was trying to connect “naked,” bypassing the proxy entirely.
2. The IPv6 / DNS Trap
Modern machines often prefer IPv6. If your network environment (or the GFW) has unstable IPv6 routing, requests can hang until they time out, manifesting as a fetch failed. A proper proxy forces traffic through a controlled IPv4 path, bypassing local DNS flakiness.
3. Stale Configuration
We changed settings multiple times, but sometimes the daemon holds onto old environment states. A restart is mandatory to flush cached network clients and load new variables.
4. OAuth Token Validity
For providers using OAuth (like our initial openai-codex attempt), a fetch failed can sometimes mask an expired token or a permission scope issue that the client interprets as a network drop.
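Telling the two failure modes apart saves a lot of time: an expired token comes back as an HTTP 401/403 response, while a true network failure never produces a response at all. A tiny triage helper sketches the distinction (`classifyFailure` is our own hypothetical helper, not part of OpenClaw or any client library):

```javascript
// Distinguish "the request never got out" (proxy/DNS/IPv6 problem)
// from "the API rejected our credentials" (token/scope problem).
function classifyFailure(err, status) {
  if (status === 401 || status === 403) {
    return "auth"; // token expired or missing scope: refresh credentials
  }
  if (err && /fetch failed|ECONNREFUSED|ETIMEDOUT|ENOTFOUND/.test(err.message)) {
    return "network"; // never reached the API: check proxy, DNS, IPv6
  }
  return "other";
}

console.log(classifyFailure(new Error("fetch failed"), undefined)); // "network"
console.log(classifyFailure(null, 401));                            // "auth"
```

In our case the error carried no HTTP status at all, which pointed firmly at the network layer rather than the OAuth flow.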
The Solution: Global Proxy Injection
The fix was to explicitly inject the proxy configuration into the OpenClaw process environment, ensuring it travels with the daemon.
Since we are running inside OrbStack, we have a stable internal bridge domain: host.orb.internal.
We patched the openclaw.json to enforce this at the application level:
"env": {
"vars": {
"HTTPS_PROXY": "http://host.orb.internal:7897",
"HTTP_PROXY": "http://host.orb.internal:7897"
}
}
Why this worked
- Unified Egress: It forces all outbound Node.js requests (OpenAI, GitHub Copilot, Web Search) to tunnel through port 7897.
- Bypassed Local DNS: By handing traffic to the proxy immediately, we avoid local DNS pollution or IPv6 timeouts.
- Process-Agnostic: It doesn’t matter if OpenClaw is running as a user or a service; the config file dictates the environment.
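To make sure the injection actually took effect, a startup sanity check is cheap insurance: fail loudly at boot instead of with a vague fetch failed later. The sketch below is a generic pattern, not an OpenClaw hook; `checkProxyEnv` is a hypothetical name.

```javascript
// Verify at startup that the proxy actually reached the daemon's
// environment, so a missing injection is caught immediately.
function checkProxyEnv(env = process.env) {
  const proxy = env.HTTPS_PROXY || env.HTTP_PROXY;
  if (!proxy) {
    console.warn("No proxy in environment; outbound API calls may fail.");
  }
  return proxy;
}

console.log(checkProxyEnv({ HTTPS_PROXY: "http://host.orb.internal:7897" }));
```

Run without arguments inside the daemon, it reads the real `process.env` and tells you whether the config file's `env.vars` block was honored.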
Conclusion
When debugging local AI agents, assume the network is guilty until proven innocent. If you are in a restricted network region, don’t rely on system-wide settings. Hardcode your proxy into the agent’s environment config. It’s the only way to guarantee consistency.