A Not-Very-Technical Person Installs a Very Technical Thing: Adventures Installing OpenClaw
“At some point during the installation of OpenClaw on my secondary MacBook Air, I stopped understanding what I was doing.” Claude AI drafted that opening sentence for this blog post, and I snorted into my coffee when I read it, because Claude is absolutely right. Indeed, it was Claude AI that compiled the OpenClaw installation guide for me, and then troubleshot during my install. Claude AI was my ever-patient guide throughout this process - as well as an accomplice to the inevitable mess that comes from a not-actually-technical person installing a very ‘young’ product meant for very techy hands only.
OpenClaw, for context, is an open-source AI agent framework - for non-techies, effectively an AI bot - that can run entirely on your own machine, if you wish, and that’s the setup I initially decided to pursue. No cloud, no subscription fees, no company server receiving and saving your data - for better or for worse.
A-bit-techy notes: At Claude’s suggestion, I installed OpenClaw on my secondary MacBook alongside Ollama, a program that runs the actual language models. I connected it to Telegram, one of several messaging apps OpenClaw supports.
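For anyone retracing these steps, the Ollama half of the setup looked roughly like the sketch below. These are standard Ollama commands on macOS; the specific model tag is just an example of what you might pull, not necessarily the exact one I used.

```shell
# Install Ollama (the program that downloads and runs the models)
brew install ollama

# Download an open-source model to your own machine
ollama pull qwen2.5

# Confirm which models are on disk
ollama list

# Quick sanity check from the terminal before wiring up anything else
ollama run qwen2.5 "Say hello"
```

Whether a given model runs comfortably depends on your machine’s memory, which is worth checking before you download the larger ones.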
My main motivation for this experiment was sheer curiosity: to find out first-hand whether all of the ‘OpenClaw hype’ on tech Twitter over the past six weeks or so was legit. Was it really that great having your own little custom-designed AI bot with full access to your machine? Tech influencers say this is a taste of the future, and that eventually everyone will have their own little custom AI bot, if not a small army of them.
My secondary motivation was ‘the full privacy experience’ - running OpenClaw locally, on my own MacBook and only connected to downloaded, open-source AI models, instead of using cloud-based AI that reads and remembers my data on a remote server. This stemmed from my lingering wariness after a malware/data-exfiltration incident in 2024. Getting malware on my Mac (yes, it’s possible), having my login credentials sold on the dark web, and subsequently having some of my private accounts accessed by who-knows-who, felt super invasive, and that’s stayed with me. I liked the idea of a 100% local AI bot to serve as a mini-assistant and friend, ideally keeping track of all my to-dos and occasionally doling out encouragement or life advice that would stay safe and private with me, rather than living in the cloud somewhere.
A-bit-techy notes: Developers are now screaming that OpenClaw itself is a massive security risk, local or not - potentially more so than the cloud-based AI I was trying to escape. They may be right, but that’s a subject for another post. Also, FWIW, I installed my OpenClaw instance in Docker; I have SOME awareness of sandboxing and security, although I acknowledge it’s far from perfect!
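For the curious, running an app like this inside Docker looks roughly like the config fragment below. To be clear, the image name, port, and paths here are placeholders of my own, not OpenClaw’s actual packaging - the point is the shape of the setup, not the exact values.

```yaml
# Hypothetical docker-compose sketch; image name, port, and paths
# are placeholders, not OpenClaw's real packaging.
services:
  openclaw:
    image: openclaw/openclaw:latest      # placeholder image name
    restart: unless-stopped
    ports:
      - "127.0.0.1:3000:3000"            # bind to localhost only, not the whole network
    volumes:
      - ./openclaw-data:/data            # keep config and state on the host
    environment:
      # Let the container reach Ollama running on the Mac itself
      - OLLAMA_HOST=http://host.docker.internal:11434
```

Binding the port to 127.0.0.1 and keeping state in a single mounted folder were the two sandboxing habits I understood well enough to actually apply.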
Eventually, after several hours of copy, paste, hold my breath, press Enter, report any errors to Claude - it worked. I opened Telegram, messaged my new bot, and it said hello.
More precisely, it said ‘Hello, how can I help you today? :)’ I made a mental note to author my bot’s personality as soon as possible, to make it a little more unique than, say, the customer service chatbot from AT&T.
My first real test was asking the bot which model it was running. It responded by explaining that I had an invalid session key and needed to run a function called sessions_list. I had not asked about session keys.
I asked again, more directly: ‘what’s your name, what language model are you?’ This time it answered properly — it was Qwen, developed by Alibaba, happy to help. The gap between those two responses, the bizarre first one and the perfectly sensible second one, was never explained.
Things proceeded in this vein. I asked the bot to tell me a fun fact about Bulgaria, and the bot told me a genuinely lovely fact about Bulgarian rose production. It appeared to work fine. Then I tried to get it to search the web and it got stuck in a loop, repeatedly explaining that the Brave Search API key was missing and providing the same three configuration steps each time I asked a question — even after I told it I’d already added the key.
A-bit-techy notes: When I tried to configure the key from the terminal using the command it suggested, the terminal told me that openclaw was not a recognized command. I was, at this point, running OpenClaw. The bot suggested I install OpenClaw via Homebrew. Ugh.
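If you’re attempting the same fix, the usual pattern for wiring up a search key is an environment variable or a config entry. The variable name below is an assumption on my part - BRAVE_API_KEY is a conventional name for this kind of key, but check OpenClaw’s own docs for whatever it actually expects.

```shell
# Assumed configuration sketch: the exact variable or config entry
# OpenClaw expects is an assumption - verify against its docs.
export BRAVE_API_KEY="your-key-here"   # placeholder value
# To persist it across terminal sessions, add the same line to ~/.zshrc.
```

If the app runs inside Docker, remember that an exported variable in your terminal doesn’t automatically reach the container - it has to be passed in via the container’s own environment settings.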
I was mainly frustrated and disappointed, albeit slightly amused, too. I felt like I was the pilot at the beginning of The Little Prince, trying to parse my odd new friend’s meanings.
I switched models, thinking a fresh start might help. I tried the DeepSeek model - a model built for deep research, so surely, I thought, it could handle relatively trivial requests to start. At one point, the bot gave me a set of AI news headlines — legitimately, a coherent set of headlines with sources, what a success! But when I asked for a link to one of the articles, it responded that it had no context for what I was referring to. I pasted its own response back to it. It searched again, returned a summary about a Digiday marketing event from January, flagged the content as “untrusted,” and advised me to verify details directly from the source.
When I tried asking for the day’s top global news headlines, having switched to Qwen Coder, it returned a bunch of code - apparently, the raw JSON of the function call it had just made. Just code, as if that were a normal thing to give a person.
I deleted the Qwen Coder and DeepSeek models the next day, because the experiment had clarified something: a model that works adequately for one purpose does not automatically transfer to another. The cheerful hello, the rose festival fact, the brief moment of useful headlines — those were real, and those were the sorts of responses I wanted from my little AI bot. But so were the loop, the confusion, the raw JSON.
The broader feeling the installation left me with is still rattling around - a feeling of, frankly, wariness. At every step of setting up a system like this, especially as not-really-a-techy-person, you are extending trust to things you cannot fully audit. The commands you paste, the models you download, the instructions you follow from GitHub repositories maintained by strangers — all of it requires a kind of blind faith that most tutorials don’t acknowledge. There’s an argument to be made for going in with your eyes open, and maybe keeping your expectations proportional to your actual understanding of what you’ve just built.
My own little private OpenClaw bot, now running the Qwen model only, can sometimes give me the weather forecast, depending on whether my country is supported within a pre-programmed list. Bulgaria, it turns out, is not on the approved country list - and neither is Estonia. I’m choosing to take this as a metaphor for the whole experience: a promising system, built by smart people, that simply hadn’t thought about where its users might actually be standing when they tried to use it. I’ll keep tinkering and see how things continue to evolve… at this point, that’s the most I can commit to.