
I Built a WhatsApp Bot for Women in Rural Kenya. I Cannot Code.

Dr. James Muchiri, Kenya | Apr 06, 2026

Three AIs, zero coding background, one real product, and a lot more confusion than people imagine.

 

On April 5, 2026, a WhatsApp bot for women aged 30+ went live for our movement series in Nyandarua County, Kenya.

Women can register on WhatsApp, submit rope-skipping and fitness videos, get reminders, appear on leaderboards, and compete for monthly recognition. The bot stores data, syncs with Google Sheets, exports CSVs, supports admin review, and handles real-world messiness like incomplete submissions, wrong formats, and people typing greetings in the way people actually greet in Kenya.

I built it.

I still cannot sit here and pretend I am now a software engineer. I am not. A few weeks before this project, I could not have explained the difference between TypeScript and JavaScript in any useful way. I have no computer science background. I did not go to bootcamp. I am a doctor, a fitness builder, and a person who spends a lot of time thinking about real people in real places, not software abstractions.

And yet, somehow, I built a working WhatsApp bot.

Not alone. With three AI assistants: ChatGPT, Gemini, and Claude.

This is the honest version of how that happened.

Why this had to be on WhatsApp

The project itself was not random.

I have been working on community fitness and preventive health through Global Fast Fit. One of the ideas I cared about most was creating a practical movement platform for women aged 30 and above in Nyandarua. Monthly participation. Measurable activity. Real structure. Something that could become both a useful program and, eventually, a valuable fitness dataset grounded in African reality.

The obvious instinct is to build an app.

That makes sense if your users live inside app stores, stable internet, email logins, and endless phone storage.

That is not the world I was designing for.

In Nyandarua, WhatsApp is the real operating system. If I wanted this thing to live in people’s hands, not just in my imagination, it had to happen there. So the product became a WhatsApp bot that could handle registration, consent, submissions, verification, reminders, and leaderboards inside a tool people already use.

That was the idea.

Then came the tiny issue that I did not know how to code.

Phase 1: ChatGPT helped me think like a builder before I could build

The first serious work I did with ChatGPT was not “write me a bot.”

It was more like: help me think.

What data should I collect? What should count as a valid submission? How many rope-skipping videos per day is fair? Should GFF Standard allow unlimited attempts? Should we use GFF Standard or GFF Shuttle for the movement series? What should happen if a video does not show the full body? What does consent need to cover? What fields are useful now, and what fields might become valuable later?

This part mattered more than I realized at the time.

Because for a product like this, the rules are the skeleton. If the rules are weak, the code is just a faster way to create confusion.

ChatGPT was strongest here. It helped me turn a rough idea into a system with logic. It helped define user flows, contest rules, admin commands, data fields, verification logic, storage thinking, monthly cycles, and all the small decisions that make the difference between “nice idea” and “working program.”

It also introduced me to the technology stack. Node.js. TypeScript. Express. Supabase. S3. Twilio. These were not familiar words to me then. ChatGPT explained them patiently, repeatedly, and sometimes like a person teaching a village chief how a post office works.

This was also where I think I did something right as a non-developer: I kept pressing on edge cases.

Not theoretical edge cases. Human ones.

What if two people share one phone?
What if someone keeps submitting until they get lucky?
What if the video is trimmed?
What if the bot becomes too complicated for a first-time user?
What if the scoring logic encourages the wrong behavior?
What if our perfect technical design is a terrible fit for Nyandarua reality?

That phase was slow, but it was good slow.

The weakness was that ChatGPT could sometimes become too architectural when I needed something more brutal and practical. I needed, “create this file, paste this, run this command.” Sometimes I got a blueprint for the city when I needed directions to the nearest spanner.

Still, if ChatGPT had a role in this project, it was the architect. It helped me design the bones.

Phase 2: Gemini entered when the machine started coughing smoke

Once the project moved from rules into actual code and deployment, the mood changed.

Now I was dealing with broken routes, webhook confusion, logs that looked like ancient curses, and that classic software experience where you have many moving parts and none of them are moving in the correct direction.

That is where Gemini became useful.

Gemini was better when I had something concrete and ugly in front of me. Error logs. Failing routes. Broken request flows. State issues. It was less interested in the philosophy of the product and more interested in what exactly was on fire.

One of the most annoying bugs was painfully small. I was dealing with a webhook path mismatch around /webhooks/twilio versus /webhook/twilio. One missing letter. Hours gone. That is software. You build a cathedral and then discover the door is painted on the wall.
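To make that one-letter bug concrete, here is a minimal sketch of why it bites: webhook routing is an exact string match, so `/webhook/twilio` and `/webhooks/twilio` are simply two different routes. This is an illustrative route table, not the project's actual code, and the paths are the ones mentioned above.

```typescript
// Minimal sketch of exact-match webhook routing.
// The route table and handler are illustrative, not the bot's real code.
type Handler = () => string;

const routes = new Map<string, Handler>([
  // Registered with an "s" in "webhooks":
  ["/webhooks/twilio", () => "200 OK"],
]);

function dispatch(path: string): string {
  // Map lookup is an exact string comparison: one missing letter
  // means no handler is found and the request falls through to 404.
  const handler = routes.get(path);
  return handler ? handler() : "404 Not Found";
}

console.log(dispatch("/webhook/twilio"));  // the path Twilio was calling
console.log(dispatch("/webhooks/twilio")); // the path the server exposed
```

The first call returns "404 Not Found" and the second "200 OK", which is the whole bug: the provider was configured with one spelling and the server registered the other.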

Gemini was also helpful in the stretch where Twilio started feeling like a relationship that had run its course. We had designed around Twilio early on, but eventually I shifted toward Meta’s WhatsApp Cloud API. That switch was not elegant, and it definitely was not cheap in terms of energy, but it ended up being the right move.

Another hard area was session state. The bot would lose the plot between messages. A user would answer question three and the bot would behave like it had never met her. We kept circling that until the idea of explicit user state became clearer to me: where is this person in the flow, what did they already submit, what is the next expected thing?
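The idea of explicit user state can be sketched as a tiny per-user state machine keyed by phone number: each incoming message is interpreted against where that user currently is in the flow, then the stored step advances. The step names, questions, and greeting handling here are hypothetical, meant only to illustrate the pattern, not the bot's actual schema.

```typescript
// Hedged sketch: explicit per-user conversation state, keyed by phone number.
// Steps, prompts, and fields are illustrative, not the real bot's flow.
type Step = "START" | "ASK_NAME" | "ASK_AGE" | "DONE";

interface Session {
  step: Step;                       // where this person is in the flow
  answers: Record<string, string>;  // what they have already submitted
}

const sessions = new Map<string, Session>();

function handleMessage(phone: string, text: string): string {
  // Look up this user's state; a first-time sender starts at START.
  const session = sessions.get(phone) ?? { step: "START", answers: {} };

  switch (session.step) {
    case "START":
      // Whatever greeting arrives ("habari", "hi", ...), begin the flow.
      session.step = "ASK_NAME";
      sessions.set(phone, session);
      return "Karibu! What is your name?";
    case "ASK_NAME":
      session.answers.name = text;
      session.step = "ASK_AGE";
      sessions.set(phone, session);
      return `Thanks, ${text}! How old are you?`;
    case "ASK_AGE":
      session.answers.age = text;
      session.step = "DONE";
      sessions.set(phone, session);
      return "You are registered. Send your first video when ready.";
    default:
      return "You are all set for this month.";
  }
}
```

The fix for "the bot forgot question three" is exactly this: the next expected thing lives in stored state, not in the bot's short-term memory of the last message.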

That sounds obvious when written neatly in a blog post. It did not feel obvious at midnight with a broken bot.

If ChatGPT helped me think in systems, Gemini helped me respect logs. That may actually be one of the biggest lessons of this whole project. People talk about prompting. Fine. Prompting matters. But log reading matters more. Once I learned to copy the exact error, not paraphrase it, not soften it, just paste it as-is, the AIs became dramatically more useful.

Phase 3: Claude came in when I needed a builder, not a philosopher

By the time I moved heavily into Claude, the project had history, scars, and fragments everywhere.

There were handover notes. There was partial code. There were missing pieces. There were earlier design decisions that no longer matched the infrastructure. There was at least one machine continuity problem. And there was that feeling every messy project gets, where some of it lives in the codebase, some of it lives in your head, and some of it lives nowhere at all.

Claude was strongest when the task became: inspect this thing, compare it to the spec, identify what is broken, and systematically work through the gaps.

That mode suited the project.

One of the biggest practical breakthroughs came around message handling and Meta integration. At one stage the bot could receive messages but could not send them properly. At another stage menu routing was wrong enough that users could get pushed back into the wrong path. There were also infrastructure headaches that had nothing to do with my intelligence or Claude’s intelligence and everything to do with the fact that Meta can be absurd.

I want to be very clear about that. Some parts were not “hard because I am a beginner.” They were hard because the platform itself was chaotic.

We had cases where a number looked present in the dashboard but behaved like it did not exist at the API layer. That is not a poetic failure of the human spirit. That is just platform nonsense.

Still, Claude was helpful in systematically untangling those parts. It was closer to a builder or auditor. Less “let’s brainstorm” and more “here are the violations, here is the sequence, now fix them.”

That mattered, because by then I needed momentum more than inspiration.

The hardest part was not coding

I know that sounds like a trick line, but it is true.

The hardest part was continuity.

That is the thing people do not understand when they fantasize about building with AI. They imagine you open a chat window, say “build my product,” and receive software like takeaway food.

That is not what happened.

What happened was that the project lived across multiple AIs, changing stack decisions, handover notes, a crash or at least a broken environment, and my own evolving understanding of what I was trying to build.

Every time I switched AI, I gained a fresh pair of eyes and lost context.

Every time I updated the architecture, I solved one problem and created documentation debt somewhere else.

Every time the machine failed or the environment changed, I was reminded that a system can be real in six places and still feel missing.

That was the hardest part. Not syntax. Coherence.

In hindsight, if I had done one thing better, it would have been this: maintain a brutal running document of what works, what is broken, what changed, what the current stack is, and what the next step is. Not a beautiful document. A war diary.

Because without that, building with multiple AIs starts to feel like running three relay races on three different tracks while carrying the same baton in your teeth.

What I think I did well, despite not being a developer

I was not bringing coding skill to the table. So the question is: what was I bringing?

A few things, I think.

First, I knew the users.

That sounds small, but it is not. The AI did not know Nyandarua women. The AI did not know what it means to build for people who are not living in product-demo land. The AI did not know what a confusing prompt feels like to a first-time user, or why a WhatsApp-native flow matters, or why asking for the wrong thing at the wrong moment makes the whole system feel alien.

I kept dragging the project back to real life. That was one of my main jobs.

Second, I was stubborn about logic.

I did not accept “it should work.” I kept asking, how exactly? What happens next? What if the video is wrong? What if the user goes silent? What if the same person tries again? What counts as fair? What breaks comparability? What data is worth collecting and what data is just decoration?

That helped.

Third, I got better at giving raw material instead of vague feelings.

When something broke, I learned to paste the exact error. The exact route. The exact response. The exact strange behavior. AI is much worse with “it’s not working” than with “here is the log, here is the request, here is what happened.”

I am convinced that this was one of the most important skills I developed.

What I did badly

Plenty.

I let version 1 stay liquid for too long. I was trying to think about the future, the dataset, the scale, the architecture, the monetization, the wider movement, and the immediate bug all at once. Vision is good. Too much simultaneous vision becomes fog.

I also switched contexts a lot. Sometimes that was necessary. Sometimes it was just expensive.

And like many non-developers, I occasionally wanted the answer to be conceptual when the truth was painfully local: wrong route, wrong token, wrong variable, wrong state, wrong order of operations.

Software is humbling in that way. It does not care that your idea is noble. It wants the right character in the right place.

The most memorable moment

There were a few.

The funniest category of moments was when the bug was microscopic and the suffering was enormous. A wrong path. A missing assumption. A state mismatch. That happened more than once.

The most frustrating moments were definitely around Meta. Fighting an API that behaves like it is gaslighting you is a special form of modern pain.

But the best moment was simple: the bot replying for real.

Not in theory. Not in a sandbox. Not in a document. Not in a handover plan. Actually replying.

That moment mattered because until then the project was a swarm of ideas, specs, logs, rewrites, and stubbornness. After that, it was a thing.

A real thing.

What I learned from using three AIs

ChatGPT was best for product design, architecture, structure, and forcing me to think clearly.

Gemini was best when I needed a mechanic, when the machine was already broken and I needed someone to stare at the ugly parts with me.

Claude was strongest when the task was systematic execution against a spec.

So yes, the AIs felt different. Not magical, just different. Different habits of thought. Different styles of usefulness.

But the larger lesson is that none of them could replace judgment.

They could generate code. They could explain concepts. They could debug. They could suggest architecture. What they could not do by themselves was care about the actual women this product was for, or decide what kind of experience made sense in this context, or hold the whole mission steady when the tooling got chaotic.

That part was still mine.

What I would tell another non-developer trying this

Start with the rules. Before you ask for code, define the real system.

Keep a running project diary. Every day. What works, what broke, what changed, what is next.

Copy exact errors. Exact logs. Exact routes. Exact outputs.

Freeze version 1 earlier than your ego wants to.

Do not confuse “big vision” with “current step.”

And most importantly, know your users better than the AI does. That is your leverage. The AI can generate the scaffolding. You have to make sure the building belongs in the neighborhood.

The honest conclusion

This project was not a clean triumph. It was not smooth, elegant, or cinematic.

It was fragmented. It was frustrating. It involved loops, rewrites, false starts, handovers, platform nonsense, and the constant feeling that the project might scatter if I stopped holding it together.

But it worked.

That matters to me.

Because on the other side of all the technical confusion is something very simple: women in Nyandarua can now use a familiar tool to join a structured fitness competition, submit performances, and be part of something larger than a spreadsheet or a speech.

And I got there without becoming a traditional developer first.

I got there by combining obsession with the problem, patience with the process, and three AIs that each helped in different ways.

So no, I cannot code in the normal sense.

But I have now shipped software.

That sentence still feels strange in my mouth. But there it is.

 
