Do Not Put Your Entire Business in the Hands of Weird Digital Neurons
I recently listened to Professor Michael Wooldridge's Faraday Lecture, "This Is Not the AI That We Were Promised." It mostly confirmed my stance on modern AI models.
I agree with his core claim that current models are not rational minds in the classical sense. I disagree with the implied leap some people make from that: "therefore the output is not useful." In practice, the output is often very useful. The key variable is not philosophy. The key variable is setup.
With weak setup, you get confident nonsense. With good setup, you get leverage. As both models and harnesses improve, the quality keeps moving up.
Use AI hard. Trust AI carefully.
For Founders, This Is the Right Mental Model
If you are a non-technical founder, your goal is not to pick a side in an AI religion war. Your goal is to ship, avoid preventable mistakes, and build leverage.
Today, AI sits in the middle. It is not a toy. It is not autonomous intelligence either. It is a capable, inconsistent tool that gets better when your setup gets better.
Bad AI Adoption
- Buy tool licenses, skip process changes
- Assume output quality from demo quality
- Delegate critical judgment to the model
Good AI Adoption
- Define where AI can and cannot decide
- Add checks that fail fast
- Measure outcomes at the system level, not the prompt level
The Two Founder Mistakes I See Most
Mistake one: "AI is hype, we should wait." That is expensive. Your competitors are already compressing time on research, delivery, and experimentation.
Mistake two: "AI is basically an employee, just let it run." That is also expensive. The model has no accountability, no memory of production incidents, and no grounded understanding of your customer commitments.
The winning position sits in the middle: move fast with AI, but keep judgment and accountability with humans.
What to Expect
- Fast output, uneven reliability
- Strong drafting and synthesis
- Needs verification on important paths
What Improves Outcomes
- Good context and clear constraints
- Tooling that checks model output
- Human review at the right points
What to Avoid
- Blind trust in one-shot answers
- Replacing judgment with autocomplete
- Building strategy on hype demos
Coding Is the Practical Example
In software, AI works especially well because we can check it. Tests, types, linting, CI, and code review can catch a lot. Hallucinations are still a real risk, but they are a risk you can engineer around.
That is why I am more optimistic in coding than in many other domains. We already have a verification culture in software. The model can draft. The system can validate. The developer decides.
- Use AI for generation, not for final authority
- Let tests and CI reject weak output automatically
- Keep humans in review paths with business impact
The harness is the product.
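To make "the model drafts, the system validates, the developer decides" concrete, here is a minimal sketch of such a harness in Python. `generate_draft` is a hypothetical stand-in for any code-generation API call; the point is that nothing the model writes reaches review without passing known-good test cases first.

```python
# Minimal verification-harness sketch: the model drafts, the system
# validates, and only surviving drafts reach human review.
# `generate_draft` is a hypothetical placeholder for a real model call.

def generate_draft() -> str:
    # In a real harness this would be an API call to a code model.
    return (
        "def slugify(title):\n"
        "    return title.strip().lower().replace(' ', '-')\n"
    )

def run_checks(source: str) -> bool:
    """Run the drafted code against known-good cases; reject on any failure."""
    namespace: dict = {}
    try:
        exec(source, namespace)  # load the drafted function
    except SyntaxError:
        return False  # fail fast: unparseable output never reaches a human
    slugify = namespace.get("slugify")
    if not callable(slugify):
        return False
    cases = [("Hello World", "hello-world"), ("  AI Tools ", "ai-tools")]
    return all(slugify(given) == expected for given, expected in cases)

draft = generate_draft()
status = "ready for human review" if run_checks(draft) else "rejected by harness"
print(status)  # → ready for human review
```

The design choice worth copying is that rejection is automatic and cheap: weak output is filtered by tests before anyone spends attention on it, which is exactly the leverage software teams already have through CI.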
As models improve and harnesses improve, results improve. That is what I see in real teams. AI can already be a strong thinking and coding partner if the environment around it is disciplined.
What This Means for Non-Technical Founders
Your responsibility is to design operating constraints, not to micromanage prompts. You do not need to become a model expert. You do need to make sure your team has clear rules for where AI is allowed to act and where human approval is mandatory.
Founder Operating Model
- Low-risk tasks: automate aggressively
- Medium-risk tasks: AI draft, human approval
- High-risk tasks: human-led, AI-assisted only
- Track defects and rollback causes by source
- Review process monthly as model quality changes
Governance first, then scale.
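The operating model above can be written down as an explicit policy table rather than left as tribal knowledge. The sketch below is illustrative: the task names and tier assignments are hypothetical examples, not a prescribed taxonomy, and a real version would live wherever your team enforces workflow rules.

```python
# Sketch of the risk-tier operating model as an explicit routing policy.
# Task names and tier assignments are illustrative examples only.

TASK_RISK = {
    "summarize_meeting_notes": "low",     # automate aggressively
    "draft_customer_email": "medium",     # AI draft, human approval
    "change_pricing_page": "high",        # human-led, AI-assisted only
}

def route(task: str) -> str:
    """Return the workflow a task must follow under the policy."""
    # Unknown tasks default to the strictest tier: governance first.
    tier = TASK_RISK.get(task, "high")
    if tier == "low":
        return "automate"
    if tier == "medium":
        return "ai_draft_then_human_approval"
    return "human_led_ai_assisted"

print(route("summarize_meeting_notes"))  # → automate
print(route("launch_new_feature"))       # → human_led_ai_assisted (unlisted)
```

Making the default tier "high" is the governance point: a task only gets automated after someone deliberately classified it, never by omission.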
About the Next Frontier
I share the view that robotics is a major frontier. More physical-world interaction should eventually give models a richer model of the world. But you do not need to wait for general-purpose robots to get value. In software teams, the value is already here when process quality is high.
My Position in One Line
AI is useful now. It is getting better fast. But if you hand it your business without process, you are not being innovative. You are being careless.
Do not put your entire business in the hands of weird digital neurons.
Build a system where they amplify good teams.