Tags: atlas · founder · ai-collaboration · BuildInPublic · remote-work · ultra-lab

Why I Built Atlas — A Public Experiment with One Founder + AI + a 13-Hour Flight

· 39 min read

It's May 8, 2026, 6:30 PM Taipei time. I'm leaving home in Taichung for Taoyuan Airport to catch EVA Air BR71 to Munich. Eight days on an Allianz Life Insurance VIP tour. This essay is being written eight hours before takeoff; more will follow.

What I'm trying to do: turn this trip into a public experiment about what the next-era CEO actually looks like.

The answer is at ultralab.tw/atlas — a page that unfolds in real time as I move. This essay is its reason for existing.


The problem: what happens when traditional CEOs travel?

I know a lot of founders. The standard 7-day-trip script:

  • 3 days before: brief the team on "handle these things while I'm gone"
  • 5 days during: emails pile up, decisions delay, the team can't reach anyone
  • Week 1 back: spent on cleanup. Real work loss: ~10 working days

This strikes me as a kind of madness. Why does travel mean operational stop?

Specifically:

  • I'm a one-person company. There's no team to "handle things."
  • I run six product lines simultaneously (UltraProbe, MindThread, UltraSite, UltraGrowth, Ultra Advisor, UltraTrader) plus 56 Threads accounts and a Discord community.
  • I pushed 200+ GitHub commits in the last 30 days. If 7 days of travel plus a week of catch-up means 14 days of stalled commits, the velocity collapses.

So I have a strong personal incentive to figure this out: can the operation just not stop?


The hypothesis: AI isn't a tool, it's a colleague

My collaboration with Claude has gone deeper over the past year. It started as "I paste code, it reviews." Now:

  • I talk to Claude through Telegram.
  • Claude reads my codebase directly, edits, pushes, deploys.
  • Claude handles GitHub issue replies, writes blog posts, tunes system prompts.
  • I have 4 AI agents (the OpenClaw fleet) running on WSL2, with 30 cron timers running fully autonomously.

This isn't "AI as a tool." It's AI as a colleague + full-stack automation.

Hypothesis: if AI really can operate as a colleague, then while I'm traveling I shouldn't stagnate — I should keep shipping.

But the hypothesis had never been rigorously tested. There's always a gap between my actual work and my imagined workflow.

This 7-day trip is the stress test.


The design: Atlas isn't a dashboard, it's an argument

I could have just traveled quietly and reported afterward. But that wouldn't prove anything.

To prove the hypothesis, I needed:

  1. Public-by-default: anyone can see every action I take while traveling
  2. Real-time: not edited highlights — the actual stream
  3. Comprehensive: photos, decisions, commits, music I'm listening to, phone battery, what AI agents are doing right now
  4. Frictionless: I should be able to operate everything from one phone via Telegram; no "too much hassle" excuse for breaking transparency

These constraints drove the design directly:

  • Telegram is the only control plane. I send photos to Claude, Claude processes them. There's no dashboard interface I have to use.
  • Atlas is the view layer. What viewers see is a React page, but every piece of data flows from messages I send Claude on TG.
  • Every commit / decision / observation ships immediately. No PR review. No "wait until I'm back."

What we shipped in 4 hours

From "let's build Atlas" to v1 deployed: 50 minutes. From v1 to v1.7 (with Hero, Story, Photo Lightbox, guestbook, subscribe, view modes, plane animation, stats ticker): 3 hours.

Specific shipments:

  • Spotify OAuth → discovered Spotify gates Web API behind Premium → pivoted to Last.fm → 30 minutes to swap
  • Taoyuan airport photo gallery + caption system
  • Firestore rules deploy (I wasn't at my computer; Claude pushed via Firebase CLI)
  • 7 awesome-list PRs (an agent autonomously submitted 5)
  • MindThread health audit on 5 most-active accounts (3,191-word report)
  • Moltbook autopost root cause + fix (mindthread/probe agents weren't switching credentials)

What was I doing during this?

The 2-hour drive Taichung → Taoyuan: I slept, sent 5 photos via TG, reported phone battery 3 times, picked an optional itinerary day (Neuschwanstein), wrote ~5 instructions for Claude.

Those 4 hours of production output came from "5 instructions from me + 50 minutes of Claude shipping."


What I learned (end of Day 0)

1. AI-as-colleague is an order of magnitude more valuable than AI-as-tool.

A tool answers when asked. A colleague runs with direction. The difference is latency and cognitive load.

I no longer "think about how to use the tool." I think about what outcome I want.

2. Public transparency forces work quality up.

Knowing every commit, decision, and mistake will be visible naturally raises the bar.

At 1:30 PM today I miscalculated the time difference and apologized to my reader as "an inexcusable basic error." If that mistake had been in private, I'd have moved on. With the Atlas audience watching, I admitted it instantly + wrote it into Claude's persistent memory + won't make it again.

3. Friction is the silent time killer.

Every step between "I have an idea" and "the idea is live" cuts another 50% of velocity.

Atlas is "idea → TG → Claude → ship" — three steps.
Traditional flow is "idea → write spec → review → write code → review → merge → deploy" — seven-plus steps.
The gap compounds to roughly a 30× difference in speed.

4. The risk isn't "AI gets things wrong" — it's "I get lazy reviewing."

AI does occasionally err. Earlier today Claude proposed downloading a 663MB file via gdown, which then made my Telegram conversation lag for 20 minutes — a genuine mistake.

But that was an execution error, not a judgment error. I ran a post-mortem, Claude logged the lesson, and it won't recur. Judgment errors are harder to recover from, and judgment is still mine.


The next 7 days

From the moment I publish this to landing back in Taipei on May 15, Atlas will:

  • Auto-attach every photo I send to the relevant pin
  • Show every commit in "Recent Ships" live
  • Show every track I listen to in the top-right
  • Tick phone battery, current city, progress bar (0.00% → 100%) in real time
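The progress bar is just clamped linear interpolation over the trip window. A sketch in Python; the exact start and end timestamps here are illustrative assumptions, not Atlas's real values:

```python
from datetime import datetime, timezone, timedelta

TAIPEI = timezone(timedelta(hours=8))

def trip_progress(now: datetime, start: datetime, end: datetime) -> float:
    """Fraction of the trip elapsed, clamped to [0, 1]."""
    frac = (now - start) / (end - start)  # timedelta / timedelta -> float
    return min(max(frac, 0.0), 1.0)

# Assumed window: leaving home May 8, back in Taipei May 15 (times invented).
START = datetime(2026, 5, 8, 18, 30, tzinfo=TAIPEI)
END = datetime(2026, 5, 15, 18, 30, tzinfo=TAIPEI)

halfway = START + (END - START) / 2
print(f"{trip_progress(halfway, START, END):.2%}")  # 50.00%
```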

My target: by the end of 7 days, output should not be less than 7 days of work in my Taipei home office.

I don't know if it'll succeed. Something might break somewhere. I might walk into Neuschwanstein, get sentimental, decide this whole experiment is foolish. I might find an unforeseen limit of the AI-colleague model.

But the unknown is the core of the experiment. If I knew the result, it wouldn't be one.


What you can do

If Atlas looks interesting:

  1. Subscribe — daily digest at 9 PM German time
  2. Comment — every pin has a guestbook; tell me what you think, ask questions
  3. Share — pass the Atlas link to other founders, see what they think
  4. Challenge — comment "you should also try X" and I'll consider it

If you're also a one-person company or small-team founder — I think this workflow is worth trying. OpenClaw, prompt-defense-audit, UltraProbe are all open source. Atlas's full source is at github.com/ppcvote/ultralab.

The next-era CEO won't be busier. They'll be more transparent, location-independent, and truly collaborating with AI.


Written 2026-05-08, Taoyuan Airport Terminal 2, 8 hours before BR71 takeoff.
If you're reading this and Atlas is still ticking — Min Yi is somewhere in Germany.
