OpenAI Just Named It Workspace Agents. We Open-Sourced Our Lark Version Six Months Ago
Hero image: from OpenAI’s Workspace Agents announcement (April 22, 2026).
TL;DR — OpenAI shipped Workspace Agents on April 22: shared, cloud-resident AI agents that live in Slack and ChatGPT, built to do the work your team already does. That’s the same shape as Marvin, the Lark/Feishu bot we open-sourced six months ago — but MIT-licensed, model-agnostic via the Ofox gateway, MCP-extensible, and running on your own hardware. Here’s what Workspace Agents confirmed, and why the open-source version matters.
A 30-second Lark conversation
“Which blog post drove the most conversions last week?”
A teammate asked this in a Lark group chat. Within a minute, a bot card appeared: the top five posts ranked by paid conversions, anomalies flagged, with a throwaway line at the end: “This one is 3x the runner-up. Probably worth a follow-up piece.”
Doing this manually: open GA4, pick a report template, add dimensions, add filters, export CSV, paste into a spreadsheet, compare. Ten minutes minimum. Marvin (our in-house Lark bot) did it in the time it took to type the question.
This isn’t a new technique. It’s just putting the agent where the work actually happens — the chat your team is already in — instead of making people walk over to the agent’s UI.
OpenAI just made this a product category
Yesterday (April 22, 2026) OpenAI announced Workspace Agents. The shape:
- Codex-powered, built for “work you’re already doing” — preparing reports, writing code, responding to messages
- Runs in the cloud, keeps going when you’re offline
- Shared across the org — build once, team reuses via ChatGPT or Slack
- Schedules + approval flows
- Free until May 6, credit-based pricing after
- Framed as “the evolution of GPTs”
The most striking part of the launch isn’t a feature — it’s the repeated framing that the agent should come to the system the team already works in, not the other way around.
The industry has been converging on this for a while
The past two years have moved AI from “chat UI” toward “agent inside a workflow”:
- Claude Code put the agent in your terminal
- Cursor / Windsurf / Zed put it in your IDE
- Linear’s Ask Agent put it in your issue tracker
- Workspace Agents now puts it in your team IM
The consensus is clear: no one should have to leave where they work to use AI. The correct place for AI is wherever you already are.
For many teams, that place is Lark/Feishu. Real collaboration, real decisions, real feedback happen there. If an agent can’t meet the team there, it might as well not exist.
The open-source reference implementation: Marvin
We open-sourced Marvin six months ago — a TypeScript Lark/Feishu bot framework built on the Claude Code CLI. Named after the depressive-but-terrifyingly-capable robot from The Hitchhiker’s Guide to the Galaxy.
Feature-by-feature against Workspace Agents:
| Workspace Agents promise | Marvin’s implementation |
|---|---|
| Always running | Sessions persisted to disk, auto-resumes interrupted tasks on restart |
| Shared in the team IM | Live progress cards in Lark/Feishu groups, ⬜ → 🔄 → ✅ |
| Schedule + event triggers | Built-in cron + WebSocket event listener |
| Ask for approval | Admin can interrupt mid-task with a message; Claude resumes with full context |
| Org permissions / safety | Output filter strips API keys, tokens, internal IDs, internal IPs |
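The safety row above is the easiest piece to picture in code. Here is a minimal sketch of a regex-based output scrubber; the function name and patterns are illustrative assumptions, not Marvin's actual filter (which also handles internal IDs):

```typescript
// Minimal output scrubber: strips obvious secrets from agent output
// before it reaches the chat. Patterns are illustrative, not exhaustive.
const SECRET_PATTERNS: [RegExp, string][] = [
  [/sk-[A-Za-z0-9]{20,}/g, "[REDACTED_API_KEY]"],           // OpenAI-style keys
  [/xox[baprs]-[A-Za-z0-9-]{10,}/g, "[REDACTED_TOKEN]"],    // Slack-style tokens
  [/\b10\.\d{1,3}\.\d{1,3}\.\d{1,3}\b/g, "[INTERNAL_IP]"],  // RFC 1918 10/8 range
];

function redactSecrets(text: string): string {
  return SECRET_PATTERNS.reduce(
    (acc, [pattern, replacement]) => acc.replace(pattern, replacement),
    text,
  );
}
```

The design point is that the filter sits at the output boundary, so it applies no matter which tool or MCP server produced the text.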
The architecture is smaller than you’d guess: Lark WebSocket in → Claude Code CLI → real-time progress card renderer out. Full diagram in the README.
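To give a feel for the card-renderer leg of that pipeline, here is a toy version of the ⬜ → 🔄 → ✅ progress rendering. The types and function are illustrative; the real renderer emits Lark interactive-card JSON rather than plain markdown:

```typescript
type StepStatus = "pending" | "running" | "done";

interface Step {
  label: string;
  status: StepStatus;
}

const ICONS: Record<StepStatus, string> = {
  pending: "⬜",
  running: "🔄",
  done: "✅",
};

// Render a task's steps as the markdown body of a progress card.
function renderProgress(task: string, steps: Step[]): string {
  const lines = steps.map((s) => `${ICONS[s.status]} ${s.label}`);
  return [`**${task}**`, ...lines].join("\n");
}
```

Each time a step changes state, the bot re-renders and patches the same card message in place, so the group chat shows one live card instead of a stream of updates.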
What Marvin actually does for us
1. Take a task from Lark, ship it to production.
Someone flags a copy issue in chat, or @-mentions Marvin with “take this.” Marvin greps the repo to find the code, opens a feature branch, pushes a PR, waits for CI, merges to dev, opens the second PR to master, merges that, confirms the deploy, reports back in chat, and closes the task. Human input: one sentence.
2. A GA4 question, answered in under a minute.
The opening anecdote. Marvin calls GA4 through its MCP server, cross-references dimensions, flags anomalies, returns a structured summary. Ten minutes of human work, done in 30 to 60 seconds.
The underrated part here isn’t speed — it’s that the cost of asking a data question drops to zero. Before, you’d think “is this worth opening GA4 for?” and often skip. Now you just ask. Question volume goes up, decisions get more evidence-based.
3. Everything else you plug in.
Marvin’s capability surface equals the MCP servers you connect. Firecrawl for research, freee for accounting, Context7 for docs, your own internal services. Add an MCP server today; Marvin uses it tomorrow. That’s the real ceiling.
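Concretely, plugging in a capability means adding an MCP server entry to the Claude Code configuration Marvin runs on. A hedged sketch of what a `.mcp.json` in the bot's working directory might look like (the server name, package, and env key here are examples, not a tested config):

```json
{
  "mcpServers": {
    "firecrawl": {
      "command": "npx",
      "args": ["-y", "firecrawl-mcp"],
      "env": { "FIRECRAWL_API_KEY": "..." }
    }
  }
}
```

No Marvin code changes are involved: the next session picks up the new server and its tools automatically.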
Four ways the choices differ
A side-by-side parameter table isn’t the point. The philosophy differences are:
| Dimension | OpenAI Workspace Agents | Marvin |
|---|---|---|
| Agent ↔ model | Locked to Codex | Pluggable via the Ofox gateway — swap models anytime |
| Data boundary | Runs on OpenAI’s cloud | Runs on your machine / your server |
| IM layer | Primarily Slack (and ChatGPT) | lark.ts is an adapter — swap in Slack or Discord without touching the core |
| Tool ecosystem | Codex + OpenAI preset integrations | Any MCP server |
| Modifiability | Closed SaaS | Persona, rules, pipeline are all files in your repo |
For many teams, data boundary and modifiability are the decisive ones. What’s in your team chat, your repo, your GA4 property, your accounting — that’s core organizational information, and you probably don’t want it flowing through someone else’s cloud.
Marvin runs on your own hardware. Ours lives on a Mac mini in the office, managed by launchd, with no external cloud services involved.
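For reference, the launchd side is a few lines of plist. This is a sketch of a typical setup; the label, paths, and npm location are assumptions, not our exact file:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.example.marvin</string>
  <key>WorkingDirectory</key>
  <string>/Users/marvin/lark-claude-bot</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/local/bin/npm</string>
    <string>run</string>
    <string>dev</string>
  </array>
  <key>KeepAlive</key>
  <true/>
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>
```

`KeepAlive` is what makes the "always running" row in the table above work in practice: if the process dies, launchd restarts it, and Marvin resumes interrupted sessions from disk.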
Running it
Repo: github.com/ofoxai/lark-claude-bot
If you have Claude Code installed, paste this prompt into it and it'll walk you through setup:
```text
Clone and configure https://github.com/ofoxai/lark-claude-bot for me:
1. Clone the repo and cd into it
2. Run npm install
3. Copy .env.example to .env
4. Ask me for Lark App ID, App Secret, and Encrypt Key, fill them in
5. If I don't have a Lark app yet, walk me through creating one on open.larksuite.com or open.feishu.cn
6. Once configured, run npm run dev to start the bot
```
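If you'd rather do it by hand, the result of steps 3 and 4 is just a filled-in `.env`. The variable names below are a guess at the shape; `.env.example` in the repo is the authoritative source:

```text
# Illustrative .env — see .env.example in the repo for the real variable names
LARK_APP_ID=cli_xxxxxxxxxxxx
LARK_APP_SECRET=xxxxxxxxxxxxxxxx
LARK_ENCRYPT_KEY=xxxxxxxxxxxxxxxx
```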
A few minutes later there’s a Marvin in your team chat.
Wrapping up
OpenAI has made “workspace agent” a product category. That’s a net good — more teams will take this form seriously. But the future of agents shouldn’t be “one SaaS vendor, one model, one IM they picked for you.”
Marvin is one reference implementation of the opposite: open, model-agnostic, MCP-extensible, locally hosted. It’s also the workflow-side incarnation of the Ofox gateway philosophy — one API for every model, one bot shape for every IM.
Code is here: github.com/ofoxai/lark-claude-bot.


