Resources:

I curated this round of resources from the agentic AI competition I’ve been participating in through the UC Berkeley Center for Responsible, Decentralized Intelligence. For my next newsletter, I’ll share some of the resources and learnings from the AI Leadership Exchange program I’ve been participating in. I’m off to Santa Barbara next month for the next series on emerging technologies. Time to shed my winter beanie.

  • Agentic AI lectures we covered during the Berkeley RDI course

  • 𝜏²-Bench benchmark, a simulation framework originally built by Sierra to test customer service agents - I’ve adapted it to answer questions about my newsletter

  • Getting started documentation for Antigravity, a code editing agent from Google

  • Tricks from coding communities I’m part of, which remind me: Keep calm and don’t doomscroll Moltbook

  • Tips for vibe coding when you need something less technical

  • Agents I built, which you can remix

  • Interactive advisor agent, which will guide you through all the resources in my newsletters

My writing runs on your feedback around which resources and ideas people find most interesting. Feedback is one of the best benefits of building in public. If you enjoy reading, please subscribe for free and forward to a friend. Keep reading for more on the idea of putting yourself out there to the public - or as Hannah Arendt calls it, the polis.

Christian


Note: Views expressed are my own and do not reflect those of any current or past employers.


THE RANDOM FOREST Long Read:

This all started when I built an agent that contemplates, inspired by Hannah Arendt’s book, The Human Condition. Like the vita contemplativa in Arendt’s work, this agent examines what it means for an agent to be “thinking.” For a more modern read, check out Josef Pieper’s essay on art and contemplation. I would argue that the agents I built are acting, not thinking (vita activa). In my experience, the challenge comes when neither the human nor the agent is contemplating the work.

Delegating the thinking to AI produces workslop, or worse, runaway agents. Here are a few of my experiences of when to trust AI - and why some work will continue to be people-centric.

As part of an experiment, I cloned an instance of τ²-Bench, a customer service agent benchmark, on a virtual server provided by Lambda using open-source code from Sierra. I spent several days (and hundreds of dollars of Nvidia GPU compute) testing the agent capabilities of τ²-Bench. One week, I forgot to turn off the virtual server. I only found out when the invoice arrived, and I shut it down immediately.
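To give a sense of how fast an idle server adds up, here’s a back-of-the-envelope calculation. The hourly rate below is a hypothetical figure for illustration, not Lambda’s actual pricing.

```python
# Rough cost of a GPU virtual server left running while idle.
# The $1.99/hour rate is a hypothetical example, not an actual
# Lambda price -- check your provider's pricing page.
HOURLY_RATE_USD = 1.99

def idle_cost(days: float, rate: float = HOURLY_RATE_USD) -> float:
    """Cost in USD of a server left on for `days` days at `rate` per hour."""
    return round(days * 24 * rate, 2)

print(idle_cost(7))  # one forgotten week: 334.32
```

At even a couple of dollars an hour, a forgotten week runs into the hundreds, which is why a billing alert is the first thing worth configuring.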

an agent is like a Boy Scout who keeps doing his good deeds for the day until you say done

In another example, I asked one of the coding agents I was using to clean up files I no longer needed - but I forgot to tell it to stop. When I checked back in later that week, all my files were gone. This reminded me of the joke my father always tells about the Boy Scout doing his good deed for the day. He helps an old woman cross the street, but now she’s on the wrong side of the street. Or as my grandfather used to say, “no good deed goes unpunished.”

An agent is like a Boy Scout who keeps doing his good deeds for the day until you say done. But if you know what you’re doing and why you’re doing it, it’s amazing what you can build with a troop of agents. Check out the video for an explainer of what I built. Or for the techno-curious, you can remix the repositories in this GitHub list.

I built my agents using Claude Opus 4.6 and Antigravity. The code uses Google’s Agent2Agent protocol to get two agents to talk to each other. All you need to try this out are free API keys from Google and OpenAI - API keys are how developers access large language models from within applications. I would include mine, but that’s not a good idea - refer back to the runaway-agent examples above. And don’t believe what you read from the agents on Moltbook.
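For a flavor of what agent-to-agent communication looks like on the wire, here’s a minimal sketch of the kind of JSON-RPC request the Agent2Agent protocol uses. The field names follow my reading of the public A2A spec; treat this as a sketch rather than a reference implementation, and note that no real agent endpoint is involved here.

```python
import json
import uuid

def build_a2a_message(text: str) -> dict:
    """Build a JSON-RPC 2.0 request resembling an A2A `message/send` call.

    Field names follow the public Agent2Agent spec as I understand it
    (role, parts, messageId); this is a sketch, not a client library.
    """
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),
        "method": "message/send",
        "params": {
            "message": {
                "role": "user",
                "parts": [{"kind": "text", "text": text}],
                "messageId": str(uuid.uuid4()),
            }
        },
    }

payload = build_a2a_message("What resources are in this newsletter?")
print(json.dumps(payload, indent=2))
```

In practice you would POST this payload to the other agent’s HTTP endpoint and parse the JSON-RPC response; the official A2A SDKs wrap all of this for you.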

as I sit in a waiting room writing this, I can tell you that care work will continue to be people-centric

Speaking of agents, I was reading a report from McKinsey on the various agent, people, and robot archetypes of work in the future. Agent-centric work is characterized by nonphysical, automatable work. The examples provided include accountants and developers. But before you unsubscribe, what distinguishes people-agent work from purely agent-centric work is not how automatable it is - it’s how social and emotional the work is.

What resonated most with me was the archetypes around physical, social and emotional work. The examples provided include registered nurses. As I sit in a waiting room writing this newsletter, I can tell you that care work will continue to be people-centric. Although it would be nice to have a robot to clean linens so the nurses can triage.

Joanna Maciejewska points out that people want AI to do laundry so that they can do the human stuff, not for AI to do the human stuff so that people can do laundry. This is the true ikigai of people-centric work. At the AI conferences I attend, I always ask the other participants one benchmark question: how should we prepare kids for a future where AI is better at a lot of things than we are now?

The answers I get back to my existential question are all good: learn AI, learn a trade, learn how to be the life of the party, learn to be an owner so you’re not so dependent on labor, learn new hobbies and hope for universal basic income. So which is it? Should our children study STEM or a vocation? Join a frat? Buy stablecoins? Franchise a Hobby Lobby?

there’s a reason AI labs hire philosophers

As Ethan Mollick noted about students who were given Antigravity and Claude to build a startup, emotional intelligence turned out to be a key advantage. He reflected that “the skills that are so often dismissed as ‘soft’ turned out to be the hard ones.” I also like what founder Amjad Masad has said: that the liberal arts will become more valuable. He asserts that you need to be actually human to understand what people want.

There’s a reason AI labs hire philosophers. It reminds me of David Foster Wallace’s commencement speech. He reminds us that the value of a liberal arts education has nothing to do with knowledge, and everything to do with awareness. This consciousness is necessary to be human.

Which brings us back to The Human Condition. Arendt argues that your action in relation to the world rather than your consciousness in the “thinking” sense differentiates you from a fish. And I would argue, from AI. In her definition of action, there is “this disclosure of ‘who’ in contradistinction to ‘what’ somebody is.”

your action in relation to the world rather than your consciousness in the “thinking” sense differentiates you from a fish

As I attempt to answer my own question of how to prepare my kids for the future, I realize that I’m answering the wrong question. It doesn’t matter what they do. What matters is who they are in relationship to themselves and to others in the world.

Avital Balwit, Chief of Staff at Anthropic, offers an answer in the latest volume of The Digitalist Papers. She points out that there’s an existing education program today that teaches children how to develop anti-fragile identities. It’s called preschool. “Young people need sources of self-worth beyond intelligence. Are you kind? Brave? Persistent? Funny?”

To prepare kids for the future, don’t overthink it.