Fall 2025

THE RANDOM FOREST Newsletter

Author’s Note:

I want to start by sharing a few helpful AI resources I’m using as assistants to accelerate my creative process. I learned about these resources primarily through Y Combinator’s AI Startup School, the Berkeley Center for Responsible, Decentralized Intelligence’s Agentic AI Summit, and the Stanford Next Revolution of AI Summit.

  • STORM out of Stanford OVAL, an open-source deep research solution that I run locally on my own small language model

  • Gitingest, which allows me to stack resources like STORM with my private GitHub repo

  • Jules out of Google Labs, an autonomous AI coding agent I use to debug and deploy the above resources into working prototypes

  • AI Prompting Guide, created by M&A Science and Grata

  • Wharton Generative AI Labs Prompt Library, created by Ethan and Lilach Mollick

  • Tips & Tricks from vibe coding communities I’m part of, which remind me not to take things too seriously, and to keep calm and code on

  • THE RANDOM FOREST shareable AI assistants, licensed under Creative Commons Attribution 4.0 - if you read past the first 2,000 words of this newsletter, I’ll provide you with 30-day access

AI resources like these have helped me take my small language insights model to the next level over the last few months. I’m using my model to summarize and make available resources I use in advising sessions. This saves me time on the administrative parts of researching and editing, so I can focus my attention on creating and curating high-quality, long-form insights.

All of this helped me in my latest round of advising leading up to the 7th annual Bronco Ventures demo day. Without these resources, I wouldn’t have had time to take on a new mentee this year - and I’m glad I did. I’m excited to see him and other founders pitch in a few weeks. If you’re coming to the Santa Clara University Grand Reunion, hit me up! Stay tuned for my annual demo days review as part of the Winter newsletter. In that, I’ll also share themes from emerging startups that pitched at Berkeley’s Agentic AI Summit last month and SkyDeck’s demo day this week. Takeaways from the summit’s keynote sessions are below.

I want to wrap up by asking for one small favor: Subscribing to this newsletter is quick, easy and free - if you make it to the bottom of this page, please subscribe. Subscribing to THE RANDOM FOREST community lets me know whether I’m reaching people in this long-form medium. I believe I am! But as the Zen koan goes, “if a tree falls in the forest and no one is around to hear it, does it make a sound?” Subscribers can access resources and reading lists I’ve curated, which I share with a small network of others interested in this space.

In the spirit of building on these curations, I’m making these community resources free to subscribers. Now enough about that - in the words of Stringer Bell, let’s “get on with it…”

Christian


THE RANDOM FOREST Long Read:

To set the scene, we’re in Mexico on vacation. I’m swimming with my toddler, and we get pulled into a riptide. I know not to fight the current. We swim parallel to the shore and then back in. No problem. The next morning, we’re at breakfast and I see one of our vacation friends, who’s here with three kids. I ask him his secret, and he tells me: Coffee and self-help books. “Let go,” he says.

Later that night, we’re watching the kids run around a polo field in the middle of a jungle while we eat asado. The service is slow, it’s hot out and there are flies from the stables. The food is incredible. And it’s easily the most beautiful place we’ve been with our boys. I stop trying to shoo the flies, keep the boys off the polo field or fight the heat. I let go, and let them play.

is agency an act of surrender or an act of acceptance?

I often find myself fighting what Dr. Seuss called “games you can’t win ‘cause you play against you.” Leisure without labor - and more often, labor without leisure. But what if we didn’t have to grind so hard to shine? I still pull up a Jupyter Notebook to start a project with Python - when I could easily vibe code from my phone.

In the gym, I do a summer “shred” - when just staying consistent with the major lifts would lead to a better outcome. As Barbell Logic coach Michael Wolf says, “simple and heavy doesn’t sell as well as complicated and light.” Coach André Crews reminds me each week that “we work hard in the gym to make life easier.” But when should hard mean heavy, and when should it mean complicated? I’ve been saying to myself: I can do hard things. But then I remembered that objet trouvé - there are “the things we think and do not say.” Like: I’m making things more complicated than they have to be.

I’m not advocating for just doing easy things. Imagine if JFK’s speech ended with: Maybe we’ll go to the moon next decade and do some other things first, because they are easy. Or if Robert Frost took the road more traveled by. I’m all for doing hard things. There’s a place for a B minor seventh barre chord, and a place for an easy E minor. In the words of Lao Tzu, “the difficult and the easy complement each other.” The difficult part for me is accepting when to choose the barre chord, and when to choose the easy E.

From a technology perspective, the agentic AI current is strong. And while agentic AI is recent, the idea of agency - and its precursor, determinism - is age-old. The debate goes something like this: If the die is already cast, is agency an act of surrender or an act of acceptance? One of my favorite lines from Infinite Jest comes to mind: “99.9% of what goes on in one’s life is actually none of one’s business, with the .1% under one’s control consisting mostly of the option to accept or deny.”

AI is getting better at understanding the intent of our prompts - intent-driven programming - in part by identifying the patterns of other prompts just like them. As a result, we’re no longer really directing the AI; we’re accepting its proposed changes. This reminds me of Barack Obama’s system for reviewing decision memos: agree, disagree or let’s discuss.

an AI equivalent of Wooderson’s realization in Dazed and Confused: AI gets smarter, I stay the same way

Vibe coding has become so accessible that an elementary schooler can do it - accept/reject interfaces are the new normal. Soon enough, the agents will be prompting us, not the other way around. Vibe coding shows us that AI can do a lot of the suggesting these days, which promotes us to the role of reviewing the proposals and agreeing or disagreeing. And sure, when AI becomes too agreeable - like the GPT-4o release earlier this year - we find ourselves in a self-validating feedback loop. As Ethan Mollick predicted in Co-Intelligence, “our own perfect echo chambers.”

Fast forward to Mollick’s reflections since the book’s release 18 months ago: “We’re shifting from being collaborators who shape the process to being supplicants who receive the output.” The case in point here is OpenAI’s latest GPT-5 release. It’s less sycophantic, and puts the model more in the driver’s seat. Choosing the AI model from a drop-down will soon be reserved for purists, the way we tell our kids how we learned to drive stick the hard way - and they remind us that cars drive themselves now. AI is entering its unsupervised self-driving era. Not because we shouldn’t have humans-in-the-loop, but because we’re facing the Peter Principle - an AI equivalent of Wooderson’s realization in Dazed and Confused: AI gets smarter, I stay the same way.

Alright, back to the difficulty of knowing when to choose the hard chord versus the easy one: Will the latest wave of consumer AI make it even harder to know what to choose? One argument is that “unified” consumer AI models like GPT-5 will make humans lazier AI drivers - that we’ll lose the skill and control that come with the manual transmission we still see in enterprise AI models, and won’t be able to handle harder tasks when we need to. But there are still things we can do to configure and control consumer AI models, and not just by paying for the premium versions.

you’re not doing anything ground-breaking unless half the people you talk to about it say it’s impossible

One parameter we can use to control creativity is called “temperature.” A lower temperature tells the AI to stick closer to its most probable outputs - more deterministic, and as a result less creative. One approach is to use “dual-zone” temperatures - a low temperature for precision work, and a high temperature for creative work; a quick sketch follows below. We need prompt engineering rules of thumb equivalent to the ones my grandfather used for aeronautical engineering. Like: you’re not doing anything ground-breaking unless half the people you talk to about it say it’s impossible.
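To make the dual-zone idea concrete, here’s a minimal sketch, assuming the google-generativeai Python SDK; the API key and model name are placeholders, and the prompt mirrors the Gemini experiment below. The only difference between the two calls is the temperature.

```python
import google.generativeai as genai

# Placeholder key and model name - swap in your own.
genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

PROMPT = "Finish this sentence: Determinism is"

def complete(prompt: str, temperature: float) -> str:
    # One call per "zone": the temperature setting is the only difference.
    response = model.generate_content(
        prompt,
        generation_config={"temperature": temperature},
    )
    return response.text

precise = complete(PROMPT, temperature=0.3)   # precision zone: sharper, more predictable
creative = complete(PROMPT, temperature=0.7)  # creative zone: samples more freely
print(precise, "---", creative, sep="\n")
```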

My rule of thumb for AI temperature: You’re not doing anything creative with AI unless the temperature parameter is turned up halfway. When I prompt Gemini to finish my sentence about determinism, a 0.3 versus a 0.7 temperature yields the same definition, until the end: “Determinism… sparks debate about the existence of free will and moral responsibility.” Let’s have that debate. The Stanford Encyclopedia of Philosophy offers two sides. One says determinism and free will can co-exist - compatibilism. The other says the two concepts are incompatible - incompatibilism.

John Steinbeck’s books started out much more deterministic, before finding their way to the free will of timshel at the end of East of Eden. In the context of AI, think of free will as the agent driving, with high-level direction from a human. Unlike with determinism, the agent doesn’t need step-by-step directions - it can get there more easily without them. At the risk of aging myself, there’s something freeing about following a map - not the same as following directions. When my devices died on a ride through the high desert in 2018, I got home safely by following fence lines and the sun.

Whether we’re following directions or vibes to get where we’re going, we’re the ones responsible for getting there safely. I heard this loud and clear from the Agentic AI Summit speakers at the Berkeley Center for Responsible, Decentralized Intelligence. Snehal Antani reminded us, “you can’t just use agents to throw a bunch of commands against the wall and see what sticks. There is a balance of discovery and determinism.” Writer CEO May Habib made the case for security, transparency, control and alignment. Databricks’ Ion Stoica talked about the challenges of multi-agent alignment. It was great to see Palo Alto Networks represented at the AI Safety, Alignment and Security session.

with AI as with parenting, we need to develop limits, without limiting development

AI has come a long way from the small bot meetups I used to drop by on my walk home to Telegraph Hill a decade ago. All grown up, AIs are what De Kai calls “poorly parented, feral tweens” in his new book Raising AI. Two is more than twice as hard as one - no shame, parents. But it raises the question: How are we supposed to align multiple, mature agents with each other, when it’s hard enough to align a single, simple agent? We can start with the lessons of agency theory.

In agency theory, the common solution to the challenge of principal-agent alignment is the contract. With a contract in place, the principal and agent outline agreed-upon behaviors and outcomes. In her classic Stanford paper, Kathleen Eisenhardt draws the distinction between behavior-based and outcome-based contracts to address the alignment challenge. The more deterministic the conditions, the more behavior-based contracts make sense (e.g., wages paid to an agent). Alternatively, if the agent has a higher risk tolerance than the principal, outcome-based contracts make more sense.

Here’s what I’d propose: Just like we have contracts between principals and human agents, I think we need a modern-day equivalent with AI agents. In practice, this looks like standards we build into prompts and into Model Context Protocol (MCP). I use a P-L-A-T-O structure for custom GPT and Gemini Gem prompts - persona, limit, aim, task, outcome; a sketch of the structure follows below. Now that Gems are shareable, you can request 30-day access to my PLATO prompt protégé, licensed under Creative Commons Attribution 4.0.
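Here’s a minimal sketch of what a P-L-A-T-O prompt “contract” might look like as a reusable template. The five fields come straight from the structure above; the template itself and the example values are my own illustration, not the actual PLATO prompt protégé.

```python
from dataclasses import dataclass

@dataclass
class PlatoPrompt:
    """P-L-A-T-O: persona, limit, aim, task, outcome."""
    persona: str  # who the agent should be
    limit: str    # the boundaries it must not cross
    aim: str      # the high-level goal it serves
    task: str     # the concrete work to do now
    outcome: str  # what "done" looks like

    def render(self) -> str:
        # Assemble the five fields into a single prompt block.
        return "\n".join([
            f"Persona: {self.persona}",
            f"Limit: {self.limit}",
            f"Aim: {self.aim}",
            f"Task: {self.task}",
            f"Outcome: {self.outcome}",
        ])

# Example values are hypothetical.
prompt = PlatoPrompt(
    persona="A patient editor for a long-form newsletter",
    limit="Don't generate ideas for me; only tighten what I wrote",
    aim="Keep the author's voice while sharpening the prose",
    task="Review this draft paragraph for clarity",
    outcome="A short list of suggested edits, one sentence each",
)
print(prompt.render())
```

A dataclass keeps the contract explicit: every field is required, so you can’t quietly skip a limit.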

With a “contract” like P-L-A-T-O in place, we can strike a balance between driving the car and being driven. As a parent, I’ve learned the hard way that you can agree on the aim, task and outcome - but without limits, no deal. I now pre-negotiate “extensions” to screen time. My boys find a way to set the temperature to 1.0, and get creative with these limits, but at least there are limits. In the season finale of The Most Interesting Thing in A.I. podcast, futurist Kevin Kelly talks about “surrendering some amount of control to the agency we are creating… to be more like lighthouse parents to the AIs.” An interesting contrast in AI parenting philosophies to De Kai’s AI PTA parent. With AI as with parenting, we need to develop limits, without limiting development.

And as with any social contract, it goes both ways: What limits will we put on the principal, not just the agents? As a rule of thumb, I don’t use AI to determine what I want to write. I generate ideas myself. I read primary sources. I write, think and write again until my thinking and my writing are clear. Paul Graham has a recent essay on not replacing the thinking part of good writing. And I’d add: Don’t replace the thinking part of reading or listening either.

Sure, sometimes summarization saves time. But is it worth losing the flow, the rhythm and the intent that come with understanding something complex? It’s easy to search for Garden State and find the iconic clip where Natalie Portman passes her headphones to Zach Braff to listen to The Shins. This scene, the chorus of Caring Is Creepy in E minor - so easy and immediately recognizable.

Now I’m really going to age myself: There’s something about listening to the full soundtrack. Iron & Wine’s cover of Such Great Heights fades out, and Frou Frou’s Let Go plays through to Bonnie Somerville’s Winding Road. As she plays that B minor seventh, we get lost in the lyrics - we forget it’s hard to play:

Still don't know
Where it goes
And it's a long way home
I've been searching for a long time
Still have hope