Lincoln Square


AI Just Took a Giant Leap. Are We Ready? | The Lincoln Logue

We're entering an era where machines don't just answer questions — they do the work, navigate our lives, and potentially replace the digital world we know.

Sam Osterhout
Feb 08, 2026
∙ Paid

Amid the gutting of the Washington Post, the sacking of an election hub in Georgia, news that Tulsi Gabbard did something that puts American security at risk (surprise!), Trump’s racist post about the Obamas, and on and on and on … it turns out there are other things happening in the world that we all need to at least occasionally turn our heads to observe.

And not just culture war stuff, or passing memes or controversies. There are massive changes taking place in our world that will deeply impact our culture, our society, our civilization. In fact, some of these changes will likely come to represent this era, maybe more so than Trump.

In some ways, it’s (arguably) true that Trump is actually the noise that sits on top of these massive global shifts. Perhaps he’s a catalyst to the changes, or maybe he represents the failure of the cultural dam that is allowing these changes to flood in.

The Lincoln Logue is a special feature for Lincoln Loyal paid subscribers. Upgrade your subscription today.

The climate is still changing. That’s one thing. It’s still true that we may find ourselves struggling to survive on an uninhabitable planet. Trump is facilitating that change, of course. I heard an interview with the owner of a factory that manufactures equipment that makes power plants more efficient, cleaner. He’s going out of business — Trump no longer requires power plants to clean up their act.

So that’s good, right?

But another thing that’s happening is AI. You’ve read about it. Your nephew who just graduated college loves it. You’ve heard that it’s going to make workers obsolete, or flood the world with deepfakes and make Truth impossible to ascertain. You’ve seen fake images and videos and heard about how Hollywood hates it (and how they also love it. Or … maybe they’re ambivalent? Or … or … ?)

It’s confusing. Add to this the fact that so much of our economy is riding on massive investments in tech, and specifically AI, and yet there’s not much evidence that it will ever turn a profit. If you think the AI economy is a house of cards, you might not be wrong. You might also not be wrong if you think it’s worth the hype. And you might even be right if you think it’s just going to mature into a normal industry and we’ll all go on living our lives like it’s no big whoop.

Really. Who knows? There are smarter, better-read people who can prognosticate about the future economic, cultural, and societal impacts of AI better than I can.

That said, I have personally followed the AI story for the past several years, have incorporated some (*limited) AI tools into my daily workflow, and have tested out lots and lots of AI platforms and chatbots.

And over the past two or so weeks, something has changed. And the change feels really, really big. The change is embodied by two different projects that — at least to me — sprang out of nowhere but have offered a vision of the future I was not exactly ready for.

Moltbook

This has been all the buzz for the past week or so. Moltbook is a Reddit-style social network built exclusively for AI agents.

Maybe read that again.

Imagine going on Bluesky, except you can’t post anything, and everything you see was created by AI agents, or chatbots. But it’s not just AI slop. It is deep conversations, dialogues, debates. It is upvotes and new ideas and engagement. It’s fun and energetic and active. What you see is a social network buzzing with life. Only … it’s not life. It’s AI (with a couple of caveats I’ll get to shortly).

Last month, an open-source AI agent framework called OpenClaw was created by an engineer named Peter Steinberger. Think ChatGPT, but fully open source. Steinberger built OpenClaw because — to oversimplify, maybe — he wanted an AI to be his personal assistant.

Most AI tools, like ChatGPT or Anthropic’s Claude, are built for a limited set of use cases. They are, basically, chatbots. You can type questions or commands into a chat box and they will offer you an approximation of an answer (or they’ll make a picture for you, or a video, or do a little trick, etc.). Their skills are limited, in part, because their physical bodies are limited.


Let’s say you hire a real-life assistant. You can ask them all the same things you could ask ChatGPT, but the results will be different. For example, you might ask your assistant to search the entire internet and build you a report on global currency fluctuations since 1790, along with graphs, references, bullet points, and an executive summary.

Your assistant will probably quit on the spot, and even if they don’t, the work will take some time. Maybe months. Ask ChatGPT the same thing and you’ll get a pretty good product in a few minutes-ish.

Now ask ChatGPT to get your nephew a birthday present and bring you a cup of coffee. You’re going to be waiting a while.

In order to get an AI to do that first part — get your nephew a present — it will need access to your life. It must know who your nephew is and understand his context. Where is he? Who is he? How old? What does he like? What does he need? How close are the two of you (how personal should this gift be?) And so on.

Giving a computer program access to your life is … perhaps … unwise. A human assistant is someone you hire because you can trust them. You can hold them accountable for wrongdoing. A chatbot? Who knows?

This post is for paid subscribers

© 2026 Resolute Square PBC d/b/a Lincoln Square · Privacy ∙ Terms ∙ Collection notice