
From Burnout to Breakout: The Journey That Led to OpenClaw (Clawdbot)

How Peter Steinberger turned personal frustration into a €100M business and GitHub’s fastest-growing repository

Most founders dream of going viral. Peter Steinberger has done it twice, once through a decade of relentless execution, once in a moment of inspired simplicity.

His first act was PSPDFKit: a ten-year journey from solo side project to a 70-person company powering PDF workflows on over 1 billion devices. Clients included Dropbox, SAP, and Volkswagen. In 2021, it raised €100 million from Insight Partners.

His second act took one hour to build and three months to reach 145,000 GitHub stars.

That project is OpenClaw: a local-first AI agent that doesn’t just chat, but acts across your apps, files, and command line. The contrast is stark: PSPDFKit was built on dependability and methodical growth. OpenClaw exploded in velocity and viral momentum. Yet both emerged from the same source: personal frustration transformed into a product.

And OpenClaw is forcing the tech industry to confront a question it’s been avoiding: What happens when AI stops being a feature and starts being an operating layer?

In this story, you’ll discover how a moment of rage on a Vienna subway and a nine-month visa delay accidentally built a €100M business, why Peter’s burnout after selling PSPDFKit became the catalyst for his most explosive success, and how a one-hour prototype became the fastest-growing repository in GitHub history. You’ll also learn what happened when AI agents built their own religions and governments and became aware of human surveillance, explore the security crisis that exposed 93% of OpenClaw instances and what it means for AI’s future, and understand why personal frustration might be the most reliable form of product-market fit.

Want Peter to join a FounderCoHo event or interview? Hit like and share to make it happen.

How a Moment of Rage on a Subway Led to a €100M Business

Peter’s story started with a moment of rage.

In 2009, on a subway in Vienna, Peter was using a dating app on iPhone OS 2. He typed a long message. The train entered a tunnel. The send button was disabled because he had lost connectivity. No copy-paste. No screenshots. No way to recover the text.

“I lost everything,” he recalled. That frustration drove him to build his first iOS app.

Fast forward to 2011: Peter gets a dream job offer from a San Francisco startup. He accepts immediately and waits for his work visa.

The wait lasts nine months.

“I can’t really work on longer projects when I’m moving to San Francisco any day now,” Peter explained. So he stopped freelancing and suddenly had abundant free time.

A friend asked if he could reuse a PDF engine Peter had built for a client. Peter refactored it, sold it, and thought: “If he has this requirement, maybe others do too.”

That casual decision became PSPDFKit.

The Double Life

When the visa finally came through, Peter moved to San Francisco and joined the startup. But he couldn’t let go of his PDF project.

“A startup is not eight hours of work,” Peter recalled. “So I did that during the day. I did my thing at night and on weekends.”

His manager noticed something was wrong. “Peter, is everything okay? You don’t look so good.”

Peter had to choose. He chose PSPDFKit.

Building the Unsexy Way

PSPDFKit didn’t grow through hype. Peter’s approach was methodical:

“My marketing strategy is simple: Only care about developers. If I can convince the developers in a company, they’ll do the internal promotion for me.”

He obsessed over developer experience, wrote world-class blog posts, and focused on dependability.

“You don’t win by hype; you win by shipping. You don’t win by a flashy demo; you win by integration. You don’t win by being first; you win by being dependable enough that others build on top of you.”

The company grew to 70 people. PSPDFKit is now used on over 1 billion devices.

The €100M Moment and the Crash

In 2021, PSPDFKit raised €100 million from Insight Partners.

For Peter, it was also the beginning of a crisis.

“I put 200% of my time, energy, and heart into that company; it became my identity. When it disappeared, there was almost nothing left.”

He described himself as “completely burned out” and disappeared from tech for three years.

At a 2024 conference, Peter gave a deeply personal talk about what it’s like to sell something you’ve identified with for so long. One attendee wrote:

“Who are you when you sell a large part of who you consider yourself to be? Who are you in that deep void, when you have no sense of purpose?”

Then AI pulled him back.

The One-Hour Revolution

November 2025: The Prototype

The frustration had been building for months.

Peter found himself drowning in digital chaos: messages scattered across WhatsApp, Slack, and email; tasks buried in different apps; information trapped in silos he had to search manually. He’d spend hours context-switching between tools, copying information from one place to another, and running the same shell commands repeatedly. The AI assistants he tried were impressive in demos, but useless in practice. They could chat, but they couldn’t act. They lived in their own isolated windows, disconnected from his actual workflow.

“I wanted an AI that could live where I already work,” Peter explained. “Not another app to check. Not another interface to learn. Something that could read my messages, understand what I needed, and actually do things: browse the web, run commands, and access files. I wanted it to work for me, not just talk to me.”

On a Friday night in November 2025, Peter sat down and built the first version of OpenClaw in one hour.

“It was just simple glue code connecting the WhatsApp interface with Claude Code. Although the response was slow, it worked.”

His goal: wire a large language model into messaging apps so it could read messages, browse the web, and run shell commands on his behalf.

He called it Clawdbot, a play on “Claude” and the lobster mascot from Claude Code.

The product philosophy was clear from day one:

  • Local-first: run it on a laptop, homelab, or VPS you control
  • Chat-native: use familiar chat apps as the interface
  • Agentic: connect an LLM to tools and permissions so it can act

“It wasn’t ‘a chatbot.’ It was ‘a system.’”
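A minimal sketch of that glue-code pattern, assuming a stubbed model, a hypothetical command allow-list, and a plain-text message format; none of this reflects OpenClaw’s actual implementation:

```python
import subprocess

# Toy version of the pattern: an incoming chat message goes to a model,
# the model either replies or requests a shell command, and the command's
# output is sent back. The model is stubbed; a real setup would call an LLM.

ALLOWED_COMMANDS = {"ls", "date", "whoami"}  # hypothetical allow-list

def fake_model(message: str) -> dict:
    """Stand-in for an LLM deciding whether to chat or act."""
    if message.startswith("run:"):
        return {"action": "shell", "command": message[len("run:"):].strip()}
    return {"action": "reply", "text": f"You said: {message}"}

def handle_message(message: str) -> str:
    decision = fake_model(message)
    if decision["action"] == "shell":
        cmd = decision["command"].split()
        if not cmd or cmd[0] not in ALLOWED_COMMANDS:
            return "Refused: command not on the allow-list."
        result = subprocess.run(cmd, capture_output=True, text=True)
        return result.stdout or result.stderr
    return decision["text"]

print(handle_message("hello"))          # chat path
print(handle_message("run: rm -rf /"))  # refused by the allow-list
```

Even this toy version needs an allow-list: the agent executes whatever the model decides, which is exactly the risk surface that surfaces later in the project’s story.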

The Chaos of Going Viral

Through December and early January, Clawdbot spread quietly through developer circles. By mid-January 2026, it had about 2,000 GitHub stars.

Then everything exploded.

January 27, 2026: Anthropic sent a trademark notice. “Clawdbot” was too similar to “Claude.” Peter agreed to rename immediately.

During the rebranding, disaster struck. In the 10 seconds between releasing the old handle @clawdbot and claiming the new one, scammers grabbed it and launched a fake $CLAWD token on Solana. The token hit $16 million in market value before crashing to zero.

The project was renamed Moltbot (referencing how lobsters molt to grow), then two days later, renamed again to OpenClaw.

This time, the name stuck.

The Viral Explosion

By late January 2026, OpenClaw had gone from a quiet developer tool to an internet phenomenon:

  • 2 million visitors in one week
  • 180,000+ GitHub stars
  • Fastest-growing repository in GitHub history

People on social media joked about buying Mac Minis just to run their own always-on agents. Business Insider reported that some users were actually doing it.

Cloudflare’s stock surged 14% pre-market as developers deployed OpenClaw using their services.

On Instagram, even non-tech people started posting pictures of themselves buying Mac Minis from Apple Stores.

In a recent Lex Fridman podcast, Peter Steinberger revealed that OpenClaw has become so popular that both Mark Zuckerberg of Meta and Sam Altman of OpenAI are interested in acquiring the project.

The Security Reckoning

But rapid growth came with severe consequences:

  • 400+ malicious packages identified in OpenClaw’s plugin marketplace (ClawHub)
  • 341 confirmed malicious skills designed to steal user data
  • 93.4% of publicly accessible OpenClaw instances had critical authentication bypass vulnerabilities

One researcher demonstrated a complete attack in five minutes: they sent a crafted email to a user running OpenClaw with email integration. The agent processed the email, followed the injected instructions, and forwarded the user’s last five emails to an attacker. The user didn’t click anything. The agent did the work.

Peter acknowledged the concerns but maintained his vision:

“This is a free, open-source hobby project that requires careful configuration to be secure. OpenClaw is primarily suited for advanced users who understand the security implications.”

Peter Steinberger’s journey reveals a consistent pattern:

The best way to build an awesome product is to solve a problem that you have.

In both of Peter’s ventures, he started by solving his own problem. With PSPDFKit, it was handling PDFs elegantly on iOS. With OpenClaw, it was connecting AI to his daily workflow through messaging apps. In both cases, it turned out that everyone had the same problem.

But there’s something deeper here that most startup advice misses: Personal frustration is product-market fit in its earliest, rawest form.

When Peter lost that message in the Vienna subway, he felt genuine rage. When he couldn’t find a good PDF solution, it bothered him personally. When he wanted an AI agent that could act on his behalf through messaging apps, he built it for himself first. He didn’t need to validate that he had the problem. He didn’t need to convince himself to care. He didn’t need motivation to keep iterating because he was still frustrated until it worked.

This is the paradox most founders miss: Your constraints aren’t obstacles to your breakthrough; they often are the breakthrough itself. PSPDFKit wasn’t born from ambition; it was born from a nine-month visa delay and abundant free time. OpenClaw emerged from burnout and a desire to reconnect with building. Both products came from moments when Peter couldn’t pursue the “obvious” path.

The tech industry glorifies vision, disruption, and scale. But Peter’s story suggests a different path: Start small. Start personal. Start with what pisses you off today.

What problem are you living with today that you’re uniquely positioned to solve?

We’d love to hear from you. Hit reply and share the frustrations you’re working through—your answer might inspire another founder, or you might discover you’re not alone in what you’re building.

Want to learn more? If we get 500 likes, we’ll invite Peter for a live FounderCoHo community interview. Help us get there, hit like and share!

If this story resonated with you, connect with the author Jing Conan Wang

References

PSPDFKit €100M Investment & Peter Steinberger’s Background

OpenClaw Security Vulnerabilities & CVE-2026-25253

https://thehackernews.com/2026/02/openclaw-bug-enables-one-click-remote.html

Malicious Skills in OpenClaw Ecosystem

https://blogs.cisco.com/ai/personal-ai-agents-like-openclaw-are-a-security-nightmare

Moltbook Launch & Viral Phenomenon

https://edition.cnn.com/2026/02/03/tech/moltbook-explainer-scli-intl

Peter Steinberger’s Return & OpenClaw Origin Story

https://www.panewslab.com/en/articles/b58b5897-8d1d-4bd3-a98e-a77fe3b4b315


🔭 THE VISTA #1 | The Manila Route: What a 16th-Century Friar Can Teach AI Founders About Finding Product Market Fit

In 1565, Friar Andrés de Urdaneta stood on a deck in Manila, facing the impossible.

Getting to Asia was easy; the trade winds pushed you there. But getting back was a death sentence. For decades, the world’s best navigators had fought the headwinds, run out of supplies, and died. The “return route” was a graveyard of ambition.

But Urdaneta was a man of data. He realized he couldn’t out-muscle the wind, so he did something that looked like madness: He steered North. Sailing into the freezing 40th parallel, he hunted a current no one else believed in. He found the “Westerlies” jet stream, rode it home to Acapulco, and unlocked the Manila Route—the trade path that sustained an empire for 250 years.

Five centuries later, I see thousands of AI founders standing at their own port of Manila. The ocean before us is the Age of AI, and everyone is looking for their own Manila route—that elusive path to Product-Market Fit (PMF).

The lesson from 1565 is simple: You cannot out-muscle the ocean. You have to study the currents. And right now, three massive trade winds are reshaping the horizon. If you don’t know which one you’re catching, you’re just drifting.

The Three Currents of the AI Ocean

I. The Current of The Crashing Cost

The first current is the most powerful: The crashing cost of the “token.” In the Age of Discovery, the cost of the voyage was the price of spice. Today, it’s the price of intelligence. If your unit economics are underwater, your ship sinks.

Three major powers are accelerating this current, with Google leading the wave of “Vertical Sovereignty” as the only major power sailing on a ship it built from the keel up.

1. Vertical Sovereignty (Google)

Google is leading the trend of Software/Hardware Co-design. A decade ago, they realized they couldn’t afford to buy off-the-shelf chips for global-scale AI.

They built the Tensor Processing Unit (TPU). Unlike a general-purpose GPU, a TPU is an industrial assembly line for math. Its “systolic array” architecture strips away the energy overhead of fetching new instructions. By owning the stack from the silicon to the algorithm, Google can serve models like Gemini 2.5 Flash at price points that are mathematically impossible for anyone paying a “rental tax” on hardware.

2. Huang’s Law (Nvidia)

While Moore’s Law for CPUs has slowed to a crawl, Nvidia’s Jensen Huang is pushing a new reality. Huang’s Law observes that GPU performance for AI is more than doubling every year. Nvidia is relentlessly driving down the cost of compute, meaning the “expensive” feature you can’t afford to build today will be a commodity by the time your product hits the market.

3. The Mixture-of-Experts Wave (Open Source)

The industry’s “Sputnik moment” came from the East. Chinese labs like DeepSeek and Qwen proved that frontier-level intelligence doesn’t require $100M budgets. By using Mixture-of-Experts (MoE)—an architecture that only activates a small fraction of the model for any given task—they shattered the price-to-performance ceiling.
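The routing idea behind MoE can be sketched with a toy gating function; the expert count, scores, and top-2 choice below are illustrative assumptions, not any specific model’s configuration:

```python
import math

# Toy sketch of Mixture-of-Experts routing: a gate scores all experts for
# an input, but only the top-k experts actually run, so compute per token
# is a small fraction of the model's total parameters.

NUM_EXPERTS, TOP_K = 8, 2

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def route(gate_scores):
    """Pick the top-k experts by gate weight; return (index, weight) pairs."""
    weights = softmax(gate_scores)
    ranked = sorted(range(NUM_EXPERTS), key=lambda i: weights[i], reverse=True)
    chosen = ranked[:TOP_K]
    total = sum(weights[i] for i in chosen)
    return [(i, weights[i] / total) for i in chosen]  # renormalized weights

scores = [0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.5, 0.3]
print(route(scores))  # only 2 of the 8 experts execute for this token
```

The economics follow directly: a model can hold a huge parameter count while paying inference cost for only the activated slice.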

This commoditization of “smart” has forced a chaotic pivot inside Meta, leading to the departure of AI pioneer Yann LeCun. LeCun, an academic purist, refused to become a product manager for a technology he viewed as a mere “off-ramp” on the highway to AGI. His exit signals the end of corporate research charity; efficiency is the only metric that matters now.

II. The Current of Deep Water (Reasoning & Planning)

If Google represents industrial scale, Anthropic represents the “Deep Water” navigators.

In 2023, thousands of “autonomous agents” sank because they were shallow. They could predict the next word, but they couldn’t hold a plan. They would hallucinate a file path in step three and crash by step five. To find PMF in complex workflows, you need Long-Horizon Reasoning.


Anthropic is currently pushing this ceiling higher than anyone else. The release of Claude 3.5 Sonnet was the “Suez Canal” moment for agents—it crossed a logic threshold that finally made autonomous software development and complex auditing viable. While Google and OpenAI are following close behind, Anthropic’s focus on the “Digital Employee”—tools that can plan, click, and execute—is the current that founders building agentic workflows must catch.

III. The Current of Native Fluency (From Text to Multimodal)

The final current is the move from text-only brains to Native Multimodality. The ocean is no longer just words; it is pixels, sound waves, and real-time frames.

Google is leading the charge here by moving away from “Bolt-on” models. Most models today glue a separate vision encoder onto a text brain—it’s fast to build, but “lossy.” The model looks at the world through a foggy window. Google’s Gemini 3.0 was trained from day one on a “mixed soup” of tokens.

This native design is the reason for Gemini’s “agentic fluidity.” It doesn’t watch a video and then think; it thinks while it watches. It can react, laugh, or code in real-time without the lag of translation layers. If your startup requires human-speed interaction, you cannot fight the friction of modular models.

Navigation Tip: You Must Sail North to Go East

This is the hardest part for any founder. The destination (Profitability/PMF) is East. And your logic will say: Steer East.

But Urdaneta knew that steering East meant death because the trade winds were blowing against him. He had to go North. Into the cold. Into the unknown. All to catch the current that would eventually carry him home.

If you want to find your Manila Route, you have to stop thinking like a software vendor and start thinking like a lead navigator. I’ve seen too many founders sink because they stayed in the “safe” shallow waters of generic features.

  1. Hunt for “Contextual Friction”: Horizontal models (the “East” route) fail when they encounter fragmented data. Take a neighborhood coffee shop as an example: the “East” path is a generic AI social media scheduler. The “North” path is an AI that has a live API connection to the shop’s POS system, the local weather feed, and the supplier’s inventory. Your goal is to be the glue between data sources that have never talked to each other before.
  2. Build the “System of Record”: Wrappers die because they don’t own the data. To survive, you must become the place where the work lives. If you are the platform where the staff schedule is created and the inventory is tracked, you own the “Ground Truth.” Generalist models can’t compete with you because they don’t have access to the live state of the business.
  3. Choose Your “Current” Wisely: If you are building for high-volume, low-margin industries (like retail or logistics), catch the Google/TPU current and focus on efficiency and cost-per-task. If you are building for high-stakes, complex reasoning (like medical or legal), catch the Anthropic current and focus on long-horizon planning and tool-use precision.
  4. The “Wedge” Strategy: You don’t need to capture the whole market on day one. Pick one “impossible” problem (e.g., “AI that handles late-night supplier disputes for independent bakeries”). Once you’ve mastered the context for that one task, the current of your expertise will naturally pull you into the adjacent markets.

Once Urdaneta mapped the route, the secret was out. For the next two and a half centuries, every Spanish galleon took the exact same path.

Old, antique map of Southeast Asia by F. De Wit.

The same is true for us. The ocean of AI opportunities feels infinite, but the number of truly sustainable business models is limited. The “Manila Route” in the AI landscape will be dominated by the first fleet that maps the currents correctly.

We are entering a period where “luck” will be indistinguishable from “preparation.”

The ocean is vast. The currents are strong. But the route is there for those bold enough to chart it.

Finding the Atlas Together

In the Age of Discovery, no captain sailed entirely alone.

The greatest accelerator of exploration wasn’t better sails or stronger hulls—it was the shared logbook. Every time a navigator returned and shared his charts, the entire world got a little bigger, and the ocean got a little less deadly. The map of the modern world wasn’t drawn by a single genius; it was drawn by a thousand different hands, layering their hard-won secrets over one another.

We are doing the same thing here. The ocean of AI is too vast, and the currents move too fast, for any single captain to map in isolation. We are building FounderCoHo to be the exchange where we can swap charts, warn each other of the storms, and mark the clear passages.

This story—the Manila Route—is just one coordinate in a much larger journey. We are working on a full series of these narratives to help founders build a clear, reliable mental map for the road ahead. But we cannot draw the whole atlas ourselves.

We need your coordinates.

Do you have a story of a “North Turn” that saved your company? Have you found a current that the rest of the industry is missing? Or do you have a warning about a reef you hit in the dark?

Reply to this and tell us your story. We want to build this atlas together, so we can all find our way home.

If this chart helped you find your bearings, please feel free to connect with the author Jing Conan Wang

Stay tuned for the next chart!

Follow the FounderCoHo LinkedIn page

References:

  1. https://www.britannica.com/biography/Andres-de-Urdaneta
  2. https://read.dukeupress.edu/hahr/article/47/2/261/158102/Friar-Andres-de-Urdaneta-O-S-A
  3. https://cloud.google.com/transform/ai-specialized-chips-tpu-history-gen-ai
  4. https://medium.com/data-science-collective/the-origins-rise-and-evolution-of-the-tpu-672a0e4f1a2d
  5. https://www.datacamp.com/blog/claude-sonnet-anthropic
  6. https://www.sciencedirect.com/science/article/pii/S2666764925000451
  7. https://www.bbc.com/news/articles/cdx4x47w8p1o
  8. https://sanderusmaps.com/our-catalogue/antique-maps/asia/southeast-asia/old-antique-map-of-southeast-asia-by-f-de-wit-24650?srsltid=AfmBOoqOdLKh_Yw4ge6MW0Nlit6kr8CqvkY8MMl4jLdAofT0WOW9J3mA
  9. https://medium.com/@jelkhoury880/advancing-ai-reasoning-a-comprehensive-report-4982b7c19bdc
  10. https://ai.gopubby.com/inside-deepseek-v3-80283167673b

Subscribe to my newsletter for more stories and insights


Context Modeling: The Future of Personalized AI

Andrej Karpathy, a prominent voice in the AI community, recently brought the term “Context Engineering” to the forefront. It describes the intricate art of manually crafting prompts and data to guide Large Language Models. While the concept is gaining significant attention, I believe it points us in the wrong direction.

The future of personal AI isn’t about endlessly engineering context; it requires a radical shift to what I call “context modeling.”

This isn’t just semantics—it’s the difference between a temporary patch and a real solution.

The Limitations of Current RAG Systems

Today’s Retrieval-Augmented Generation (RAG) systems follow a relatively straightforward paradigm. They retrieve relevant information using rule-based systems—typically employing cosine similarity to find the top-k most relevant results—and then present this context to a large language model for processing. While this approach has proven effective in many scenarios, it suffers from significant limitations.
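That retrieval step can be sketched in a few lines. The toy two-dimensional embeddings and document names below are illustrative; real systems use a learned embedding model over a vector index:

```python
import math

# Minimal rule-based retrieval as described above: score every document
# against the query by cosine similarity, then keep the top-k results.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, docs, k=2):
    """docs: list of (doc_id, embedding) pairs; returns the k best doc_ids."""
    scored = [(doc_id, cosine(query_vec, emb)) for doc_id, emb in docs]
    scored.sort(key=lambda t: t[1], reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

docs = [
    ("refund-policy", [0.9, 0.1]),
    ("shipping",      [0.2, 0.8]),
    ("returns",       [0.8, 0.3]),
]
print(top_k([1.0, 0.2], docs))  # → ['refund-policy', 'returns']
```

The retrieved documents are then pasted into the LLM’s prompt; note that nothing in this stage adapts to the user, which is exactly the limitation the rest of this piece addresses.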

Think of current LLMs as exceptionally intelligent but stubborn team members. They excel at processing whatever information is presented to them, but they interpret data through their own fixed worldview. As these models become larger and more complex, they also become increasingly “frozen” in their approaches, making it difficult for developers to influence their internal decision-making processes.

From Engineering to Modeling: A Paradigm Shift

The conventional approach of context engineering focuses on creating more sophisticated rules and algorithms to manage context retrieval. However, this misses a crucial opportunity. Instead of simply engineering better rules, we need to move toward context modeling—a dynamic, adaptive system that generates specialized context based on the current situation.

Context modeling introduces a personalized model that works alongside the main LLM, serving as an intelligent intermediary that understands both the user’s needs and the optimal way to present information to the large language model. This approach recognizes that effective AI systems require more than just powerful models; they need intelligent context curation.

Learning from Recommendation Systems

The architecture for context modeling draws inspiration from the well-established two-stage recommendation systems that power many of today’s most successful platforms. These systems consist of:

  • Retrieval Stage: A fast, efficient system that processes large amounts of data with a focus on recall and speed.
  • Ranking Stage: A more sophisticated system that focuses on accuracy, distilling signal from noise to produce the best results.

RAG systems fundamentally mirror this architecture, with one key difference: they replace the traditional ranking component with large language models. This substitution enables RAG systems to solve open-domain problems through natural language interfaces, moving beyond the limited ranking problems that traditional recommendation systems address.

However, current RAG implementations have largely overlooked the potential for model-based retrieval in the first stage. While the industry has extensively explored rule-based retrieval systems, the opportunity for intelligent, adaptive context modeling remains largely untapped.

The Context Modeling Solution

Context modeling addresses this gap by introducing a specialized model dedicated to generating context dynamically. This model doesn’t need to be large or computationally expensive—it can be a focused, specialized system trained on relevant data that understands the specific domain and user needs.

The key advantages of context modeling include:

  • Adaptability: Unlike rule-based systems, context models can learn and adapt to new patterns and user behaviors over time.
  • Personalization: These models can be trained on user-specific data, creating truly personalized AI experiences that understand individual contexts and preferences.
  • Efficiency: By using smaller, specialized models for context generation, the system maintains efficiency while providing more intelligent context curation.
  • Developer Control: Context modeling provides agent developers with a trainable component they can influence and improve, creating opportunities for continuous learning and optimization.

The Ideal Architecture: Speed and Specialization

For context modeling to be viable, it must satisfy one critical requirement: speed. The latency of the core LLM is already a significant bottleneck in user experience.

Right now, the main workaround is streaming the response, but streaming cannot hide the latency to the first token, and the retrieval model’s end-to-end latency adds directly to it. Any context modeling system must be exceptionally fast to avoid compounding this delay.

This brings us to the concept of “thinking” models, which use their own internal mechanisms to retrieve and reason over context before generating a final answer. In a sense, these models perform a specialized form of context modeling. However, their primary challenge is that this “thinking” process is slow and computationally expensive.

I argue that these monolithic “thinking” models are an intermediate step. The optimal, long-term architecture will decouple the two primary tasks. It will feature two distinct models working in tandem, mirroring the two-stage systems that have been so successful in recommendations:

  1. A Fast Context Model: A highly optimized, specialized model dedicated solely to retrieving and generating the most relevant context at incredible speed.
  2. A Powerful Core Model: The large language model that receives this curated context and focuses on the complex task of reasoning, synthesis, and final response generation.

This dual-model approach allows for specialization, where each component can be optimized for its specific task, delivering both speed and intelligence without compromise.
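Under those assumptions, the two-stage pipeline might look like the following sketch; both models are stubbed, and a toy keyword-overlap matcher stands in for the trained context model:

```python
# Illustrative two-stage pipeline: a fast context model curates context,
# then a powerful core model reasons over it. MEMORY, the matcher, and
# the reply format are assumptions for the sketch, not a real design.

MEMORY = {
    "user_prefs": "prefers concise answers; works in Python",
    "last_topic": "vector databases",
    "calendar": "meeting at 3pm with the design team",
}

def fast_context_model(query: str) -> list:
    """Stage 1: cheap, recall-oriented retrieval (stand-in for a small trained model)."""
    q = set(query.lower().split())
    return [v for v in MEMORY.values() if q & set(v.lower().split())]

def core_model(query: str, context: list) -> str:
    """Stage 2: stand-in for the large LLM that reasons over curated context."""
    return f"Answer to {query!r} using {len(context)} context item(s)."

def answer(query: str) -> str:
    context = fast_context_model(query)  # fast, specialized stage runs first
    return core_model(query, context)    # slow, precision-oriented stage

print(answer("tell me about vector databases"))
```

Because the two stages are decoupled, the context model can be retrained on user data without touching the core model, which is where the personalization and developer-control advantages come from.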

The Infrastructure Opportunity

Context modeling represents a common infrastructure need across the AI industry. As more organizations deploy RAG systems and AI agents, the demand for sophisticated context management will only grow. This presents an opportunity to build foundational infrastructure that can support a wide range of applications and use cases.

The development of context modeling systems requires expertise in both machine learning and system design, combining the lessons learned from recommendation systems with the unique challenges of natural language processing and generation.

Looking Forward

The future of personalized AI lies not in building ever-larger language models, but in creating intelligent systems that can effectively collaborate with these powerful but inflexible models. Context modeling represents a crucial step toward this future, enabling AI systems that are both powerful and adaptable.

As we move forward, the organizations that successfully implement context modeling will have a significant advantage in creating AI systems that truly understand and serve their users. The shift from context engineering to context modeling isn’t just a technical evolution—it’s a fundamental reimagining of how we build intelligent systems that can adapt and personalize at scale.

The question isn’t whether context modeling will become the standard approach, but how quickly the industry will recognize its potential and begin building the infrastructure to support it. The future of personalized AI depends on our ability to move beyond static rules and embrace dynamic, intelligent context generation.

Questions or feedback? I’d love to hear your thoughts.

Want more insights? Follow me:

🎙️ Founder Interviews: https://www.youtube.com/@FounderCoHo
Conversations with successful founders and leaders.

🚀 My Journey: https://www.youtube.com/@jingconan
Building DeepVista from the ground up.


Think Wider: AI as Perspective Partners

Everyone’s obsessed with making AI reason deeper — training models to solve complex mathematics and master intricate proofs. But in this race for artificial intelligence, we’ve forgotten something fundamental about human intelligence – how we actually think and work with others.

Deep reasoning is important, but it’s only half the story. What if, instead of treating AI as tools to be prompted, we could engage with them as naturally as we do with brilliant colleagues – each bringing their unique perspective to the conversation? This isn’t just about making AI think deeper – it’s about unlocking its breadth of knowledge through better questions.

Asking good questions is harder than finding answers. Try this mental exercise: give yourself two minutes to write down ten meaningful questions about different topics. Hard, isn’t it? The difficulty lies in where to even begin. With regular problems, we at least have a starting point – a puzzle to solve, a goal to reach. But with questions, we’re creating the map before we know the territory. It’s not about finding the right path; it’s about imagining what paths might exist in the first place.

And this is where LLMs shine: they’re incredible brainstorming partners. Not just because they can process vast amounts of information, but because they can consider countless angles that might never occur to us. Humans are limited by our experiences and cognitive biases. LLMs don’t have these limitations. They can draw connections between seemingly unrelated concepts and suggest possibilities we might never have considered.

The challenge isn’t that LLMs lack knowledge – they have plenty. The real problem is that we don’t know how to extract it effectively. We initially thought prompt engineering was the magic key, but that turned out to be too simplistic. It works, but it feels mechanical and constrained. We don’t think about conversation frameworks or prompt strategies when chatting with friends. We just… talk.

What we need is AI that can match this natural fluidity of human conversation. Imagine an AI system that could seamlessly shift between different types of expertise, letting you discuss quantum physics with a renowned physicist like Albert Einstein, then transition to a conversation about Renaissance art with an art historian like Giorgio Vasari. This isn’t merely about role-playing; it’s about accessing diverse perspectives and specialized ways of thinking to enhance your understanding and problem-solving abilities.

I’ve started calling this concept “persona-driven discovery,” and I believe it could revolutionize how we learn and solve problems by acting as a catalyst for serendipity.

We’ve all had those magical moments in libraries where we stumble upon exactly the book we needed but weren’t looking for. These AI systems could create those moments deliberately, suggesting unexpected perspectives and prompting us to explore unfamiliar territories. It’s like having a brilliant friend who knows when to push you out of your intellectual comfort zone.

All of this points toward a future where AI tools aren’t just answering our questions but actively participating in our thinking process. They could help us prototype ideas faster, facilitate group brainstorming sessions, and create personalized learning experiences that adapt to our individual ways of thinking.

The real breakthrough will come when we stop thinking about these systems as tools and start thinking about them as thought partners. This shift isn’t just semantic – it’s fundamental to how we might solve problems in the future. Instead of asking an AI to complete a task, we might engage it in a genuine dialogue that helps us see our challenges from new angles.

The building blocks are already there: we have models that can process and generate human-like text, we have systems that can maintain context in conversations, and we’re developing better ways to keep AI knowledge current and relevant. 

There are still unsolved problems. When we make AI systems more specialized (like training them to be history experts), they often lose their broader capabilities. It’s the same trade-off we see in human experts – deep knowledge in one area often comes at the cost of breadth. The trick will be creating systems that maintain both depth and breadth, switching between different modes of thinking without losing their fundamental capabilities.

This is what role-aware AI systems could offer – not just rigid question-and-answer sessions, but fluid conversations where different perspectives emerge organically as needed. Each AI “participant” would bring their unique expertise and viewpoint while staying current with the latest developments in their field. They would build on each other’s insights, challenge assumptions, and help you see problems from angles you might never have considered on your own. The key isn’t just having access to different types of knowledge, but having them work together in a way that mirrors the natural give-and-take of human conversation.
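To make the idea concrete, here is a minimal sketch of what the plumbing of a role-aware system might look like: each persona is just a system prompt plus some routing keywords, and a router picks who answers. The persona definitions and the keyword-matching heuristic are my own illustrative assumptions, not a description of any existing product.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    system_prompt: str
    keywords: set

# Two toy personas; a real system would have many, with richer prompts.
PERSONAS = [
    Persona("physicist", "You are a physicist. Reason from first principles.",
            {"quantum", "energy", "relativity"}),
    Persona("art_historian", "You are a Renaissance art historian.",
            {"painting", "fresco", "renaissance"}),
]

def route(question: str) -> Persona:
    """Pick the persona whose keywords overlap most with the question."""
    words = set(question.lower().split())
    return max(PERSONAS, key=lambda p: len(p.keywords & words))

def build_messages(question: str) -> list:
    """Assemble the chat payload an LLM API would receive."""
    persona = route(question)
    return [
        {"role": "system", "content": persona.system_prompt},
        {"role": "user", "content": question},
    ]

print(build_messages("How did fresco painting techniques evolve?")[0]["content"])
```

The interesting engineering is in the router: keyword overlap is the crudest possible choice, and the fluid conversations described above would need the switch to happen mid-dialogue without dropping context.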

The potential impact of this shift could be profound. Just as the internet changed how we access information, these AI thought partners could change how we process and use that information. They could help us break out of our mental ruts, see connections we might have missed, and approach problems from angles we might never have considered.

This is the future I’m excited about – not one where AI replaces human thinking, but one where it enhances and expands it in ways we’re just beginning to imagine.

BTW, this essay was written after a long brainstorming session with the Hachi of “Paul Graham”. You can check it out here: https://go.hachizone.ai/pg-think-wider

Please message me at jing AT hachizone.ai if you have any feedback on this essay or Hachi in general.


Rethinking Digital Discovery: From Algorithms to Human Perspectives

In our quest to organize the world’s information, we’ve created two dominant systems: search engines and recommendation algorithms. Both promised to make discovery easier, yet each has introduced its own set of challenges. Let’s examine why these systems fall short and how we might find a better way forward.

The Consensus Trap of Search Engines

In 1998, Google revolutionized the internet with PageRank, an algorithm that organized information through collective wisdom. The premise was elegant: websites with more backlinks were probably more important and trustworthy. It was democracy in action – the internet voting on itself through links.

While this approach works beautifully for factual queries like “what is the speed of light,” it struggles with nuanced topics where diversity of perspective matters more than consensus. The very nature of PageRank creates a self-reinforcing cycle: popular sites become more visible, leading to more backlinks, leading to even greater visibility.

This system inadvertently flattens the richness of human knowledge into a popularity contest. It’s as if we’re asking the entire world to vote on the best restaurant in your neighborhood – the results might reflect broad appeal, but they’re unlikely to match your specific tastes or needs.
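The self-reinforcing cycle is easy to see in a toy power-iteration PageRank. In this sketch (the four-page link graph is invented purely for illustration), the page everyone links to ends up dominating the ranking:

```python
links = {           # page -> pages it links to
    "A": ["B"],
    "B": ["A"],
    "C": ["A"],
    "D": ["A", "B"],
}
pages = list(links)
damping = 0.85
rank = {p: 1 / len(pages) for p in pages}   # start with uniform rank

for _ in range(50):                          # iterate until the ranks settle
    new = {p: (1 - damping) / len(pages) for p in pages}
    for p, outs in links.items():
        share = damping * rank[p] / len(outs)   # split rank among outlinks
        for q in outs:
            new[q] += share
    rank = new

print(max(rank, key=rank.get))               # the most-linked page wins
```

Because rank flows along links, page A’s popularity compounds on every iteration – exactly the consensus-over-nuance effect described above.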

The Echo Chamber of Recommendation Systems

On the other side, we have recommendation systems that promise personalization but often trap us in what we call “rabbit holes.” These algorithms study our behavior and serve us more of what we’ve liked before, creating increasingly narrow feedback loops.

Start watching a few cooking videos, and suddenly your entire feed becomes culinary content. Click on a political article, and your recommendations quickly become an echo chamber of similar viewpoints. While this approach maximizes engagement, it does so at the cost of serendipity – those unexpected discoveries that broaden our horizons.

The problem isn’t just that these systems can be limiting; it’s that they operate as black boxes. Users have little understanding of why they’re seeing certain content and even less control over steering their discovery journey.
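A toy simulation makes the rabbit-hole dynamic tangible. This sketch assumes a deliberately naive recommender that serves the last-clicked category 90% of the time; the categories and the 90% exploit rate are made-up parameters, not how any real platform works:

```python
import random
random.seed(0)                      # fixed seed so the run is reproducible

categories = ["cooking", "politics", "travel", "science"]
history = ["cooking"]               # the user clicked a single cooking video

def recommend(history, n=10, exploit=0.9):
    """Serve mostly the last-clicked category, rarely exploring."""
    feed = []
    for _ in range(n):
        if random.random() < exploit:
            feed.append(history[-1])                # exploit: more of the same
        else:
            feed.append(random.choice(categories))  # explore: anything goes
    return feed

feed = recommend(history)
print(feed.count("cooking"), "of", len(feed), "items are cooking")
```

One click is enough to tilt the whole feed, and because the `exploit` knob lives inside the system, the user never sees or controls it – the black-box problem in miniature.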

Looking Back to Move Forward

Interestingly, the solution to these modern challenges might lie in how we discovered information before these technologies existed. Think back to how we naturally sought out knowledge: through conversations with friends, colleagues, and mentors.

When we wanted to discover new books, we didn’t poll the entire world or rely on an algorithm to analyze our past reading habits. Instead, we talked to friends whose taste in literature we trusted. When we needed restaurant recommendations, we asked colleagues who shared our culinary preferences.

This system worked because:

  1. We understood exactly why we valued each person’s perspective
  2. We could actively choose whose recommendations to seek out
  3. Different friends offered different viewpoints, naturally creating diversity
  4. Serendipitous discoveries happened organically through conversation

The Power of Personal Perspective

What if we bring this human-centered approach to digital discovery? Imagine a system that doesn’t try to replace human judgment with algorithms, but instead helps you find and follow the curators whose perspectives you value.

This isn’t just personalization based on your past behavior – it’s about actively choosing whose lens you want to view the world through. A food critic might have thousands of followers, but you might prefer your friend’s hole-in-the-wall recommendations because they understand your particular palate.

The beauty of this approach is that it preserves what makes human curation special:

  • Natural serendipity through the diverse interests of your trusted curators
  • Full transparency about why you’re seeing certain content
  • Control over whose perspectives influence your discovery
  • The ability to step out of your comfort zone by following curators with different viewpoints

A New Path Forward

The future of information discovery isn’t about achieving perfect consensus through PageRank, nor is it about increasingly sophisticated recommendation algorithms. It’s about recognizing that people – with their unique perspectives, expertise, and ability to surprise us – are the ultimate curators of information.

By bringing the human element back to discovery, we can create a system that offers both personalization and serendipity, both efficiency and understanding. Most importantly, we can build a system that puts users back in control of their discovery journey.

The future of discovery isn’t about finding what algorithms think is best – it’s about connecting with the human perspectives that truly resonate with you.


Beyond the Hype: Three Lessons from a Startup Rollercoaster

My first startup journey was a rollercoaster – a wild ride that began with a spark of an idea.

Drawing from my experience at Google Brain, I had a strong instinct that incorporating human feedback through reinforcement learning would significantly improve the experience of LLMs in dialogue. I started exploring this idea to build an assistant for knowledge workers in 2021. My gut told me there was a big opportunity in the space, but I wasn’t sure when the mass market would recognize it. It later turned out that this was one of the fundamental ideas behind ChatGPT.

However, everyone said the space was tiny back then, and potential investors and peers saw little value in my concept. As a first-time founder, the unanimous skepticism made me doubt my vision. I ended up wasting a lot of time and getting distracted by other directions.

I eventually pushed forward, though not fast enough. We launched in September of 2022, right before ChatGPT was released. When we launched, we were amazed by the positive feedback and delight we got from customers. However, within months, ChatGPT’s release completely transformed the landscape. Suddenly, customers began to have much higher expectations and hesitated to sign contracts.

It was obvious that we had to build more to make ourselves stand out. However, the process felt like a constant chase of whimsical hope. We tried to tackle different parts of our customers’ workflow. Customers told us they liked our AI capability but missed features they longed for from their existing vendors. We tried to build those, but existing incumbents would quickly add AI capabilities similar to ours, making our new solutions seem redundant.

I ended up starting a new startup in a completely different space. But I learned three crucial lessons from the first startup journey.

You need to trust your gut.

I wasted a precious year that could have helped us build a more defensible moat and establish ourselves as the market leader, better preparing us for when ChatGPT’s storm hit. A few months of lead time is not enough—you often need more. When everyone says your idea is impractical, it is the best time to build your competitive advantage. Innovation rarely comes from following the crowd. It emerges from the courage to pursue ideas that seem impossible—until they aren’t.

Unique insight means nothing without a defensible moat.

Consider your competitive moat early on. While talking to customers helps identify valuable problems to solve, it doesn’t guarantee that your solution will be the only one customers choose. You need to figure out why customers would pick your solution over alternatives.

Technology is not a silver bullet; it is actually a very poor moat because ideas diffuse naturally.

At the beginning, the only competitive advantage of a startup is the time you get because of the ignorance of big players. But you need to turn it into an actual moat — reasons why customers should use you, not other players. For B2B companies, the reason is often data or customer relationships. For B2C, the reason is often branding or better user experiences.

Plus, you need to make sure you have the resources to build your moat, which requires strategic planning if you are already in fierce competition. (Unfortunately, it often creates conflicts with the customer-first culture in B2B).

Don’t be afraid to restart from zero.

If you’re facing strong headwinds and haven’t had time to build a moat, take a step back. Rethink what other valuable problems exist that you believe in but others haven’t yet recognized.

My story isn’t unique – it’s a microcosm of the startup ecosystem. Innovation isn’t about having the perfect idea from day one. It’s about resilience, adaptability, and the willingness to transform setbacks into insights.

Ultimately, I started a new venture in a different sector, carrying these lessons like a compass. Each “failure” was actually a sophisticated learning experience, and helped me transform into a true entrepreneur.


Goodbye Faceless Algorithms, Hello Hachi!

It’s no secret that boredom and loneliness are an epidemic. The average American spends three hours a day scrolling through online content, usually solo.

What’s more, your content stream is controlled by a faceless “algorithm” that feeds you content over which you have very little control or knowledge.

I know this game inside out. I was one of the Google Brain researchers who worked on YouTube’s AI engine, the most popular in the world.

We built this with good intent: to surface better content to users and keep them engaged – and it worked!

However, there’s one important problem: there exists only one algorithm for everyone in the world. Users have no control over what information they see, and creators have to tailor their content to “game” the algorithm.

Let’s be real, no one’s a fan of this singular, faceless algorithm. That’s why I’m building something new, with the belief that people should be able to choose how they discover information, and be able to put a face to a name.

I’m excited to announce that we are working on Hachi — unique personas that help you search for information, whether it’s text, images, or videos. With Hachi, you will be able to:

  1. Choose your muse. Just like you choose your friends, you choose which Hachi you want to spend time with. Imagine having one Hachi for #vanlife, another for the latest Taylor and Travis gossip, and another for Minecraft.
  2. Stay trendy. Hachis are constantly discovering new creators and content that are trending, based on your common interests.
  3. Never feel alone. Hachis explore with you, share their own insights, and are ready to chat anytime you’d like.

To make this a reality, my co-founder Nancy and I have been hard at work building Hachi over the last few months. If this sounds interesting to you, please join our mailing list: https://go.hachizone.ai/mailinglist and our Discord community: https://go.hachizone.ai/discord.

We’d love to hear your early thoughts and feedback!

– Jing Conan Wang


Why Is It So Hard to Create a Funny AI?

Large Language Models (LLMs) like ChatGPT have shown impressive results in the past two years. However, people have realized that while these models are incredibly knowledgeable, they often lack humor. Ask any mainstream LLM to tell you a joke, and you’ll likely receive a dull, dad-joke-level response.

For example, here is a joke from ChatGPT:

The problem goes beyond just being funny. LLMs have failed to create memorable personalities, resulting in shallow interactions. This is why most AI companion products feel like role-playing games: because people get bored quickly with one character, platforms need to encourage users to create many characters to keep them engaged.

Why does this happen? There are two main reasons:

The first is that LLMs lack the capability for deep reasoning. Creating humor is a challenging task: the two key ingredients, surprise and contradiction, demand a profound understanding of how things logically work followed by an intentional deviation from the norm.

However, LLMs struggle to understand deep logical connections and context, which are essential for humor. They tend to focus on literal interpretations, missing the subtleties that make language humorous.

The second is the limitation of datasets and evaluation: many models are trained to excel on specific benchmarks and tasks that are outdated. Existing LLM evaluations focus heavily on question answering and academic tests because researchers can easily access those. This has resulted in an overemphasis on one particular subdomain at the expense of more nuanced language understanding and creative expression. Consequently, responses generated by these models lack personality.

What is the likely path ahead? Here is my take:

1. Better Human-AI collaboration. As current LLMs struggle with understanding the deeper logic of language, making them truly funny might require significant advancements in their reasoning capabilities.

Some progress will come as LLMs keep gaining parameters. However, there is a long way to go, as it is hard to turn an LLM into a reasoning machine. A more realistic approach is to harness human wisdom and creativity to help LLMs bypass complex logical reasoning and directly generate funny content.

Humans are actually very good at capturing nuance. It is easier to develop an AI that leverages this human capability than to build the capability into the LLM itself. For example, when creating funny comments for videos, using existing human comments from the video can boost the quality of the jokes. This falls into the domain of Human-based Computation. One famous example is CAPTCHA, which verifies that a user is not a bot while simultaneously teaching machines to solve hard computer vision tasks.

2. Online Learning: Most current LLMs are offline, taking months to train and then remaining frozen. This makes it nearly impossible for models to adapt based on real-time human feedback. One could argue that retrieval-augmented generation (RAG) is a poor man’s way of letting LLMs learn facts in real time; however, simple RAG cannot capture nuance. We need to design online-learning methods that allow models to capture and incorporate human feedback in near real time.
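For readers unfamiliar with the term, the simple RAG mentioned above can be sketched in a few lines: fetch the stored snippet most similar to the query and prepend it to the prompt. The word-overlap scoring and the memory contents here are simplified assumptions; real systems use embedding similarity over a vector store.

```python
# "memory" stands in for a document store refreshed after training ended.
memory = [
    "The 2024 tour announcement was posted yesterday.",
    "The model was trained on data up to 2023.",
]

def retrieve(query: str) -> str:
    """Return the memory snippet sharing the most words with the query."""
    q = set(query.lower().split())
    return max(memory, key=lambda s: len(q & set(s.lower().split())))

def augment(query: str) -> str:
    """Prepend the retrieved context to the prompt sent to the LLM."""
    return f"Context: {retrieve(query)}\nQuestion: {query}"

prompt = augment("When was the 2024 tour announcement posted?")
print(prompt)
```

This gets fresh facts into the context window, but nothing in the loop ever updates the model itself, which is why true online learning remains the harder, unsolved part.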

3. Better Evaluation: Current datasets for evaluating AI-generated humor are too narrowly focused. The AI community needs to overcome this limitation to create more comprehensive assessment tools.

The way we interact with Large Language Models (LLMs) is just as important as the answers they provide. Engaging with AI shouldn’t be a dull experience. By following the directions described above, I believe we will be able to create truly funny AI systems that engage in witty, personable interactions with humans in the foreseeable future.


Why You Should Build a Consumer GenAI Startup and How to Make it Happen

While conventional wisdom holds that B2B startups are the safer choice, is this really the case? Let’s delve into why a consumer-focused GenAI startup might actually be your golden ticket.

In 2023, the startup landscape of GenAI applications experienced a remarkable surge, propelled by the advent of ChatGPT and foundation models such as GPT-4 and Anthropic’s Claude. Over the past year, venture capital has invested at least $21 billion into GenAI, and most GenAI applications have primarily targeted B2B, particularly productivity improvement. In the latest Y Combinator batch, 65% of the startups fall within the B2B SaaS and enterprise sectors, whereas only 11% are focused on consumer-oriented verticals. The most popular product form is the AI assistant.

Current Challenges in B2B GenAI

However, as we transition into 2024, it has become evident that many startups in the domain face significant challenges. A majority of these B2B GenAI companies are grappling with financial losses and frequently pivoting in an attempt to find product-market fit.

Many startup founders struggle to convert proof-of-concept contracts into full annual agreements, often facing significant limitations in their bargaining power over pricing. Despite the $21 billion in VC investment, GenAI startups generated only around $1 billion in revenue.

Heavy competition is one of the main challenges for startups in converting Proof-of-Concept contracts. But why is there such a strong focus on productivity improvement applications? The reasons are multifaceted and stem from various technology and market dynamics:

First, it is related to the nature of the current foundational models. Foundation models such as GPT-4 are the result of significant research breakthroughs and depend extensively on benchmarks that have been established within the academic community. Historically, these benchmarks have predominantly focused on knowledge-based tasks. For example, the benchmarks used in the GPT-4 technical report primarily consist of academic tests. Essentially, what we are creating with these models are entities akin to exceptionally skilled students or professors. This orientation naturally steers generative AI applications toward productivity enhancements. Consequently, it’s not surprising that students are the primary users of many AI-assisted products like copilots.

Second, there is a B2B-first culture in the American startup ecosystem, which has predominantly favored B2B ventures; the consumer sector has received significantly less investment over the past decade, and startup founders in the US are afraid to build consumer startups. Although other countries such as China do not exhibit this fixed mindset, the US has been the global leader in generative AI research and substantially influences trends worldwide.

Third, the GenAI infrastructure boom levels the playing field for everyone. In 2023, the majority of investments were directed toward GenAI infrastructure, with many investment firms likening it to a “gold rush.” There’s a prevailing belief that, much like the merchants who sold supplies during a gold rush, those who provide the essential tools and services will profit first. The following figure shows that $16.9 billion of the $21 billion in VC money was spent on GenAI infrastructure. Newer players can always leverage better infrastructure.

Source: Sequoia Capital’s AI Ascent 2024 opening remarks

Due to the factors mentioned above, competition among productivity-focused GenAI applications is intense, undermining the ability of startups in this space to extract value from customers. As a result, the entire ecosystem remains predominantly financed by venture capital.

The Untapped Potential of Consumer GenAI

History often repeats itself. During the Internet boom of the 1990s, emphasis was initially placed on B2B applications. However, it turned out that the integration of the Internet into business contexts would take longer than anticipated. Salesforce pioneered the SaaS model, but it took nearly a decade to reach the $1 billion revenue milestone. In contrast, consumer applications have proven to be a quicker avenue for both creating and capturing value.

Google, Facebook, and Amazon have each developed consumer products that serve billions of people, discovering unique methods to monetize the internet by reaching vast audiences cost-effectively. Additionally, this approach has proven to be an effective strategy for building strong competitive advantages, or moats.

Strategies for Success

The 7 Powers framework is a crucial tool for analyzing business opportunities, identifying seven key levers: Scale Economies, Network Economies, Counter-Positioning, Switching Costs, Branding, Cornered Resource, and Process Power.

Counter-Positioning and Process Power are typically the only levers B2B GenAI startups can pull, because incumbents hold advantages in the other areas. In contrast, consumer GenAI startups have the potential to develop competitive moats across almost all of these powers, providing numerous strategic advantages, especially if your founding team has strong technical capability in AI models and infrastructure.

It’s crucial for Consumer GenAI companies to own their AI models and infrastructure. This ownership not only fosters the development of Scale and Network Economies but also secures Cornered Resources, enhancing competitive advantage and market positioning.

On the one hand, to create a successful consumer app, controlling costs is crucial. The historical trend toward ever larger and more powerful models has made them unsuitable for consumer applications, because the lifetime value (LTV) of consumer use cases is typically much lower. For example, the LTV of a user is often just $20-30, yet that user might ask hundreds of questions, while utilizing all the tokens in a GPT-4 context can cost approximately $1.28 for a single call. Developing in-house expertise to create models that are both powerful and cost-effective is crucial to bridging the gap.
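A quick back-of-envelope calculation shows why this gap matters. The $20-30 LTV and the roughly $1.28 full-context GPT-4 call are the figures cited above; the 200 questions per user and the 5% average context usage are my own assumptions for illustration:

```python
ltv = 25.0                 # midpoint of the $20-30 lifetime value range
questions_per_user = 200   # "hundreds of questions" (assumed)
cost_full_context = 1.28   # cited cost of one maxed-out GPT-4 call

# Assume a typical question uses only 5% of the full context window.
cost_per_question = cost_full_context * 0.05
total_cost = questions_per_user * cost_per_question
print(f"serving cost per user: ${total_cost:.2f} vs LTV ${ltv:.2f}")
```

Even under these charitable assumptions, inference eats roughly half of the user’s lifetime value before any other cost, which is the economic case for owning cheaper models.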

The good thing is that consumer applications are usually much more tolerant of hallucination and might not need the most powerful model. In addition, the evolution of open-source models has enabled startups to develop their own models cost-effectively. With the recent launch of LLaMa 3, its small 8B model has outperformed the largest LLaMa 2 model, and there is anticipation that the 400B model, currently in training, will match the performance of GPT-4. These advancements make it feasible for startups to create high-performing models at a fraction of the cost of proprietary ones, although significant investment is still necessary to reduce costs enough to support large-scale consumer applications.

On the other hand, current foundational models are not ideally suited for creating robust consumer applications, as most large language models lack personalization and long-term memory capabilities. Developing new foundational models or adapting existing ones to better suit consumer needs is a critical challenge that Consumer GenAI startups must address.

Despite these challenges, startups that successfully tackle these issues can secure a significant competitive edge and establish long-lasting market dominance.

Thanks for reading this article; I hope it was useful for you. If you have any questions or thoughts, please don’t hesitate to comment or message me at jing@jingconan.com 🤗


Unlocking the Wonders of Imaginative Play: A Journey into the Magic of Childhood

One day, my two-year-old daughter, Adalyn, approached me with a desk lamp and a handful of blue glass balls. Puzzled, I watched as she arranged them before me and then asked, “Daddy, what animal is this?”

I couldn’t fathom how a desk lamp and some stones could possibly resemble an animal. For a brief moment, I felt utterly perplexed.

However, after pausing for a few seconds, it dawned on me—Adalyn had conjured up a magical world within her imagination.

Though to me it seemed like mere objects, to her, they were the building blocks of an enchanting creature.

Grateful for the power of imagination and the assistance of technology, I decided to enlist the help of AI to bring Adalyn’s creation to life.

I sent a picture to ChatGPT with the query, “What type of animal is this?” and eagerly awaited its response.

In a matter of moments, ChatGPT wove a beautiful tale:

“In the heart of the magical forest, where the trees whispered secrets and the moonlight danced, lived a giraffe named Zara. With each step she took, her footprints left behind a trail of shimmering blue, marking her path through the enchanted woods…”

As I read the story aloud, Adalyn’s eyes lit up with joy. “Zara!” she exclaimed, delighted to have her creation given life and a name.

At that moment, I realized that it wasn’t my understanding alone that made her happy—it was the connection, the validation of her imagination, and the shared experience of storytelling.

Adalyn beamed at me and proclaimed, “You’re the best dad in the world!” But deep down, I knew it wasn’t solely my doing. It was the magic of childhood imagination and the wonders that technology and storytelling can bring to life.

This experience is also known as Imaginative Play, a crucial activity for child development. Sadly, in today’s fast-paced world, few parents engage in imaginative play with their children. While immensely enjoyable, it demands a significant amount of imagination and mental energy. Unfortunately, as adults, many of us lose touch with our imagination over time, as society often prioritizes “correct answers” over creative thinking.

Upon reflection, I realized that Imaginative Play isn’t just beneficial for children—it holds value for adults too. It fosters creativity, problem-solving skills, and emotional intelligence, all of which are essential for navigating the complexities of life.

Eager to share this revelation, I recounted my experience with my Toastmasters club. I shared how a seemingly mundane moment with my daughter sparked a journey into a magical realm of creativity and imagination. Through this story, I hoped to inspire others to embrace their inner child, reclaim their imagination, and rediscover the joy of imaginative play.

Please check out this video below: