
Context Modeling: The Future of Personalized AI

Andrej Karpathy, a prominent voice in the AI community, recently brought the term “Context Engineering” to the forefront. It describes the intricate art of manually crafting prompts and data to guide Large Language Models. While the concept is gaining significant attention, I believe it points us in the wrong direction.

The future of personal AI isn’t about endlessly engineering context; it requires a radical shift to what I call ‘context modeling.’

This isn’t just semantics—it’s the difference between a temporary patch and a real solution.

The Limitations of Current RAG Systems

Today’s Retrieval-Augmented Generation (RAG) systems follow a relatively straightforward paradigm. They retrieve relevant information using rule-based systems—typically employing cosine similarity to find the top-k most relevant results—and then present this context to a large language model for processing. While this approach has proven effective in many scenarios, it suffers from significant limitations.
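
As a rough illustration, here is what that rule-based retrieval stage looks like in miniature (a simplified sketch; production systems use learned embedding models and approximate nearest-neighbor indexes, but the top-k cosine-similarity logic is the same):

```python
import numpy as np

def top_k_retrieval(query_vec, doc_vecs, k=3):
    """Rank documents by cosine similarity to the query and keep the top k."""
    # Normalize so that dot products equal cosine similarities.
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    top = np.argsort(scores)[::-1][:k]
    return top, scores[top]

# Toy corpus: four documents embedded in a 3-dimensional space.
docs = np.array([[0.9, 0.1, 0.0],
                 [0.1, 0.8, 0.2],
                 [0.7, 0.3, 0.1],
                 [0.0, 0.2, 0.9]])
indices, scores = top_k_retrieval(np.array([1.0, 0.2, 0.0]), docs, k=2)
print(indices, scores)  # indexes of the two most similar documents, then their scores
```

The retrieved documents are then concatenated into the LLM’s prompt as context.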

Think of current LLMs as exceptionally intelligent but stubborn team members. They excel at processing whatever information is presented to them, but they interpret data through their own fixed worldview. As these models become larger and more complex, they also become increasingly “frozen” in their approaches, making it difficult for developers to influence their internal decision-making processes.

From Engineering to Modeling: A Paradigm Shift

The conventional approach of context engineering focuses on creating more sophisticated rules and algorithms to manage context retrieval. However, this misses a crucial opportunity. Instead of simply engineering better rules, we need to move toward context modeling—a dynamic, adaptive system that generates specialized context based on the current situation.

Context modeling introduces a personalized model that works alongside the main LLM, serving as an intelligent intermediary that understands both the user’s needs and the optimal way to present information to the large language model. This approach recognizes that effective AI systems require more than just powerful models; they need intelligent context curation.

Learning from Recommendation Systems

The architecture for context modeling draws inspiration from the well-established two-stage recommendation systems that power many of today’s most successful platforms. These systems consist of:

  • Retrieval Stage: A fast, efficient system that processes large amounts of data with a focus on recall and speed.
  • Ranking Stage: A more sophisticated system that focuses on accuracy, distilling signal from noise to produce the best results.

RAG systems fundamentally mirror this architecture, with one key difference: they replace the traditional ranking component with large language models. This substitution enables RAG systems to solve open-domain problems through natural language interfaces, moving beyond the limited ranking problems that traditional recommendation systems address.

However, current RAG implementations have largely overlooked the potential for model-based retrieval in the first stage. While the industry has extensively explored rule-based retrieval systems, the opportunity for intelligent, adaptive context modeling remains largely untapped.

The Context Modeling Solution

Context modeling addresses this gap by introducing a specialized model dedicated to generating context dynamically. This model doesn’t need to be large or computationally expensive—it can be a focused, specialized system trained on relevant data that understands the specific domain and user needs.

The key advantages of context modeling include:

  • Adaptability: Unlike rule-based systems, context models can learn and adapt to new patterns and user behaviors over time.
  • Personalization: These models can be trained on user-specific data, creating truly personalized AI experiences that understand individual contexts and preferences.
  • Efficiency: By using smaller, specialized models for context generation, the system maintains efficiency while providing more intelligent context curation.
  • Developer Control: Context modeling provides agent developers with a trainable component they can influence and improve, creating opportunities for continuous learning and optimization.

The Ideal Architecture: Speed and Specialization

For context modeling to be viable, it must satisfy one critical requirement: speed. The latency of the core LLM is already a significant bottleneck in user experience.

Right now, the main workaround is streaming the response. However, the latency to the first token cannot be mitigated by streaming. The end-to-end latency of the retrieval model contributes to the latency of the first token. Any context modeling system must be exceptionally fast to avoid compounding this delay.
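
To put illustrative numbers on it: if rule-based retrieval takes 50 ms while a context model takes 500 ms, and the core LLM then needs 400 ms of prefill before emitting its first token, time-to-first-token grows from roughly 450 ms to 900 ms. These figures are hypothetical, but the structure of the sum is not: the retrieval stage’s latency adds directly to first-token latency, and streaming hides none of it.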

This brings us to the concept of “thinking” models, which use their own internal mechanisms to retrieve and reason over context before generating a final answer. In a sense, these models perform a specialized form of context modeling. However, their primary challenge is that this “thinking” process is slow and computationally expensive.

I argue that these monolithic “thinking” models are an intermediate step. The optimal, long-term architecture will decouple the two primary tasks. It will feature two distinct models working in tandem, mirroring the two-stage systems that have been so successful in recommendations:

  1. A Fast Context Model: A highly optimized, specialized model dedicated solely to retrieving and generating the most relevant context at incredible speed.
  2. A Powerful Core Model: The large language model that receives this curated context and focuses on the complex task of reasoning, synthesis, and final response generation.

This dual-model approach allows for specialization, where each component can be optimized for its specific task, delivering both speed and intelligence without compromise.
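
As a minimal sketch, assuming hypothetical model interfaces (nothing here is a real API), the dual-model pipeline might be wired together like this:

```python
def answer(query, user_profile, fast_context_model, core_llm):
    """Two-stage pipeline: a small model curates context, a large model reasons."""
    # Stage 1: the fast, specialized model retrieves, reranks, and compresses
    # context, personalized to this user. Its latency budget is tight because
    # it sits on the critical path to the first token.
    context = fast_context_model.generate_context(query=query, profile=user_profile)

    # Stage 2: the powerful core model receives only the curated context and
    # focuses on reasoning, synthesis, and response generation.
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return core_llm.complete(prompt)
```

Because the two components are decoupled, each can be optimized, retrained, and scaled independently.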

The Infrastructure Opportunity

Context modeling represents a common infrastructure need across the AI industry. As more organizations deploy RAG systems and AI agents, the demand for sophisticated context management will only grow. This presents an opportunity to build foundational infrastructure that can support a wide range of applications and use cases.

The development of context modeling systems requires expertise in both machine learning and system design, combining the lessons learned from recommendation systems with the unique challenges of natural language processing and generation.

Looking Forward

The future of personalized AI lies not in building ever-larger language models, but in creating intelligent systems that can effectively collaborate with these powerful but inflexible models. Context modeling represents a crucial step toward this future, enabling AI systems that are both powerful and adaptable.

As we move forward, the organizations that successfully implement context modeling will have a significant advantage in creating AI systems that truly understand and serve their users. The shift from context engineering to context modeling isn’t just a technical evolution—it’s a fundamental reimagining of how we build intelligent systems that can adapt and personalize at scale.

The question isn’t whether context modeling will become the standard approach, but how quickly the industry will recognize its potential and begin building the infrastructure to support it. The future of personalized AI depends on our ability to move beyond static rules and embrace dynamic, intelligent context generation.

Questions or feedback? I’d love to hear your thoughts.

Want more insights? Follow me:

🎙️ Founder Interviews: https://www.youtube.com/@FounderCoHo
Conversations with successful founders and leaders.

🚀 My Journey: https://www.youtube.com/@jingconan
Building DeepVista from the ground up.


Think Wider: AI as Perspective Partners

Everyone’s obsessed with making AI reason deeper — training models to solve complex mathematics and master intricate proofs. But in this race for artificial intelligence, we’ve forgotten something fundamental about human intelligence: how we actually think and work with others.

Deep reasoning is important, but it’s only half the story. What if, instead of treating AI as tools to be prompted, we could engage with them as naturally as we do with brilliant colleagues, each bringing their unique perspective to the conversation? This isn’t just about making AI think deeper; it’s about unlocking its breadth of knowledge through better questions.

Asking good questions is harder than finding answers. Try this mental exercise: give yourself two minutes to write down ten meaningful questions about different topics. Hard, isn’t it? The difficulty lies in where to even begin. With regular problems, we at least have a starting point – a puzzle to solve, a goal to reach. But with questions, we’re creating the map before we know the territory. It’s not about finding the right path; it’s about imagining what paths might exist in the first place.

And this is where LLMs shine: they’re incredible brainstorming partners. Not just because they can process vast amounts of information, but because they can consider countless angles that might never occur to us. Humans are limited by our experiences and cognitive biases. LLMs don’t have these limitations. They can draw connections between seemingly unrelated concepts and suggest possibilities we might never have considered.

The challenge isn’t that LLMs lack knowledge – they have plenty. The real problem is that we don’t know how to extract it effectively. We initially thought prompt engineering was the magic key, but that turned out to be too simplistic. It works, but it feels mechanical and constrained. We don’t think about conversation frameworks or prompt strategies when chatting with friends. We just… talk.

What we need is AI that can match the natural fluidity of human conversation. Imagine an AI system that could seamlessly shift between different types of expertise: you could discuss quantum physics with a renowned physicist like Albert Einstein, then turn to Renaissance art with an art historian like Giorgio Vasari. This isn’t just about role-playing; it’s about having access to diverse perspectives and different ways of thinking about problems.
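
A crude approximation is already possible today by keeping a library of persona framings and switching between them; in this purely illustrative sketch, `llm_complete` is a placeholder for whatever model API you use:

```python
PERSONAS = {
    "physicist": "You are a physicist in the spirit of Albert Einstein. "
                 "Reason from first principles and thought experiments.",
    "art_historian": "You are an art historian in the spirit of Giorgio Vasari. "
                     "Reason from artists' lives, techniques, and historical context.",
}

def ask(persona, question, history, llm_complete):
    """Frame the same conversation through a chosen persona's perspective."""
    prompt = PERSONAS[persona] + "\n\n" + "\n".join(history) + f"\nUser: {question}"
    reply = llm_complete(prompt)  # placeholder for a real LLM call
    history.append(f"User: {question}\n{persona}: {reply}")
    return reply
```

The real research challenge, as discussed below, is making such switches fluid rather than mechanical.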

I’ve started calling this concept “persona-driven discovery,” and I believe it could revolutionize how we learn and solve problems by acting as a catalyst for serendipity.

We’ve all had those magical moments in libraries where we stumble upon exactly the book we needed but weren’t looking for. These AI systems could create those moments deliberately, suggesting unexpected perspectives and prompting us to explore unfamiliar territories. It’s like having a brilliant friend who knows when to push you out of your intellectual comfort zone.

All of this points toward a future where AI tools aren’t just answering our questions but actively participating in our thinking process. They could help us prototype ideas faster, facilitate group brainstorming sessions, and create personalized learning experiences that adapt to our individual ways of thinking.

The real breakthrough will come when we stop thinking about these systems as tools and start thinking about them as thought partners. This shift isn’t just semantic – it’s fundamental to how we might solve problems in the future. Instead of asking an AI to complete a task, we might engage it in a genuine dialogue that helps us see our challenges from new angles.

The building blocks are already there: we have models that can process and generate human-like text, we have systems that can maintain context in conversations, and we’re developing better ways to keep AI knowledge current and relevant. 

There are still unsolved problems. When we make AI systems more specialized (like training them to be history experts), they often lose their broader capabilities. It’s the same trade-off you get with human experts: deep knowledge in one area often comes at the cost of breadth. The trick will be creating systems that can maintain both depth and breadth, switching between different modes of thinking without losing their fundamental capabilities.

This is what role-aware AI systems could offer – not just rigid question-and-answer sessions, but fluid conversations where different perspectives emerge organically as needed. Each AI “participant” would bring their unique expertise and viewpoint while staying current with the latest developments in their field. They would build on each other’s insights, challenge assumptions, and help you see problems from angles you might never have considered on your own. The key isn’t just having access to different types of knowledge, but having them work together in a way that mirrors the natural give-and-take of human conversation.

The potential impact of this shift could be profound. Just as the internet changed how we access information, these AI thought partners could change how we process and use that information. They could help us break out of our mental ruts, see connections we might have missed, and approach problems from angles we might never have considered.

This is the future I’m excited about – not one where AI replaces human thinking, but one where it enhances and expands it in ways we’re just beginning to imagine.

BTW, this essay was written after a long brainstorming session with the “Paul Graham” Hachi. You can check it out here: https://go.hachizone.ai/pg-think-wider

Please message me at jing AT hachizone.ai if you have any feedback on this essay or Hachi in general.


Rethinking Digital Discovery: From Algorithms to Human Perspectives

In our quest to organize the world’s information, we’ve created two dominant systems: search engines and recommendation algorithms. Both promised to make discovery easier, yet each has introduced its own set of challenges. Let’s examine why these systems fall short and how we might find a better way forward.

The Consensus Trap of Search Engines

In 1998, Google revolutionized the internet with PageRank, an algorithm that organized information through collective wisdom. The premise was elegant: websites with more backlinks were probably more important and trustworthy. It was democracy in action – the internet voting on itself through links.

While this approach works beautifully for factual queries like “what is the speed of light,” it struggles with nuanced topics where diversity of perspective matters more than consensus. The very nature of PageRank creates a self-reinforcing cycle: popular sites become more visible, leading to more backlinks, leading to even greater visibility.
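
To see the self-reinforcing cycle concretely, here is a toy power-iteration version of PageRank (a deliberate simplification; the real algorithm handles dangling pages and web-scale graphs):

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=50):
    """Toy PageRank: rank flows along links and pools at well-linked pages."""
    n = adj.shape[0]
    # Each page splits its "vote" evenly among the pages it links to.
    transition = adj / adj.sum(axis=0)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        rank = (1 - damping) / n + damping * transition @ rank
    return rank

# adj[i, j] = 1 means page j links to page i.
# Pages 1 and 2 link to page 0; pages 0 and 2 link to page 1.
adj = np.array([[0.0, 1.0, 1.0],
                [1.0, 0.0, 1.0],
                [0.0, 0.0, 0.0]])
print(pagerank(adj))  # page 2, which nobody links to, ends up with the least rank
```

Popular pages gain rank, which attracts more links, which gains more rank: consensus compounds.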

This system inadvertently flattens the richness of human knowledge into a popularity contest. It’s as if we’re asking the entire world to vote on the best restaurant in your neighborhood – the results might reflect broad appeal, but they’re unlikely to match your specific tastes or needs.

The Echo Chamber of Recommendation Systems

On the other side, we have recommendation systems that promise personalization but often trap us in what we call “rabbit holes.” These algorithms study our behavior and serve us more of what we’ve liked before, creating increasingly narrow feedback loops.

Start watching a few cooking videos, and suddenly your entire feed becomes culinary content. Click on a political article, and your recommendations quickly become an echo chamber of similar viewpoints. While this approach maximizes engagement, it does so at the cost of serendipity – those unexpected discoveries that broaden our horizons.

The problem isn’t just that these systems can be limiting; it’s that they operate as black boxes. Users have little understanding of why they’re seeing certain content and even less control over steering their discovery journey.

Looking Back to Move Forward

Interestingly, the solution to these modern challenges might lie in how we discovered information before these technologies existed. Think back to how we naturally sought out knowledge: through conversations with friends, colleagues, and mentors.

When we wanted to discover new books, we didn’t poll the entire world or rely on an algorithm to analyze our past reading habits. Instead, we talked to friends whose taste in literature we trusted. When we needed restaurant recommendations, we asked colleagues who shared our culinary preferences.

This system worked because:

  1. We understood exactly why we valued each person’s perspective
  2. We could actively choose whose recommendations to seek out
  3. Different friends offered different viewpoints, naturally creating diversity
  4. Serendipitous discoveries happened organically through conversation

The Power of Personal Perspective

What if we brought this human-centered approach to digital discovery? Imagine a system that doesn’t try to replace human judgment with algorithms, but instead helps you find and follow the curators whose perspectives you value.

This isn’t just personalization based on your past behavior – it’s about actively choosing whose lens you want to view the world through. A food critic might have thousands of followers, but you might prefer your friend’s hole-in-the-wall recommendations because they understand your particular palate.

The beauty of this approach is that it preserves what makes human curation special:

  • Natural serendipity through the diverse interests of your trusted curators
  • Full transparency about why you’re seeing certain content
  • Control over whose perspectives influence your discovery
  • The ability to step out of your comfort zone by following curators with different viewpoints

A New Path Forward

The future of information discovery isn’t about achieving perfect consensus through PageRank, nor is it about increasingly sophisticated recommendation algorithms. It’s about recognizing that people – with their unique perspectives, expertise, and ability to surprise us – are the ultimate curators of information.

By bringing the human element back to discovery, we can create a system that offers both personalization and serendipity, both efficiency and understanding. Most importantly, we can build a system that puts users back in control of their discovery journey.

The future of discovery isn’t about finding what algorithms think is best – it’s about connecting with the human perspectives that truly resonate with you.


Beyond the Hype: Three Lessons from a Startup Rollercoaster

My first startup journey was a rollercoaster – a wild ride that began with a spark of an idea.

Drawing from my experience at Google Brain, I had a strong instinct that using reinforcement learning to incorporate human feedback would significantly improve the dialogue experience of LLMs. I started exploring this idea in 2021, building an assistant for knowledge workers. My gut told me there was a big opportunity in the space, but I wasn’t sure when the mass market would recognize it. It later turned out that this was one of the fundamental ideas behind ChatGPT.

However, everyone said the space was tiny back then, and potential investors and peers saw little value in my concept. As a first-time founder, I let the unanimous skepticism make me doubt my vision. I ended up wasting a lot of time and getting distracted by other directions.

I eventually pushed forward, though not fast enough. We launched in September of 2022, right before ChatGPT was released. When we launched, we were amazed by the positive feedback and delight we got from customers. However, within months, ChatGPT’s release completely transformed the landscape. Suddenly, customers began to have much higher expectations and hesitated to sign contracts.

It was obvious that we had to build more to make ourselves stand out. However, the process felt like a constant chase of whimsical hope. We tried to tackle different parts of our customers’ workflow. Customers told us they liked our AI capability but missed features they longed for from their existing vendors. We tried to build those, but existing incumbents would quickly add AI capabilities similar to ours, making our new solutions seem redundant.

I ended up starting a new startup in a completely different space. But I learned three crucial lessons from the first startup journey.

You need to trust your gut.

I wasted a precious year that could have helped us build a more defensible moat and establish ourselves as the market leader, better preparing us for when ChatGPT’s storm hit. A few months of lead time is not enough—you often need more. When everyone says your idea is impractical, it is the best time to build your competitive advantage. Innovation rarely comes from following the crowd. It emerges from the courage to pursue ideas that seem impossible—until they aren’t.

Unique insight means nothing without a defensible moat.

Consider your competitive moat early on. While talking to customers helps identify valuable problems to solve, it doesn’t guarantee that your solution will be the only one customers choose. You need to figure out why customers would pick your solution over alternatives.

Technology is not a silver bullet; it is actually a very poor moat because ideas diffuse naturally.

At the beginning, the only competitive advantage of a startup is the time you get because of the ignorance of big players. But you need to turn it into an actual moat — reasons why customers should use you, not other players. For B2B companies, the reason is often data or customer relationships. For B2C, the reason is often branding or better user experiences.

Plus, you need to make sure you have the resources to build your moat, which requires strategic planning if you are already in fierce competition. (Unfortunately, it often creates conflicts with the customer-first culture in B2B).

Don’t be afraid to restart from zero.

If you’re facing strong headwinds and haven’t had time to build a moat, take a step back. Rethink what other valuable problems exist that you believe in but others haven’t yet recognized.

My story isn’t unique – it’s a microcosm of the startup ecosystem. Innovation isn’t about having the perfect idea from day one. It’s about resilience, adaptability, and the willingness to transform setbacks into insights.

Ultimately, I started a new venture in a different sector, carrying these lessons like a compass. Each “failure” was actually a sophisticated learning experience and helped me transform into a true entrepreneur.


Goodbye Faceless Algorithms, Hello Hachi!

It’s no secret that boredom and loneliness are an epidemic. The average American spends 3 hours a day scrolling through online content — usually solo.

What’s more, your content stream is controlled by a faceless “algorithm” that feeds you content over which you have very little control or knowledge.

I know this game inside out. I was one of the Google Brain researchers who worked on YouTube’s AI engine, one of the most popular in the world.

We built this with good intent: to surface better content to users and keep them engaged – and it worked!

However, there’s one important problem: there exists only one algorithm for everyone in the world. Users have no control over what information they see, and creators have to tailor their content to “game” the algorithm.

Let’s be real, no one’s a fan of this singular, faceless algorithm. That’s why I’m building something new, with the belief that people should be able to choose how they discover information, and be able to put a face to a name.

I’m excited to announce that we are working on Hachi — unique personas that help you search for information, whether it’s text, images, or videos. With Hachi, you will be able to:

  1. Choose your muse. Just like how you choose your friends, you also choose which Hachi you want to spend time with. Imagine having one Hachi for #vanlife, another for the latest Taylor and Travis gossip, and another for Minecraft.
  2. Stay Trendy. Hachis are constantly discovering new creators and content that are trending, based on your common interests.
  3. Never Alone. Hachis explore with you, share their own insights, and are ready to chat whenever you like.

To make this a reality, my co-founder Nancy and I have been hard at work building out Hachi over the last few months. If this sounds interesting to you, please join our mailing list: https://go.hachizone.ai/mailinglist and our Discord community: https://go.hachizone.ai/discord

We’d love to hear your early thoughts and feedback!

– Jing Conan Wang


Why Is It So Hard to Create a Funny AI?

Large Language Models (LLMs) like ChatGPT have shown impressive results in the past two years. However, people have realized that while these models are incredibly knowledgeable, they often lack humor. Ask any mainstream LLM to tell you a joke, and you’ll likely receive a dull, dad-joke-level response.

For example, here is a joke from ChatGPT:

The problem goes beyond just being funny. LLMs have failed to create memorable personalities, resulting in shallow interactions. This is why most AI companion products feel like role-playing games: because people get bored quickly with one character, platforms need to encourage users to create many characters to keep them engaged.

Why does this happen? There are two main reasons:

The first is that LLMs lack the capability for deep reasoning. Creating humor is a challenging task. The two key ingredients of humor are surprise and contradiction, which demand a profound understanding of how things logically work, followed by intentional deviation from the norm.

However, LLMs struggle to understand deep logical connections and context, which are essential for humor. They tend to focus on literal interpretations, missing the subtleties that make language humorous.

The second is the limitation of datasets and evaluation: many models are trained to excel on specific benchmarks and tasks that are outdated. Existing LLM evaluations focus heavily on question answering and academic tests because researchers can access those easily. This has resulted in an overemphasis on one particular subdomain at the expense of more nuanced language understanding and creative expression. Consequently, responses generated by these models lack personality.

What is the likely path ahead? Here is my take:

1. Better Human-AI Collaboration. As current LLMs struggle with understanding the deeper logic of language, making them truly funny might require significant advancements in their reasoning capabilities.

Some progress will come as LLMs keep gaining parameters. However, there is a long way to go, as it is hard to turn an LLM into a reasoning machine. A more realistic approach is to harness human wisdom and creativity to help LLMs bypass complex logical reasoning and directly generate funny content.

Humans are actually very good at capturing nuances, so it is easier to develop an AI that leverages this human capability than to build the capability into the LLM itself. For example, when creating funny comments for videos, using existing human comments on the video can boost the quality of the jokes. This falls into the domain of Human-based Computation. One famous example is CAPTCHA, which verifies that a user is human and not a bot while simultaneously teaching machines to solve hard computer vision tasks.
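
As a hedged sketch of that video-comment idea (`llm_complete` is again a placeholder for a real model call), the point is simply to let the funniest human comments do the logical heavy lifting:

```python
def generate_funny_comment(video_title, human_comments, llm_complete):
    """Seed the model with top human comments so it can riff instead of reason."""
    # Keep the comments the audience already found funny (here, by like count).
    seeds = sorted(human_comments, key=lambda c: c["likes"], reverse=True)[:5]
    examples = "\n".join("- " + c["text"] for c in seeds)
    prompt = (
        f"Video: {video_title}\n"
        f"Comments viewers found funny:\n{examples}\n"
        "Write one new comment in the same spirit:"
    )
    return llm_complete(prompt)  # placeholder for a real LLM call
```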

2. Online Learning: Most current LLMs are offline, taking months to train and then remaining frozen. This makes it nearly impossible for models to adapt based on real-time human feedback. One could argue that retrieval-augmented generation (RAG) is a poor man’s solution for letting LLMs learn facts in real time; however, simple RAG cannot capture nuances. We need to design online learning methods that allow models to capture and incorporate human feedback in near real time.

3. Better Evaluation: Current datasets for evaluating AI-generated humor are too narrowly focused. The AI community needs to overcome this limitation to create more comprehensive assessment tools.

The way we interact with Large Language Models (LLMs) is just as important as the answers they provide. Engaging with AI shouldn’t be a dull and boring experience. By following the directions described above, I think we will be able to create truly funny AI systems that can engage in witty, personable interactions with humans in the foreseeable future. 


Why You Should Build a Consumer GenAI Startup and How to Make it Happen

While conventional wisdom holds that B2B startups are the safer choice, is this really the case? Let’s delve into why a consumer-focused GenAI startup might actually be your golden ticket.

In 2023, the startup landscape of GenAI applications experienced a remarkable surge, propelled by the advent of ChatGPT and foundation models such as GPT-4 and Anthropic’s Claude. Over the past year, venture capital has invested at least $21 billion in GenAI, and most GenAI applications have primarily targeted B2B, particularly productivity improvement. In the latest Y Combinator batch, 65% of the startups fall within the B2B SaaS and enterprise sectors, whereas only 11% are focused on consumer-oriented verticals. The most popular product form is the AI assistant.

Current Challenges in B2B GenAI

However, as we move into 2024, it has become evident that many startups in the domain face significant challenges. A majority of these B2B GenAI companies are grappling with financial losses and frequently pivoting in an attempt to find product-market fit.

Many startup founders struggle to convert proof-of-concept contracts into full annual agreements, often facing significant limitations in their bargaining power over pricing. Despite the $21 billion of VC investment, GenAI startups have only generated around $1 billion in revenue.

Heavy competition is one of the main reasons startups struggle to convert proof-of-concept contracts. But why is there such a strong focus on productivity-improvement applications? The reasons are multifaceted and stem from several technology and market dynamics:

First, it is related to the nature of the current foundational models. Foundation models such as GPT-4 are the result of significant research breakthroughs and depend extensively on benchmarks that have been established within the academic community. Historically, these benchmarks have predominantly focused on knowledge-based tasks. For example, the benchmarks used in the GPT-4 technical report primarily consist of academic tests. Essentially, what we are creating with these models are entities akin to exceptionally skilled students or professors. This orientation naturally steers generative AI applications toward productivity enhancements. Consequently, it’s not surprising that students are the primary users of many AI-assisted products like copilots.

Second, there is a B2B-first culture in the American startup ecosystem. The American startup ecosystem has predominantly favored B2B ventures, with the consumer sector receiving significantly less investment over the past decade, and startup founders in the US are afraid to build consumer startups. Although other countries such as China do not exhibit this fixed mindset, the U.S. has been a global leader in generative AI research and substantially influences trends worldwide.

Third, the GenAI infrastructure boom levels the playing field for everyone. In 2023, the majority of investments were directed toward GenAI infrastructure, with many investment firms likening it to a “gold rush.” There’s a prevailing belief that, much like the merchants who sold supplies during a gold rush, those who provide the essential tools and services will profit first. The following figure shows that $16.9B of the $21B in VC money was spent on GenAI infrastructure. Newer players can always leverage better infrastructure.

Source: Sequoia Capital’s AI Ascent 2024 opening remarks

Due to the factors mentioned above, competition among productivity-focused GenAI applications is intense, undermining the ability of startups in this space to extract value from customers. As a result, the entire ecosystem remains predominantly financed by venture capital.

The Untapped Potential of Consumer GenAI

History often repeats itself. During the Internet boom of the 1990s, emphasis was initially placed on B2B applications. However, it turned out that the integration of the Internet into business contexts would take longer than anticipated. Salesforce pioneered the SaaS model, but it took nearly a decade to reach the $1 billion revenue milestone. In contrast, consumer applications have proven to be a quicker avenue for both creating and capturing value.

Google, Facebook, and Amazon have each developed consumer products that serve billions of people, discovering unique methods to monetize the internet by reaching vast audiences cost-effectively. Additionally, this approach has proven to be an effective strategy for building strong competitive advantages, or moats.

Strategies for Success

The 7-power framework is a crucial tool for analyzing business opportunities, identifying seven key levers: Scale Economies, Network Economies, Counter-Positioning, Switching Costs, Branding, Cornered Resource, and Process Power.

For B2B GenAI startups, Counter-Positioning and Process Power are typically the only levers they can pull, because incumbents hold advantages in the other areas. In contrast, Consumer GenAI startups have the potential to develop competitive moats across almost all these powers, providing numerous strategic advantages — especially if your founding team has strong technical capability in AI models and infrastructure.

It’s crucial for Consumer GenAI companies to own their AI models and infrastructure. This ownership not only fosters the development of Scale and Network Economies but also secures Cornered Resources, enhancing competitive advantage and market positioning.

On the one hand, to create a successful consumer app, controlling costs is crucial. The historical trend toward ever larger and more powerful models has made them unsuitable for consumer applications: costs are high, while the lifetime value (LTV) of consumer use cases is typically much lower. For example, the LTV of a user is often just $20-30, yet that user might ask hundreds of questions, and utilizing the full context window of GPT-4 can cost approximately $1.28 for a single call. Developing in-house expertise to create models that are both powerful and cost-effective is crucial to bridging this gap.
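
A rough back-of-the-envelope using those same figures: a user who asks just 100 questions at $1.28 per full-context call costs about $128 to serve, four to six times a $20-30 LTV, before any other costs. The unit economics only close if the cost per call falls by one to two orders of magnitude.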

The good thing is that consumer applications are usually much more tolerant of hallucination and might not need the most powerful model. In addition, the evolution of open-source models has enabled startups to develop their own models cost-effectively. With the recent launch of LLaMa 3, its small 8B model has outperformed the largest model from LLaMa 2, and there is anticipation that the 400B model, currently in training, will match the performance of GPT-4. These advancements make it feasible for startups to create high-performing models at a fraction of the cost of proprietary models, though significant investment is still necessary to reduce costs enough to support large-scale consumer applications.

On the other hand, current foundational models are not ideally suited for creating robust consumer applications, as most large language models lack personalization and long-term memory capabilities. Developing new foundational models or adapting existing ones to better suit consumer needs is a critical challenge that Consumer GenAI startups must address.

Despite these challenges, startups that successfully tackle these issues can secure a significant competitive edge and establish long-lasting market dominance.

Thanks for reading, and I hope this article is useful for you. If you have any questions or thoughts, please don’t hesitate to comment or message me at jing@jingconan.com 🤗


2024: Reflections on the Startup Ecosystems of China and the US

Taking advantage of the preparations for my new startup, I spent the past three weeks traveling through several Chinese cities and having in-depth conversations with many friends. I came away with a great deal of insight, which I’d like to share here.

Why visit China at the very start of this venture? My career and my previous startup were entirely rooted in the US; last time I relied almost exclusively on the American startup ecosystem and made little use of my advantages as a Chinese founder. But the most accomplished friends and mentors around me all told me the same thing: a great company should be an international company from Day 1, and the ability to integrate global resources is a necessary ingredient of a successful business. In particular, Yao Xin, a senior fellow alumnus of mine from HUST (Huazhong University of Science and Technology), helped me systematically map out the three major advantages of China’s startup ecosystem relative to America’s: 1) product thinking, 2) supply-chain advantages, and 3) a talent dividend. Over the past year I had met many excellent Chinese companies expanding overseas, so I already had some sense of these advantages. But hearing is one thing; seeing is another. This trip was my chance to observe them more systematically and to think about how to use these three advantages to build the moat for my next venture.

Let me start with product thinking. The mobile internet wave had a far greater impact on China than on the US, and it forged a large cohort of outstanding product people. Product people in China tend to think through usage scenarios in remarkable detail. Compare the Google and Baidu map apps: Google Maps is a very minimal app where you choose a destination and get navigation, and little else. Baidu Maps, by contrast, includes at least 1) recommended-lane guidance, 2) voice packs, 3) one-tap ride hailing, 4) real-time traffic alerts, and 5) traffic-camera warnings. And that is only a fraction; virtually every common pain point has a complete set of features designed around it. Nor is this an isolated case: Didi, compared with Uber, is also far more detailed about scenarios. China has also spawned entire categories of apps that never appeared in the US, such as local life services.

When I first went back, our project amounted to only these points: 1) build a personalized large model, 2) start with companionship, and 3) possibly include some hardware. After talking with some excellent product people in China, I realized my thinking was clearly far too coarse. At a minimum I needed answers to: What is the product form? Why do users need companionship? What value do they receive, and which product features embody that value? Who captures the primary value? This was a systematic wake-up call that got me thinking carefully about product scenarios.

Next, supply-chain advantages. What impressed me most on this trip is that Shenzhen’s hardware ecosystem leads the world by a wide margin. Doing a hardware project in the US is extremely hard; even a small hardware component is enough to stop 99.999% of founders. In Shenzhen, the hardware ecosystem is so complete that the barrier to hardware entrepreneurship drops dramatically. Within just a few days I found at least three suppliers who could roughly produce the hardware I needed, and all of them were very open to collaboration.

Even so, despite Shenzhen’s mature hardware ecosystem, I still felt that touching hardware at the earliest stage of a startup is very dangerous, for two main reasons:

  1. To truly capture Shenzhen’s supply-chain advantages, you must already have very clear hardware specifications. Any mistake wastes time and money on a scale a startup cannot afford.
  2. Inventory consumes cash at an enormous rate and can easily break a startup’s cash flow.

Hardware is better suited to the stage when a company has already found product-market fit and needs to reinforce its moat and build a second growth engine, say after a Series B.

Finally, the talent dividend. More than a decade of internet growth has accumulated a deep pool of talent in China. With the industry’s shifts over the past two years and the RMB’s depreciation against the dollar, talent costs have broadly fallen by more than 30%, making hiring in China very attractive. Overall, labor costs in China’s first-tier cities are roughly half those of the San Francisco Bay Area, and second-tier cities (such as Chengdu and Wuhan) are about 70% of first-tier costs. China itself actually comprises several smaller startup ecosystems, each with different talent. Among first-tier cities, Beijing has a clear advantage in the concentration of internet talent, Shanghai’s product managers (especially in local life services) are particularly strong, and Shenzhen is full of hardware talent. Of the second-tier cities I only visited Chengdu and Wuhan: Chengdu clearly has more software talent, while Wuhan has many optics and hardware people and is relatively balanced.

My other impression is that, in talent density and environment, the gap between China’s second-tier and first-tier cities is far larger than the gap between first-tier cities and Silicon Valley. AI talent in particular is very scarce in second-tier cities. First-tier cities are fairly close to the US in talent density and information flow, but information still circulates poorly in second-tier cities: much of what is common knowledge in first-tier cities is barely known there. For relatively traditional software development or testing, a second-tier city is fine; but for fast-moving fields such as AI, the team should be in Silicon Valley or a first-tier city. Canada is also a decent option: slightly more expensive than China’s first-tier cities, but with no time difference from Silicon Valley, which makes communication much easier.

Moreover, app development in China has reached assembly-line levels of industrialization; development speed can be more than double that of a comparable US team. Used well, this is a very large advantage.

Having covered the advantages, I also want to note the caveats I observed:

  1. Diversity is not highly valued in China, which pushes the ecosystem toward homogeneous competition and insufficient innovation. If you build large teams in China, preserving diversity and creativity is something to watch.
  2. Many Chinese playbooks cannot be copied directly to the US. A few years ago there was a “Copy From China” wave, with many companies transplanting Chinese payment and local-life products to the US, yet none were particularly successful. This trip gave me a chance to reflect on why. China is a culturally homogeneous country: the consumption habits of its 1.4 billion people are fairly similar, and under intense competitive pressure many product designs are overfitted to Chinese consumers’ habits. The US is not a monoculture. The groups culturally closest to China are only about 6 million Chinese Americans and about 20 million Asian Americans, and each group has markedly different consumption habits. Simply transplanting a Chinese product form overfits to the Chinese community and makes it hard to break into other demographics. Yet the Chinese American population is only about the size of a single prefecture-level city in China, and serving that group alone cannot sustain a large company.

The spillover of Chinese consumer business models into the US is very likely, but not every consumer business form will fit. To succeed in consumer businesses in the US, you must appeal to universal human emotions rather than over-optimize for a specific demographic. What counts as a universal emotion? Disney’s products are built on a very plain, universal human feeling: truth, kindness, and beauty. That is why Disney could go global. China’s parent-child companionship businesses have almost entirely pivoted to education, such as picture books and learning tablets. Copying that directly will not work: Chinese culture places enormous weight on education, but other groups care about it far less, and over-optimizing for education makes it hard to escape the Chinese community. The fear of loneliness, however, is a universal emotion, so companionship is the better entry point.

Coming back to the reflections from this trip: I believe integrating global resources is not only an opportunity for a great entrepreneur but also a responsibility. Globalization seems to have hit turbulence in recent years, but I believe the broad trend of global integration will not change, whether through excellent Chinese companies going overseas or more foreign companies integrating China’s resources. Regardless of culture or ethnicity, the longing for a better life is universal. Only by integrating resources globally can we create high-quality products more effectively and build a better life for everyone.

Link to this post: https://s.jing.me/2024-cn-us-startup-ecosystem


From Zero to One: A History of the Creation of Large Model Technology

Although large models entered the public eye only after OpenAI released ChatGPT, the technology had been evolving for a long time before that. I worked at Google from 2014 to 2019, the period during which many of the key technologies behind large models matured at Google Brain. The technology did not take shape in one stroke; it was a relay completed by many outstanding researchers and engineers. I was fortunate to be part of it, and as someone who lived through the process, I want to record it here and share some stories from the zero-to-one era of this epoch-making technology.

Building large models requires not only massive computing power but also solutions to three key problems: architecture, training algorithms, and data. Because a search engine is essentially a natural-language query over internet data, Google had long invested heavily in natural language processing research, and it gradually solved all three.

First, architecture. One important mission of Google Brain was to provide models for Google Translate. Language is inherently a sequence, so it was natural to use a class of neural networks called sequence models. Unlike the previous generation of convolutional neural networks (CNNs), sequence models are designed specifically for sequential data: each step’s output becomes the input of the next iteration, which allows them to process very long sequences.

Google Translate supports translation among many languages, so it needed an architecture that could add languages easily without retraining. Intuitively, a language is just a set of symbols for expressing particular meanings, and translation is essentially representing the same meaning in a different symbol system. So the idea emerged of using one sequence model to extract the meaning a language expresses (the encoder) and another sequence model to convert that meaning into the symbols of a different language (the decoder). Thus the encoder-decoder architecture was born.

When the encoder-decoder architecture first appeared (around 2014), it did not work well. The main problem was that sequence models forget information easily, typically remembering only a few surrounding words; this is the origin of what we now call the context window limit. One method already existed to address this: LSTM (Long Short-Term Memory). LSTMs worked well but were extremely complicated and hard to implement, so people kept searching for a simpler architecture.

The Transformer architecture was born against this backdrop (2017). Compared with LSTMs, the Transformer is much simpler and works very well; every later large model is based on it. All eight authors of the Transformer paper eventually left Google, and nearly every one of them founded a unicorn, hence the nickname “the Transformer Eight.” When the architecture first came out, it drew little attention, mainly because it was very hard to train at the time and only a handful of people could use it.

The second problem is training algorithms. Around 2013, Google invented word embeddings (Word Embedding, or Word2Vec), a technique that maps a word to a vector such that semantically similar words end up close together in vector space. Word embeddings were first used as features in Google’s search and ads machine-learning systems, where they brought large gains. But people soon found a problem: words in natural language are polysemous, and the same word can have completely different meanings depending on its context.

How do you generate a word embedding that accounts for context? The idea was to process the word sequence in order: scanning from front to back captures the preceding context, and scanning from back to front captures the following context; together you have both. This line of thinking evolved into BERT (Bidirectional Encoder Representations from Transformers, 2018).

BERT’s bidirectionality differs slightly from the left-to-right paradigm of OpenAI’s later GPT, but the underlying principle is the same. BERT’s greatest strength is that it is especially well suited to producing embeddings. Deploying BERT embeddings in Google’s search ads system produced roughly a 1% revenue lift; with Google’s annual revenue around $200 billion, that single improvement was worth about $2 billion a year. BERT’s arrival also directly spurred Google’s TPU project.

A major contribution of the BERT paper was popularizing the pretrain-finetune paradigm. The paradigm had actually existed inside Google much earlier, largely because Google Research is a hybrid organization: some people do foundational model research, while others (Applied Scientists) embed in product teams to deploy it. At first every team trained its models from scratch, but as models grew, so did the training resources required, and the foundational researchers did not know each team’s business well. If the researchers produced one pretrained model and each application team fine-tuned it for its own problems, everyone would get twice the result for half the effort.

Moreover, pooling all the training resources meant the pretrained model could be made very large. BERT was the first pretrained model released beyond Google, and the team open-sourced the model code along with the weights, setting off a wave of natural language processing (NLP) enthusiasm. The famous HuggingFace actually began as a PyTorch open-source implementation of BERT.

The third problem is data. Pretraining large models requires enormous amounts of data. Traditional machine learning relied on supervised learning, which needs large volumes of text plus labels. Text was the easy part: English Wikipedia alone contains vast amounts, Wikipedia is a nonprofit, and Google was its largest sponsor, so obtaining the data was not hard. Labels, however, require human annotation, and labeling costs in the West are so high as to be impractical. Semi-supervised learning was born at this point: it uses rules to generate labels automatically. For example, BERT randomly masks out a word in a passage and asks the model to predict it, the same principle as a person doing a cloze (fill-in-the-blank) test.
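
To make the cloze idea concrete, here is a minimal sketch of how such masked training pairs can be generated (my own illustration, not Google’s actual pipeline):

```python
import random

def make_masked_example(tokens, mask_token="[MASK]"):
    """Hide one random token; the model's job is to predict it from context."""
    position = random.randrange(len(tokens))
    target = tokens[position]
    masked = tokens.copy()
    masked[position] = mask_token
    return masked, position, target

tokens = "the cat sat on the mat".split()
masked, pos, target = make_masked_example(tokens)
print(masked, "-> predict", repr(target), "at position", pos)
```

Because the label comes for free from the text itself, this scales to the entire web without human annotators.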

As BERT matured, teams inside Google began building many applications on top of it. A significant one was multi-turn conversational recommendation (Conversational Recommender). Traditional recommender systems are ranking-based, but conversational recommendation is built on natural language processing and uses reinforcement learning for alignment. This was, in effect, already the principle behind the later ChatGPT, and at the time it achieved comparable results. But some of Google’s public releases drew criticism from mainstream American media over bias and diversity issues, which made Google Research leadership hesitant about the large-model direction. At the same time, deploying the models inside Google met strong resistance, because they were a disruptive innovation to existing businesses: business-line leaders either could not see the trend or feared the disruption and declined to adopt them, so apart from some wins on YouTube, every deployment effort failed. For all these reasons, much of the top talent scattered; many later joined OpenAI or started their own companies to carry on the work.

At first OpenAI was not focused on large models, but amid Google’s internal turmoil in 2019, OpenAI took up the banner. Building on Google’s work, they innovated with a focus on generalization and zero-shot learning: a user would not need to fine-tune the model, just supply a few prompts, to get reasonably usable results across many domains. This strategy proved enormously successful, because most developers cannot actually fine-tune models. Prompt engineering lowered the barrier further, letting everyone use large models.

The development of large models was never planned in advance; people crossed the river by feeling for the stones, driven by curiosity, until a road emerged. Technological innovation cannot be planned, but it can be nurtured: given the right environment, new technologies keep emerging. Google Brain then and OpenAI now share the same traits: a high density of talent, ample freedom, and massive resources. That is why such places can incubate revolutionary technologies and keep pushing the frontier of science and technology.


Unlocking the Wonders of Imaginative Play: A Journey into the Magic of Childhood

One day, my two-year-old daughter, Adalyn, approached me with a desk lamp and a handful of blue glass balls. Puzzled, I watched as she arranged them before me and then asked, “Daddy, what animal is this?”

I couldn’t fathom how a desk lamp and some stones could possibly resemble an animal. For a brief moment, I felt utterly perplexed.

However, after pausing for a few seconds, it dawned on me—Adalyn had conjured up a magical world within her imagination.

Though to me it seemed like mere objects, to her, they were the building blocks of an enchanting creature.

Grateful for the power of imagination and the assistance of technology, I decided to enlist the help of AI to bring Adalyn’s creation to life.

I sent a picture to ChatGPT with the query, “What type of animal is this?” and eagerly awaited its response.

In a matter of moments, ChatGPT wove a beautiful tale:

“In the heart of the magical forest, where the trees whispered secrets and the moonlight danced, lived a giraffe named Zara. With each step she took, her footprints left behind a trail of shimmering blue, marking her path through the enchanted woods…”

As I read the story aloud to Adalyn, her eyes lit up with joy. “Zara!” she exclaimed, delighted to have her creation given life and a name.

At that moment, I realized that it wasn’t my understanding alone that made her happy—it was the connection, the validation of her imagination, and the shared experience of storytelling.

Adalyn beamed at me and proclaimed, “You’re the best dad in the world!” But deep down, I knew it wasn’t solely my doing. It was the magic of childhood imagination and the wonders that technology and storytelling can bring to life.

This experience is also known as Imaginative Play, a crucial activity for child development. Sadly, in today’s fast-paced world, few parents engage in imaginative play with their children. While immensely enjoyable, it demands a significant amount of imagination and mental energy. Unfortunately, as adults, many of us lose touch with our imagination over time, as society often prioritizes “correct answers” over creative thinking.

Upon reflection, I realized that Imaginative Play isn’t just beneficial for children—it holds value for adults too. It fosters creativity, problem-solving skills, and emotional intelligence, all of which are essential for navigating the complexities of life.

Eager to share this revelation, I recounted my experience with my Toastmasters club. I shared how a seemingly mundane moment with my daughter sparked a journey into a magical realm of creativity and imagination. Through this story, I hoped to inspire others to embrace their inner child, reclaim their imagination, and rediscover the joy of imaginative play.

Please check out this video below: