While conventional wisdom holds that B2B startups are the safer choice, is this really the case? Let’s delve into why a consumer-focused GenAI startup might actually be your golden ticket.
In 2023, the startup landscape of GenAI applications experienced a remarkable surge, propelled by the advent of ChatGPT and foundation models such as GPT-4 and Anthropic’s Claude. Over the past year, venture capital has invested at least $21 billion into GenAI, and most GenAI applications have primarily targeted B2B use cases, particularly productivity improvement. In the latest Y Combinator batch, 65% of the startups fall within the B2B SaaS and enterprise sectors, whereas only 11% are focused on consumer-oriented verticals. The most popular product form is the AI assistant.
Current Challenges in B2B GenAI
However, as we transition into 2024, it has become evident that many startups in this domain face significant challenges. A majority of these B2B GenAI companies are grappling with financial losses and are frequently pivoting in an attempt to find product-market fit.
Many startup founders struggle to convert Proof-of-Concept contracts into full annual agreements, often with little bargaining power over pricing. Despite the $21 billion in VC investment, GenAI startups have generated only around $1 billion in revenue.
Heavy competition is one of the main reasons startups struggle to convert Proof-of-Concept contracts. But why is there such a strong focus on productivity-improvement applications? The reasons are multifaceted and stem from a mix of technological and market dynamics:
First, it is related to the nature of the current foundational models. Foundation models such as GPT-4 are the result of significant research breakthroughs and depend extensively on benchmarks that have been established within the academic community. Historically, these benchmarks have predominantly focused on knowledge-based tasks. For example, the benchmarks used in the GPT-4 technical report primarily consist of academic tests. Essentially, what we are creating with these models are entities akin to exceptionally skilled students or professors. This orientation naturally steers generative AI applications toward productivity enhancements. Consequently, it’s not surprising that students are the primary users of many AI-assisted products like copilots.
Second, there is a B2B-first culture in the American startup ecosystem, which has predominantly favored B2B ventures; the consumer sector has received significantly less investment over the past decade, and founders in the US are wary of building consumer startups. Although other countries such as China do not share this fixed mindset, the U.S. has been the global leader in generative AI research and substantially influences trends worldwide.
Third, the GenAI infrastructure boom levels the playing field for everyone. In 2023, the majority of investments were directed toward GenAI infrastructure, with many investment firms likening it to a “gold rush.” There’s a prevailing belief that, much like the merchants who sold supplies during a gold rush, those who provide the essential tools and services will profit first. Of the $21B in VC money, $16.9B went to GenAI infrastructure. Newer players can always leverage better infrastructure.
Due to the factors mentioned above, competition among productivity-focused GenAI applications is intense, undermining the ability of startups in this space to extract value from customers. As a result, the entire ecosystem remains predominantly financed by venture capital.
The Untapped Potential of Consumer GenAI
History often repeats itself. During the Internet boom of the 1990s, emphasis was initially placed on B2B applications. However, it turned out that the integration of the Internet into business contexts would take longer than anticipated. Salesforce pioneered the SaaS model, but it took nearly a decade to reach the $1 billion revenue milestone. In contrast, consumer applications have proven to be a quicker avenue for both creating and capturing value.
Google, Facebook, and Amazon have each developed consumer products that serve billions of people, discovering unique methods to monetize the internet by reaching vast audiences cost-effectively. Additionally, this approach has proven to be an effective strategy for building strong competitive advantages, or moats.
Strategies for Success
The 7 Powers framework is a crucial tool for analyzing business opportunities, identifying seven key levers: Scale Economies, Network Economies, Counter-Positioning, Switching Costs, Branding, Cornered Resource, and Process Power. For B2B GenAI startups, Counter-Positioning and Process Power are typically the only levers they can pull, because incumbents hold advantages in the other areas. In contrast, Consumer GenAI startups have the potential to develop competitive moats across almost all of these powers, providing numerous strategic advantages, especially if your founding team has strong technical capability in AI models and infrastructure.
It’s crucial for Consumer GenAI companies to own their AI models and infrastructure. This ownership not only fosters the development of Scale and Network Economies but also secures Cornered Resources, enhancing competitive advantage and market positioning.
On the one hand, to create a successful consumer app, controlling costs is crucial. The historical trend toward larger and more powerful models makes them ill-suited for consumer applications, because the lifetime value (LTV) of consumer use cases is typically much lower. For example, a consumer user’s LTV is often just $20-30, yet that user might ask hundreds of questions, while a single GPT-4 call that fills the full context window can cost approximately $1.28. Developing in-house expertise to build models that are both powerful and cost-effective is crucial to bridging this gap.
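To make these unit economics concrete, here is a back-of-the-envelope sketch. The per-token price is an assumption based on GPT-4 Turbo’s published list pricing at the time (roughly $0.01 per 1K input tokens, with a 128K-token context window), and the LTV and question-count figures are just the rough estimates above.

```python
# Back-of-the-envelope unit economics for a consumer GenAI app.
# Assumptions: ~$0.01 per 1K input tokens and a 128K-token context window
# (GPT-4 Turbo list pricing circa early 2024); LTV of $20-30 per user.
PRICE_PER_1K_INPUT_TOKENS = 0.01  # USD, assumed
CONTEXT_WINDOW_TOKENS = 128_000   # assumed

cost_per_full_call = CONTEXT_WINDOW_TOKENS / 1_000 * PRICE_PER_1K_INPUT_TOKENS
print(f"One call filling the context window: ${cost_per_full_call:.2f}")  # $1.28

user_ltv = 25.0           # assumed midpoint of the $20-30 LTV range
questions_per_user = 300  # assumed: "hundreds of questions"

# The inference-cost budget per question just to break even:
budget = user_ltv / questions_per_user
print(f"Break-even budget per question: ${budget:.3f}")  # ~$0.083
```

At these prices, even a modest few-thousand-token call consumes a large share of that per-question budget, which is why owning cheaper models matters so much for consumer apps.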
The good news is that consumer applications are usually much more tolerant of hallucination and may not need the most powerful model. In addition, the evolution of open-source models has enabled startups to develop their own models cost-effectively. With the recent launch of LLaMa 3, its small 8B model has outperformed the largest LLaMa 2 model, and the 400B model currently in training is anticipated to match the performance of GPT-4. These advancements make it feasible for startups to create high-performing models at a fraction of the cost of proprietary models, though significant investment is still necessary to drive costs low enough to support large-scale consumer applications.
On the other hand, current foundational models are not ideally suited for creating robust consumer applications, as most large language models lack personalization and long-term memory capabilities. Developing new foundational models or adapting existing ones to better suit consumer needs is a critical challenge that Consumer GenAI startups must address.
Despite these challenges, startups that successfully tackle these issues can secure a significant competitive edge and establish long-lasting market dominance.
Thanks for reading this article and hope the article is useful for you. If you have any questions or thoughts, please don’t hesitate to comment or message me at jing@jingconan.com 🤗
Second, many lessons from China cannot be transplanted directly to the US. A few years ago there was a wave of “Copy From China,” with many companies porting domestic payment and local-services products to the US, but none were particularly successful. This time I have also reflected on this problem systematically. China is a mono-cultural country: the consumption habits of all 1.4 billion people are fairly similar. Moreover, because of intense competitive pressure, many product designs are overfitted to the habits of Chinese consumers. The US, by contrast, is not a mono-cultural country. The population culturally similar to China consists of only about 6 million Chinese Americans and roughly 20 million Asian Americans, and each group has markedly different consumption habits. Simply copying a Chinese product form overfits to the Chinese community and makes it hard for the product to break into other ethnic groups in the US. Yet the Chinese American population is only the size of a single prefecture-level city in China, and serving that group alone is not enough to support a large company.
One day, my two-year-old daughter, Adalyn, approached me with a desk lamp and a handful of blue glass balls. Puzzled, I watched as she arranged them before me and then asked, “Daddy, what animal is this?”
I couldn’t fathom how a desk lamp and some stones could possibly resemble an animal. For a brief moment, I felt utterly perplexed.
However, after pausing for a few seconds, it dawned on me—Adalyn had conjured up a magical world within her imagination.
Though to me it seemed like mere objects, to her, they were the building blocks of an enchanting creature.
Grateful for the power of imagination and the assistance of technology, I decided to enlist the help of AI to bring Adalyn’s creation to life.
I sent a picture to ChatGPT with the query, “What type of animal is this?” and eagerly awaited its response.
In a matter of moments, ChatGPT wove a beautiful tale:
“In the heart of the magical forest, where the trees whispered secrets and the moonlight danced, lived a giraffe named Zara. With each step she took, her footprints left behind a trail of shimmering blue, marking her path through the enchanted woods…”
As I read the story aloud to Adalyn, her eyes lit up with joy. “Zara!” she exclaimed, delighted that her creation had been given life and a name.
At that moment, I realized that it wasn’t my understanding alone that made her happy—it was the connection, the validation of her imagination, and the shared experience of storytelling.
Adalyn beamed at me and proclaimed, “You’re the best dad in the world!” But deep down, I knew it wasn’t solely my doing. It was the magic of childhood imagination and the wonders that technology and storytelling can bring to life.
This experience is also known as Imaginative Play, a crucial activity for child development. Sadly, in today’s fast-paced world, few parents engage in imaginative play with their children. While immensely enjoyable, it demands a significant amount of imagination and mental energy. Unfortunately, as adults, many of us lose touch with our imagination over time, as society often prioritizes “correct answers” over creative thinking.
Upon reflection, I realized that Imaginative Play isn’t just beneficial for children—it holds value for adults too. It fosters creativity, problem-solving skills, and emotional intelligence, all of which are essential for navigating the complexities of life.
Eager to share this revelation, I recounted my experience with my Toastmasters club. I shared how a seemingly mundane moment with my daughter sparked a journey into a magical realm of creativity and imagination. Through this story, I hoped to inspire others to embrace their inner child, reclaim their imagination, and rediscover the joy of imaginative play.
Recently, there’s been an interesting development in the large-model industry. Perplexity AI, a hot large-model company in Silicon Valley, completed a financing round two months ago, valuing it at over $500 million. However, using Lepton AI’s middleware, Lepton’s co-founder Yangqing Jia managed to create an open-source version with just 500 lines of code over a weekend, sparking a heated discussion in the industry. The related demo on GitHub quickly garnered five thousand stars in just a few days. This incident reflects a broader trend, and I’ll analyze it based on my own year and a half of entrepreneurial experience.
Currently, large-model companies can be categorized into three types:
1) Base-model companies, which primarily provide large-model capabilities. This area requires significant capital, and resources are highly concentrated among leading companies and giants.
2) Middleware companies, which offer middleware between large models and applications. Yangqing Jia’s Lepton AI falls into this category.
3) Application-layer companies, which directly provide consumer-facing applications. These can further be divided into platform-type application companies, like Perplexity AI, and vertical application companies focusing on niche markets, such as Harvey AI, which recently completed financing for its legal application.
The incident with Perplexity AI and Lepton AI highlights a pain point for application-layer companies — high competitive pressure with insufficient moats. For instance, Perplexity, which aims to solve general information search problems, faces challenges from four fronts: pressure from giants like Google, competition from vertical knowledge applications like Harvey, market encroachment from other knowledge service companies, and disruptions from middleware companies like Lepton AI. Vertical application companies face slightly less pressure, but they still confront these four forces and have a smaller market size, resulting in fewer resources.
So, what can be done? Many entrepreneurs believe that accumulating proprietary data can create a sufficient moat. This is very sensible, but it presupposes a systematic methodology for acquiring proprietary data. Here, I propose a methodology: using contrarian insights to gain a time advantage, leveraging the founder’s personal strengths for breakthroughs, and focusing on data-first products or operational capabilities.
First, contrarian insights, or insights not commonly understood, are essential. Entrepreneurs must find less-trodden paths to build their core competitive advantage with minimal resources. What was once a contrarian insight can become common knowledge; for example, the combination of large models with chat interfaces was novel before ChatGPT but is now commonplace.
Second, the founder’s personal advantage is crucial. While contrarian insights offer temporary protection, they quickly become common knowledge once proven useful. Here, a deep understanding of user pain points in vertical applications can be a personal advantage. For example, one of Harvey’s co-founders was a lawyer. Even if a team doesn’t have a co-founder from a specific industry, previous experiences that can be leveraged as industry advantages are valuable.
Finally, building a data-first product or operational capability is key. The founder’s personal advantage must be systematized to be sustained. There are two strategies:
Product-driven: The founder uses their deep understanding of user needs to design a product that naturally accumulates high-quality data, enhancing the product experience and creating a flywheel effect.
Operation-driven: The founder uses their resources and experience to build an operational system that continually acquires proprietary data, making operations or sales more efficient and faster.
The former suits products focused on Product-led Growth (PLG), while the latter suits those driven by Sales-led Growth (SLG). Both must prioritize data. If product-driven, each feature should contribute to data accumulation. If operation-driven, operations should focus on data, not just revenue or other metrics.
Returning to Perplexity’s case, Yangqing Jia could replicate Perplexity’s main functions over a weekend, but not its data accumulation. As a middleware company, Lepton likely doesn’t intend this as a core strategy. However, many new application startups may use the same approach to challenge Perplexity further. Whether Perplexity can withstand this depends on its ability to build a moat with proprietary data.
My name is Jing Conan Wang, a co-founder and CTO of Storytell.ai. In October 2022, together with two amazing partners DROdio and Erika, we founded Storytell.ai, dedicated to distilling signal from noise to improve the efficiency of knowledge workers. The reason we chose the name Storytell.ai is that storytelling is the oldest tool for knowledge distillation in human history. In ancient times, people sat around bright campfires telling stories, allowing human experiences and wisdom to be passed down through generations.
The past year has been an explosive one for large language models (LLMs). With the meteoric rise of ChatGPT, LLMs have quickly become known to the general public. I hope to share my own personal story to give people a glimpse into the grandeur of entrepreneurship in the field of large language models.
From Google and Beyond
Although ChatGPT comes from OpenAI, the roots of LLMs lie in Google Brain – a deep learning lab founded by Jeff Dean, Andrew Ng, and others. It was during my time at Google Brain that I formed a connection with LLMs. I worked at Google for five years, spending the first three in Ads engineering and the latter two in Google Brain. Not long after joining Google Brain, I noticed that one colleague after another began shifting their focus to research on large language models. That period (2017-2019) was the germination phase for LLMs, with a plethora of new technologies emerging in Google’s labs. Being in the midst of this environment allowed me to gain a profound understanding of the capabilities of LLMs. Particularly, there were a few experiences that made me realize that a true technological revolution in language models was on the horizon:
One was about BERT, one of the best LLMs before ChatGPT. One day in 2018, while I was in a Google cafe, thunderous applause broke out: a group nearby was discussing the results of an experiment. Google provides free lunches for its employees, and lunchtime often brings people together to talk about work. A colleague asked me: “Do you know about BERT?” At the time, I only knew Bert as a character from the American show Sesame Street, which I had never watched. My colleague told me: “BERT has increased Google Search revenue by 1% in internal experiments.” Google’s revenue was already over a hundred billion dollars a year, so this was equivalent to several billion dollars in annual revenue. This was quite shocking to me.
Another was my experience with Duplex. Sundar Pichai demoed an AI making phone calls at Google I/O 2018, which caused a sensation in the industry. Our group was responsible for some of the related model work on the project, known internally as Duplex. The demo showed only a small part of what was possible; internally, there was a lot more data from similar AI phone calls. We often needed to review the results of the Duplex model, and the outcome was astonishing: I could hardly differentiate between conversations held by AI and those held by humans.
Another takeaway was my reflection on business models. Although I had worked in Google’s commercialization team for a long time, and the models I personally worked on generated over two hundred million dollars in annual revenue for Google, I realized that an advertising-driven business model would become a shackle for large language models. The biggest problem with the advertising business model is that it treats users’ attention (time) as a commodity for sale. To users, it seems like they are using the product for free, but in reality they are giving their attention to the platform. The platform has no incentive to increase user efficiency, but rather to capture more attention to sell at a very low price. Valuable users eventually leave the platform, and the platform itself becomes increasingly worthless.
One of the AI applications I worked on at Google Brain was video recommendation on YouTube’s homepage. The entire business model of Google and YouTube is based on advertising; longer user watch time means more ad revenue. Therefore, for applications like YouTube, the most important goal is to increase the total time users spend in the app. At that time, TikTok had not yet risen, and YouTube was unrivaled in the video domain in the United States. In YouTube’s model review meetings, we often joked that the only way to get more usage was to reduce the time people spend eating and sleeping. Although I wanted to improve user experience through better algorithms, no matter how I adjusted, the ultimate goal was still inseparable from increasing watch time to boost ad revenue.
During this contemplation, I gradually encountered the Software-as-a-Service (SaaS) business model and felt it was the right business model for large language models. In SaaS, users pay for subscriptions only if they receive continuous value. SaaS is customer-driven, whereas Google overly emphasizes an engineering culture and neglects customer value, making it very difficult to explore this path within Google. Ultimately, I was determined to leave Google and start my own SaaS company. At the end of 2019, I joined a SaaS startup as a Founding Member and learned how a SaaS company is built from zero to one.
At the same time, I was looking for good partners. Finally, in 2021 I met two amazing partners, DROdio and Erika, and we started Storytell.ai in 2022.
Build a company of belonging
The first thing we did at the inception of our company was to clarify our vision and culture. A company’s vision and culture truly define its DNA: the vision helps us know where to go, and the culture ensures we work together effectively. We want to build a company of belonging.
Storytell’s vision is to become the Clarity Layer, using AI to help people distill signal from noise (https://go.storytell.ai/vision).
We have six cultural values: 1) Apply High-Leverage Thinking. 2) Everyone is Crew. 3) Market Signal is our North Star. 4) We Default to Transparency. 5) We Prioritize Courageous Candor in our Interactions. 6) We are a Learning Organization. Please see https://go.Storytell.ai/values for details.
We also paid special attention to team culture as the company took shape. From the start, we hoped to work hard but also play harder. We have offsite gatherings every quarter, and the entire team is very fond of outdoor activities and camping, so we often hold various outdoor events (we have a shared album with photos from the very first day of our founding). We call ourselves the Storytell Crew, hoping that, like an astronaut crew, we can traverse the stars and oceans together.
Build a Product that people love
In the early stages of a startup, finding Product-Market Fit (PMF) is of utmost importance. Traditional SaaS software emphasizes specialization and segmentation, with typically only a few companies iterating within each niche, and product stability may take years to achieve. This year, ChatGPT brought about a radical market change. The explosive popularity of ChatGPT is a double-edged sword for SaaS entrepreneurs. On one hand, it reduces the cost of educating the market; on the other hand, the entire field becomes more competitive, with a surge of entrepreneurs entering the market and diverting customer resources, and much of the traffic ChatGPT brings is low-intent and ultimately fails to convert into product usage.
Many believe that the moat for startups applying large models is technology or data. We think neither is the case. The real moat is the skill in wielding this double-edged sword. Good swordsmanship can transform both edges of the sword into a force that breaks through barriers:
On one hand, for traditional SaaS, it’s about leveraging the momentum of ChatGPT to maximize the impact on traditional SaaS. Make customers feel the urgency to keep up with the times. Develop AI Native features that incumbents find hard to follow.
On the other hand, use the competition to bring about a thriving ecosystem and have a methodical and steadfast approach in product iteration, ultimately shortening the product iteration cycle to achieve the greatest momentum.
We follow these two principles in our own product iteration.
1) Data-guided: In the iteration process, we use the North Star Metric to guide our general direction. Our North Star Metric is:
Effective Reach = Total Reach × Effective Ratio
Total Reach is the number of summaries and questions asked on our platform each day. The Effective Ratio is a number from 0 to 1 that indicates how much of the content we generate is useful to users (see the computation sketch after this list).
2) User-driven. Drive product feature adjustments through in-depth communication with users. For collecting user feedback, we’ve adopted a combination of online and offline methods. Online, we use user behavior analysis tools to identify meaningful user actions and follow up with user interviews to collect specific feedback. Offline, we organize many events to bring users together for brainstorming sessions.
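As a minimal illustration of how the metric above composes (the function and the example numbers are hypothetical, for exposition only):

```python
def effective_reach(total_reach: int, effective_ratio: float) -> float:
    """North Star Metric: Effective Reach = Total Reach × Effective Ratio.

    total_reach: summaries and questions asked on the platform per day.
    effective_ratio: fraction (0-1) of generated content judged useful.
    """
    if not 0.0 <= effective_ratio <= 1.0:
        raise ValueError("effective_ratio must be between 0 and 1")
    return total_reach * effective_ratio

# Hypothetical day: 10,000 summaries/questions, 70% judged useful by users.
print(effective_reach(10_000, 0.7))  # 7000.0
```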
With this approach in mind, our product has undergone multiple rounds of iteration in the past year.
V0: Slack Plugin
Starting in June 2022, Erika, DROdio, and I conducted numerous customer discovery calls. During our interviews with users, we often needed to record the conversations. We primarily used Zoom, but Zoom itself did not provide a summarization tool back then, so I used the GPT-3 API to create a Slack plugin that automatically generates summaries. Whenever we had a Zoom meeting, the meeting video link would automatically be sent to a specific Slack channel, and our plugin would reply with an auto-generated summary. Users could also ask follow-up questions in the thread.
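For illustration, here is a minimal sketch of how such a plugin can be wired together with slack_bolt and the GPT-3 completions API as it existed at the time (openai<1.0). The fetch_transcript() helper is hypothetical, and this is not Storytell’s actual implementation:

```python
# Minimal sketch of the V0 Slack summarizer (not the actual implementation).
import os
import re

import openai  # openai<1.0 API style
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

openai.api_key = os.environ["OPENAI_API_KEY"]
app = App(token=os.environ["SLACK_BOT_TOKEN"])

ZOOM_LINK = re.compile(r"https://\S*zoom\.us/\S+")

def fetch_transcript(url: str) -> str:
    """Hypothetical helper: pull the transcript for a Zoom recording URL."""
    raise NotImplementedError

@app.event("message")
def summarize_zoom_links(event, say):
    match = ZOOM_LINK.search(event.get("text", ""))
    if not match:
        return
    transcript = fetch_transcript(match.group(0))
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Summarize this meeting transcript:\n\n{transcript}\n\nSummary:",
        max_tokens=300,
    )
    # Reply in a thread so users can ask follow-up questions there.
    say(text=resp.choices[0].text.strip(), thread_ts=event["ts"])

if __name__ == "__main__":
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```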
At that time, there weren’t many tools available for automatically generating summaries, and every user we interviewed was amazed by this tool. This made us gradually shift our focus towards automatic summarization. The Slack plugin allowed us to collect a lot of user feedback, and by the end of December 2022 we realized its limitations:
First, Slack is a high-friction system: only system administrators can install plugins; regular employees cannot install them on their own.
Second, we had almost no usage of our Slack plugin over the weekends; the likelihood of users adopting Slack in their personal workflows was low.
Third, Slack’s own interface caused a great deal of confusion for our users.
V1: Chrome Extension
We began developing a Chrome extension in December 2022, primarily to address the issues mentioned above. While Chrome extensions also have friction, users can choose to install them individually. A Chrome extension can also automatically summarize pages a user has visited, achieving the effect of AI as a companion. Additionally, Chrome extensions facilitate better synergy between personal and work use. During the iteration of the Chrome extension, we realized that chat is one of the most important means of interaction: users can accurately express their needs by asking questions (i.e., with prompts). Although we allowed users to ask questions during the Slack phase, the main focus was still on providing a series of buttons. While iterating on the Chrome extension, we discovered that the chat interface is very flexible and can quickly uncover customer needs that weren’t predefined.
On January 17th, we released our Chrome extension. However, on February 7th, Microsoft released Bing Chat (later known as Copilot), integrated into Microsoft Edge. By March, the Chrome Store was flooded with Copilot copycats, and we quickly realized that the space Copilot occupied would soon become saturated. Additionally, during development we became aware of some bottlenecks: the friction in developing Chrome extensions is quite high, since Google’s Web Store review process takes about a week. This wouldn’t be a problem in traditional software development, but it is a serious disadvantage in the large-model era, where iteration is essentially daily. If we could update only once a week, it would be easy to fall behind.
V2: VirtualMe™ (Digital Twin)
In March 2023, we began developing our own web-based application. Users can upload their documents or audio and video files, and then we generate summaries, allowing users to ask corresponding questions. Our initial intention was to build a user interaction platform that we could control. The development speed of the web-based application was an order of magnitude faster than the Chrome extension. We could release updates four to five times a day without waiting for Google’s approval. Moreover, with the Chrome extension, we could only use a small part of the browser’s right side. There were many limitations in interaction, but with the web-based platform, we have complete control over user interactions, allowing us to create more complex user-product interactions.
During this process, we learned that it is very difficult to retain users with utility applications. Users typically leave as soon as they are done with the tool, showing no loyalty, and costs remain high. Moreover, with a large number of AI utility tools going global, the field is becoming increasingly crowded.
We began deliberately filtering our users to interview enterprise users and understand their feedback. By June 2023, we realized that the best way to increase user stickiness was to integrate tightly with enterprise workflows. Enterprise workflows naturally result in data accumulation, and becoming part of an enterprise’s workflow enhances the product’s moat.
We started thinking about how our product could integrate with enterprise workflows. We came up with the idea of creating a personified agent. Most of the time when we encounter problems at work, we first ask our colleagues. A personified agent could integrate well with this workflow. We quickly developed a prototype and invited some users for beta testing.
Our initial user scenario envisioned that everyone could create their own digital twin. Users could upload their data to their digital twin so that when they are not online, it could answer questions on their behalf. After launching the product, we found that the most common use case was not creating one’s own digital twin, but creating the digital twin of someone else. For instance, we found that product managers were heavy users of our product. They mainly created digital twins of their customers to ask questions and see how the customers would respond.
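A minimal sketch of the digital-twin idea, assuming the openai<1.0 chat API; the helper name and prompt are hypothetical, not Storytell’s production code:

```python
# Sketch: answer questions in a person's voice, grounded in their documents.
import os
import openai  # openai<1.0 API style

openai.api_key = os.environ["OPENAI_API_KEY"]

def ask_virtual_me(persona: str, uploaded_docs: list[str], question: str) -> str:
    context = "\n---\n".join(uploaded_docs)[:8000]  # crude context budget
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": (f"You are a digital twin of {persona}. Answer only from "
                         "the documents below; if the answer is not there, say so."
                         f"\n\nDocuments:\n{context}")},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message["content"]

# e.g., a product manager probing a customer's twin:
# ask_virtual_me("Acme's Head of Ops", acme_notes, "What blocks adoption?")
```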
During the VirtualMe™ phase, we began to refine our enterprise user persona for the first time. We identified several user personas, mainly 1. Product Managers, 2. Marketing Managers, 3. Customer Success Managers. Their common characteristic is the need to better understand others and create accordingly.
At the end of July, we organized an offline event and invited many users to test our VirtualMe product together. They found our product very useful, but they had significant concerns about the personified agent. Personal branding is very important for our user group. They were worried that what the virtual twin says could impact their personal brand, especially since large models generally still have the potential for “hallucination.”
It was also at this event that users mentioned the part of our product they found most useful was the customizable Data Container and the ability to quickly generate a chatbot. At that time, no other product on the market could do this.
V3: SmartChat™
Starting in August, we began to emphasize data management features based on this approach and launched SmartChat™. In SmartChat™, once users upload data, we automatically extract tags from the content. Users can also customize tags for data management. By clicking on a tag, the ChatBot will converse based on the content associated with that tag. At the same time, we also launched an automation system that runs prompts for users automatically, pushing the results to the appropriate audience via Slack or email.
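A minimal sketch of the tag-extraction step on upload (the prompt and model choice are assumptions, again in the openai<1.0 style; not the production pipeline):

```python
# Sketch: extract topical tags from an uploaded document with an LLM.
import os
import openai  # openai<1.0 API style

openai.api_key = os.environ["OPENAI_API_KEY"]

def extract_tags(document_text: str, max_tags: int = 5) -> list[str]:
    """Ask the model for short topical tags describing the document."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": (f"Extract up to {max_tags} short topical tags from the "
                         "user's document. Reply with a comma-separated list only.")},
            {"role": "user", "content": document_text[:8000]},  # context budget
        ],
    )
    raw = resp.choices[0].message["content"]
    return [t.strip() for t in raw.split(",") if t.strip()]

# Clicking a tag then scopes the chatbot to documents carrying that tag.
```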
Our North Star Metric (NSM) up to December 1st of this year tells the story of these iterations. At the beginning of the year, during the Slack plugin phase, our NSM averaged only around 1. During the Chrome Extension phase, it reached the hundreds. VirtualMe™ pushed it up to 5,000.
By early December, our NSM was close to 20,000. Up to then, our growth had been entirely organic, and we felt we could start doing a bit of growth hacking. In December, we launched some influencer marketing activities, and our NSM grew roughly 30x, reaching 550K!
From an NSM of less than 1 at the beginning of the year to 550K by the end of the year, in 2023 we turned Storytell from a demo into a product with a loyal user base. I am proud of our Crew and very grateful to our early users and design partners.
Words at the end
From a young age, I have been particularly fond of reading books on the history of entrepreneurship. The year 2023 marks the beginning of a new era, as I embark on this journey myself. I know the road ahead is challenging, but I am fortunate to experience this process firsthand with my two amazing partners and our Crew. Regardless of the outcome, I will forge ahead with the whole Storytell Crew, fearless and without regret. I look forward to Storytell riding the waves in 2024!
Also, Storytell.ai is hiring front-end and full-stack engineers: https://go.storytell.ai/fse-role. If you are interested, or you know anyone who might be, please don’t hesitate to contact me at jingconan@storytell.ai.
Last week, an intriguing discussion caught my attention at a fantastic event organized by Leni. The panel discussion revolved around an interesting comparison: Will the Generative AI industry resemble the Coffee industry, with a dominant player like Starbucks, or the Winery industry, characterized by a multitude of providers offering differentiated products?
This thought-provoking question led me to delve deeper into the dynamics of the Generative AI industry. Here are my thoughts.
In any industry, two key factors significantly influence its structure: the fixed and marginal costs of producing the product, and the price for each unit of service. Let’s consider the Coffee and Winery industries for context.
In the Coffee industry, the high fixed cost, primarily branding, incentivizes scaling. Starbucks, for instance, has invested heavily in establishing a formidable brand and hence scales up to distribute that cost. In contrast, the Winery industry thrives on differentiation, with numerous wineries offering unique products.
Now, let’s apply these factors to the Generative AI industry. The industry can be divided into three essential layers as per the framework described by A16z:
1) The Infrastructure layer, which runs training and inference workloads for generative AI models.
2) The Foundational Model Layer, which provides the Foundational model via a proprietary API or open-source model checkpoints.
3) The Application Layer, where companies transform generative AI models into user-facing products, either by running their own model pipelines (“end-to-end apps”) or relying on a third-party API.
For the Foundational Model vendors, there’s a high fixed cost involved in training the models, and the marginal cost of providing a unit of service (an API call) is quite low. Moreover, most sales are made through API calls, which have a low unit price. This dynamic, coupled with fierce competition and the rise of competitive open-source alternatives, is causing the pricing power of proprietary API vendors to shrink rapidly. As a result, the Foundational Model market is likely to resemble the Coffee industry, where you either go to Starbucks (OpenAI) or make your own coffee (Open Source). The Infrastructure layer has very similar dynamics to the Foundational Model layer, so I will skip it in this discussion.
Moving to the Application Layer, it’s essential to differentiate between consumer and enterprise applications. Consumer applications are likely to follow the Coffee industry’s pattern due to the significant fixed cost of creating a consumer-facing brand and the strong incentive to scale.
However, enterprise applications might mirror the Winery industry. With the wide availability of LLM APIs, creating an enterprise AI application no longer requires a substantial fixed cost. Although enterprises do impose some fixed costs (e.g., data compliance), these are not on the same level as training an LLM and can be sequenced into the iteration with customers. Moreover, the price of enterprise applications can be quite high (up to six or seven figures for a single account), fostering an expectation of differentiated services.
In conclusion, the Generative AI industry presents a unique blend of the Coffee and Winery industries’ dynamics. The Foundational Model Layer and consumer applications at the Application Layer are akin to the Coffee industry, while enterprise applications at the Application Layer resemble the Winery industry. As the industry evolves, it will be fascinating to see how these dynamics play out.
This blog was finished with the help of SmartChat™ by Storytell.ai (both for researching content and for rewriting the final draft). SmartChat™ is available for initial testing: sign in at storytell.ai and open the dashboard (see https://share.getcloudapp.com/nOuLGPGN) to access this feature.