[#32] Beyond the Hype: Building Transformative Product Experiences with Generative AI
Exploring how Generative AI transforms digital interactions into truly personalized, adaptive, and delightful experiences
(Generative) AI is underhyped right now
As we approach ChatGPT’s second anniversary, a fascinating dichotomy has emerged in technology circles. The conversation often centers on what Generative AI can't do—hallucinations, inconsistencies & a perceived lack of utility—and labels the technology overhyped. But I’d argue something much more nuanced: Generative AI is fundamentally underhyped, particularly when considering its transformative impact on how enterprises operate.
Today’s frontier models have evolved far beyond simple text generation. They're:
Processing & generating multiple media types: From generating video scripts to storyboards, AI is blurring the boundaries between different forms of content.
Writing & reviewing production-grade code: Tools like Copilot are not only helping write lines of code but even suggesting architectural improvements.
Operating computer systems autonomously: Autonomous systems are now able to configure themselves, with models acting like self-driving "administrators" for infrastructure.
Executing complex business workflows: AI-driven automation now orchestrates adaptive customer journeys, making real-time adjustments to campaigns & workflows.
Interfacing with enterprise systems: Generative models are being integrated into ERP & CRM systems, not merely to pull data but to anticipate trends & inform decision-making.
Yes, challenges exist. Models still hallucinate. Consistency remains a concern. But focusing on these limitations misses a crucial point: the fundamental building blocks for knowledge work transformation are already here.
The recent earnings calls from major cloud providers & technology leaders tell a compelling story:
Enterprise Adoption
AWS's AI business: Multi-billion dollar run rate
Triple-digit YoY growth
Growing 3x faster than AWS itself at a similar stage
Infrastructure Investment
Custom silicon development accelerating
Manufacturing capacity being expanded
Price-performance improvements driving adoption
Developer Impact
25%+ of new code at major tech companies is AI-generated
Substantial productivity gains in engineering teams
Accelerated development cycles
P.S.: Unless otherwise noted, the quotes below are taken directly from each company's earnings call and are attributed to its CEO.
Apple
“Earlier this week, we made the first set of Apple Intelligence features available in U.S. English for iPhone, iPad and Mac users with system-wide writing tools that help you refine your writing, a more natural and conversational Siri, a more intelligent Photos app, including the ability to create movies simply by typing a description, and new ways to prioritize and stay in the moment with notification summaries and priority messages.”
“Apple Intelligence, which marks the start of a new chapter for our products. This is just the beginning of what we believe generative AI can do, and I couldn't be more excited for what's to come.”
Apple’s push to integrate AI across user devices reflects a commitment to making AI as fundamental as the App Store—a shift to ambient intelligence.
Amazon
“In the last 18 months, AWS has released nearly twice as many machine learning and gen AI features as the other leading cloud providers combined. AWS's AI business is a multibillion-dollar revenue run rate business that continues to grow at a triple-digit year-over-year percentage and is growing more than 3x faster at this stage of its evolution as AWS itself grew, and we felt like AWS grew pretty quickly”
“As customers approach higher scale in their implementations, they realize quickly that AI can get costly. It's why we've invested in our own custom silicon in Trainium for training and Inferentia for inference. The second version of Trainium, Trainium2 is starting to ramp up in the next few weeks and will be very compelling for customers on price performance. We're seeing significant interest in these chips, and we've gone back to our manufacturing partners multiple times to produce much more than we'd originally planned.”
“It's moving very quickly and the margins are lower than what they -- I think they will be over time. The same was true with AWS. If you looked at our margins around the time you were citing, in 2010, they were pretty different than they are now.”
The Trainium story is a prime example of infrastructure players investing deeply; it matters not just for AWS but as a broader indicator of market direction & future growth.
Alphabet/Google
“Just this week, AI overview started rolling out to more than 100 new countries and territories. It will now reach more than 1 billion users on a monthly basis. We are seeing strong engagement, which is increasing overall search usage and user satisfaction. People are asking longer and more complex questions and exploring a wide range of websites. What's particularly exciting is that this growth actually increases over time as people learn that Google can answer more of their questions”
“We've had 2 generations of Gemini model. We are working on the third generation, which is progressing well. And teams internally are now set up much better to consume the underlying model innovation and translate that into innovation within their products”
“We're also using AI internally to improve our coding processes, which is boosting productivity and efficiency. Today, more than 1/4 of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster. I'm energized by our progress and the opportunities ahead, and we continue to be laser-focused on building great products.”
Alphabet's move to deploy the Gemini model at scale underscores not just innovation but how Generative AI becomes intrinsic to user engagement on a billion-user platform.
There's a significant information asymmetry in the market: some organizations are seeing transformative results while others are still questioning the practical value. This gap isn't about technology limitations—it's about implementation knowledge & organizational readiness around data & personnel.
Even if AI development were to halt today (it won't), we'd still have years of transformative change ahead as enterprises integrate existing capabilities into their operations. The question isn't whether GenAI will transform enterprise operations—it's how quickly organizations can adapt to capitalize on these changes.
The most interesting aspect? This entire transformation has emerged in just 24 months. The real story isn't about the technology itself—it's about the speed of enterprise adoption & the scale of impact we're already seeing.
This isn't just another technology wave. It's a fundamental shift in how enterprises operate, develop, & deliver value. And we're just getting started.
Rethinking standard product experiences using Generative AI
While traditional machine learning has long been used to power recommendation engines, search optimization & basic automation tasks, Generative AI can take these product experiences to the next level—enabling hyper-personalization, richer contextual understanding & entirely new modes of user interaction. Below are some of the potential product experiences Generative AI makes possible:
Hyper-Personalized App Recommendations in App Stores
Current app store recommendations tend to be generic, relying on broad user categories & historical download patterns. Generative AI, on the other hand, can take into account real-time user behavior, app usage & contextual factors such as time of day or recent activities on the device. Imagine an app store that knows not just that you like fitness apps, but that you're specifically looking for mindfulness & stretching guides for early mornings. This level of hyper-personalization can drive user engagement & app discovery in ways that traditional models cannot.
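To make this concrete, here's a minimal sketch of how real-time signals might be assembled into a prompt for a model-backed ranker. Everything in it (the context fields, the candidate list, the helper name) is hypothetical; the resulting prompt would be handed to whichever model your stack uses.

```python
from datetime import datetime

def build_recommendation_prompt(user_context: dict, candidate_apps: list[str]) -> str:
    """Assemble real-time context signals into a single prompt for an LLM ranker.

    All field names here are illustrative; in a real app store they would come
    from telemetry, device state & the catalog service.
    """
    return (
        "You are an app-store recommendation assistant.\n"
        f"Local time: {user_context['local_time']}\n"
        f"Recently used apps: {', '.join(user_context['recent_apps'])}\n"
        f"Recent searches: {', '.join(user_context['recent_searches'])}\n"
        f"Candidate apps: {', '.join(candidate_apps)}\n"
        "Rank the candidates for this user right now and explain each pick in one sentence."
    )

# Hypothetical signals captured at request time.
context = {
    "local_time": datetime(2024, 11, 12, 6, 15).strftime("%A %H:%M"),
    "recent_apps": ["Sleep Cycle", "Calm"],
    "recent_searches": ["morning stretching", "guided breathing"],
}
candidates = ["StretchIt", "Headspace", "Strava", "MyFitnessPal"]

print(build_recommendation_prompt(context, candidates))
```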
AI Assistant for Curated App Discovery
Generative AI can provide a guided, conversational experience for discovering new apps. Instead of browsing through endless categories, users could interact with an AI assistant to describe what they need: "I'm looking for an app to help my kids learn math in a fun way." The assistant can dynamically refine & personalize the recommendations based on further inputs, creating a concierge-like experience. Traditional ML can serve broad categories, but Generative AI can generate contextual, conversation-driven suggestions, reducing search fatigue & improving discovery efficiency.
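A rough sketch of that concierge loop, assuming the OpenAI Python SDK as the chat client (any chat-completions-style API works the same way). The system prompt, model name & example requests are illustrative assumptions, not a prescribed setup.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# The system prompt scopes the assistant to app discovery; the running history
# keeps the conversation coherent across refinements ("cheaper", "no ads", ...).
history = [{
    "role": "system",
    "content": "You are an app-discovery concierge. Suggest at most three apps per "
               "turn and ask one clarifying question when the request is ambiguous.",
}]

def discover(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; swap for whatever you deploy
        messages=history,
    ).choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(discover("I'm looking for an app to help my kids learn math in a fun way."))
print(discover("They're 6 and 8, and I'd prefer something without ads."))
```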
Contextual Summarization of User Reviews
Reading through dozens or hundreds of app reviews can be overwhelming. Generative AI can provide a summarized, contextual version of reviews, highlighting the aspects that are most relevant to the user’s needs. For example, if a user is searching for a travel planning app, a generative model could prioritize reviews that discuss features like offline maps or itinerary planning. This reduces decision friction & enhances user confidence when choosing an app, ultimately improving conversion rates.
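Here's one way such a summary could be produced: condition the model on the shopper's stated need & pass the raw reviews alongside it. The reviews, prompt wording & model name below are assumptions for illustration; the pattern is what matters.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical reviews pulled from the listing.
reviews = [
    "Offline maps saved my trip through rural Portugal, zero signal needed.",
    "The itinerary builder is clunky on tablets but fine on my phone.",
    "Great for flights, but hotel tracking keeps duplicating bookings.",
]
user_need = "travel planning with reliable offline maps and itineraries"

prompt = (
    f"A shopper is evaluating a travel app and cares about: {user_need}.\n"
    "Summarize the reviews below in three bullet points, keeping only points "
    "relevant to what the shopper cares about, and note any recurring complaint.\n\n"
    + "\n".join(f"- {r}" for r in reviews)
)

summary = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

print(summary)
```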
Adaptive Personal Health Companion
Imagine a health companion app that doesn't just track your fitness metrics but proactively designs a wellness plan based on your lifestyle & habits. Generative AI could offer personalized health advice, daily exercise routines & meal plans that adapt to your changing preferences, energy levels & even stress markers. It might suggest an early morning yoga routine if you've had a stressful week or guide you towards high-energy workouts when you're feeling energized. Additionally, the AI could respond to changes like a sudden illness, offering modified routines to aid recovery, or suggesting specific nutritional needs for better immune support. This level of tailored interaction goes beyond static health tracking to become an adaptive, empathetic personal companion, constantly learning & evolving based on your behavior.
Personalized Content Curation & Playlist Generation
Traditional ML-generated playlists & content suggestions are often based on static patterns—frequent genres or repeated listens. Generative AI takes this a step further by generating playlists that adapt dynamically to user contexts. For instance, a morning playlist that understands your preference for upbeat, motivational tracks but also varies based on how you interacted with music the night before. Imagine a playlist that starts off soothing if you had a restless sleep, gradually increasing tempo to help you get energized throughout the day. Generative AI doesn’t just select songs based on genre—it creates an experience that matches your current state of mind & personal context.
Enhanced Search Experience with Contextual Understanding
Search, especially in app stores, is largely dependent on keyword matching or basic filters. Generative AI, however, can understand nuanced, complex queries. A user might ask, "What are the best productivity apps for someone who works late nights & travels frequently?" Instead of static results based on keywords, Generative AI synthesizes data across similar use cases, past app downloads & user behavior to provide highly targeted, relevant recommendations. This level of semantic understanding can significantly enhance search satisfaction, providing results that feel genuinely curated to the user’s individual needs.
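A sketch of how a nuanced query might be translated into the store's own filter schema before it ever hits the ranking backend. The JSON schema here is invented for illustration, and the json.loads fallback is a simple guardrail against off-format output.

```python
import json
from openai import OpenAI

client = OpenAI()

query = "best productivity apps for someone who works late nights & travels frequently"

# Ask the model to translate the free-form query into a structured filter
# payload (the schema below is made up for illustration).
instruction = (
    "Convert the app-store query into JSON with keys: category (string), "
    "must_have_features (list of strings), usage_context (list of strings). "
    "Return only JSON.\n\nQuery: " + query
)

raw = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model
    messages=[{"role": "user", "content": instruction}],
).choices[0].message.content

try:
    filters = json.loads(raw)
except json.JSONDecodeError:
    # Guardrail: fall back to a plain keyword search if the model drifts off-format.
    filters = {"category": "productivity", "must_have_features": [], "usage_context": []}

print(filters)  # handed to the existing search/ranking backend
```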
Immersive Learning Experiences
Imagine an educational app that doesn’t just provide lessons, but adapts its content dynamically based on the learner's strengths & weaknesses. Generative AI can create individualized lesson plans that adjust in real time—if a student struggles with a particular topic, it can break down concepts into simpler parts, provide additional resources, or even create a personalized set of practice questions. Moreover, Generative AI could provide motivational feedback tailored to the student’s learning pace & personality, making the learning process not only effective but also enjoyable. This adaptive & responsive learning environment makes traditional one-size-fits-all educational content seem archaic.
These product experiences demonstrate how Generative AI can extend beyond the limitations of traditional ML models by adding context, nuance & personalization to the user journey. This isn't just an incremental improvement—it redefines how users interact with digital products, making the experiences more intuitive, enjoyable & ultimately useful.
The next section will explore why Generative AI has a fundamental edge over traditional approaches, examining its unified architecture, enhanced adaptability & richer context handling.
Why Generative AI Has a Fundamental Edge Over Traditional Approaches
Generative AI represents a leap forward from traditional AI & machine learning approaches, not just in model complexity, but in the ability to fundamentally reshape product experiences by offering richer context, adaptability & multi-faceted capabilities. Below, we explore why Generative AI has an edge over traditional models in creating & delivering transformative experiences.
Unified Model Architecture vs. Specialized Models
Traditional Approach: In the traditional setup, different tasks require different models, often specialized & trained for each unique use case. For example, one model might be optimized for recommendation systems while another handles search functionality. These models might have different architectures—such as CNNs for image processing or RNNs for sequence prediction. Managing multiple models results in high maintenance costs, specialized infrastructure requirements & fragmented user experiences. Additionally, these models need regular retraining & fine-tuning to remain effective, especially when new data or new patterns emerge, which adds to the complexity & cost of maintenance.
Generative AI Approach: Generative AI operates on a unified model architecture that can handle diverse use cases seamlessly. For instance, a large language model can serve multiple purposes—recommendation generation, conversational assistance, review summarization—without needing separate specialized models. The use of transfer learning & few-shot learning allows these models to adapt quickly to new use cases, significantly reducing operational complexity & enabling cross-functional consistency. This unified architecture not only lowers the overhead associated with managing multiple models but also results in more cohesive user experiences, as the same model learns & improves across a broader spectrum of tasks.
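As a small illustration of the unified-architecture point, the same (assumed) model can back several product surfaces just by swapping prompt templates. Nothing below is a specific vendor recipe, only a sketch of the pattern.

```python
from openai import OpenAI

client = OpenAI()

# One model, several product surfaces: each surface is just a prompt template.
TASKS = {
    "recommend": "Suggest three {category} apps for this user profile: {profile}",
    "summarize": "Summarize these reviews in two sentences: {reviews}",
    "assist":    "Answer this support question about the app store: {question}",
}

def run_task(task: str, **kwargs) -> str:
    prompt = TASKS[task].format(**kwargs)
    return client.chat.completions.create(
        model="gpt-4o-mini",  # the same (assumed) model serves every task
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

print(run_task("recommend", category="fitness", profile="early riser, prefers yoga"))
print(run_task("summarize", reviews="Love the offline mode. Crashes on older phones."))
```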
Understanding Context & User Intent
Traditional Approach: Traditional ML models rely heavily on structured features & explicit rules to infer user preferences. This makes understanding nuanced user intent challenging, as the models require extensive feature engineering & typically employ collaborative filtering or content-based filtering. These methods are effective for basic personalization but fall short when dealing with complex, multi-dimensional user needs. Traditional models are often rigid, requiring a significant amount of manual intervention to handle evolving user preferences or contextual changes.
Generative AI Approach: Generative AI is inherently better at parsing & understanding natural language, allowing it to capture subtle user intents & temporal contexts. For example, it can differentiate between a user's interest in language-learning apps for casual gaming versus serious study. Additionally, generative models can handle temporal changes in preferences—like recommending different music in the morning compared to the evening—leading to more sophisticated & contextually relevant user experiences. Generative AI can also process implicit cues, such as mood or inferred motivations, making its recommendations & interactions feel far more intuitive & aligned with user needs. This deeper contextual understanding enables a fluid & adaptive user experience that traditional methods cannot match.
Content Processing & Generation
Traditional ML: Traditional models are largely focused on classification & prediction. They can process structured metadata but struggle with unstructured content like reviews, descriptions, or user comments. The content they generate, if any, is usually templated & lacks the depth or richness that comes from genuine understanding. Traditional approaches often need separate models for content extraction, classification & summarization, which makes the pipeline complex & disjointed.
Generative AI Evolution: Generative AI models are capable of both processing & generating content. They can synthesize information from multiple sources—such as product reviews, customer feedback & usage data—to create concise, human-readable summaries. This ability to generate semantically rich content provides added value, whether through contextualized app reviews, personalized blog summaries, or dynamic marketing copy. For instance, a generative model can seamlessly blend insights from different content types, providing not only a summary of customer sentiment but also actionable recommendations for product improvements. This deep content synthesis capability gives businesses an edge in creating engaging, context-rich user experiences.
Personalization Mechanism
Traditional Approach: Most traditional models use matrix factorization or explicit feature-based similarity metrics to create personalized experiences. However, this approach is limited in dealing with cold-start problems & requires a substantial amount of user-item interaction data to be effective. Additionally, traditional approaches often struggle to combine implicit & explicit signals into cohesive user profiles. These models also tend to become stale over time, requiring periodic updates & retraining to remain relevant, which is both time-consuming & resource-intensive.
Generative AI: Generative AI excels at incorporating implicit signals, such as natural language queries or inferred user emotions, which traditional models overlook. For example, when a user interacts with an AI assistant & describes a preference conversationally, generative models can interpret these nuances directly & adapt accordingly. This adaptability is particularly powerful in handling sparse data scenarios & integrating a wide range of contextual signals, thereby delivering far richer & more accurate personalization. Generative AI can also explain its recommendations in natural language, providing transparency & building trust with users. By offering explanations like "Based on your recent interest in productivity apps during late evenings, I suggest trying this tool," the user feels more connected to the personalization process, enhancing overall satisfaction.
Query Understanding & Enhanced Search
Traditional: Traditional search models focus on keyword matching, using approaches like TF-IDF or embedding-based similarity. They have limited semantic understanding & often require extensive rules for query expansion to improve the relevance of search results. This means traditional models are prone to producing irrelevant results when queries are ambiguous or phrased in a non-standard way, reducing the overall search experience quality.
Generative AI: Generative AI, in contrast, understands natural language queries with a deeper sense of context & meaning. This allows it to handle complex & ambiguous queries far more effectively. For instance, a user might ask, "What productivity tools should I use if I work late nights & frequently travel?" Generative AI can infer nuanced meaning from the question, taking into account factors like working hours, productivity habits & travel contexts to deliver highly targeted, relevant results. This richer semantic understanding significantly enhances the user experience by providing more accurate, nuanced & satisfying results, which in turn boosts engagement & reduces search fatigue.
Key Technical Differentiators
Zero/Few-Shot Learning Capabilities: Traditional ML requires extensive retraining to adapt to new use cases or domains. Generative AI, however, leverages zero-shot or few-shot capabilities to handle novel tasks with minimal additional data. This adaptability drastically reduces the development cycle & accelerates deployment across new scenarios. With few-shot learning, a generative model can understand a new domain by being exposed to only a handful of examples, enabling rapid scaling & adaptation (see the few-shot sketch after this list).
Multi-Task Learning: Traditional AI relies on separate models for each task, which limits knowledge sharing across use cases. Generative AI employs multi-task learning, allowing a single model to address a wide array of functionalities while benefiting from shared insights across tasks, creating a more cohesive & versatile user experience. This ability to learn across tasks not only makes Generative AI more efficient but also enhances its performance, as learnings from one domain can improve results in another.
Rich Contextual Understanding: Traditional AI operates within predefined feature spaces, which limits its ability to understand dynamic, evolving contexts. Generative AI, by contrast, is designed to interpret rich, nuanced contexts, making it ideal for situations where user needs are fluid & constantly changing. Whether it’s understanding the sentiment behind a request or adapting content to match a user’s specific preferences, Generative AI can deliver a far more personalized & context-aware interaction.
Adaptability: Traditional models have fixed decision boundaries, meaning they need to be explicitly retrained whenever a new pattern emerges. Generative AI, with its flexible architecture, is far more adaptable, responding to new patterns & scenarios with minimal need for manual intervention. This flexibility allows generative models to stay relevant in rapidly changing environments, such as those influenced by social trends or shifting consumer behaviors, without the costly overhead of frequent retraining cycles.
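To make the few-shot point concrete, here's a minimal sketch: three labeled examples embedded in the prompt are enough to stand up a feedback classifier for a new domain, with no retraining pipeline. The labels & examples are made up for illustration, and the resulting prompt goes to the same chat endpoint used elsewhere.

```python
# A few labeled examples "teach" the task inside the prompt itself; there is no
# retraining step. Labels and examples below are hypothetical.
FEW_SHOT_EXAMPLES = [
    ("App crashes every time I open the camera filter.", "bug"),
    ("Please add a dark mode for night-time reading.", "feature_request"),
    ("How do I transfer my subscription to a new phone?", "support_question"),
]

def build_classifier_prompt(feedback: str) -> str:
    examples = "\n".join(f"Feedback: {text}\nLabel: {label}"
                         for text, label in FEW_SHOT_EXAMPLES)
    return (
        "Classify app-store feedback as bug, feature_request, or support_question.\n\n"
        f"{examples}\n\nFeedback: {feedback}\nLabel:"
    )

# Swapping the three examples is all it takes to point the model at a new domain.
print(build_classifier_prompt("The new update drains my battery in an hour."))
```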
These distinctions illustrate why Generative AI isn’t merely an incremental improvement over traditional AI, but a fundamental shift in how we think about AI capabilities. It introduces a unified, adaptive & context-aware model that offers richer experiences, is easier to scale & has the potential to transform how we interact with technology. As Generative AI continues to evolve, its ability to seamlessly blend context, adaptability & deep learning will pave the way for creating more intuitive, responsive & ultimately human-like interactions.
Reimagining the Future with Generative AI: A Call to Action
Generative AI is not just a technological upgrade—it's a paradigm shift that has the potential to fundamentally reshape how we interact with digital systems, products, & services. Unlike traditional AI & machine learning models, which are often narrow, task-specific & limited by their predefined architectures, Generative AI offers a holistic, unified approach that brings adaptability, deep context awareness & versatility to the forefront.
One useful mental model to understand the role of Generative AI is to think of foundational models as “contextually unaware instruction followers.” This framework captures the essence of what these models do: follow instructions in the context they are given, without maintaining a persistent memory or deep, ongoing awareness across sessions. This idea is not only powerful for understanding the technology itself but also highly valuable for product thinking.
Product Integration Implications:
Every digital interface becomes a potential instruction surface: Generative AI enables traditional UIs—buttons, forms & menus—to be augmented or even replaced with natural language instructions, transforming how users interact with products.
Products as "instruction orchestrators": Instead of rigid, fixed workflows, think of products as orchestrators that take user instructions & execute them through dynamic, AI-driven processes.
Key Product Design Questions:
What instructions does the user actually want to give? Identifying user intent is central to making the most of Generative AI’s instruction-following abilities.
How to manage context effectively given the "context unaware" nature? Products need smart context-handling layers that maintain relevant information & inject it when necessary.
How to structure the interaction to get reliable outputs? Clear framing of user inputs, guardrails & validation mechanisms are key to leveraging Generative AI effectively.
When to use LLMs vs traditional UI elements? Understanding when natural language is the best fit versus traditional UI can create smoother, more intuitive experiences.
System Design Implications:
Context Management Across Sessions: Foundational models do not maintain memory, so designers need to create external systems to manage context & user state over time.
When to Use LLMs as Components vs. End-to-End Solutions: Generative AI doesn’t need to solve everything. It should be thought of as one component in a broader system that integrates other elements for persistence, validation & deterministic processes.
Handling Instruction Ambiguity: Since LLMs can sometimes interpret instructions in unintended ways, careful system design is needed to refine outputs, resolve ambiguity & validate accuracy.
Identifying Where LLMs Are Not the Right Solution:
Consistent State/Memory Needs: Scenarios where persistent state or long-term memory is crucial may require more robust context-handling mechanisms beyond what Generative AI provides.
Exact, Deterministic Outputs: When precision & determinism are critical, LLMs should be augmented with additional layers, such as rule-based systems or validation workflows.
Cross-Session Awareness: LLMs inherently lack awareness across sessions, but this limitation can be addressed by storing & injecting relevant context through well-designed product layers.
These limitations can be seen not as barriers but as implementation challenges that can be overcome with a thoughtful system architecture:
State/Memory? → Build a context management layer.
Deterministic Outputs? → Add validation or guardrails (see the validation sketch after this list).
Cross-Session Awareness? → Store & inject relevant context appropriately.
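For the guardrails point, a minimal validation layer might look like the sketch below: parse the model's output, check it against the structure the product expects & signal a retry or rules-based fallback when it doesn't conform. The schema & field names are hypothetical.

```python
import json

REQUIRED_KEYS = {"app_name", "confidence", "reason"}

def validate_recommendation(raw_model_output: str):
    """Accept the model's output only if it is well-formed JSON with the expected
    keys and a sensible confidence value; return None to signal a retry/fallback."""
    try:
        payload = json.loads(raw_model_output)
    except json.JSONDecodeError:
        return None
    if not isinstance(payload, dict) or not REQUIRED_KEYS.issubset(payload):
        return None
    confidence = payload.get("confidence")
    if not isinstance(confidence, (int, float)) or not 0.0 <= confidence <= 1.0:
        return None
    return payload

good = '{"app_name": "Headspace", "confidence": 0.82, "reason": "matches mindfulness intent"}'
bad = "Sure! I would recommend Headspace because..."

print(validate_recommendation(good))  # parsed dict, safe to render in the UI
print(validate_recommendation(bad))   # None, so retry the call or fall back to a rules-based path
```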
Real-World Example:
Consider ChatGPT, which appears to “remember” conversations. This isn’t because the GPT-4 model inherently maintains state, but rather because the product layer effectively manages conversation history & injects context when appropriate. Instead of viewing these limitations as constraints that rule out LLM use, a more practical question is: "How do we architect systems that combine LLMs' instruction-following capabilities with other components to create robust, scalable product experiences?"
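A minimal sketch of that product layer, assuming the OpenAI Python SDK as the client: the model itself stays stateless while a small store keeps per-session history & re-injects recent turns on every call. The session IDs, model name & 20-turn window are illustrative choices, not a fixed recipe.

```python
from openai import OpenAI

client = OpenAI()

class ConversationStore:
    """Minimal product-layer memory: the model stays stateless, the store
    re-injects prior turns on every request."""

    def __init__(self) -> None:
        self._sessions: dict[str, list[dict]] = {}

    def chat(self, session_id: str, user_message: str) -> str:
        history = self._sessions.setdefault(session_id, [])
        history.append({"role": "user", "content": user_message})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",     # assumed model
            messages=history[-20:],  # inject only recent turns to control cost
        ).choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply

store = ConversationStore()
print(store.chat("user-42", "Recommend a podcast app with good offline downloads."))
print(store.chat("user-42", "Which of those works best on Android Auto?"))  # "those" resolves via injected history
```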
The distinctions outlined in this analysis are not just academic—they are directly actionable for businesses, developers & decision-makers. As companies increasingly strive to personalize user experiences, reduce operational complexity & offer more intuitive products, Generative AI represents the next big leap forward. It provides the tools needed to generate richer, context-aware interactions, synthesize content dynamically & streamline processes that were previously siloed & disjointed.
For enterprises, the call to action is clear: begin integrating Generative AI now. The value lies not just in improving existing features but in redefining what your digital products can do. The companies that embrace Generative AI early will be the ones best positioned to deliver truly differentiated customer experiences—those that feel intuitive, personal & intelligently responsive. From hyper-personalized recommendations to real-time conversational assistance, the possibilities are immense.
For developers, it's time to think beyond specialized models & silos of functionality. Generative AI opens up opportunities to unify & enhance applications, allowing a single model to serve multiple functions—drastically reducing development timelines & infrastructure burdens. Leveraging the zero-shot & few-shot capabilities of these models allows developers to iterate quickly, explore new use cases & provide solutions that adapt dynamically to changing user needs.
For decision-makers & leaders, the goal should be to establish an AI-first mindset within your organizations. Educate your teams about the transformative potential of Generative AI, invest in the right talent & ensure your data infrastructure can support such powerful models. Generative AI isn’t about incremental improvements—it’s about being able to offer experiences that fundamentally change how customers perceive & engage with your brand.
Generative AI is still in its early stages, but the pace of progress is astonishing. The best time to act is now. Whether you’re building a new product or enhancing an existing one, start experimenting, learning & integrating Generative AI capabilities into your roadmap. This technology is evolving rapidly & the companies that adapt with it will be the ones that thrive in a future increasingly driven by AI innovation.
Measuring the Value of Generative AI Features
The end outcomes for Generative AI features typically revolve around two key goals: increasing product adoption or boosting revenue. To understand if these features are delivering tangible value, it's crucial to track specific metrics that map directly to these objectives.
User Engagement and Adoption Metrics: Metrics like Daily Active Users (DAU), Monthly Active Users (MAU), and increased interactions with digital products provide a direct indicator of whether new AI-driven features are resonating with users. For example, in e-commerce settings, tracking increased product searches or added-to-cart rates can indicate the success of AI-powered recommendations in engaging users.
Feature Interaction Metrics: Track the number of features each user engages with. Beyond review summaries, if Generative AI powers product recommendations, it's important to measure whether users who interact with those recommendations go on to make purchases or explore more options. For instance, in a customer support context, if AI-driven chat helps users navigate complex queries, are those users more likely to resolve their issues without additional help?
User Satisfaction and Feedback: After users engage with a feature, satisfaction surveys and feedback loops become essential in assessing the quality of the AI-generated experiences. For SaaS platforms, feedback metrics could include how easily customers were able to onboard using AI-guided setups or whether a conversational AI assistant helped them solve issues more effectively compared to traditional methods.
Conversion Rates and Task Completion Metrics: To assess the full user journey, metrics like conversion rates, task completion time, and journey analysis are pivotal. For instance, in online education platforms, how many users move seamlessly from discovering a new course through AI-powered recommendations to enrolling, completing lessons, and submitting assignments? Tracking each stage's efficiency and understanding drop-off points can reveal how well Generative AI features are aiding the learning journey.
Content Consumption and Secondary Business Metrics: Metrics like content consumption rates help measure the indirect impact of AI features. In the case of media platforms, AI-generated content recommendations leading to increased video plays or article reads can signify improved engagement. Similarly, in B2B software, if AI-driven insights lead to more frequent use of key features, this signals better engagement and value realization, ultimately supporting core metrics like customer retention and reduced churn.
Batch Workload Metrics: Many AI product experiences involve batch processing, such as analyzing customer service interactions to generate insights every few hours. Metrics that track the efficiency of these batch workloads, like processing time and data freshness, can provide insights into operational impact. For instance, in healthcare, Generative AI can be used to analyze patient records periodically to provide updated treatment recommendations—ensuring insights are timely and relevant.
By systematically tracking these metrics, businesses can gain a clear understanding of whether Generative AI is delivering on its promise of increasing adoption, improving user experiences, and ultimately driving revenue growth.
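As a concrete starting point, a few of these metrics can be computed directly from raw product events. The event schema below is invented for illustration, and in practice this would run against your analytics warehouse rather than an in-memory list.

```python
from datetime import date

# Hypothetical product events: (user_id, day, event_type)
events = [
    ("u1", date(2024, 11, 1), "ai_recommendation_viewed"),
    ("u1", date(2024, 11, 1), "purchase"),
    ("u2", date(2024, 11, 1), "ai_recommendation_viewed"),
    ("u3", date(2024, 11, 1), "search"),
    ("u3", date(2024, 11, 2), "ai_recommendation_viewed"),
    ("u3", date(2024, 11, 2), "purchase"),
    ("u4", date(2024, 11, 2), "search"),
]

day = date(2024, 11, 1)
active_today = {u for u, d, _ in events if d == day}
feature_users = {u for u, _, e in events if e == "ai_recommendation_viewed"}
converted = {u for u, _, e in events if e == "purchase"} & feature_users
all_users = {u for u, _, _ in events}

print(f"DAU on {day}: {len(active_today)}")
print(f"AI-feature adoption: {len(feature_users) / len(all_users):.0%}")
print(f"Conversion among AI-feature users: {len(converted) / len(feature_users):.0%}")
```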
My recommendation to readers is to start by identifying areas within your product or service offerings where context-aware, adaptive features could provide value. Begin experimenting with Generative AI frameworks, explore partnerships with AI solution providers & position your teams to capitalize on what is quickly becoming the most significant technological advancement of the decade.
Most examples in this post are from the real world & currently in beta or early experimentation. If you'd like to learn more about specific use cases or discuss how to integrate these insights to create stunning product experiences, I’d love to chat more.