Though I haven’t been part of one, the image above is my best guess at how growth teams in modern-day B2B PLG SaaS startups discuss, strategize, build & operate today.
There’s a lot of chatter about what PLG (Product-Led Growth) means. A simple way to understand PLG is to think about using B2C-inspired product & marketing tactics in a B2B SaaS context.
This means an easily accessible product w/ a frictionless experience married w/ approaches like referral loops, constant UI/UX improvements & incremental feature rollouts.
As you’d likely guess, PLG requires rapid, data-driven experimentation & an iterative approach to product building, which in turn demands a lot of collaboration across teams.
If you probe a level deeper &/or speak to engineers/PMs on PLG teams, you’d realize the reliance on developers is huge, and much of it is for tasks that can be generalized/platformized/abstracted away as primitives.
As a cloud & distributed systems enthusiast, I set out to study the space, & the following is a non-exhaustive list of observations/findings. It’s purely meant to be a conversation starter.
Primitives for software development, PLG style
In my mind, the building blocks of dev primitives for PLG software development can be thought of as 2 separate components: exploration (experiments) & exposure (feature flagging/gating).
The two components are fundamentally different. Exposure decides who sees what: certain features will only be available to people in a certain cohort/package, & certain cohorts might get the product free forever. Exploration, by contrast, measures outcomes: experiments always accompany feature rollouts, & the product metrics under consideration must improve for an experiment to be considered successful.
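To make the split concrete, here’s a minimal sketch (my own, hypothetical; the config shape, flag names & percentages are all assumptions) of the two primitives side by side: exposure as a deterministic percentage-rollout gate, exploration as a deterministic variant assignment. Both hash the same unit ID so a given user always gets the same answer.

```python
import hashlib

# Assumed config shape: flag name -> {enabled, rollout_pct}
FLAGS = {"new_onboarding": {"enabled": True, "rollout_pct": 20}}

def bucket(unit_id: str, salt: str, buckets: int = 100) -> int:
    """Deterministically map a unit (user/device ID) to a bucket in [0, buckets)."""
    digest = hashlib.sha256(f"{salt}:{unit_id}".encode()).hexdigest()
    return int(digest, 16) % buckets

def is_exposed(flag: str, unit_id: str) -> bool:
    """Exposure primitive: is this feature on for this unit?"""
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    return bucket(unit_id, flag) < cfg["rollout_pct"]

def assign_variant(experiment: str, unit_id: str,
                   variants=("control", "treatment")) -> str:
    """Exploration primitive: which arm of the experiment does this unit see?"""
    return variants[bucket(unit_id, experiment, buckets=len(variants))]
```

Salting the hash with the flag/experiment name keeps assignments independent across experiments, so being in the treatment group of one doesn’t correlate with another.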
Based on some design considerations, I’m envisioning the platform as a combination of the following tiers →
Experimentation tier: Experiments will be ideated, created, updated, rolled out & then terminated. Managing the entire lifecycle will likely be a foundational component. It is also important to consider how a set of features will be packaged & exposed to a user & how to ensure that it corresponds to the specific experiment being conducted.
Delivery & Integration tier: The new code changes from experiment-inspired features should work well w/ the other components & not create trouble in the overall product experience. If you have a mobile version of the product, you’d also need app stores to approve your compiled binaries.
Data Infrastructure tier: Collecting data from product/feature usage & user journeys, processing it & then analyzing it daily w/ a robust data pipeline will be a key component.
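The lifecycle management mentioned in the experimentation tier can be sketched as a small state machine. This is purely illustrative; the state names & allowed transitions are my assumptions about how ideate → create → roll out → terminate might be enforced.

```python
from enum import Enum

class ExperimentState(Enum):
    DRAFT = "draft"            # ideated/created, not yet live
    RUNNING = "running"        # rolled out to users
    PAUSED = "paused"          # temporarily halted mid-flight
    TERMINATED = "terminated"  # finished; results locked in

# Assumed legal transitions; terminated is a terminal state.
TRANSITIONS = {
    ExperimentState.DRAFT: {ExperimentState.RUNNING},
    ExperimentState.RUNNING: {ExperimentState.PAUSED, ExperimentState.TERMINATED},
    ExperimentState.PAUSED: {ExperimentState.RUNNING, ExperimentState.TERMINATED},
    ExperimentState.TERMINATED: set(),
}

def transition(current: ExperimentState, target: ExperimentState) -> ExperimentState:
    """Move an experiment to a new state, rejecting illegal jumps."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current.value} -> {target.value}")
    return target
```

Making illegal transitions fail loudly (e.g. restarting a terminated experiment) is one way a platform can keep an experiment honest about what was actually measured.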
Experimentation Tier
The experimentation tier could be further broken down into the following components →
At the network level: Experiments can be done either on the client side or the server side. User action & journey flows would be client-side experiments, while new features being rolled out as a part of a package would be server-side experiments.
At the auth level: Many PLG products allow the end-user to experience quite a lot of the product w/o requiring them to sign in/up. These experiments are done at the pre-auth level & maybe you’d be using “Device ID” as the key field for experimentation. For a registered user, the experiments would be post-auth & the key unit would shift to “user ID” to serve a consistent experience to the user even if they change devices.
Monitoring experiment activity: Initial conditions of the experiments can change. For instance, the users could change devices (operating systems), accounts, and/or even their geography. An ideal platform would offer the flexibility to manage the experiment’s lifecycle as these conditions shift.
Delivery & Integration Tier
For a platform like this to find adoption, the barrier to launching new experiments should be low. That means reducing the effort needed for integration & surfacing results from it in the product UI as quickly as possible. Can we store feature flags & app configs on the server side for certain experiments to help enable this?
Data Infrastructure Tier
Things like enabling the platform to ingest product usage data, CRM data & Helpdesk data to create & manage different experiments would be a core design consideration before building such a platform. And being able to do it in real-time, keeping up w/ things like data latency, is a hard engineering problem.
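As a toy illustration of that ingestion problem, here’s a sketch of joining a product-usage event stream w/ CRM attributes to build the cohorts experiments would target. The field names & the in-memory CRM lookup are assumptions; a real pipeline would do this as a streaming join against far larger, laggier sources.

```python
from collections import defaultdict

# Stand-in for CRM data: account -> plan tier
crm = {"acct_1": {"plan": "free"}, "acct_2": {"plan": "enterprise"}}

def cohort_counts(events):
    """Count feature-usage events per (plan tier, feature) — a batch stand-in
    for what would be a streaming aggregation in a real pipeline."""
    counts = defaultdict(int)
    for e in events:
        plan = crm.get(e["account_id"], {}).get("plan", "unknown")
        counts[(plan, e["feature"])] += 1
    return dict(counts)

events = [
    {"account_id": "acct_1", "feature": "export"},
    {"account_id": "acct_2", "feature": "export"},
    {"account_id": "acct_1", "feature": "export"},
]
```

Even this toy version surfaces the hard parts the section alludes to: late-arriving CRM updates, unknown accounts, & keeping the join fresh enough for experiments to react in near real-time.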
As detailed above, these are some of the design considerations I could think of while whiteboarding this problem statement myself. And it’s only a conversation starter.
If you’ve worked on this problem at your current/previous setups, I’d love to hear from you to learn more about the problem space from a practitioner's lens.
If you think I have made some mistakes in terms of thinking through the design considerations of the platform, please do not hesitate to let me know.
That is all, thanks.
If you liked this piece, please consider subscribing.
If you’d like to discuss more on any/all of these points, please do so in the comments section below, LinkedIn, or on Twitter.
Thanks for reading this piece. If you liked/resonated w/ this, feel free to share it w/ others in your network.
[#16] Testing & Experimentation platforms in SaaS