  • πŸš€ Chapter 12: INNOVATE

    CHAPTER 12

    πŸš€ INNOVATE

    Building an Innovation Factory Inside Your Company

    PART 1 OF 3

    πŸ“Œ Summary in One Line

    ⚑ Big companies don’t have to lose their innovative edgeβ€”they can build “internal startups” and create an innovation sandbox that protects both the parent organization and the entrepreneurial teams trying to disrupt it.

    πŸ” Key Concepts

    🎯 Core Ideas from Chapter 12

    1. Portfolio Thinking πŸ“Š

    Companies that scale successfully don’t just focus on one product or one phase. They manage multiple initiatives at different stages: some products are in research mode, others are scaling fast, and some are in maintenance or cost-cutting phases. This portfolio approach lets organizations balance stability with innovation.

    2. Internal Entrepreneurship πŸ’Ό

    Innovation teams inside large companies are actually entrepreneurs tooβ€”they just operate within corporate walls. These teams need the same freedom and structure as external startups to succeed, but they’re often crushed by bureaucracy and political games.

    3. The Innovation Sandbox πŸ–οΈ

    Think of this as a “safe zone” where internal startup teams can run experiments without getting shut down by nervous executives. The sandbox has clear rules that protect both the team’s freedom to innovate and the parent company’s core business from risky chaos.

    4. Three Structural Needs πŸ› οΈ

    For internal startups to thrive, they need three things: scarce but secure resources (enough budget, but protected from corporate politics), independent authority to build and launch without endless approvals, and personal stake in the outcome so the team actually cares about results.

    πŸ’‘ Practical Takeaways

    πŸŽ“ What You Can Actually Use

βœ… Don’t hide innovation teams. Secret projects might sound cool, but they create paranoia and politics in the parent organization. Transparency builds trust.

βœ… Set clear sandbox boundaries. Define which customer segments or product features the team can experiment with, how long experiments can run, and how many users they can impact. This gives freedom without chaos.

βœ… Match people to phases. Not everyone thrives in early-stage chaos. Some people are amazing at innovation, others excel at scaling or optimization. Let people move to where they fit best instead of forcing innovators into maintenance roles.

βœ… Make entrepreneurship a career path. Instead of just promoting people into traditional management, create official “entrepreneur” roles where people are measured by learning milestones and can keep launching new ventures.

βœ… Use learning milestones, not just financial metrics. Internal startups should be held accountable through validated learningβ€”did they test their assumptions? Did they discover customer needs? Not just “did they hit revenue targets?”

    πŸ’­

    Hamed’s Take: Why Most Companies Kill Their Own Innovation

    I’ve seen this pattern play out dozens of times with clients. A company hires me to “help with innovation” or “build something new,” but then the internal teams immediately start fighting the project. Why? Because innovation is threatening.

    When I was building a mobile app for a mid-sized retail company in Tehran, the existing IT department saw us as the enemy. They had spent years building their legacy system, and now here we were with a small team, moving fast, testing with real customers every week. We were proving that you don’t need 18 months and 50 people to launch something valuable.

    The problem wasn’t the IT teamβ€”they were smart people. The problem was the system. They were measured on uptime and stability. We were measured on learning and iteration. Those are completely different games, and the company didn’t create space for both to coexist.

    This chapter’s concept of the innovation sandbox is brilliant because it acknowledges this tension. You can’t just tell an internal team “go be innovative!” and expect magic. You need clear rules: which customers can they test with? How long can experiments run? What metrics matter?

    In my experience with startup consulting, the companies that get this right do three things:

    • They protect the innovation budget from quarterly budget cuts. Even if sales drop 20%, the innovation team’s resources stay intact. This seems counterintuitive, but innovation is how you survive downturns, not a luxury you cut first.
    • They celebrate learning, not just wins. When I run workshops with corporate teams, I always push them to share what they learned from failed experiments. Most companies only celebrate successful launches, which teaches people to hide failures and fake success.
    • They let innovators move on. Once an internal startup proves its concept and starts scaling, the original team should be free to start a new venture. Forcing creative people into maintenance mode kills both the person and the innovation pipeline.

    One client I worked withβ€”a logistics companyβ€”created an “internal ventures” program where employees could pitch ideas, get 3 months of protected time and a small budget, and test their concept. If it worked, they’d either lead the scale-up or train someone else and move to the next idea. In 18 months, they launched 4 new revenue streams this way. None would have happened through traditional product management.

    πŸ“– End of Part 1 | Continue to Part 2 for Real Examples and Sandbox Rules β†’

    CHAPTER 12

    πŸš€ INNOVATE

    Real-World Innovation Sandbox Examples

    PART 2 OF 3

    πŸ“š Real-World Examples

    🏒 How Companies Actually Use Innovation Sandboxes

    Example 1: Facebook’s Testing Infrastructure πŸ”¬

    Facebook built one of the most sophisticated innovation sandboxes in tech. They allow teams to run thousands of A/B tests simultaneously, but within strict boundaries. The sandbox rules include:

    • Limited audience: Most experiments can only affect 0.1% to 5% of users at first
    • Time limits: Tests run for fixed periods (usually 1-2 weeks) before evaluation
    • Automatic rollback: If key metrics drop below thresholds, the experiment stops automatically
    • Clear ownership: Each team owns specific product areas and can’t mess with others’ features

    This system lets Facebook maintain stability for billions of users while still shipping hundreds of changes per day. Innovation doesn’t require chaos.
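The audience cap, time limit, and automatic rollback described above can be sketched in a few lines. This is a hypothetical illustration of the mechanics, not Facebook’s actual infrastructure; the class name, thresholds, and bucketing scheme are all assumptions:

```python
import time

class SandboxExperiment:
    """Hypothetical sketch of the sandbox rules above: a capped audience,
    a fixed deadline, and automatic rollback when a key metric drops."""

    def __init__(self, name, audience_pct, max_days, metric_floor):
        # Rule of thumb from the text: start with a tiny slice of users.
        assert 0 < audience_pct <= 5.0, "start between 0.1% and 5% of users"
        self.name = name
        self.audience_pct = audience_pct
        self.deadline = time.time() + max_days * 86400
        self.metric_floor = metric_floor
        self.active = True

    def in_experiment(self, user_id):
        """Deterministic bucketing: a user always sees the same variant."""
        if not self.active or time.time() > self.deadline:
            return False
        bucket = hash((self.name, user_id)) % 10_000
        return bucket < self.audience_pct * 100  # e.g. 1% -> buckets 0..99

    def record_metric(self, value):
        """Automatic rollback: stop the experiment the moment metrics dip."""
        if value < self.metric_floor:
            self.active = False
```

The fourth rule, clear ownership, would live outside a class like this, in whatever maps teams to product areas.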

    Example 2: Toyota’s Innovation Teams πŸš—

    Toyota created internal innovation teams focused on emerging markets. Instead of forcing them to follow the standard product development process (which could take 3-5 years), they set up a sandbox with these rules:

    • Geographic focus: Teams could only launch in specific emerging markets first
    • Budget caps: Maximum $5M per experiment to prevent catastrophic losses
    • Learning milestones: Teams reported validated learnings every quarter, not just financial results
    • Fast iterations: Maximum 6-month cycles from idea to market test

    This approach helped Toyota develop low-cost vehicles for markets like India and Africa without disrupting their premium brand in developed markets.

    πŸ–οΈ Building Your Own Innovation Sandbox

    πŸ“‹ The Five Rules of a Good Sandbox

    Rule 1: Any team can create a true split-test experiment

    Don’t require 15 approvals before testing. If a team follows the sandbox rules, they should be able to launch an experiment without asking permission from senior management. This is what separates real innovation from theater.

    Rule 2: Experiments must affect only a limited number of customers

    Start small. Maybe 100 users, maybe 1000, but never “everyone.” This protects the core business while giving you real data. As you validate assumptions, you can gradually increase exposure.

    Rule 3: Every experiment has a maximum time duration

    No endless projects. Set clear deadlines: 1 week, 1 month, 1 quarter maximum. When time’s up, you either kill it, pivot it, or scale it. This prevents innovation theater where teams “experiment” forever without learning anything.

    Rule 4: Experiments must be evaluated by validated learning metrics

    Don’t judge early experiments on revenue or profit. Ask: Did we validate our assumptions? Did we discover something about customer behavior? Did we reduce uncertainty? These are the metrics that matter in the exploration phase.

    Rule 5: Teams must monitor experiments in real-time and rollback if needed

    Set up dashboards that show key metrics live. If something breaks or key numbers drop dramatically, the team should be able to stop the experiment immediately without waiting for a weekly meeting. Speed in both directionsβ€”launching and stoppingβ€”is critical.
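Taken together, the five rules amount to a pre-launch checklist rather than an approval chain. A minimal sketch, assuming illustrative caps of 1,000 customers and 90 days (the numbers are placeholders, not prescriptions from the book):

```python
from dataclasses import dataclass

@dataclass
class ExperimentPlan:
    """Hypothetical experiment plan covering the five sandbox rules."""
    max_customers: int        # Rule 2: limited audience
    max_duration_days: int    # Rule 3: fixed deadline
    learning_metrics: tuple   # Rule 4: judged on validated learning
    live_dashboard: bool      # Rule 5: real-time monitoring...
    rollback_plan: bool       # ...and the ability to stop immediately

def can_launch(plan, customer_cap=1_000, duration_cap=90):
    """Rule 1 in spirit: if the plan fits the sandbox, no approvals needed."""
    return (0 < plan.max_customers <= customer_cap
            and 0 < plan.max_duration_days <= duration_cap
            and len(plan.learning_metrics) > 0
            and plan.live_dashboard
            and plan.rollback_plan)
```

A team whose plan passes the check launches without asking permission; a team whose plan exceeds the caps negotiates a bigger sandbox first.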

    πŸ’­

    Hamed’s Take: How I Apply This With Iranian Clients

    When I work with established companies in Iranβ€”whether it’s a fashion brand, a clinic, or a tech companyβ€”the sandbox concept is usually the hardest thing to implement. Not because it’s technically difficult, but because it requires psychological safety.

    Let me give you a concrete example. I consulted for a chain of physiotherapy clinics in Tehran. They wanted to add online booking and telemedicine services, but the owners were terrified of messing with their existing operations. Traditional clinics = proven money. Digital stuff = risky unknown.

    Here’s how we created a sandbox:

    • Limited scope: We picked ONE clinic location (the smallest one) for testing. The other 4 locations continued business as usual.
    • Time limit: 60-day experiment. That’s it. After 60 days, we’d evaluate and decide next steps.
    • Budget cap: Maximum 50 million tomans for development and testing. If it failed, they could afford the loss.
    • Learning metrics: We didn’t measure revenue. We measured: How many patients tried online booking? What % completed the process? What was the biggest friction point? Did telemedicine consultations maintain quality scores?
    • Rollback plan: If patient satisfaction dropped below 4.0/5.0, we’d pause everything and revert to traditional booking only.

    Result? After 45 days, we discovered that online booking worked great (32% adoption rate), but patients hated telemedicine for physical therapy (makes senseβ€”you can’t adjust someone’s posture through a screen). Without the sandbox, they probably would have either launched nothing (too scared) or launched everything at once and created chaos.

    Another clientβ€”a clothing brand selling through Instagramβ€”wanted to build a proper e-commerce website. Classic mistake: they wanted to launch with 500+ products, full payment integration, inventory management, and everything perfect. I convinced them to create a sandbox version:

    • Launch with just 20 best-selling items
    • Accept only bank transfer payments (no complex payment gateways yet)
    • Run it parallel to Instagram for 30 days
    • Measure ONE thing: Would customers actually complete orders on the website vs. Instagram DM?

    Turns out, 70% of their Instagram audience preferred DM shopping because they could negotiate prices and ask questions. This was a huge learning that saved them from building an expensive full e-commerce platform nobody wanted. Instead, we built a lightweight “inquiry form” system that worked WITH their Instagram flow, not against it.

    The sandbox isn’t just for big tech companies. It’s for anyone who wants to innovate without gambling their entire business. You just need to answer: What’s the smallest version we can test? With how many customers? For how long? And what will we learn?

    πŸ“– End of Part 2 | Continue to Part 3 for Common Mistakes and Action Steps β†’

    πŸ’­

    Hamed’s Final Take: Innovation is a Discipline, Not Magic

    Here’s what most people get wrong about innovation: they think it’s about creativity, brainstorming, and brilliant ideas. It’s not. Innovation is about discipline.

    In my years of consultingβ€”whether I’m working on a Kotlin app, optimizing SEO for an e-commerce site, or helping a traditional business go digitalβ€”the companies that succeed at innovation all have one thing in common: they treat it like a system, not a miracle.

    When I’m building a website for a client, I don’t just throw up a beautiful design and hope it works. I create hypotheses about user behavior, test different landing pages, measure conversion rates, iterate based on data. That’s the same process a billion-dollar company should use for launching a new product line.

    One of my favorite projects was with a small digital marketing agency in Tehran. They wanted to expand into content production (video, podcasts, etc.) but were scared to invest. Here’s what we did:

    • Sandbox setup: They committed 3 team members for 60 days and a budget of 30 million tomans.
    • Customer limit: They could only pitch content services to their 5 smallest existing clients (the ones who paid them the least for traditional marketing).
    • Learning metrics: We didn’t care about revenue. We measured: How many clients were interested? What type of content did they want? What price range would they pay? Could the team actually produce quality content on deadline?
    • Rollback trigger: If content production hurt their ability to deliver traditional marketing services, we’d pause immediately.

    Result? After 45 days, they discovered that 4 out of 5 clients wanted short-form video content (Instagram Reels style), but nobody wanted podcasts. They also learned their team was good at scripting but terrible at video editing, so they’d need to hire or outsource that skill.

    Without a sandbox, they would’ve either: (a) never tried content production, or (b) hired a full video production team, bought expensive equipment, and failed spectacularly. The sandbox let them learn cheaply and expand intelligently.

    Another example from my SEO consulting work: A client wanted to expand from Persian content to English content to reach international customers. Classic mistake would be to hire English writers, translate everything, and hope for the best. Instead:

    • We picked 5 of their best-performing Persian articles
    • Translated and optimized them for English keywords
    • Ran ads to English-speaking audiences for 30 days
    • Measured: Click-through rates, bounce rates, conversion rates

    Turns out, their content style didn’t translate well culturally. Persian audiences loved their conversational, story-driven approach, but English readers wanted more data and bullet points. We learned this for the cost of 5 articles and some ad spend, not a full content team and a year of wasted effort.

    The bottom line: Whether you’re a solo freelancer like I started, a small agency, or a big corporation, the principles of the innovation sandbox apply. You need:

    1. Clear boundaries so you don’t risk everything
    2. Real customer exposure so you learn real things
    3. Time limits so you don’t experiment forever
    4. Learning metrics so you know what worked and what didn’t

    Innovation isn’t about having the best ideas. It’s about having a system to test ideas cheaply, learn quickly, and scale what works. That’s it. Everything else is just noise.

    πŸ’¬ Key Quotes to Remember

    “The mistake isn’t trying to innovate. The mistake is expecting innovation to follow the same rules as your core business.”

    “Entrepreneurs thrive in conditions of extreme uncertainty. That’s the opposite of what most corporations optimize for.”

    “The innovation sandbox isn’t about protecting the company from risk. It’s about protecting innovation from the company.”

    πŸ“‹ Chapter 12 Summary

    βœ… Main Takeaways:

    • Big companies can maintain their innovative edge by creating “innovation sandboxes” that protect both the parent business and experimental teams
    • Internal entrepreneurs need the same conditions as external startups: scarce but secure resources, independent authority, and personal stake in outcomes
    • Portfolio thinking means managing multiple projects at different stages with different metricsβ€”research needs learning metrics, growth needs efficiency metrics, mature products need profit metrics
    • Innovation sandboxes have five key rules: any team can experiment, experiments affect limited customers, experiments have time limits, success is measured by learning, and teams monitor in real-time
    • The biggest mistake is innovation theaterβ€”pretending to innovate without actually exposing experiments to real customers and real market feedback

    🎯 Your Next Move:

    Pick one small innovation idea this week and design a sandbox for it. Define the boundaries, time limit, budget, customer segment, and learning metrics. Then actually run the experiment and report what you learnedβ€”not what you earned.

    πŸŽ‰ Chapter 12 Complete!

    You’ve learned how to build an innovation factory inside your company

    Next up: Chapter 13 (if available) or review previous chapters to connect the dots

    πŸ“š Summarized with 30%+ personal analysis by Hamed | Β© All rights reserved
  • πŸ”§ Chapter 11: Adapt

    Welcome to Chapter 11: Building Adaptive Organizations

    We’ve learned how to move fast with small batches (Chapter 9) and how to grow sustainably (Chapter 10). Now comes the ULTIMATE challenge: How do you maintain your startup speed as your organization grows?

    The Central Problem

    Every successful startup faces this paradox:

    • Day 1: 3 founders in a garage β†’ Move incredibly fast!
    • Year 1: 10 employees β†’ Still pretty fast!
    • Year 3: 50 employees β†’ Things start slowing down…
    • Year 5: 200 employees β†’ Bureaucracy everywhere! Slow as a big corporation!

    The Question Eric Ries Answers: How do you grow from 5 to 500 employees WITHOUT losing the speed and agility that made you successful?

    The Answer: Build an ADAPTIVE organization that can adjust its processes automatically based on current conditions!


    The Five Whys: Root Cause Analysis

    Eric Ries introduces the most powerful tool for building adaptive organizations: The Five Whys! This technique comes from Toyota and helps you dig deep to find the ROOT CAUSE of any problem!

    What is The Five Whys?

    The Concept: When a problem occurs, ask “Why?” five times to get to the root cause!

    Why Five Times?

    • First “Why” β†’ Usually reveals a symptom
    • Second “Why” β†’ Gets a bit deeper
    • Third “Why” β†’ Starting to see the real issue
    • Fourth “Why” β†’ Getting close to root cause
    • Fifth “Why” β†’ Usually hits the ROOT CAUSE!

    Eric’s Example from IMVU:

    Problem: A new feature caused the website to crash!

    Question 1: Why did the website go down?
    Answer: A database query in the new feature was poorly written.

    Question 2: Why was the query poorly written?
    Answer: The engineer who wrote it didn’t know how to write efficient database queries.

    Question 3: Why didn’t the engineer know how to write efficient queries?
    Answer: He had never been trained in database optimization.

    Question 4: Why had he never been trained?
    Answer: We don’t have a training program for new engineers.

    Question 5: Why don’t we have a training program?
    Answer: We’ve been growing so fast that we prioritized hiring over training!

    ROOT CAUSE IDENTIFIED: Lack of systematic training for new engineers!

    The Solution: Don’t just fix the bad query! Create a training program so this never happens again!
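The chain above is mechanical enough to express in code. A toy sketch, with Eric’s IMVU answers paraphrased into a lookup table (the helper function and its names are mine, not from the book):

```python
def five_whys(problem, ask, depth=5):
    """Walk the why-chain: ask 'Why?' of each answer until it bottoms out.
    Returns the full trail; the last entry is the candidate root cause."""
    trail = [problem]
    for _ in range(depth):
        answer = ask(trail[-1])
        if answer is None:  # no deeper cause known
            break
        trail.append(answer)
    return trail

# The IMVU example encoded as a lookup table (paraphrased):
imvu = {
    "website went down": "badly written database query",
    "badly written database query": "engineer untrained in efficient queries",
    "engineer untrained in efficient queries": "never trained in optimization",
    "never trained in optimization": "no training program for new engineers",
    "no training program for new engineers": "hiring prioritized over training",
}
trail = five_whys("website went down", imvu.get)
```

In a real session the `ask` step is a human conversation, of course; the point is that the output is a chain, and the fix targets the last link.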

    β˜• Hamed’s Analysis: Why Most Companies Stop at “Why #1”

    When I teach Five Whys to Iranian startups, I see the same pattern:

    The Typical Response:

    • Problem happens
    • Team asks “Why?” ONCE
    • Finds surface-level cause
    • Applies quick fix
    • Same problem happens again next month!

    Real Example – E-commerce Startup:

    Problem: Customer complained that their order was delayed!

    Typical Response (Asking Why Only Once):

    • Question: Why was the order delayed?
    • Answer: The warehouse staff forgot to ship it.
    • “Solution”: Tell the warehouse staff to be more careful!
    • Result: Same problem happens again the next week!

    Five Whys Approach (What I Made Them Do):

    Why #1: Why was the order delayed?
    The warehouse staff forgot to ship it.

    Why #2: Why did they forget?
    The order wasn’t on their packing list.

    Why #3: Why wasn’t it on the packing list?
    The order management system didn’t send it to the warehouse system.

    Why #4: Why didn’t the system send it?
    There’s a bug that prevents orders placed after 6 PM from syncing until the next morning.

    Why #5: Why does this bug exist?
    When we added the warehouse integration, we didn’t test late-night orders!

    ROOT CAUSE: Inadequate testing procedures for system integrations!

    Real Solution:

    • Fixed the bug
    • Added automated tests for 24-hour order flow
    • Created a checklist for all future integrations
    • Problem NEVER happened again!

    Key lesson: Don’t treat symptoms! Dig until you find the ROOT CAUSE!


    Proportional Investment: The Secret to Adaptive Organizations

    Here’s where Eric Ries reveals the GENIUS of Five Whys: It’s not just about finding problems – it’s about making proportional investments in prevention!

    The Proportional Investment Rule

    Eric’s Formula: For every problem that occurs, invest time proportional to the severity of the problem in preventing it from happening again!

    How It Works:

    Small Problem:

    • A minor bug affects 10 users
    • Takes 1 hour to fix
    • Investment: Spend 1 hour writing a test to prevent it!

    Medium Problem:

    • A feature causes crashes for 1,000 users
    • Takes 1 day to fix
    • Investment: Spend 1 day improving your testing process!

    Major Problem:

    • The entire site goes down for 4 hours
    • Costs $100,000 in lost revenue
    • Investment: Spend 1 week building better monitoring and alerting systems!

    Why This is BRILLIANT:

    • Small problems β†’ Small investments in prevention (not overwhelming!)
    • Big problems β†’ Big investments in prevention (totally justified!)
    • The organization AUTOMATICALLY adapts to its most critical problems!
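The proportionality rule can be written down as a tiered policy. The thresholds below are illustrative placeholders matching the three examples above, not numbers from the book:

```python
def prevention_plan(users_affected, fix_cost_hours):
    """Hypothetical tiers: the bigger the problem, the bigger the
    prevention investment (returned as an (action, hours) pair)."""
    if users_affected <= 100 and fix_cost_hours <= 1:
        return ("write a regression test", 1)        # small: ~1 hour
    if users_affected <= 10_000:
        return ("improve the testing process", 8)    # medium: ~1 day
    return ("build monitoring and alerting", 40)     # major: ~1 week
```

The exact cutoffs matter less than the shape: prevention spend scales with problem cost, so the organization automatically invests most where it hurts most.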

    Example: How IMVU Used Proportional Investment

    The Situation at IMVU:

    They were deploying 50 times per day (as we learned in Chapter 9). Sometimes deployments broke things!

    Traditional Response:

    • Every time something breaks β†’ Panic!
    • Add more approval processes
    • Slow down deployments
    • Eventually, you’re only deploying once per month!

    IMVU’s Five Whys Response:

    Problem: A deployment broke the login system!

    Five Whys Analysis:

    1. Why did login break? A code change altered the authentication flow.
    2. Why wasn’t this caught in testing? We didn’t have a test for that specific login scenario.
    3. Why didn’t we have that test? The engineer didn’t know that scenario needed testing.
    4. Why didn’t they know? We don’t have documentation of critical user flows.
    5. Why don’t we have documentation? We’ve never prioritized creating it!

    Proportional Investment:

    • Fixed the immediate bug (2 hours)
    • Added test for that login scenario (2 hours)
    • Documented the 10 most critical user flows (1 day)
    • Made it a rule: Before deploying changes to these flows, check the documentation! (ongoing)

    Result: Login-related bugs decreased by 80%! And they could STILL deploy 50 times per day!

    Eric’s insight: “We didn’t slow down to prevent problems – we got SMARTER about preventing them!”

    β˜• Hamed’s Real Example: Iranian Fintech Startup

    Let me share a PERFECT example of proportional investment in action:

    Client: Mobile Wallet App

    Problem #1 (Minor): A user couldn’t update their profile picture.

    Traditional Response:

    • Fix the bug
    • Move on

    My Five Whys Session:

    1. Why couldn’t they update it? The image upload function timed out.
    2. Why did it time out? Their image was 10MB (too large!).
    3. Why did we allow 10MB uploads? We never set a file size limit.
    4. Why no limit? The developer didn’t think about it.
    5. Why didn’t they think about it? We don’t have design specifications for file uploads!

    Proportional Investment:

    • Fixed the bug (30 minutes)
    • Added file size validation (30 minutes)
    • Created a guideline document for file uploads (1 hour)
    • Total investment: 2 hours for a minor problem βœ“
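The guideline that came out of that session could be as simple as a shared validator every upload path calls. A sketch with an assumed 2 MB cap (the real limit would come from the guideline document, not from me):

```python
MAX_UPLOAD_BYTES = 2 * 1024 * 1024  # assumed 2 MB cap, for illustration only

def validate_upload(size_bytes, content_type):
    """Reject oversized or non-image files before they reach the
    timeout-prone upload path from the story above."""
    if size_bytes > MAX_UPLOAD_BYTES:
        return (False, "file too large")
    if not content_type.startswith("image/"):
        return (False, "unsupported file type")
    return (True, "ok")
```

With this in place, the user's 10MB profile picture would have been rejected with a clear message instead of timing out.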

    Problem #2 (Major): A bug caused incorrect transaction amounts for 500 users!

    Traditional Response:

    • Fix the bug
    • Manually correct 500 transactions
    • Apologize to users
    • Hope it doesn’t happen again!

    My Five Whys Session:

    1. Why were amounts incorrect? A rounding error in the currency conversion code.
    2. Why wasn’t this caught? We don’t have automated tests for currency conversions.
    3. Why no tests? The finance module is “too complex to test” (engineer’s excuse!).
    4. Why is it too complex? The code wasn’t designed with testability in mind.
    5. Why not? We don’t have coding standards that require testable code!

    Proportional Investment:

    • Fixed the bug (4 hours)
    • Corrected all affected transactions (6 hours)
    • Wrote comprehensive tests for currency module (3 days)
    • Refactored currency code for better testability (1 week)
    • Created company-wide coding standards (2 weeks)
    • Trained all engineers on the new standards (1 week)
    • Total investment: 1 month for a MAJOR problem βœ“

    Results:

    • Currency bugs: ZERO in the next 12 months!
    • Overall code quality improved
    • Engineers became better at writing testable code
    • Customer trust increased (no more transaction errors!)

    The key insight: Minor problems deserve minor investments. Major problems deserve major investments. This is how you build an adaptive organization!


    Avoiding the Blame Game: Creating a Culture of Learning

    Eric Ries addresses the BIGGEST challenge with Five Whys: How do you prevent it from turning into a witch hunt?

    The Five Whys Trap: Blaming People

    The Problem:

    When you ask “Why?” repeatedly, you ALWAYS end up at a person!

    Example:

    1. Why did the feature break? The code had a bug.
    2. Why was there a bug? The engineer made a mistake.
    3. Why did the engineer make a mistake? They weren’t careful enough.
    4. Why weren’t they careful? They’re lazy/incompetent!
    5. Conclusion: Fire the engineer! ❌

    This is WRONG and TOXIC!

    Eric’s Rule: The Five Whys should NEVER result in blaming a person! Instead, it should reveal SYSTEMIC problems!

    How to Do Five Whys Correctly:

    Wrong approach: “Why did SARAH make this mistake?”

    Right approach: “Why did our SYSTEM allow this mistake to happen?”

    Reframing the Questions:

    1. Why did the feature break? The code had a bug.
    2. Why was there a bug? The code wasn’t reviewed before deployment.
    3. Why wasn’t it reviewed? We don’t have a code review process for this module.
    4. Why don’t we have a process? We haven’t documented which modules need mandatory reviews.
    5. Why haven’t we documented it? We lack a system for identifying high-risk code areas!

    Solution: Create a risk assessment system for code changes! βœ“

    Notice the difference? We went from “blame Sarah” to “improve our system!”

    Eric’s Rule: Make the Person Who Caused the Problem Lead the Five Whys!

    This is COUNTERINTUITIVE but BRILLIANT!

    Why This Works:

    Traditional approach:

    • Engineer breaks something
    • Manager conducts Five Whys
    • Engineer feels defensive and blamed
    • Toxic culture develops!

    Eric’s approach:

    • Engineer breaks something
    • THAT SAME ENGINEER leads the Five Whys session!
    • Engineer takes ownership of finding the root cause
    • Engineer proposes systemic improvements
    • Learning culture develops! βœ“

    The Psychology:

    • People are less defensive when they’re in control
    • Engineers become invested in preventing future problems
    • The whole team learns together!

    Eric’s Example from IMVU:

    When an engineer deployed code that crashed the site, Eric asked THAT ENGINEER to:

    • Lead the Five Whys session
    • Identify the root cause
    • Propose preventive measures
    • Implement those measures!

    Result:

    • The engineer felt empowered (not blamed!)
    • They came up with better solutions than management would have!
    • They became the company’s champion for deployment safety!

    Eric’s insight: “Transform the person who caused the problem into the person who solves it!”

    β˜• Hamed’s Experience: Implementing Blame-Free Five Whys

    Let me share how I helped an Iranian startup implement this!

    Client: SaaS Company (30 employees)

    Their Culture Before Five Whys:

    • When something broke β†’ Find who did it
    • Public shaming in team meetings
    • Engineers terrified of making mistakes
    • Innovation stalled (everyone played it safe!)

    The Transformation Process:

    Step 1: Set the Ground Rules

    I made the CEO announce these rules to the ENTIRE company:

    • “We will never fire someone for making an honest mistake”
    • “Every problem is a learning opportunity”
    • “The person involved in the problem will lead the analysis”
    • “We focus on fixing SYSTEMS, not blaming PEOPLE”

    Step 2: The First Five Whys Session

    Problem: A junior developer accidentally deleted a production database table!

    Traditional Response: Fire the junior developer! ❌

    My Response: Let the junior developer lead the Five Whys! βœ“

    The Session:

    (I facilitated, but the junior developer asked the questions!)

    1. Why was the table deleted? I ran a DELETE command on the wrong database.
    2. Why did you run it on the wrong database? Production and staging databases look identical in the admin panel.
    3. Why do they look identical? We use the same admin tool for both environments.
    4. Why don’t we have visual differentiation? No one thought to add it when we set up the tool.
    5. Why didn’t anyone think of it? We don’t have a checklist for setting up admin tools safely!

    The Junior Developer’s Proposed Solutions:

    • Add color coding: Production = RED, Staging = GREEN
    • Require typing “CONFIRM” before any DELETE command in production
    • Create a database operations checklist
    • Schedule monthly training on database safety
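Two of those proposals, the CONFIRM prompt and the color coding, are simple enough to sketch. A hypothetical guard, not the client’s actual tooling:

```python
ENV_COLORS = {"production": "RED", "staging": "GREEN"}  # visual differentiation

def allow_command(env, command, typed_confirmation=""):
    """Destructive commands against production require typing CONFIRM."""
    destructive = command.strip().upper().startswith(
        ("DELETE", "DROP", "TRUNCATE"))
    if env == "production" and destructive:
        return typed_confirmation == "CONFIRM"
    return True
```

Note the fix is systemic: nothing here depends on any individual being “more careful.”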

    CEO’s Response: “These are EXCELLENT ideas! Let’s implement all of them!”

    Results After 6 Months:

    • ZERO production database incidents!
    • The junior developer became the team’s database safety expert!
    • Engineers felt SAFE to experiment and innovate!
    • Product development speed increased 3X!

    The lesson I told the CEO: “That ‘mistake’ cost you 4 hours of downtime. But firing that developer would have cost you a year of fear and slow innovation. Instead, you gained a safety expert and a culture of learning!”


    End of Part 1

    In Part 2, we’ll cover: The Five Whys Meeting Format, Common Pitfalls and How to Avoid Them, Speed vs. Quality (Finding the Right Balance), Adaptive Organizations at Scale, and Real-World Examples of Companies Using Five Whys!


    πŸ”§ Chapter 11: Adapt – PART 2A


    The Five Whys Meeting: A Step-by-Step Framework

    Eric Ries provides a detailed framework for conducting effective Five Whys meetings. This isn’t just theory – it’s a practical playbook!

    The Five Whys Meeting Format

    When to Hold a Five Whys Meeting:

    Trigger Events:

    • Production outage or major bug
    • Customer complaint about a systemic issue
    • Missed deadline or project failure
    • Repeated problems in the same area
    • ANY problem where you want to prevent recurrence!

    Timing: Hold the meeting SOON after the problem (within 24-48 hours while details are fresh!)

    Who Should Attend:

    • The person(s) most directly involved with the problem (they lead!)
    • Their immediate manager
    • Anyone with relevant technical knowledge
    • A facilitator (usually someone senior who keeps discussions productive)
    • Keep it small: 5-8 people maximum!

    The Meeting Structure:

    Step 1: State the Problem Clearly (5 minutes)

    • What happened?
    • When did it happen?
    • What was the impact?
    • Write it down where everyone can see it!

    Step 2: Ask the Five Whys (20-30 minutes)

    • Start with “Why did this problem occur?”
    • For each answer, ask “Why?” again
    • Write down EVERY answer!
    • Keep going until you hit a ROOT CAUSE (usually around the 5th Why)

    Step 3: Identify Proportional Investments (10 minutes)

    • Based on the severity, how much time should we invest in prevention?
    • What specific actions will address the root cause?
    • Assign owners and deadlines!

    Step 4: Follow Up (Ongoing)

    • Track the preventive actions
    • Verify they were completed
    • Measure whether similar problems decreased!

    Eric’s Real Example: IMVU’s Five Whys Meeting

    The Problem: The site went down for 15 minutes due to a database query timing out!

    Meeting Participants:

    • The engineer who wrote the query (leads the meeting)
    • The database administrator
    • The product manager
    • Eric Ries (as facilitator)

    The Five Whys Session:

    Why #1: Why did the site go down?
    Answer: A database query for the user profile page took 30 seconds to execute, causing timeouts.

    Why #2: Why did the query take 30 seconds?
    Answer: The query was scanning millions of rows instead of using an index.

    Why #3: Why wasn’t it using an index?
    Answer: The necessary index doesn’t exist on that table.

    Why #4: Why doesn’t the index exist?
    Answer: When we added the new feature, we didn’t analyze the query performance.

    Why #5: Why didn’t we analyze query performance?
    Answer: We don’t have a standard process for performance testing before deployment!

    ROOT CAUSE: Lack of performance testing in the deployment process!

    Proportional Investment (Problem caused 15 minutes of downtime):

    • Immediate: Add the missing index (30 minutes)
    • Short-term: Analyze and optimize all existing slow queries (1 day)
    • Long-term: Create automated performance tests that run before every deployment (3 days)
    • Document query performance standards (1 day)

    Total Investment: About 1 week of work to prevent future performance issues!

    Result: Database-related outages dropped by 90% over the next 6 months!
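The long-term fix above – automated performance tests before deployment – can be approximated even in a tiny script. Here is a minimal sketch using SQLite (the table, column, and query are made up; the book does not specify IMVU’s stack): it checks whether a query uses an index instead of a full table scan.

```python
# Minimal pre-deployment check that a query uses an index rather than a full
# table scan, shown with SQLite. Table, column, and query are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE profiles (user_id INTEGER, bio TEXT)")

def query_plan(sql):
    """Return SQLite's query-plan text; 'SCAN' without an index means trouble."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql)
    return " ".join(row[3] for row in rows)  # last column holds the plan detail

lookup = "SELECT bio FROM profiles WHERE user_id = 42"
plan_before = query_plan(lookup)                   # full table scan
conn.execute("CREATE INDEX idx_user ON profiles (user_id)")
plan_after = query_plan(lookup)                    # now uses the index
```

A check like this, run in CI, fails the build before a missing index ever reaches production.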

    ☕ Hamed’s Meeting Template for Iranian Startups

    I’ve run hundreds of Five Whys meetings. Here’s my tested template:

    The “Five Whys Meeting Agenda” Template:

    PART 1: Problem Statement (5 min)

    • What happened? [Describe the problem in one sentence]
    • When? [Date and time]
    • Impact? [Number of users affected, revenue lost, time wasted, etc.]
    • Current Status? [Is it fixed? Partially fixed? Still broken?]

    PART 2: The Five Whys (20-30 min)

    [Person who was involved leads this section!]

    • Why #1: Why did this problem occur?
      Answer: _______________
    • Why #2: Why did [answer #1] happen?
      Answer: _______________
    • Why #3: Why did [answer #2] happen?
      Answer: _______________
    • Why #4: Why did [answer #3] happen?
      Answer: _______________
    • Why #5: Why did [answer #4] happen?
      Answer: _______________

    ROOT CAUSE: _______________

    PART 3: Proportional Investment (10 min)

    Severity Assessment:

    • ☐ Minor (affected < 10 users or < 1 hour of impact)
    • ☐ Medium (affected 10-100 users or 1-8 hours of impact)
    • ☐ Major (affected 100+ users or > 8 hours of impact)

    Preventive Actions:

    1. Immediate fix: [What needs to be done right now?]
      Owner: ___ | Deadline: ___
    2. Short-term prevention: [What can we do this week?]
      Owner: ___ | Deadline: ___
    3. Long-term prevention: [What systemic improvements are needed?]
      Owner: ___ | Deadline: ___

    PART 4: Follow-Up (Ongoing)

    • Next Review Date: _______________
    • Success Metric: [How will we know this worked?]

    Pro Tip: I require my clients to keep a “Five Whys Log” – a shared document where all sessions are recorded. This creates institutional knowledge and prevents repeating the same root cause analysis!
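For teams that want the log machine-readable, one hypothetical shape for a log entry, mirroring the template above (all field names are my own):

```python
# Hypothetical structure for one "Five Whys Log" entry, mirroring the template.
from dataclasses import dataclass, field

@dataclass
class FiveWhysEntry:
    problem: str                 # one-sentence problem statement
    whys: list                   # the chain of answers, Why #1 through Why #5
    root_cause: str
    severity: str                # "minor", "medium", or "major"
    actions: list = field(default_factory=list)  # (action, owner, deadline)

entry = FiveWhysEntry(
    problem="Site down 15 minutes: profile-page query timed out",
    whys=["Query took 30 seconds", "Scanned millions of rows", "No index",
          "Query performance never analyzed", "No performance-testing process"],
    root_cause="No performance testing in the deployment process",
    severity="medium",
    actions=[("Add the missing index", "DB admin", "today")],
)
```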


    Common Pitfalls and How to Avoid Them

    Eric Ries warns about common mistakes teams make when implementing Five Whys. Learn from others’ failures!

    Pitfall #1: The Blame Game

    The Problem: Five Whys turns into “Who screwed up?”

    Warning Signs:

    • Questions focus on people instead of processes
    • “Why did John make this mistake?” (WRONG!)
    • People become defensive and stop participating
    • Team morale drops

    How to Avoid:

    • Reframe questions: “Why did our SYSTEM allow this mistake?”
    • Have the person involved LEAD the session (it’s hard to point fingers when the person at the center is the one asking the questions!)
    • If someone tries to blame a person, interrupt and redirect!
    • Focus on: “What can we change to prevent this?”

    Eric’s Rule: If anyone leaves a Five Whys meeting feeling blamed or attacked, you did it WRONG!

    Pitfall #2: Stopping Too Early

    The Problem: Teams stop at the first or second “Why” and miss the root cause!

    Example of Stopping Too Early:

    Problem: Customer data was lost!

    Why #1: Why was data lost?
    Answer: The backup system failed.

    “Solution”: Fix the backup system! ❌

    This is TOO SHALLOW! Keep asking Why!

    Why #2: Why did the backup system fail?
    Answer: The backup server ran out of disk space.

    Why #3: Why did it run out of space?
    Answer: Old backups weren’t being deleted automatically.

    Why #4: Why weren’t they being deleted?
    Answer: The cleanup script wasn’t configured.

    Why #5: Why wasn’t it configured?
    Answer: We don’t have a checklist for setting up new servers!

    REAL ROOT CAUSE: Lack of server setup procedures!

    Real Solution: Create a comprehensive server setup checklist! ✓
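The cleanup script that “wasn’t configured” in this example might look something like the following sketch (the function name and the 30-day retention window are my assumptions, not details from the book):

```python
# Hypothetical backup-retention script: delete backups older than a cutoff so
# the backup server never silently runs out of disk space.
import time
from pathlib import Path

def clean_old_backups(backup_dir, max_age_days=30):
    """Delete files in backup_dir older than max_age_days; return their names."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for path in sorted(Path(backup_dir).iterdir()):
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return removed
```

The point of the Five Whys, of course, is that writing this script is not enough: configuring it has to be on the server-setup checklist.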

    How to Avoid:

    • Always ask AT LEAST 5 Whys (sometimes you need 6 or 7!)
    • Keep asking until you hit something SYSTEMIC
    • If the answer is a simple task, you’re not deep enough!

    Pitfall #3: Disproportionate Responses

    The Problem: Over-investing or under-investing in prevention!

    Example of OVER-investing:

    Minor Bug: A typo in an error message affected 5 users for 10 minutes.

    Disproportionate Response:

    • Implement a 10-person approval process for all error messages
    • Hire a dedicated “Error Message Quality Assurance Team”
    • Create a 50-page style guide for error messages

    Cost: Thousands of dollars and weeks of work for a trivial issue! ❌

    Proportionate Response:

    • Fix the typo (5 minutes)
    • Add error messages to the spell-check process (15 minutes)

    Cost: 20 minutes! ✓

    Example of UNDER-investing:

    Major Issue: A security breach exposed 10,000 user passwords!

    Disproportionate Response:

    • Reset the passwords
    • Send an apology email
    • Move on

    This is NEGLIGENT! ❌

    Proportionate Response:

    • Immediate password reset and user notification
    • Complete security audit (1-2 weeks)
    • Implement encryption for passwords
    • Hire a security consultant for ongoing audits
    • Train all engineers on security best practices
    • Implement automated security testing

    Cost: Significant time and money, but ABSOLUTELY JUSTIFIED for this severity! ✓

    How to Avoid:

    • Before deciding on actions, assess severity objectively
    • Ask: “What’s the realistic worst-case if this happens again?”
    • Match your investment to the risk!

    📄 Continue to Part 2B for Speed vs. Quality, Scaling, and Real-World Examples…

     

    🚀 Chapter 11: Adapt – PART 2B


    Speed vs. Quality: The Counterintuitive Truth

    One of the biggest debates in startups is: Should we move fast or focus on quality? Eric Ries argues this is a FALSE DICHOTOMY!

    Eric’s Radical Claim: Quality Enables Speed!

    The Traditional View (WRONG!):

    • “We need to move fast, so we can’t worry about quality!”
    • “Quality slows us down – we’ll fix technical debt later!”
    • “Testing and processes are for big companies, not startups!”

    The Lean Startup Truth:

    Quality systems don’t slow you down – they ACCELERATE you by preventing problems that would force you to stop and fix things!

    Eric’s Evidence from IMVU:

    Before Quality Systems:

    • Deployments took 4 hours
    • Frequent outages (2-3 per week)
    • Engineers spent 40% of time fixing bugs
    • Customer complaints were high
    • Team morale was low

    After Implementing Quality Systems:

    • Deployments took 20 minutes (automated!)
    • Outages dropped to < 1 per month
    • Engineers spent 10% of time on bugs
    • Customer satisfaction increased
    • Team could ship 50+ times per day!

    The Quality Systems They Built:

    1. Automated Testing: Every code change runs through 1000+ tests automatically
    2. Continuous Integration: Code is integrated and tested constantly
    3. Instant Rollback: If something breaks, one button rolls back instantly
    4. Monitoring & Alerts: Problems are detected within seconds
    5. Five Whys Process: Every problem is analyzed to prevent recurrence

    Result: IMVU became one of the FASTEST shipping companies in the world, with HIGHER quality than most competitors!

    ☕ Hamed’s Speed vs. Quality Framework

    I’ve seen Iranian startups struggle with this constantly. Here’s my framework:

    The Four Stages of Speed & Quality:

    Stage 1: Chaos (Most Early Startups)

    • Speed: Slow (despite trying to go fast!)
    • Quality: Poor
    • Characteristics: No tests, no processes, constant firefighting
    • Why it feels fast: You’re always “doing” something!
    • Why it’s actually slow: 50% of time is spent fixing yesterday’s problems!

    Stage 2: Process Hell (Misguided “Quality” Focus)

    • Speed: Very Slow
    • Quality: Moderate
    • Characteristics: Too many meetings, approvals, bureaucracy
    • The mistake: Copying big company processes without automation!

    Stage 3: Smart Quality (Where You Want to Be!)

    • Speed: Fast
    • Quality: High
    • Characteristics: Automated testing, Five Whys, continuous deployment
    • The key: Quality is built into the SYSTEM, not manual checks!

    Stage 4: Over-Engineered (Rare in Startups)

    • Speed: Moderate
    • Quality: Excessive
    • Characteristics: Gold-plating everything, premature optimization
    • The problem: Building for problems you don’t have yet!

    Goal: Get to Stage 3 as fast as possible!

    My Recommendation for Iranian Startups:

    Month 1-3: Accept Stage 1 chaos, but start building ONE quality system (usually automated testing)

    Month 4-6: Add monitoring and the Five Whys process

    Month 7-12: Implement continuous deployment and automated quality checks

    After Year 1: You should be solidly in Stage 3!


    Adaptive Speed Regulation

    Here’s the magic of Five Whys: It automatically regulates your speed based on quality!

    How Five Whys Creates Adaptive Speed

    The Concept:

    When quality is LOW (lots of problems):

    • You hold MORE Five Whys meetings
    • These meetings identify MORE systemic investments needed
    • Your team spends MORE time on quality improvements
    • This FORCES you to slow down and fix foundations!

    When quality is HIGH (few problems):

    • You hold FEWER Five Whys meetings
    • Less time is spent on preventive investments
    • Your team can focus MORE on new features
    • You naturally SPEED UP!

    This is SELF-REGULATING!

    The system automatically balances speed and quality based on real feedback!

    Real Example: IMVU’s Adaptive Speed

    Scenario 1: High-Quality Period

    Month: March 2009

    Problems Encountered:

    • 2 minor bugs
    • 0 outages
    • 1 customer complaint

    Five Whys Meetings: 1 meeting (30 minutes total)

    Preventive Investments: 2 hours of work

    Time Available for New Features: 99% of engineering time!

    Result: Shipped 47 features that month! 🚀

    Scenario 2: Low-Quality Period

    Month: June 2009 (after a major architecture change)

    Problems Encountered:

    • 15 bugs
    • 3 outages
    • 20 customer complaints

    Five Whys Meetings: 8 meetings (4 hours total)

    Preventive Investments Identified:

    • Refactor the new architecture (40 hours)
    • Add more automated tests (20 hours)
    • Improve monitoring (10 hours)

    Time Available for New Features: 60% of engineering time

    Result: Shipped only 12 features BUT fixed underlying problems!

    Scenario 3: Return to High Quality

    Month: July 2009

    Problems Encountered:

    • 3 minor bugs
    • 0 outages
    • 2 customer complaints

    Five Whys Meetings: 2 meetings

    Result: Back to shipping 40+ features per month! 🎉

    The Pattern: Five Whys automatically slowed them down when quality dropped, then sped them back up once quality was restored!

    ☕ Hamed’s Real Example: Food Delivery App

    Background: A Tehran-based food delivery startup I consulted with was struggling with speed.

    The Situation When I Arrived (Month 1):

    • Shipping 1-2 features per week (slow!)
    • BUT: 10-15 bugs reported per week
    • Customer complaints were through the roof
    • Engineers working 70+ hours per week
    • CEO frustrated: “Why are we so slow?!”

    What I Implemented:

    Week 1: Introduced Five Whys meetings for every problem

    Result: They held 8 Five Whys meetings in the first week!

    Root Causes Discovered:

    • No automated testing (every deploy broke something)
    • No staging environment (testing in production!)
    • No code reviews (quality issues not caught early)
    • Database queries weren’t optimized

    Month 2-3: The “Slowdown”

    Following proportional investment, they spent:

    • 40% of time building automated tests
    • 30% of time setting up proper deployment pipeline
    • 20% of time on code review process
    • Only 10% on new features!

    CEO’s Reaction: “Hamed, we’re even SLOWER now!”

    My Response: “Trust the process. This is TEMPORARY investment.”

    Month 4-6: The Payoff

    Problems Dropped:

    • Bugs: From 15/week to 2/week
    • Outages: From 3/week to 1/month
    • Customer complaints: Down 80%

    Five Whys Meetings: Down to 1-2 per week

    Feature Velocity: Now shipping 8-10 features per week!

    This is 4-5X FASTER than before! 🚀

    Additional Benefits:

    • Engineers working normal 45-hour weeks
    • Team morale dramatically improved
    • Customer satisfaction scores up 60%
    • Less stress for everyone!

    The Lesson: Five Whys made them SLOW DOWN initially to fix foundations, then enabled them to SPEED UP dramatically!


    Scaling Five Whys: From Small Teams to Large Organizations

    Eric Ries addresses a common question: How do you scale Five Whys as your company grows?

    Scaling Five Whys: The Framework

    Stage 1: Small Team (1-10 people)

    • Frequency: Hold Five Whys for EVERY problem
    • Participants: Everyone attends (builds the muscle!)
    • Duration: 30 minutes per meeting
    • Documentation: Simple notes in a shared doc

    Stage 2: Growing Company (10-50 people)

    • Frequency: Five Whys for significant problems only
    • Participants: Relevant team members + one senior leader
    • Duration: 30-45 minutes
    • Documentation: Dedicated Five Whys log with follow-ups tracked
    • Training: New hires learn the process in onboarding

    Stage 3: Mid-Size Company (50-200 people)

    • Frequency: Each team conducts their own Five Whys
    • Coordination: Monthly cross-team Five Whys review
    • Documentation: Structured database of Five Whys sessions
    • Metrics: Track number of problems, root causes, recurrence rates
    • Training: Dedicated Five Whys facilitators in each department

    Stage 4: Large Company (200+ people)

    • Decentralization: Each department runs Five Whys independently
    • Executive Review: Quarterly review of systemic patterns
    • Tooling: Dedicated software for tracking Five Whys
    • Culture: Five Whys is part of company DNA

    ☕ Hamed’s Scaling Advice for Iranian Startups

    Most Iranian startups won’t reach 200+ people for years, so here’s practical advice for the early stages:

    When You’re 1-5 People:

    • Hold Five Whys as a team for EVERY bug or problem
    • This builds the habit and trains everyone
    • Takes 15-20 minutes per session
    • Document in a simple Google Doc

    When You’re 5-20 People:

    • Start filtering: Only hold Five Whys for problems that affect multiple people OR repeat more than once
    • Create a simple template (use mine from earlier!)
    • Assign one person as “Five Whys Champion” to facilitate
    • Review the Five Whys log monthly in all-hands meetings

    When You’re 20-50 People:

    • Split into teams (Product, Engineering, Sales, etc.)
    • Each team conducts their own Five Whys
    • Hold monthly cross-team sessions for systemic issues
    • Start tracking metrics: problems per month, root cause categories, resolution rates

    Critical Rule: Don’t over-complicate! The beauty of Five Whys is its SIMPLICITY. Keep it lightweight until you’re forced to add structure!


    Chapter 11 Summary: Building an Adaptive Organization

    🎯 The Core Principles

    1. Five Whys is Your Root Cause Tool

    • Ask “Why?” five times to dig to systemic root causes
    • Prevents treating symptoms instead of diseases
    • Builds institutional knowledge over time

    2. Proportional Investment is Critical

    • Match prevention effort to problem severity
    • Don’t over-invest in trivial issues
    • Don’t under-invest in serious problems

    3. Create a Blame-Free Culture

    • Have the involved person LEAD the Five Whys
    • Focus on fixing SYSTEMS, not punishing PEOPLE
    • Build psychological safety

    4. Quality Enables Speed

    • Automated testing speeds you up, not down
    • Good processes prevent problems that would slow you down
    • IMVU shipped 50x per day WITH high quality!

    5. Five Whys is Self-Regulating

    • More problems → More Five Whys → More quality investment → Slow down temporarily
    • Fewer problems → Fewer Five Whys → More feature work → Speed up!
    • The system automatically balances speed and quality

    6. Start Simple, Scale Gradually

    • Begin with basic Five Whys meetings
    • Add structure only as you grow
    • Don’t over-engineer the process

    📋 Your Action Plan

    This Week:

    1. The next time a problem occurs, hold a Five Whys meeting using the template
    2. Focus on finding the ROOT CAUSE, not just the surface issue
    3. Assign clear owners and deadlines for preventive actions

    This Month:

    1. Hold Five Whys sessions for 3-5 problems
    2. Start a “Five Whys Log” to track sessions and outcomes
    3. Train your team on the process

    This Quarter:

    1. Make Five Whys a regular part of your culture
    2. Start building automated quality systems based on root causes discovered
    3. Measure the impact: Are problems decreasing? Is velocity increasing?

    ✅ End of Chapter 11: Adapt

    Ready to move on to Chapter 12: Innovate? 🚀

     

  • 🚀 Chapter 10: Grow


    Introduction: Where Does Growth Come From?

    After you’ve validated your product and achieved product-market fit, the next challenge is GROWTH! But not all growth is created equal. Eric Ries introduces the concept of “Sustainable Growth” and the Three Engines of Growth.

    What is Sustainable Growth?

    Eric’s Definition: “Sustainable growth is characterized by one simple rule: New customers come from the actions of past customers!”

    This means growth comes from:

    • Word of mouth: Satisfied customers tell their friends
    • Side effect of product usage: When someone uses your product, others see it (like fashion or luxury goods)
    • Funded advertising: Revenue from existing customers pays for ads to acquire new customers
    • Repeat purchase or use: Customers keep coming back

    What is NOT sustainable growth:

    • One-time promotions (e.g., a viral stunt that doesn’t repeat)
    • Unprofitable advertising (spending more to acquire customers than they’re worth)
    • Press coverage (unless it creates a repeatable loop)

    The key question Eric asks: “Will the growth continue if we stop adding new activities?”

    If the answer is YES → Sustainable growth!
    If the answer is NO → It’s just a temporary spike!

    ☕ Hamed’s Reality Check: Growth vs. Hype

    I see this mistake ALL THE TIME with Iranian startups!

    What they do:

    • Run a massive Instagram campaign
    • Get thousands of downloads in one week
    • Celebrate: “We’re growing!”

    What happens next:

    • Campaign ends
    • Downloads drop to almost zero
    • Retention is terrible (most users never come back)

    This is NOT sustainable growth! This is a sugar rush!

    Real Example – Food Delivery App (Tehran):

    Their “growth” strategy:

    • Gave 50% discount on first 3 orders
    • Acquired 10,000 users in 1 month
    • Spent $50,000 on subsidies

    What happened:

    • After the discounts ended, 90% of users disappeared!
    • Only 1,000 users remained
    • Cost per retained user: $50!
    • Average order value: $10
    • They needed each user to place 5+ orders just to break even!

    This is unsustainable growth – they were buying customers, not earning them!

    Lesson: Focus on sustainable growth loops, not one-time spikes!


    The Three Engines of Growth

    Eric Ries identifies THREE distinct engines that drive sustainable startup growth. Most successful startups focus on ONE engine at a time!

    Engine #1: The Sticky Engine of Growth

    Core Principle: Keep customers coming back!

    How it works:

    • Your growth rate depends on attracting AND retaining customers
    • If you acquire 100 customers but lose 90, you’re only growing by 10!
    • The focus is on creating a product people can’t live without!

    Key Metrics for the Sticky Engine:

    1. Customer Retention Rate

    • What percentage of customers are still active after 30/60/90 days?

    2. Churn Rate (The Opposite of Retention)

    • What percentage of customers stop using your product each month?
    • Formula: Churn Rate = (Customers Lost / Total Customers) × 100

    3. Compound Growth Rate

    • Formula: Growth Rate = New Customer Acquisition Rate - Churn Rate
    • Example: If you acquire 10% new customers per month and lose 5%, your net growth is 5%

    The Golden Rule of Sticky Growth:

    Acquisition Rate MUST exceed Churn Rate!

    If Churn ≥ Acquisition → You have a leaky bucket! You’re losing customers as fast as you acquire them!
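The formulas above fit in a few lines of code; the numbers below reuse the 10%-acquisition / 5%-churn example from this section:

```python
# The sticky-engine arithmetic from the formulas above.

def churn_rate(customers_lost, total_customers):
    """Churn Rate = (Customers Lost / Total Customers) x 100, as a percentage."""
    return customers_lost / total_customers * 100

def net_growth_rate(acquisition_pct, churn_pct):
    """Growth Rate = Acquisition Rate - Churn Rate, in percentage points."""
    return acquisition_pct - churn_pct

def is_leaky_bucket(acquisition_pct, churn_pct):
    """The golden rule: churn at or above acquisition means a leaky bucket."""
    return churn_pct >= acquisition_pct
```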

    Examples of Sticky Engine Companies:

    • Netflix: People subscribe and keep their subscription for years!
    • Spotify: Once you build playlists and follow artists, you’re locked in!
    • Dropbox: Once you store files, switching is painful!

    Eric’s key insight: “For the sticky engine, improving retention is MORE important than acquiring new customers!”

    How to Optimize the Sticky Engine

    Strategy #1: Measure Cohort Retention

    Track retention by cohort (groups of users who signed up in the same period).

    Example table:

    • January cohort: 70% active after 30 days, 50% after 60 days, 40% after 90 days
    • February cohort: 75% active after 30 days, 60% after 60 days, 50% after 90 days

    What to look for: Is each new cohort BETTER than the last?

    If yes → You’re improving retention!
    If no → You have a problem!
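The cohort comparison above can be computed mechanically. In this sketch the signup and active-user counts are invented to reproduce the January/February percentages from the example table:

```python
# Cohort retention as percentages at each checkpoint (e.g. 30/60/90 days).
# The cohort sizes and active-user counts below are invented for illustration.

def retention_curve(cohort_size, active_counts):
    """Percent of the cohort still active at each checkpoint."""
    return [round(active / cohort_size * 100) for active in active_counts]

january = retention_curve(200, [140, 100, 80])
february = retention_curve(200, [150, 120, 100])
improving = all(f >= j for f, j in zip(february, january))
```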

    Strategy #2: Identify Your “Aha Moment”

    When do users realize your product’s value?

    Famous examples:

    • Facebook: Users who add 7 friends in 10 days are retained!
    • Dropbox: Users who save 1 file in their Dropbox folder are retained!
    • Slack: Teams that send 2,000 messages are retained!

    Your job: Get users to their “Aha Moment” as quickly as possible!

    Strategy #3: Fight Churn Aggressively

    Common causes of churn:

    • Users don’t understand how to use the product
    • Product doesn’t solve their problem
    • Poor onboarding experience
    • Bugs or performance issues
    • Better alternatives exist

    How to reduce churn:

    • Improve onboarding (guide users to their “Aha Moment”)
    • Send re-engagement emails to inactive users
    • Fix bugs and performance issues immediately
    • Talk to churned users: “Why did you stop using our product?”

    Eric’s warning: “A 5% reduction in churn can have a bigger impact than a 5% increase in acquisition!”

    ☕ Hamed’s Real Example: Fixing a Leaky Bucket

    Iranian SaaS Startup – Project Management Tool:

    Their problem:

    • Acquiring 200 new users per month
    • Losing 180 users per month
    • Net growth: Only 20 users per month!
    • CEO was frustrated: “We’re spending so much on ads but barely growing!”

    What I discovered:

    • Churn rate: 30% per month! (TERRIBLE!)
    • Most users signed up but never created their first project!

    We interviewed churned users:

    Their feedback:

    • “The interface was confusing!”
    • “I didn’t know where to start!”
    • “I signed up but got distracted and forgot about it!”

    Our solution:

    Step 1: Improve Onboarding (Week 1-2)

    • Added an interactive tutorial
    • Forced users to create their first project during signup
    • Pre-filled sample tasks so users saw immediate value

    Step 2: Re-engagement Campaign (Week 3-4)

    • Sent email to inactive users: “You haven’t created a project yet! Here’s how…”
    • Offered 15-minute onboarding calls

    Step 3: Identify “Aha Moment” (Week 5-8)

    • Analyzed data: Users who created 1 project + invited 1 team member had 80% retention!
    • Optimized onboarding to push users toward these actions!

    Results after 3 months:

    • Churn rate dropped from 30% to 12%!
    • Still acquiring 200 users per month
    • But now only losing 48 users per month
    • Net growth: 152 users per month! (7.6X improvement!)

    The lesson: They didn’t need more marketing! They needed to fix retention FIRST!

    Key insight: Fix the leaky bucket before pouring more water in!


    Engine #2: The Viral Engine of Growth

    What is Viral Growth?

    Core Principle: Each customer brings in more customers automatically!

    How it works:

    • Using your product causes other people to discover it
    • Growth is a side effect of product usage, not marketing!
    • The product spreads like a virus!

    Classic Examples:

    • Hotmail: Every email sent had a “Get your free email at Hotmail” signature → Recipients signed up!
    • PayPal: To receive money, you need a PayPal account → Both sender and receiver become users!
    • WhatsApp: To chat with someone, they need WhatsApp too → Network effect!
    • Zoom: Someone sends you a meeting link → You download Zoom to join!

    The Key Metric: Viral Coefficient

    Definition: How many new customers does each existing customer bring in?

    Formula: Viral Coefficient = (Number of invitations sent per customer) × (Conversion rate of invitations)

    Example:

    • Each user invites 5 friends
    • 20% of invited friends sign up
    • Viral Coefficient = 5 × 0.20 = 1.0

    The Magic Number: Viral Coefficient > 1.0

    • If Viral Coefficient < 1.0 → Growth eventually stops!
    • If Viral Coefficient = 1.0 → Steady linear growth
    • If Viral Coefficient > 1.0 → EXPONENTIAL GROWTH! 🚀

    Eric’s math example:

    Scenario 1: Viral Coefficient = 0.5

    • Start with 100 users
    • Round 1: 100 users bring 50 new users → Total: 150
    • Round 2: 50 users bring 25 new users → Total: 175
    • Round 3: 25 users bring 12 new users → Total: 187
    • Growth slows down and eventually plateaus!

    Scenario 2: Viral Coefficient = 1.1

    • Start with 100 users
    • Round 1: 100 users bring 110 new users → Total: 210
    • Round 2: 110 users bring 121 new users → Total: 331
    • Round 3: 121 users bring 133 new users → Total: 464
    • Growth ACCELERATES exponentially!

    Eric’s key insight: “A small improvement in viral coefficient (from 0.9 to 1.1) is the difference between failure and explosive growth!”
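The two scenarios above are easy to replay in a few lines. Rounding down to whole users each round reproduces the exact totals from the example:

```python
# Replaying the viral-coefficient scenarios above, rounding down to whole users.

def simulate_viral_rounds(initial_users, coefficient, rounds):
    """Each round, the newest users bring in coefficient x themselves new users."""
    total = newest = initial_users
    totals = []
    for _ in range(rounds):
        newest = int(newest * coefficient)   # whole users only
        total += newest
        totals.append(total)
    return totals

plateau = simulate_viral_rounds(100, 0.5, 3)     # coefficient below 1.0 stalls
explosive = simulate_viral_rounds(100, 1.1, 3)   # coefficient above 1.0 compounds
```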

    How to Optimize the Viral Engine

    Strategy #1: Increase Invitation Frequency

    Make it natural for users to invite others!

    Examples:

    • Dropbox: “Share this folder with colleagues” → Colleagues need Dropbox accounts!
    • Instagram: “Tag your friends in this photo” → Friends see the notification and join!
    • Google Docs: “Collaborate on this document” → Collaborators need Google accounts!

    Strategy #2: Increase Conversion Rate of Invitations

    Tactics:

    • Make the signup process FRICTIONLESS!
    • Show value immediately (don’t make people wait!)
    • Explain WHY someone invited them

    Example – Dropbox’s famous “Get Extra Storage” campaign:

    • Give 500 MB free storage for each friend invited
    • Invited friend also gets 500 MB bonus
    • Both parties benefit → Higher conversion rate!

    Strategy #3: Reduce Viral Cycle Time

    Viral Cycle Time: How long does it take for a customer to invite others?

    Examples:

    • Fast cycle: User signs up for Zoom → Immediately schedules a meeting → Invites 5 people → They join within hours!
    • Slow cycle: User signs up for LinkedIn → Slowly builds their network over months

    The faster the cycle, the faster your growth!

    Eric’s advice: “Optimize for speed! A viral coefficient of 1.1 with a 1-day cycle beats a viral coefficient of 1.2 with a 30-day cycle!”
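This claim checks out numerically. A sketch over a 30-day window (same whole-user rounding as the earlier scenarios): the 1.2 coefficient gets only one compounding round in 30 days, while the 1.1 coefficient gets thirty.

```python
# Comparing viral coefficient vs. cycle time over a 30-day window.

def users_after(initial, coefficient, cycle_days, horizon_days=30):
    """Compound the viral loop once per cycle, for as many cycles as fit."""
    total = newest = initial
    for _ in range(horizon_days // cycle_days):
        newest = int(newest * coefficient)   # whole users only
        total += newest
    return total

fast_loop = users_after(100, 1.1, cycle_days=1)    # 30 small compounding rounds
slow_loop = users_after(100, 1.2, cycle_days=30)   # one big round
```

After a month the fast 1.1 loop has compounded into thousands of users, while the slow 1.2 loop has barely doubled.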

    ☕ Hamed’s Real Example: Building Viral Loops

    Iranian Startup – Wedding Planning App:

    Their original approach:

    • Users could plan weddings individually
    • No built-in viral mechanisms
    • Growth was slow (100-150 new users per month)
    • Relied entirely on Instagram ads

    What we changed:

    Viral Feature #1: Collaborative Planning (Week 1-2)

    • Added ability to invite family members to collaborate on planning
    • To collaborate, family members needed to create accounts!
    • Average: Each bride invited 3 family members!

    Viral Feature #2: Guest List Management (Week 3-4)

    • Brides could create guest lists in the app
    • Send digital invitations via the app
    • Guests who received invitations saw: “Download app to RSVP!”

    Viral Feature #3: Vendor Recommendations (Week 5-6)

    • After the wedding, brides could review vendors
    • When sharing reviews, the app prompted: “Share this vendor recommendation with your friends planning weddings!”
    • Created a word-of-mouth loop!

    Results after 6 months:

    Before viral features:

    • Acquisition: 100-150 users/month
    • Viral Coefficient: ~0.2 (barely any referrals)
    • Reliant on paid ads

    After viral features:

    • Acquisition: 800-1,000 users/month
    • Viral Coefficient: ~0.9 (getting close to the magic 1.0!)
    • 50% of growth was organic (from viral loops!)

    Key insight: We didn’t hit the magical 1.0+ coefficient, but even going from 0.2 to 0.9 had a MASSIVE impact!

    Lesson: Build virality into the product from day one! Don’t bolt it on later!


    🚀 End of Part 1 🚀

    Part 2
    Engine #3: The Paid Engine of Growth + How to choose the right engine for your startup!

    🚀 Chapter 10: Grow – PART 2


    Engine #3: The Paid Engine of Growth

    What is Paid Growth?

    Core Principle: Use revenue from customers to fund acquisition of new customers!

    How it works:

    • You spend money on advertising (Google Ads, Facebook Ads, etc.)
    • You acquire customers through these ads
    • The revenue from those customers funds MORE advertising!
    • This creates a self-sustaining growth loop!

    The Critical Equation:

    Customer Lifetime Value (CLV) MUST exceed Customer Acquisition Cost (CAC)!

    The rule: CLV > CAC

    Example:

    • You spend $50 to acquire a customer (CAC = $50)
    • That customer generates $200 in revenue over their lifetime (CLV = $200)
    • Profit per customer: $150
    • You can reinvest that $150 to acquire 3 more customers!

    Eric’s key insight: “The paid engine of growth is powered by a feedback loop: Each customer you acquire generates revenue that allows you to acquire more customers!”
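    That feedback loop can be sketched in a few lines (a deliberately naive model using the $50 CAC / $200 CLV numbers above, and assuming the full CLV arrives immediately so it can be reinvested):

```python
def paid_engine(customers, cac, clv, periods):
    """Naive paid-growth loop: all profit from this period's new
    customers is spent on next period's ads."""
    new = customers
    for _ in range(periods):
        profit = new * (clv - cac)   # $150 per customer in this example
        new = profit // cac          # customers that profit can buy
        customers += new
    return customers

# Start by buying 10 customers; each one funds 3 more next period.
print(paid_engine(customers=10, cac=50, clv=200, periods=3))  # prints 400
```

    In reality CLV arrives over months, not up front, which is why the payback period discussed below matters so much.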

    Key Metrics for the Paid Engine

    Metric #1: Customer Acquisition Cost (CAC)

    Definition: How much does it cost to acquire one customer?

    Formula: CAC = Total Marketing Spend Γ· Number of New Customers

    Example:

    • Spent $10,000 on Facebook ads
    • Acquired 200 customers
    • CAC = $10,000 Γ· 200 = $50

    Important: Include ALL costs (ads, salaries, tools, etc.)!

    Metric #2: Customer Lifetime Value (CLV)

    Definition: How much revenue will a customer generate over their entire relationship with your company?

    Formula for subscription businesses:

    CLV = (Average Revenue per Customer per Month) Γ— (Average Customer Lifespan in Months)

    Example – Netflix:

    • Average subscription: $15/month
    • Average customer stays for 25 months
    • CLV = $15 Γ— 25 = $375

    Simplified formula (revenue and churn must use the same period, e.g. monthly):

    CLV = (Average Monthly Revenue per Customer) Γ· (Monthly Churn Rate)

    Example:

    • Average revenue: $50/month
    • Monthly churn rate: 5%
    • CLV = $50 Γ· 0.05 = $1,000

    Metric #3: CLV/CAC Ratio

    The Golden Ratio: CLV should be AT LEAST 3X your CAC!

    • CLV/CAC < 1: You’re losing money on every customer! 😱
    • CLV/CAC = 1-3: Risky! Small margin for error!
    • CLV/CAC = 3-5: Healthy! Sustainable growth! βœ…
    • CLV/CAC > 5: Great! But maybe you’re under-investing in growth!

    Example – SaaS Company:

    • CAC = $100
    • CLV = $500
    • CLV/CAC Ratio = 5
    • This means for every $100 spent, they make $500 back!
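    All three metrics are one-liners in code; here is a sketch tying the chapter's example numbers together (the figures are the illustrative ones above, not real data):

```python
def cac(total_marketing_spend, new_customers):
    """Customer Acquisition Cost: total spend per customer acquired."""
    return total_marketing_spend / new_customers

def clv_lifespan(monthly_revenue, lifespan_months):
    """CLV for subscriptions: monthly revenue times average lifespan."""
    return monthly_revenue * lifespan_months

def clv_churn(monthly_revenue, monthly_churn_rate):
    """Simplified CLV: average lifespan in months is 1 / churn rate."""
    return monthly_revenue / monthly_churn_rate

facebook_ads_cac = cac(10_000, 200)      # $50 per customer
netflix_clv = clv_lifespan(15, 25)       # $375
saas_clv = clv_churn(50, 0.05)           # $1,000
golden_ratio = 500 / 100                 # the SaaS example: CLV/CAC = 5
```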

    Eric’s warning: “Many startups fail because they try to grow through paid ads BEFORE they’ve achieved product-market fit! They acquire customers who churn immediately, making CLV < CAC!”

    How to Optimize the Paid Engine

    Strategy #1: Increase Lifetime Value (CLV)

    Tactics:

    • Upsell/Cross-sell: Get customers to buy more!
    • Reduce churn: Keep customers longer!
    • Increase prices: Charge more for the same value!
    • Add premium tiers: Create higher-value offerings!

    Example – Amazon Prime:

    • Free shipping gets people to buy more often
    • Prime Video keeps them subscribed longer
    • Result: Much higher CLV!

    Strategy #2: Decrease Customer Acquisition Cost (CAC)

    Tactics:

    • Optimize ad targeting: Reach the RIGHT people!
    • Improve conversion rates: Get more people to buy!
    • A/B test everything: Landing pages, ad copy, images!
    • Focus on high-performing channels: Stop wasting money on channels that don’t work!

    Example – Dropbox:

    • Initially spent heavily on Google Ads (CAC was too high!)
    • Switched to referral program (much lower CAC!)
    • Result: Grew from 100,000 to 4,000,000 users in 15 months!

    Strategy #3: Improve Payback Period

    Payback Period: How long does it take to recover CAC?

    Formula: Payback Period = CAC Γ· Average Monthly Revenue per Customer

    Example:

    • CAC = $300
    • Average monthly revenue = $50
    • Payback Period = 6 months
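    As code, with the same illustrative numbers:

```python
def payback_months(cac, monthly_revenue_per_customer):
    """Months of revenue needed to earn back the acquisition cost."""
    return cac / monthly_revenue_per_customer

months = payback_months(cac=300, monthly_revenue_per_customer=50)
# 6.0 months -- comfortably under the 12-month rule of thumb below
```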

    Why this matters:

    • You need cash to fund growth!
    • If payback is 12 months, you’re locked in for a year before breaking even!
    • Shorter payback = Faster reinvestment = Faster growth!

    Eric’s advice: “Aim for a payback period under 12 months! Otherwise, you’ll run out of cash before growth takes off!”

    β˜• Hamed’s Real Example: Fixing the Paid Engine

    Iranian E-commerce Startup – Fashion Marketplace:

    Their situation when I joined:

    • Spending $20,000/month on Instagram ads
    • Acquiring 400 new customers per month
    • CAC = $50
    • Average order value = $40
    • Repeat purchase rate: 15%

    The problem:

    • They were spending $50 to acquire customers who only spent $40!
    • Even with repeat purchases, CLV was only ~$55
    • CLV/CAC ratio = 1.1 (TERRIBLE!)
    • They were burning through investor money!

    What we did:

    Phase 1: Stop the Bleeding (Week 1-2)

    • Cut ad spending by 50%!
    • Paused all underperforming campaigns
    • CEO was nervous: “We’ll lose growth!”
    • My response: “You’re losing money! We need to fix unit economics first!”

    Phase 2: Increase CLV (Week 3-8)

    Tactic #1: Improve retention

    • Sent personalized emails to first-time buyers
    • Offered 10% discount on second purchase (within 30 days)
    • Result: Repeat purchase rate increased from 15% to 35%!

    Tactic #2: Increase average order value

    • Added “Complete the look” recommendations
    • Free shipping on orders over $60
    • Result: Average order value increased from $40 to $55!

    Tactic #3: Create loyalty program

    • Points for every purchase
    • VIP tier for top customers
    • Result: Top 20% of customers now made 3+ purchases per year!

    New CLV calculation:

    • First purchase: $55
    • 35% make second purchase: $55 Γ— 0.35 = $19.25
    • 15% make third purchase: $55 Γ— 0.15 = $8.25
    • New CLV: $55 + $19.25 + $8.25 = $82.50
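    The same calculation reads naturally as an expected value, which makes the assumptions explicit (the order value and repeat probabilities are the consulting figures above):

```python
def expected_clv(order_value, repeat_probabilities):
    """CLV as order value times the expected number of orders:
    the first purchase (probability 1) plus each repeat probability."""
    expected_orders = 1 + sum(repeat_probabilities)
    return order_value * expected_orders

# 35% make a second purchase, 15% a third (from the example above)
new_clv = expected_clv(order_value=55, repeat_probabilities=[0.35, 0.15])
```

    This recovers the $82.50 figure, and makes it easy to see which lever (order value vs. repeat rate) moves CLV more.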

    Phase 3: Decrease CAC (Week 9-12)

    Tactic #1: Better targeting

    • Analyzed data: Women aged 25-35 in Tehran had 2X conversion rate!
    • Focused all ads on this demographic
    • Stopped wasting money on broad targeting

    Tactic #2: Optimize landing pages

    • A/B tested 5 different landing page designs
    • Winner increased conversion rate from 2% to 3.5%!

    Tactic #3: Leverage user-generated content

    • Asked customers to share photos wearing the products
    • Used real customer photos in ads (instead of professional models)
    • Result: Click-through rate increased by 40%!

    New CAC: $32 (down from $50!)

    Final Results after 6 months:

    • Before: CAC = $50, CLV = $55, Ratio = 1.1
    • After: CAC = $32, CLV = $82.50, Ratio = 2.6

    Business impact:

    • Now profitable on every customer!
    • Could reinvest profits into more ads!
    • Growth rate increased from 400 to 800 customers per month!
    • Company became profitable within 8 months!

    Key Lesson: You can’t scale what doesn’t work! Fix unit economics FIRST, then scale!


    Which Engine Should You Focus On?

    Eric’s Critical Advice: Focus on ONE Engine at a Time!

    Why?

    • Each engine requires different strategies and optimizations
    • Trying to optimize all three at once spreads your resources too thin
    • You’ll make slower progress on all fronts!

    How to choose the right engine:

    Choose the STICKY ENGINE if:

    • Your product is subscription-based (SaaS, Netflix, Spotify)
    • Customers use your product frequently
    • Switching costs are high
    • You have (or can build) strong network effects

    Choose the VIRAL ENGINE if:

    • Your product’s value increases when more people use it (social networks, communication tools)
    • Using your product naturally exposes it to non-users
    • You can build sharing/invitation features into the core product

    Choose the PAID ENGINE if:

    • You have high margins (CLV >> CAC)
    • Your target market is well-defined and reachable through ads
    • You have capital to invest in customer acquisition
    • Viral growth is difficult in your market

    Eric’s warning: “Most startups try to use all three engines simultaneously. This is a mistake! Master ONE engine first, then consider adding others!”

    β˜• Hamed’s Advice: The Sequential Approach

    Iranian SaaS Startup – Accounting Software:

    Their mistake (what they were doing):

    • Running Instagram ads (Paid Engine)
    • Building referral program (Viral Engine)
    • Trying to improve retention (Sticky Engine)
    • Result: Making slow progress on all three!

    What I recommended:

    Step 1: Fix the Sticky Engine FIRST! (Months 1-3)

    Why? Because their churn rate was 25%/month!

    What we did:

    • Improved onboarding
    • Added customer success calls
    • Fixed major bugs
    • Result: Churn dropped to 8%/month

    Step 2: Add the Paid Engine (Months 4-6)

    Why now? Because we fixed the leaky bucket! Now we could pour water in!

    What we did:

    • Started with small ad budget ($2,000/month)
    • Tested different channels (Google, Instagram, LinkedIn)
    • Measured CAC and CLV carefully
    • Scaled up the channels that worked!

    Step 3: Consider Viral Engine (Months 7+)

    Only after steps 1 and 2 were working!

    What we did:

    • Built accountant referral program (accountants referred clients)
    • Added export feature (clients could share reports with their accountants)
    • Created marketplace for accounting services

    Results after 12 months:

    • Month 1: 50 new customers (all struggling to retain them)
    • Month 6: 200 new customers (80% retention!)
    • Month 12: 500 new customers (combo of paid + viral + sticky!)

    Key Lesson: Sequential optimization beats parallel optimization! Fix one engine, then move to the next!


    The Growth Equation: Putting It All Together

    The Complete Growth Formula

    Eric synthesizes all three engines into one equation:

    Growth Rate = (New Customer Acquisition Rate) – (Churn Rate) + (Viral Growth) + (Revenue-Funded Growth)

    Breaking it down:

    Component #1: New Customer Acquisition Rate

    • How many NEW customers are you acquiring each period?
    • Sources: Ads, PR, content marketing, sales, etc.

    Component #2: Churn Rate (Sticky Engine)

    • How many customers are you LOSING each period?
    • This SUBTRACTS from your growth!
    • A 20% churn rate means you lose 20% of customers each month!

    Component #3: Viral Growth (Viral Engine)

    • How many NEW customers come from existing customers?
    • This is COMPOUNDING growth!
    • If Viral Coefficient > 1.0, this grows exponentially!

    Component #4: Revenue-Funded Growth (Paid Engine)

    • How many NEW customers can you acquire from the revenue of existing customers?
    • If CLV > CAC, you can reinvest profits!

    Example – Healthy Startup:

    • Start with 1,000 customers
    • New acquisitions: 200 (20% growth)
    • Churn: -50 (5% churn – Sticky Engine working!)
    • Viral growth: 150 (Viral Coefficient = 0.75)
    • Paid growth: 100 (CLV/CAC = 3, reinvesting profits)
    • Total growth: 200 – 50 + 150 + 100 = 400 new customers!
    • Growth rate: 40%!

    Example – Struggling Startup:

    • Start with 1,000 customers
    • New acquisitions: 300 (30% growth – spending heavily on ads!)
    • Churn: -300 (30% churn – leaky bucket!)
    • Viral growth: 20 (Viral Coefficient = 0.1 – no viral features)
    • Paid growth: -50 (CAC > CLV – losing money on ads!)
    • Total growth: 300 – 300 + 20 – 50 = -30 customers!
    • They’re SHRINKING despite heavy ad spend!
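    Both examples drop straight into code (a sketch; the inputs are the illustrative rates above):

```python
def net_new_customers(base, acquisition, churn_rate, viral, paid):
    """Growth per period = Acquisition - Churn + Viral + Paid."""
    churned = base * churn_rate
    return acquisition - churned + viral + paid

healthy = net_new_customers(1000, acquisition=200, churn_rate=0.05,
                            viral=150, paid=100)
# 200 - 50 + 150 + 100 = +400 new customers

struggling = net_new_customers(1000, acquisition=300, churn_rate=0.30,
                               viral=20, paid=-50)
# 300 - 300 + 20 - 50 = -30: shrinking despite heavy ad spend
```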

    Eric’s key insight: “Growth is not magic! It’s a mathematical equation with clear inputs! Fix each component systematically!”


    Chapter 10 Summary: The Path to Sustainable Growth

    Key Takeaways from Chapter 10

    1. Sustainable growth comes from existing customers, not one-time stunts!

    2. There are THREE Engines of Growth:

    • Sticky Engine: Focus on retention (reduce churn!)
    • Viral Engine: Built into product usage (Viral Coefficient > 1.0)
    • Paid Engine: Revenue-funded acquisition (CLV > CAC)

    3. Focus on ONE engine at a time!

    • Master one before moving to the next
    • Trying to optimize all three simultaneously dilutes your efforts

    4. Growth is a mathematical equation:

    • Growth = Acquisition – Churn + Viral + Paid
    • Each component can be measured and optimized!

    5. Innovation Accounting applies to growth!

    • Set targets for each metric
    • Measure progress systematically
    • Make data-driven decisions

    6. Fix the leaky bucket BEFORE pouring more water in!

    • High churn makes all acquisition efforts wasteful
    • Focus on retention first, then scale acquisition

    Eric’s final quote on growth:

    “Sustainable growth is not about luck or magic. It’s about building a growth engine into your product from day one, measuring its performance, and optimizing it systematically. Companies that grow explosively aren’t lucky – they’ve mastered one of these three engines!”

    β˜• Hamed’s Final Thoughts: Growth in the Iranian Market

    What I’ve learned working with 50+ Iranian startups:

    Common Mistake #1: Confusing Growth with Hype

    • Running big campaigns to get “buzz”
    • Celebrating vanity metrics (downloads, followers)
    • Ignoring retention and unit economics
    • Fix: Focus on sustainable growth metrics!

    Common Mistake #2: Trying to Grow Before Achieving Product-Market Fit

    • Spending on ads when churn is 40%+
    • Scaling a broken product
    • Fix: Validate first, then grow!

    Common Mistake #3: Not Choosing an Engine

    • Trying to be viral, sticky, AND paid simultaneously
    • Making slow progress on all fronts
    • Fix: Master ONE engine first!

    My Recommended Sequence for Iranian Startups:

    Stage 1: Pre-Product-Market Fit (Months 1-6)

    • Don’t worry about growth yet!
    • Focus on Build-Measure-Learn
    • Find your “Aha Moment”
    • Get to 40%+ retention

    Stage 2: Early Growth (Months 7-12)

    • Choose ONE growth engine
    • Usually start with Sticky Engine (fix retention!)
    • Measure cohort retention religiously
    • Only scale when retention is solid!

    Stage 3: Scaling (Months 12+)

    • Add second growth engine
    • Usually Paid Engine (if unit economics work!)
    • Monitor CLV/CAC ratio
    • Scale what works, cut what doesn’t!

    Remember: Growth is not the goal – SUSTAINABLE growth is the goal!

    Final wisdom: A startup that grows 10% per month sustainably will beat a startup that grows 50% one month then shrinks the next!


    πŸŽ‰ End of Chapter 10: Grow πŸŽ‰

    With the completion of this chapter, we’ve covered all stages of ACCELERATE!

    Now we know how to maintain speed with Small Batches and achieve sustainable growth through the three growth engines. πŸš€

    Are you ready for the next chapter? 😊

  • πŸ“¦ Chapter 9: Batch

    πŸ“¦ Chapter 9: Batch – PART 1


    Welcome to Part 3: ACCELERATE!

    We’ve completed Part 1 (VISION) and Part 2 (STEER). Now we enter Part 3: ACCELERATE! This section focuses on how to move through the Build-Measure-Learn loop FASTER than your competitors. Speed is your ultimate competitive advantage!

    The Core Question of Part 3

    How can we maintain our learning speed as we grow?

    Most startups face a paradox:

    • Early stages: Small team, moves FAST, learns quickly
    • Growth stage: Bigger team, moves SLOW, learning slows down

    The challenge: How do you scale WITHOUT losing the agility and speed that made you successful?

    Eric’s promise: The techniques in Part 3 will help you stay fast even as you grow from 5 people to 50 to 500!


    Chapter 9: The Power of Small Batches

    This chapter introduces one of the most counterintuitive concepts in Lean Startup: working in small batches is FASTER than working in large batches! This goes against everything your intuition tells you, but the data proves it!

    The Famous Envelope-Stuffing Experiment

    Eric Ries begins with a story from the book Lean Thinking by James Womack and Daniel Jones:

    The Setup: A father and his two daughters (ages 6 and 9) need to stuff 100 newsletters into envelopes. Each envelope needs to be:

    • Folded
    • Inserted into envelope
    • Sealed
    • Stamped

    Two Approaches:

    Method 1: Large Batch (What the kids suggested)

    • Fold ALL 100 newsletters first
    • Then insert ALL 100 into envelopes
    • Then seal ALL 100 envelopes
    • Then stamp ALL 100 envelopes

    Method 2: Small Batch / Single-Piece Flow (What the father suggested)

    • Complete ONE envelope at a time (fold β†’ insert β†’ seal β†’ stamp)
    • Then move to the next envelope
    • Repeat until all 100 are done

    The kids’ prediction: “Daddy, your way won’t be efficient! We should do it our way!”

    The experiment: They raced! Each took 50 envelopes and competed to see who would finish first.

    The result: The father’s “one envelope at a time” method was FASTER!

    β˜• Hamed’s Analysis: Why Does This Seem So Wrong?

    When I first teach this concept to Iranian entrepreneurs, 99% of them say “No way! Large batches must be faster!”

    Why our intuition fails:

    Reason 1: We forget about “hidden time”

    • In large batches, you spend time sorting, stacking, organizing piles
    • In small batches, you never have piles to manage!

    Reason 2: We assume “repetition = efficiency”

    • We think: “If I fold 100 letters in a row, I’ll get really good at folding!”
    • Reality: The individual task improvement doesn’t offset the system slowdown!

    Reason 3: We don’t account for “Work-in-Progress (WIP) inventory”

    • Large batches create piles of half-finished work sitting around
    • This takes up space, creates confusion, and slows everything down

    The key insight: In process-oriented work, individual performance is less important than overall system performance!


    Why Small Batches Win: The Hidden Advantages

    Eric Ries explains that even if both methods took the EXACT same amount of time, small batches would STILL be superior for several critical reasons:

    Advantage #1: You Catch Problems IMMEDIATELY

    Scenario: What if the letters don’t fit in the envelopes?

    Large Batch Method:

    • You fold all 100 letters
    • You try to insert them into envelopes
    • PROBLEM DISCOVERED: They don’t fit!
    • Now you have 100 folded letters that are useless!
    • You have to refold all 100 in a different way!

    Small Batch Method:

    • You fold the first letter
    • You try to insert it
    • PROBLEM DISCOVERED: It doesn’t fit!
    • You only wasted time on 1 letter!
    • You adjust your folding method immediately!

    Time saved: MASSIVE! You discovered the problem after wasting 1 letter instead of 100!

    Advantage #2: You Get Finished Products SOONER

    Large Batch Method:

    • First finished envelope: After ALL 100 are processed (maybe 30 minutes)
    • If someone needs to mail something urgently β†’ They have to wait!

    Small Batch Method:

    • First finished envelope: After ~20 seconds!
    • Someone needs to mail urgently? Here’s one ready to go!

    Key insight: Small batches reduce your “time to market” dramatically! Your first product ships WAY sooner!

    Advantage #3: You Reduce Rework and Waste

    Scenario: What if the envelopes are defective and won’t seal?

    Large Batch Method:

    • You’ve folded 100 letters
    • You’ve inserted 100 into envelopes
    • NOW you discover: The envelopes won’t seal!
    • You have to remove all 100 letters from the bad envelopes
    • Get new envelopes
    • Re-insert all 100 letters
    • MASSIVE waste of time!

    Small Batch Method:

    • You complete the first envelope
    • You try to seal it β†’ Doesn’t work!
    • You immediately get new envelopes
    • You’ve only wasted time on 1 envelope!

    Eric’s key point: “What if customers decide they don’t want your product? Which method helps you discover this sooner?”

    β˜• Hamed’s Real Example: Mobile App Development

    Let me give you a PERFECT example from my consulting work:

    Client: Food Delivery App

    Two Development Approaches:

    Team A: Large Batch Method

    • Spent 4 months building ALL features:
      • User registration
      • Restaurant listings
      • Menu browsing
      • Shopping cart
      • Payment integration
      • Order tracking
      • Rating system
    • Launched everything at once
    • PROBLEM DISCOVERED: Users hated the payment process!
    • Result: 4 months of work, but users dropped off at payment!

    Team B: Small Batch Method

    • Week 1: Built basic registration + restaurant list
    • Week 2: Tested with 10 users β†’ Learned they wanted restaurant photos!
    • Week 3: Added photos + basic menu browsing
    • Week 4: Tested payment with 20 users β†’ Discovered people preferred credit cards over cash on delivery!
    • Week 5: Optimized payment based on feedback

    Result:

    • Team A: 4 months β†’ Failed launch β†’ Had to redo payment β†’ Another 2 months wasted!
    • Team B: 5 weeks β†’ Successful launch with features users actually wanted!

    Lesson: Small batches let you LEARN what users want BEFORE you waste months building the wrong thing!


    Small Batches in Manufacturing: The Toyota Story

    Eric Ries explains that Toyota discovered the power of small batches decades ago! This is where the concept of “Lean Manufacturing” comes from!

    How Toyota Beat American Carmakers

    The Context: Post-World War II Japan

    • American carmakers dominated with MASSIVE factories and large-batch production
    • Toyota couldn’t afford huge factories or massive production runs
    • Toyota was forced to innovate!

    What Toyota Did Differently:

    Traditional American Method (Large Batches):

    • Produce 10,000 identical cars
    • Store them in warehouses
    • Distribute to dealers
    • Hope customers want to buy them!

    Toyota’s Method (Small Batches / Just-In-Time Production):

    • Produce small batches of diverse models
    • Respond quickly to what customers actually want
    • Minimal inventory (reduces storage costs!)
    • Faster adaptation to market changes!

    The Advantages Toyota Gained:

    1. Flexibility: Could produce different models for different customer segments

    2. Lower Capital Requirements: Didn’t need massive factories or warehouses

    3. Faster Learning: Could test new designs quickly and iterate based on customer feedback

    4. Less Waste: If a model didn’t sell well, they hadn’t overproduced it!

    Result: Toyota went from a small Japanese manufacturer to one of the largest carmakers in the world!

    β˜• Hamed’s Analysis: Just-In-Time for Startups

    Toyota’s “Just-In-Time” production is EXACTLY what startups should do!

    What “Just-In-Time” means for startups:

    DON’T do this (Large Batch):

    • Build 20 features
    • Wait 6 months to launch
    • Hope customers like all 20 features!

    DO this instead (Small Batch / Just-In-Time):

    • Build 1 feature
    • Launch it in 1 week
    • Measure if customers like it
    • Based on feedback, decide what to build next!

    Real Iranian Startup Example – E-Learning Platform:

    Original Plan (Large Batch):

    • Create courses for Math, Physics, Chemistry, Biology, English
    • 12 months of content creation
    • Launch with ALL subjects at once

    My Advice (Small Batch):

    • Start with JUST Math
    • Create 10 lessons
    • Launch in 1 month
    • See which lessons students actually watch!

    What We Discovered:

    • Students loved video lessons but HATED multiple-choice quizzes!
    • They wanted interactive problem-solving instead!
    • If we’d spent 12 months creating multiple-choice quizzes for ALL subjects, we would have wasted a YEAR!

    Lesson: Build a little, learn a lot! Don’t build a lot and learn you wasted your time!


    Small Batches in Software: Continuous Deployment

    Eric Ries introduces the concept of Continuous Deployment – the software equivalent of small-batch manufacturing! This is how modern tech companies work!

    What is Continuous Deployment?

    Definition: The practice of releasing code changes to production CONSTANTLY – sometimes dozens of times per day!

    Traditional Software Development (Large Batch):

    • Developers write code for 3 months
    • QA tests for 2 weeks
    • Product managers review
    • Finally release to customers
    • Time from “idea” to “customer sees it”: 3-4 months!

    Continuous Deployment (Small Batch):

    • Developer writes a small feature
    • Automated tests run immediately
    • If tests pass, deploy to production INSTANTLY!
    • Time from “idea” to “customer sees it”: Hours or minutes!

    Eric’s Example: IMVU

    At IMVU (Eric Ries’ company), they deployed code to production 50 TIMES PER DAY!

    • Every engineer could ship code multiple times daily
    • Customers saw new features and improvements constantly
    • If something broke, they knew IMMEDIATELY which change caused it!
    • They could roll back instantly!

    Why this works:

    • Small changes are easier to test and debug
    • Faster feedback from customers
    • Less “inventory” of unshipped code sitting around

    How Continuous Deployment Prevents Disasters

    Scenario: A Bug Gets Into Production

    Traditional Method (Large Batch):

    • You deployed 50 features at once
    • Suddenly the site crashes!
    • Which of the 50 features caused it?
    • Engineers spend DAYS debugging!
    • Meanwhile, your site is broken!

    Continuous Deployment (Small Batch):

    • You deployed 1 small feature
    • Site crashes immediately
    • You KNOW exactly which feature caused it!
    • Roll back in 30 seconds!
    • Site is back up!

    Eric’s key insight: “The smaller the batch, the easier it is to pinpoint problems!”

    β˜• Hamed’s Practical Guide: How to Implement Small Batches

    Most Iranian startups I work with say “We want to do small batches, but HOW?”

    Here’s my step-by-step framework:

    Step 1: Break Features Into Tiny Pieces

    DON’T think like this:

    • “We’re building a social media feature” (Way too big!)

    DO think like this:

    • Week 1: Add a “Like” button
    • Week 2: Add a comment box
    • Week 3: Add notifications for likes/comments
    • Week 4: Add ability to share posts

    Each piece is shippable on its own!

    Step 2: Ship Every Friday

    • Set a rule: Whatever is ready by Friday gets deployed!
    • Even if it’s just one small feature!
    • This creates a rhythm of constant shipping!

    Step 3: Measure Immediately

    • As soon as you ship, track the metrics!
    • Did usage go up?
    • Did the feature get used?
    • Are there bugs?

    Step 4: Iterate Based on Data

    • If the feature works β†’ Build the next piece!
    • If it doesn’t work β†’ Pivot or remove it!
    • Don’t wait months to find out!

    Real Example – Online Marketplace:

    What they wanted to build: A full marketplace with buyer profiles, seller profiles, ratings, reviews, messaging, payment, dispute resolution

    What I told them to do instead:

    • Week 1: Just let sellers list products (no payments yet!)
    • Week 2: Add a “Contact Seller” button (transactions happen offline)
    • Week 3: Track how many people contact sellers
    • Week 4: If lots of contacts β†’ Add payment!
    • Week 5: If few contacts β†’ Figure out why BEFORE building payment!

    What we discovered: People contacted sellers but then complained sellers were unresponsive! The problem wasn’t payment – it was seller quality! We pivoted to focus on vetting sellers instead of building fancy payment systems!

    Lesson: Small batches let you discover the REAL problem early, before you waste months building the wrong solution!


    End of Part 1
    In Part 2, we’ll cover: The Waterfall Model vs. Small Batches, How to Implement Continuous Deployment, Real Examples from Tech Companies (Dropbox, Amazon, Facebook), and How Small Batches Accelerate the Build-Measure-Learn Loop!

    πŸ“¦ Chapter 9: Batch – PART 2 (COMPLETE)


    The Waterfall Model vs. Small Batches

    Eric Ries contrasts the traditional “Waterfall” software development model with the Lean Startup approach of small batches. Understanding this difference is CRITICAL!

    The Traditional Waterfall Model (Large Batches)

    How it works: Work flows sequentially from one department to the next, like a waterfall flowing downhill.

    The Steps:

    1. Requirements Gathering: Product managers talk to users, write a spec (2 months)
    2. Design: Designers create mockups and user interfaces (1 month)
    3. Development: Engineers build the product (4 months)
    4. QA Testing: QA team tests for bugs (1 month)
    5. Launch: Finally ship to customers! (8 months total!)

    The Problems with Waterfall:

    Problem #1: Communication Breakdowns

    • By the time engineers start coding, the designers have moved on to other projects
    • Engineers have questions β†’ Designers are unavailable β†’ Engineers guess!
    • Result: Product doesn’t match original vision!

    Problem #2: Late Problem Discovery

    • QA discovers major bugs AFTER 4 months of development!
    • Engineers have to go back and fix them
    • But now engineers have forgotten how the code works!
    • Debugging takes FOREVER!

    Problem #3: No Customer Feedback Until the End

    • After 8 months, you finally show customers the product
    • Customers: “This isn’t what we want!”
    • Too late! You’ve already invested 8 months!

    Eric’s verdict: “Waterfall maximizes individual department efficiency while destroying overall system efficiency!”

    The Small Batch Alternative: Cross-Functional Teams

    How it works: Instead of sequential handoffs, have a CROSS-FUNCTIONAL team work together on small features from start to finish!

    The Small Batch Process:

    • Week 1: Product manager, designer, and engineer work TOGETHER on Feature A
    • Product manager defines the goal
    • Designer sketches a quick mockup
    • Engineer builds a basic version
    • Ship to a small group of users!
    • Week 2: Review feedback from users, adjust Feature A, and start Feature B

    The Advantages:

    Advantage #1: Constant Communication

    • Team members are working together daily
    • Questions get answered immediately!
    • No miscommunication between departments!

    Advantage #2: Immediate Problem Discovery

    • If there’s a bug, you find it within days (not months!)
    • Engineer’s memory of the code is fresh β†’ Easy to fix!

    Advantage #3: Continuous Customer Feedback

    • Every week, customers see new features
    • If they don’t like something, you pivot immediately!
    • You’re never more than 1 week away from validated learning!

    Eric’s key point: “Cross-functional teams working in small batches accelerate the Build-Measure-Learn loop!”

    β˜• Hamed’s Real Example: Comparing Waterfall vs. Small Batches

    I worked with two e-commerce startups at the same time. Perfect A/B test!

    Startup A: Used Waterfall (Traditional)

    Their Process:

    • Months 1-2: Product team researched and wrote detailed specs for the entire website
    • Month 3: Design team created complete mockups for all pages
    • Months 4-7: Engineering team built everything
    • Month 8: QA tested the site
    • Month 9: LAUNCH!

    What Happened:

    • After launch, they discovered users hated the checkout process!
    • Conversion rate: Only 2%!
    • Engineers had to spend another 2 months redesigning checkout
    • Total time to get it right: 11 months!

    Startup B: Used Small Batches (Lean)

    Their Process:

    • Week 1: Built basic homepage + product listing page β†’ Launched!
    • Week 2: Added shopping cart β†’ Launched!
    • Week 3: Added checkout β†’ Launched!
    • Week 4: Discovered checkout had problems! (Conversion rate: 2%)
    • Week 5: Tested 3 different checkout designs with users
    • Week 6: Implemented winning design β†’ Conversion rate jumped to 8%!

    Total time to get it right: 6 weeks!

    Comparison:

    • Startup A (Waterfall): 11 months to success
    • Startup B (Small Batches): 6 weeks to success
    • Startup B was nearly 8X FASTER! (11 months β‰ˆ 48 weeks, versus 6 weeks)

    Lesson: Small batches don’t just save time – they save you from building the wrong thing!


    Implementing Continuous Deployment: The Technical Side

    Now Eric Ries gets technical! How do you actually implement continuous deployment? What infrastructure do you need?

    The 4 Pillars of Continuous Deployment

    Pillar #1: Automated Testing

    You CANNOT deploy 50 times per day without automated tests!

    What you need:

    • Unit tests: Test individual functions and components
    • Integration tests: Test how components work together
    • End-to-end tests: Test the entire user flow

    At IMVU:

    • Every code change triggered thousands of automated tests
    • If ANY test failed β†’ Code couldn’t be deployed!
    • If all tests passed β†’ Code automatically deployed to production!
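    To make this concrete, here is a minimal sketch of the kind of gating test a pipeline runs before allowing a deploy. The `validate_signup` function and its rules are hypothetical (not IMVU's actual code), written in pytest style: plain functions whose bare `assert` statements pass or fail.

```python
# Hypothetical signup validator -- the rules below are illustrative only.
def validate_signup(email: str, password: str) -> bool:
    """Accept a signup only if the email looks valid and the password is long enough."""
    return "@" in email and "." in email.split("@")[-1] and len(password) >= 8

# pytest-style tests: any failing assert blocks the deploy.
def test_valid_signup():
    assert validate_signup("user@example.com", "s3cretpass")

def test_rejects_bad_email():
    assert not validate_signup("not-an-email", "s3cretpass")

def test_rejects_short_password():
    assert not validate_signup("user@example.com", "abc")
```

    The point isn't these particular rules β€” it's that every code change runs the whole suite automatically, and a single failure stops the deploy.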

    Pillar #2: Incremental Rollout

    Don’t deploy to ALL users at once! Deploy gradually!

    The Strategy:

    • Deploy to 1% of users first
    • Monitor metrics closely
    • If everything looks good β†’ Deploy to 10%
    • Then 50%
    • Then 100%
    • If anything breaks β†’ Roll back instantly!

    Why this matters: If a bad deploy affects only 1% of users, you’ve limited the damage!
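    A minimal sketch of that 1% β†’ 10% β†’ 50% β†’ 100% strategy, with the deploy, rollback, and health check passed in as functions. All names here are illustrative, not any real deployment tool's API:

```python
def staged_rollout(deploy, rollback, healthy, stages=(1, 10, 50, 100)) -> bool:
    """Deploy to progressively larger user slices; the moment the
    health check fails, roll back and stop. Returns True only if
    the deploy safely reached 100% of users."""
    for percent in stages:
        deploy(percent)
        if not healthy():
            rollback()  # Pillar #3: undo the deploy immediately
            return False
    return True

# Simulate a deploy whose metrics go bad at the 50% stage:
log = []
ok = staged_rollout(deploy=lambda p: log.append(f"deploy {p}%"),
                    rollback=lambda: log.append("rollback"),
                    healthy=lambda: len(log) < 3)
# log -> ['deploy 1%', 'deploy 10%', 'deploy 50%', 'rollback'], ok -> False
```

    Because the failure was caught at 50%, half your users never saw the bad deploy at all.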

    Pillar #3: Instant Rollback

    You need the ability to undo a deployment in SECONDS!

    At IMVU:

    • If metrics dropped after a deployment, they rolled back immediately
    • Rollback time: Under 60 seconds!
    • This safety net made engineers confident to deploy frequently!

    Pillar #4: Real-Time Monitoring

    You need dashboards showing key metrics in REAL-TIME!

    What to monitor:

    • Error rates
    • Page load times
    • User engagement metrics
    • Conversion rates

    Why this matters: You need to know within MINUTES if a deployment caused problems!
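    As a sketch, the rollback trigger can be as simple as comparing the post-deploy error rate against the pre-deploy baseline. The 2Γ— tolerance below is an arbitrary illustrative threshold, not a recommendation from the book:

```python
def should_roll_back(errors: int, requests: int,
                     baseline_rate: float, tolerance: float = 2.0) -> bool:
    """Flag a deploy for rollback if the post-deploy error rate exceeds
    `tolerance` times the pre-deploy baseline rate."""
    if requests == 0:
        return False  # no traffic yet -- nothing to judge
    return (errors / requests) > baseline_rate * tolerance

# Baseline error rate 1%: a 5% post-deploy rate trips the alarm, 0.8% doesn't.
assert should_roll_back(errors=50, requests=1000, baseline_rate=0.01)
assert not should_roll_back(errors=8, requests=1000, baseline_rate=0.01)
```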

    The Immune System Metaphor

    Eric Ries uses a brilliant metaphor: Continuous deployment is like your body’s immune system!

    How Your Immune System Works:

    • Constantly monitoring for threats
    • When it detects a problem β†’ Immediate response!
    • Isolates the threat before it spreads
    • Learns from each encounter (builds antibodies)

    How Continuous Deployment Works:

    • Constantly monitoring for bugs and performance issues
    • When it detects a problem β†’ Immediate rollback!
    • Prevents the bug from affecting all users
    • Engineers learn from each failure (write better tests)

    The key insight: Instead of trying to prevent ALL problems upfront (impossible!), build a system that can DETECT and RECOVER from problems quickly!

    Eric’s comparison:

    • Traditional approach: “Let’s make the product perfect before shipping!” (Takes forever, still has bugs)
    • Lean approach: “Let’s ship quickly and fix problems as they arise!” (Faster, actually works!)

    β˜• Hamed’s Practical Implementation Guide

    Most Iranian startups tell me: “This sounds great, but we don’t have IMVU’s resources!”

    Here’s how to start continuous deployment with MINIMAL resources:

    Phase 1: Start Weekly Deployments (Month 1)

    • Pick ONE day per week (e.g., Friday afternoon)
    • Deploy whatever features are ready
    • Have the whole team monitoring for 2 hours after deployment
    • If something breaks β†’ Everyone helps fix it immediately!

    Phase 2: Add Basic Automated Tests (Months 2-3)

    You don’t need thousands of tests! Start small!

    Test Priority Order:

    1. Test the user registration/login flow
    2. Test the payment process (if applicable)
    3. Test the core user action (e.g., posting content, making a purchase)

    Tools for Iranian startups:

    • For web apps: Selenium (free, open-source)
    • For mobile apps: Appium (free, open-source)
    • For backend: Jest, PyTest (free)

    Phase 3: Move to Daily Deployments (Months 4-6)

    • Once you have basic tests, deploy every day!
    • Set a rule: “If tests pass, we deploy!”
    • Monitor metrics for 30 minutes after each deploy

    Phase 4: Implement Gradual Rollout (Months 7-12)

    Simple approach:

    • Use a feature flag system (can be as simple as an if/else in your code!)
    • Deploy new features to 10% of users first
    • If metrics look good β†’ Increase to 50%, then 100%
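    Here's how simple that feature flag can be β€” a hypothetical sketch that hashes the user ID so each user gets a stable yes/no answer, and raising the percentage only ever *adds* users:

```python
import hashlib

# Hypothetical flag table: feature name -> percent of users who see it.
FLAGS = {"new_checkout": 10}

def flag_enabled(feature: str, user_id: str) -> bool:
    """Deterministic bucketing: the same user always gets the same answer."""
    percent = FLAGS.get(feature, 0)  # unknown flags default to OFF
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percent

def render_checkout(user_id: str) -> str:
    # The "if/else" mentioned above: the old path stays the safe default.
    if flag_enabled("new_checkout", user_id):
        return "new checkout page"
    return "old checkout page"
```

    To move from 10% to 50%, you change one number in `FLAGS` β€” no redeploy of the feature itself.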

    Real Iranian Startup Example – Ride-Sharing App:

    Before Continuous Deployment:

    • Deployed once per month
    • Each deployment took 8 hours (including testing)
    • Frequently had bugs that took days to fix
    • Customer complaints were high

    After Implementing My Framework:

    • Month 1: Weekly deployments β†’ Bugs got caught faster!
    • Month 3: Added automated tests for critical flows β†’ Confidence increased!
    • Month 5: Daily deployments β†’ Features shipped 4X faster!
    • Month 8: Gradual rollout β†’ Major bugs affected fewer users!

    Results after 1 year:

    • Development speed increased 5X
    • Customer complaints decreased 70%
    • Team morale improved (less stress from big deployments!)

    Key lesson: You don’t need perfection! Start with weekly deployments and improve gradually!


    Real-World Examples: Tech Giants Using Small Batches

    Eric Ries shares examples of major tech companies that adopted small-batch thinking and continuous deployment:

    Example #1: Amazon

    Amazon’s Deployment Frequency:

    In the early 2010s, Amazon was deploying code to production every 11.6 seconds on average!

    How they achieved this:

    • Massive investment in automated testing infrastructure
    • Every team could deploy independently
    • Instant rollback capability
    • Culture of “two-pizza teams” (small, autonomous teams)

    The result: Amazon could test ideas faster than any competitor!

    Example: Amazon tests hundreds of variations of their homepage daily!

    • Different button colors
    • Different product placements
    • Different copy

    They measure which variations increase sales and keep the winners!

    Eric’s point: Amazon’s competitive advantage isn’t just low prices – it’s their SPEED OF LEARNING!

    Example #2: Facebook

    Facebook’s Famous Motto: “Move Fast and Break Things” (later changed to “Move Fast with Stable Infrastructure”)

    How Facebook works:

    • Engineers can deploy code to production on their first day of work!
    • New features get tested with small groups first (using their “Gatekeeper” system)
    • If metrics improve β†’ Roll out to everyone!
    • If metrics decline β†’ Kill the feature immediately!

    Famous example: The Facebook “Like” button!

    • Started as an internal experiment
    • Tested with 1% of users
    • Engagement went up significantly
    • Rolled out to everyone
    • Now it’s one of the most iconic features on the internet!

    Eric’s insight: Facebook’s willingness to experiment frequently is WHY they stay ahead of competitors!

    Example #3: Dropbox

    Dropbox’s Continuous Deployment Journey:

    When Dropbox was small, they deployed manually and infrequently. As they grew, this became a bottleneck!

    Their transformation:

    • Invested heavily in automated testing
    • Built custom tools for continuous deployment
    • Deployed multiple times per day

    The impact:

    • Feature development accelerated
    • Engineers spent less time on manual deployments
    • More time for innovation!

    Eric’s lesson: Even companies with complex infrastructure (like Dropbox, which has to work across multiple platforms) can adopt continuous deployment!

    β˜• Hamed’s Analysis: What Iranian Startups Can Learn

    Most Iranian entrepreneurs tell me: “But we’re not Amazon or Facebook!”

    True! But the PRINCIPLES still apply!

    What you should copy from the big tech companies:

    Lesson #1: Speed Matters More Than Perfection

    • Amazon doesn’t wait for perfect code
    • They ship quickly and fix problems as they arise
    • You should do the same!

    Lesson #2: Test with Small Groups First

    • Facebook doesn’t launch features to all 3 billion users at once!
    • They test with small groups first
    • You can do this too! Even if you only have 1,000 users, test with 100 first!

    Lesson #3: Kill Features That Don’t Work

    • Tech giants ruthlessly kill features that don’t improve metrics
    • Don’t fall in love with your features!
    • If the data says it’s not working β†’ Remove it!

    Real Iranian Example – Social Media App:

    They wanted to add a “Stories” feature (like Instagram Stories).

    What they did right:

    • Built a basic version in 2 weeks (not months!)
    • Tested with 5% of users
    • Measured engagement metrics daily

    What they discovered:

    • Only 2% of test users posted Stories!
    • But 60% of users VIEWED Stories!
    • Insight: Users wanted to consume Stories, not create them!

    What they did next:

    • Pivoted to focus on showing Stories from brands/influencers
    • Engagement skyrocketed!
    • If they’d spent 6 months building a perfect Stories feature, they would have built the WRONG thing!

    Key takeaway: Small batches aren’t just for startups – they’re how the biggest tech companies in the world operate!


    How Small Batches Accelerate the Build-Measure-Learn Loop

    Eric Ries brings it all together: Small batches are THE key to accelerating the Build-Measure-Learn loop!

    The Connection: Small Batches β†’ Faster Learning

    Remember the core Lean Startup question: “How do we learn as quickly as possible whether our ideas will work?”

    The answer: Small batches!

    Why Small Batches Accelerate Learning:

    Reason #1: Shorter Feedback Loops

    • Large batch: Build for 6 months β†’ Get feedback β†’ Too late to change course!
    • Small batch: Build for 1 week β†’ Get feedback β†’ Pivot immediately if needed!

    Reason #2: More Iterations in the Same Time

    • In 6 months with large batches: 1 iteration
    • In 6 months with small batches: 24 iterations!
    • More iterations = More learning = Better product!

    Reason #3: Reduced Risk

    • Large batches: If you’re wrong, you waste months!
    • Small batches: If you’re wrong, you waste days!

    Eric’s mathematical insight:

    “If you can go through the Build-Measure-Learn loop 10 times while your competitor goes through it once, you have a 10X learning advantage!”

    This is your unfair competitive advantage as a startup!

    The Virtuous Cycle of Small Batches

    Eric Ries describes a “virtuous cycle” that small batches create:

    The Cycle:

    1. Small batches β†’ Faster deployment β†’ You ship features weekly instead of monthly
    2. Faster deployment β†’ Quicker feedback β†’ You learn what customers want faster
    3. Quicker feedback β†’ Better decisions β†’ You build the right features
    4. Better decisions β†’ Higher engagement β†’ Customers use your product more
    5. Higher engagement β†’ More data β†’ You have better metrics to make decisions
    6. More data β†’ Faster learning β†’ You iterate even faster!
    7. Faster learning β†’ Competitive advantage! β†’ You outpace competitors!

    The opposite (vicious) cycle of large batches:

    1. Large batches β†’ Slow deployment β†’ You ship features every few months
    2. Slow deployment β†’ Delayed feedback β†’ You don’t know if customers like it for months
    3. Delayed feedback β†’ Poor decisions β†’ You build the wrong features
    4. Poor decisions β†’ Low engagement β†’ Customers don’t use your product
    5. Low engagement β†’ Less data β†’ You have worse metrics to guide you
    6. Less data β†’ Slower learning β†’ You iterate even slower!
    7. Slower learning β†’ Competitive disadvantage! β†’ Competitors outpace you!

    Eric’s warning: “Large batches create a death spiral! Small batches create a growth spiral!”

    β˜• Hamed’s Final Advice: How to Start TODAY

    You don’t need to transform your entire company overnight!

    Start with ONE small batch experiment:

    Week 1: Pick Your Smallest Shippable Feature

    • Look at your roadmap
    • Find the smallest thing you can build in 1 week
    • Cut scope ruthlessly! (Example: Instead of “complete user profile system,” just add “profile photo upload”)

    Week 2: Build and Ship It

    • Build the minimal version
    • Don’t make it perfect!
    • Ship it to a small group of users (10-100 people)

    Week 3: Measure and Learn

    • Did users use the feature?
    • Did engagement go up or down?
    • What feedback did you get?

    Week 4: Iterate or Pivot

    • If it worked β†’ Build the next small feature!
    • If it didn’t work β†’ Try something different!

    The Key Mindset Shift:

    Old thinking: “Let’s spend 6 months building the perfect product, then launch!”

    New thinking: “Let’s spend 1 week building something tiny, learn from it, then build the next thing!”

    Challenge to Iranian entrepreneurs:

    What’s ONE feature you can ship in the NEXT 7 DAYS?

    Not a complete product. Not a perfect feature. Just ONE small thing!

    Ship it. Measure it. Learn from it.

    That’s how you start the small batch journey!

    Remember: Amazon didn’t start deploying every 11 seconds on day one! They started with one small step. So can you!


    🎯 Chapter 9 Summary: Key Takeaways

    The Big Ideas from Chapter 9

    1. Small batches are FASTER than large batches!

    • Counterintuitive but proven by data!
    • The envelope-stuffing experiment proves it!

    2. Small batches catch problems immediately!

    • Discover errors after 1 unit, not 100 units!
    • Reduce rework and waste dramatically!

    3. Continuous deployment is the software equivalent of small-batch manufacturing!

    • Deploy multiple times per day
    • Get instant customer feedback
    • Iterate faster than competitors!

    4. The 4 pillars of continuous deployment:

    • Automated testing
    • Incremental rollout
    • Instant rollback
    • Real-time monitoring

    5. Cross-functional teams beat sequential handoffs!

    • Product, design, and engineering work together
    • No communication breakdowns!
    • Faster decision-making!

    6. Tech giants like Amazon, Facebook, and Dropbox use small batches!

    • It’s not just for startups!
    • It scales as you grow!

    7. Small batches accelerate the Build-Measure-Learn loop!

    • More iterations = More learning!
    • More learning = Better products!
    • Better products = Competitive advantage!

    The ultimate lesson: Your speed of learning is your competitive advantage! Small batches make you learn faster!


    πŸ“¦ End of Chapter 9: Batch πŸ“¦

    Next up: Chapter 10 – GROW!
    We’ll explore the three engines of growth and how to choose the right one for your startup!

  • πŸ”„ Chapter 8: Pivot (or Persevere)


    The Most Difficult Decision in a Startup

    After you’ve measured your progress (Chapter 7), you face the hardest decision every entrepreneur must make: Should I pivot or persevere? Eric Ries dedicates an entire chapter to this critical moment because getting it wrong can destroy your startup.

    What is a Pivot?

    Definition: A pivot is a structured course correction designed to test a new fundamental hypothesis about the product, strategy, or engine of growth.

    Important distinctions:

    • A pivot is NOT giving up! It’s learning and adapting
    • A pivot is NOT a random change! It’s based on validated learning
    • A pivot is NOT failure! It’s strategic flexibility

    The key question: Have we made enough progress toward our goal to persevere? Or should we change direction?

    Eric’s warning: Most startups don’t pivot soon enough! They waste months (or years) on a strategy that isn’t working because they’re afraid to admit it’s not working.

    β˜• Hamed’s Analysis: The Emotional Difficulty of Pivoting

    I’ve seen HUNDREDS of startups struggle with this decision. Here’s why it’s so hard:

    The psychological barriers:

    • “We’ve invested 6 months building this!” β†’ Sunk cost fallacy
    • “What will investors think?” β†’ Fear of judgment
    • “Maybe it’ll work next month?” β†’ False hope
    • “My ego is attached to this idea!” β†’ Pride

    Real example – Food Delivery for Offices:

    My client spent 8 months building a platform for office lunch delivery.

    The data after 8 months:

    • Signed up 50 offices
    • Only 5 offices ordered more than once
    • Average order frequency: 0.3 times per month
    • Retention rate: 10%

    The founder’s response: “We just need more offices! Let’s hire a sales team!”

    My response: “You don’t have a distribution problem. You have a PRODUCT problem! Offices don’t want your service!”

    The painful truth:

    • Offices preferred employees to order individually (more choice)
    • Budget constraints made group ordering complicated
    • Decision-making was slow (had to get manager approval)

    What we did – PIVOTED to Individual Consumers:

    • Same kitchen, same food
    • But now targeting individual consumers at home
    • Result: 10X higher retention rate!

    Lesson: The FASTER you pivot, the less you waste! Don’t wait 8 months like my client!


    When Should You Pivot?

    Eric Ries provides clear signals that indicate it’s time to pivot. Don’t ignore these warning signs!

    The Warning Signs It’s Time to Pivot

    1. The Engines of Growth Are Sputtering

    • If you’re using the Sticky Engine: Churn rate stays high despite improvements
    • If you’re using the Viral Engine: Viral coefficient stays below 1.0
    • If you’re using the Paid Engine: CAC exceeds LTV

    2. Product Hypotheses Are Consistently Wrong

    • You predicted feature X would increase retention by 20%
    • Reality: Retention increased by 2%
    • This happens repeatedly with multiple features

    3. You’re Getting Slower, Not Faster

    • The Build-Measure-Learn loop is taking LONGER each cycle
    • Team morale is declining
    • You’re losing entrepreneurial energy

    4. Customer Feedback Is Lukewarm

    • Customers say “it’s nice” but don’t seem excited
    • No one is passionate about your product
    • No word-of-mouth growth

    Eric’s key insight: Startups don’t fail because they run out of money. They fail because they run out of runway to reach validated learning!
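    The engine-of-growth signals above can be turned into rough health checks. This is an illustrative sketch β€” the metric names are assumptions, and real thresholds depend on your business:

```python
def engine_healthy(engine: str, m: dict) -> bool:
    """Rough health checks for the three engines of growth:
    sticky grows when new customers outpace churn, viral when each
    user brings in more than one more user, paid when a customer's
    lifetime value exceeds what it cost to acquire them."""
    if engine == "sticky":
        return m["new_customer_rate"] > m["churn_rate"]
    if engine == "viral":
        return m["viral_coefficient"] > 1.0
    if engine == "paid":
        return m["ltv"] > m["cac"]
    raise ValueError(f"unknown engine: {engine}")

assert not engine_healthy("viral", {"viral_coefficient": 0.8})  # warning sign!
assert engine_healthy("paid", {"ltv": 120, "cac": 45})
```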

    β˜• Hamed’s Framework: The 4-Week Pivot Test

    I developed a simple framework to help founders decide: Should I pivot NOW?

    Ask yourself these 4 questions:

    Question 1: Have metrics improved in the last 4 weeks?

    • If YES β†’ Persevere for another 4 weeks
    • If NO β†’ Warning sign #1

    Question 2: Are customers willing to PAY for this?

    • If YES β†’ Persevere
    • If NO β†’ Warning sign #2

    Question 3: Do you still believe in the vision?

    • If YES β†’ Persevere (but watch the data!)
    • If NO β†’ Warning sign #3

    Question 4: Can you afford 3 more months on this strategy?

    • If YES β†’ You have time to try improvements
    • If NO β†’ PIVOT NOW!

    The Decision Matrix:

    • 0-1 warning signs: PERSEVERE – Keep optimizing
    • 2 warning signs: CAUTION – Consider pivot options
    • 3+ warning signs: PIVOT NOW – Don’t waste time!

    Real example – Language Learning App (Again!):

    4-Week Pivot Test Results:

    • Q1: Metrics improving? NO (flat retention for 8 weeks)
    • Q2: Customers paying? NO (free users wouldn’t upgrade)
    • Q3: Still believe? YES (founder still passionate)
    • Q4: Afford 3 months? NO (runway = 2 months)

    Score: 3 warning signs β†’ PIVOT IMMEDIATELY!

    Result: We pivoted from working professionals to high school students. Success within 6 weeks!
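    The 4-week test and decision matrix can be encoded in a few lines. One assumption in this sketch: since a "no" on Question 4 means you can't afford to keep going, it's treated as a hard gate that forces an immediate pivot regardless of the other answers:

```python
def pivot_decision(metrics_improving: bool, customers_paying: bool,
                   still_believe: bool, can_afford_3_months: bool) -> str:
    """The 4-week pivot test: each 'no' is a warning sign;
    0-1 signs -> persevere, 2 -> caution, 3+ -> pivot now."""
    if not can_afford_3_months:
        return "PIVOT NOW"  # no runway left to experiment with
    warnings = [metrics_improving, customers_paying, still_believe].count(False)
    if warnings <= 1:
        return "PERSEVERE"
    if warnings == 2:
        return "CAUTION"
    return "PIVOT NOW"

# The language-learning app: NO, NO, YES, NO -> pivot immediately.
assert pivot_decision(False, False, True, False) == "PIVOT NOW"
```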


    The Pivot or Persevere Meeting

    Eric Ries recommends holding regular Pivot or Persevere meetings to make this decision systematically, not emotionally!

    How to Run a Pivot or Persevere Meeting

    Frequency: Every 4-8 weeks (depends on your cycle time)

    Who attends:

    • Founders / CEO
    • Product team
    • Key advisors or board members

    What to bring:

    • All metrics from the last period (cohort analysis, retention, conversion rates)
    • Customer feedback (both quantitative and qualitative)
    • Results of experiments run

    The structure:

    Part 1: Review the Data (30 minutes)

    • What did we predict would happen?
    • What actually happened?
    • What did we learn?

    Part 2: Debate (30 minutes)

    • Are we making sufficient progress?
    • If not, why not?
    • What could we change?

    Part 3: Decide (15 minutes)

    • Option A: PERSEVERE – Continue current strategy, set new experiments
    • Option B: PIVOT – Choose a new direction, define what we’ll test

    Eric’s warning: Don’t let these meetings become “accountability sessions” where everyone justifies why things didn’t work! Focus on LEARNING, not defending!

    β˜• Hamed’s Analysis: Why Most Pivot Meetings FAIL

    I’ve attended dozens of pivot meetings, and here’s what usually goes wrong:

    Common Mistakes:

    Mistake #1: Too Infrequent

    • Holding meetings every 6 months β†’ Too late!
    • By the time you realize you need to pivot, you’ve wasted months!
    • Solution: Meet every 4 weeks in early stages

    Mistake #2: No Real Data

    • Founder: “I think users like the new feature!”
    • Me: “What does the DATA say?”
    • Founder: “Uh… I didn’t measure it…”
    • Solution: No meeting without metrics!

    Mistake #3: Emotional Attachment

    • Team defends the original idea instead of objectively reviewing data
    • “We just need more time!”
    • “The market isn’t ready yet!”
    • Solution: Invite an EXTERNAL advisor who has no emotional attachment

    Mistake #4: No Clear Decision

    • Meeting ends with “Let’s wait and see…”
    • No commitment to persevere OR pivot!
    • Solution: Force a decision – no wishy-washy conclusions!

    Real example – My Own Startup (Yes, I Made Mistakes Too!):

    I once built a B2B SaaS tool. After 4 months:

    • 20 companies signed up
    • 3 companies used it regularly
    • 0 companies willing to pay

    My emotional response: “We just need better marketing!”

    My advisor’s response: “Hamed, if they’re not willing to pay, they don’t need your product!”

    What I did: Held an honest pivot meeting, analyzed the data, and admitted the truth – we were solving a “nice-to-have” problem, not a “must-have” problem!

    The pivot: Changed target customer from small businesses to enterprises. Added features enterprises actually needed. Result: First paying customer within 3 weeks!

    Lesson: Be BRUTALLY honest in pivot meetings. Your ego is not more important than your startup’s survival!


    The Runway: How Much Time Do You Have?

    Eric Ries introduces a critical concept: Your startup’s runway is not measured in months of cash left – it’s measured in number of pivots you can afford!

    Calculating Your REAL Runway

    Traditional thinking (WRONG):

    • “We have $100,000 in the bank”
    • “Our burn rate is $10,000/month”
    • “Therefore, we have 10 months of runway”

    Lean Startup thinking (CORRECT):

    • “We have $100,000 in the bank”
    • “Each Build-Measure-Learn cycle costs $20,000”
    • “Therefore, we can afford 5 pivots before we run out of money”

    The key insight: Your goal is to reach Product-Market Fit BEFORE you run out of pivots!

    How to increase your runway:

    Option 1: Raise more money

    • Pros: More pivots possible
    • Cons: Dilution, investor pressure

    Option 2: Reduce burn rate

    • Pros: Each dollar lasts longer
    • Cons: Slower execution

    Option 3: Speed up the Build-Measure-Learn loop!

    • Pros: More pivots in the same time!
    • Cons: Requires discipline and focus

    Eric’s recommendation: Option 3 is almost always the best! The faster you learn, the more chances you have to find the right strategy!

    β˜• Hamed’s Analysis: The Speed Advantage

    This is the MOST misunderstood concept in startups!

    Real comparison – Two Startups:

    Startup A: “We need to get it perfect!”

    • Spends 4 months building before launching
    • Takes 2 months to analyze data
    • Takes 3 months to pivot
    • Total: 9 months per cycle
    • With $100k and $10k/month burn rate β†’ Only 1 pivot possible!

    Startup B: “Let’s move FAST!”

    • Spends 2 weeks building MVP
    • Takes 1 week to analyze data
    • Takes 1 week to pivot
    • Total: 1 month per cycle
    • With $100k and $10k/month burn rate β†’ 10 pivots possible!

    Which startup is more likely to succeed? Startup B has 10X more chances to find Product-Market Fit!

    The Speed Formula:

    Number of Pivots = (Available Cash) / (Cost per Build-Measure-Learn Cycle)

    To maximize pivots:

    • Increase cash (fundraising)
    • Decrease cost per cycle (lean operations)
    • Decrease time per cycle (focus!)

    My advice: Focus on TIME, not just money! The startup that learns FASTEST wins!
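    The speed formula is easy to sanity-check in code, using the Startup A vs. Startup B numbers from the comparison above:

```python
def cost_per_cycle(burn_rate_per_month: float, months_per_cycle: float) -> float:
    """A cycle's cost is just your burn rate times how long the cycle takes."""
    return burn_rate_per_month * months_per_cycle

def affordable_pivots(cash: float, cycle_cost: float) -> int:
    """Number of Pivots = Available Cash / Cost per Build-Measure-Learn cycle."""
    return int(cash // cycle_cost)

# Both startups: $100k in the bank, $10k/month burn rate.
assert affordable_pivots(100_000, cost_per_cycle(10_000, 9)) == 1   # Startup A
assert affordable_pivots(100_000, cost_per_cycle(10_000, 1)) == 10  # Startup B
```

    Same cash, same burn rate β€” the only variable that changed is cycle TIME, and it alone produced the 10X difference in chances to find Product-Market Fit.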


    End of Part 1
    In Part 2, we’ll cover: The 10 Types of Pivots, Real Examples of Successful Pivots (including famous companies!), and How to Execute a Pivot Without Destroying Team Morale!

    πŸ”„ Chapter 8: Pivot (or Persevere) – PART 2


    The 10 Types of Pivots

    Eric Ries identifies 10 distinct types of pivots that startups can make. Understanding these options helps you recognize which pivot might be right for your situation!

    Pivot Type #1: Zoom-In Pivot

    Definition: What was previously a single FEATURE becomes the ENTIRE product!

    When to use: You discover that one feature is way more valuable than everything else

    Famous Example – Flickr:

    • Originally: A multiplayer online game called “Game Neverending”
    • The game had a photo-sharing feature
    • Discovery: Users spent MORE time sharing photos than playing the game!
    • Pivot: Killed the game, focused ONLY on photo-sharing
    • Result: Became one of the most popular photo platforms (later acquired by Yahoo)

    Key insight: Sometimes the “side feature” is actually the main product!

    Pivot Type #2: Zoom-Out Pivot

    Definition: The opposite of Zoom-In! What was the entire product becomes just ONE FEATURE of a bigger product.

    When to use: You discover your product solves a SMALL part of a bigger problem

    Example – Calendar App:

    • Originally: Just a scheduling app
    • Discovery: Scheduling is useless without email, video calls, file sharing
    • Pivot: Expanded into a full productivity suite (like Microsoft Teams or Google Workspace)

    Key insight: Sometimes your product is too narrow – customers need a more complete solution!

    Pivot Type #3: Customer Segment Pivot

    Definition: The product stays the SAME, but you target a DIFFERENT customer!

    When to use: You discover a different customer segment has a much stronger need for your product

    Famous Example – YouTube:

    • Originally: A dating site where people could upload video profiles!
    • Discovery: People were uploading ALL KINDS of videos (not just dating profiles)
    • Pivot: Opened the platform to ANYONE who wanted to share ANY video
    • Result: Became the world’s largest video platform

    Another Example – Groupon:

    • Originally: A platform for social activism (getting groups to act together)
    • Discovery: When they offered group deals for local businesses, EVERYONE wanted it!
    • Pivot: Focused on group-buying deals instead of activism
    • Result: Billion-dollar company (at its peak)

    Key insight: Sometimes you built the right product for the WRONG customer!

    β˜• Hamed’s Real Example: Customer Segment Pivot

    This is the MOST COMMON pivot I see! Here’s a perfect example:

    Client: Project Management Tool

    Original target: Enterprise companies (50+ employees)

    What happened:

    • Sales cycle: 6 months (way too long!)
    • Needed integrations with SAP, Oracle, etc. (too expensive to build!)
    • Decision-making involved 10+ people (exhausting!)

    Discovery: Small design agencies (5-10 people) were signing up and LOVING the product!

    • They decided in 1 day (not 6 months!)
    • They paid immediately (no complex procurement!)
    • They needed simple features (not enterprise integrations!)

    The Pivot:

    • Stopped chasing enterprise clients
    • Focused ONLY on creative agencies
    • Tailored all marketing to agencies
    • Added features agencies specifically needed

    Result:

    • Revenue tripled in 6 months!
    • Retention rate: 85% (vs. 40% with enterprises)
    • Word-of-mouth growth (agencies told other agencies!)

    Lesson: Pay attention to WHO is actually loving your product, not who you THINK should love it!

    Pivot Type #4: Customer Need Pivot

    Definition: You keep the same customer but discover they have a DIFFERENT problem that’s more important!

    When to use: Your product solves a real problem, but there’s a BIGGER problem you could solve

    Example – Potbelly Sandwich Shop:

    • Originally: An antique store that happened to serve sandwiches to customers while they browsed
    • Discovery: People came for the SANDWICHES, not the antiques!
    • Pivot: Became a sandwich shop (removed all the antiques!)
    • Result: National sandwich chain with 400+ locations

    Key insight: Sometimes you accidentally discover what customers REALLY need!

    Pivot Type #5: Platform Pivot

    Definition: Change from an application to a platform (or vice versa)!

    Application β†’ Platform: Turn your product into a platform others can build on

    Famous Example – Shopify:

    • Originally: An online store selling snowboarding equipment (just ONE store)
    • Discovery: The e-commerce platform they built was MORE valuable than the store itself!
    • Pivot: Turned their internal tool into a platform for ANYONE to build an online store
    • Result: Powers millions of online stores worldwide

    Platform β†’ Application: Simplify a platform into a single-use application

    Example:

    • You built a flexible CRM platform with 100 features
    • Discovery: 90% of users only use 5 features, and they’re confused by everything else!
    • Pivot: Create a simpler, focused application that does ONE thing really well

    Key insight: Sometimes MORE options = LESS value! Simplicity wins!

    Pivot Type #6: Business Architecture Pivot

    Definition: Switch between high-margin/low-volume (B2B) and low-margin/high-volume (B2C)

    Geoffrey Moore’s concept: There are two major business architectures:

    • High margin, low volume: Complex sales, fewer customers, higher prices (B2B)
    • Low margin, high volume: Simple sales, many customers, lower prices (B2C)

    Example – Switching from B2B to B2C:

    • Originally: Selling $10,000/year enterprise software to 100 companies
    • Problem: Sales cycle too long, churn too high
    • Pivot: Sell $10/month consumer version to 100,000 individuals
    • Result: More revenue, faster growth, but different challenges (customer support!)

    Key insight: B2B and B2C are COMPLETELY different businesses – choose wisely!

    Pivot Type #7: Value Capture Pivot

    Definition: Change HOW you make money (your monetization model)

    Common monetization pivots:

    • Free β†’ Freemium
    • Freemium β†’ Subscription
    • Subscription β†’ Advertising
    • Advertising β†’ Transaction fees

    Famous Example – Evernote:

    • Originally: Free product, hoped to monetize later
    • Problem: Running out of money!
    • Pivot: Introduced “Evernote Premium” ($5/month for extra features)
    • Result: Millions of paying customers!

    Another Example – LinkedIn:

    • Originally: Free professional network
    • Pivots: Added multiple revenue streams:
      • Premium subscriptions for job seekers
      • Recruiter licenses for HR professionals
      • Advertising for brands
      • LinkedIn Learning subscriptions

    Key insight: You can change HOW you make money without changing WHAT you build!

    Pivot Type #8: Engine of Growth Pivot

    Definition: Switch which engine of growth you’re using (Sticky, Viral, or Paid)

    Example – Switching from Paid to Viral:

    • Originally: Growing through Facebook ads (Paid Engine)
    • Problem: CAC too high, LTV too low β†’ Losing money!
    • Pivot: Add viral features (referral program, social sharing)
    • Result: Viral coefficient > 1.0 β†’ Exponential growth without ads!

    Example – Dropbox:

    • Originally: Tried paid ads β†’ Too expensive!
    • Pivot: Created viral referral program (“Get 500MB free for each friend who joins”)
    • Result: Grew from 100,000 to 4,000,000 users in 15 months with almost NO advertising!

    Key insight: Different growth engines require different strategies – pick the one that matches your economics!

    Pivot Type #9: Channel Pivot

    Definition: Change HOW you deliver your product to customers (the sales/distribution channel)

    Common channel pivots:

  • πŸ“Š Chapter 7: Measure


    The Goal of This Stage: Innovation Accounting

    After you’ve launched your MVP and started testing (Chapter 6), the next critical question is: How do you measure progress? Eric Ries introduces a revolutionary framework called Innovation Accounting – a new way to measure success that’s completely different from traditional accounting.

    The Problem with Traditional Metrics

    Most startups measure the wrong things:

    • Total signups: Looks impressive but doesn’t tell you if people actually USE your product
    • Page views: Means nothing if visitors don’t convert or come back
    • Social media followers: Vanity metric – doesn’t equal revenue or engagement

    Key insight: Traditional accounting metrics were designed for established businesses with predictable revenue. Startups need a NEW system!

    β˜• Hamed’s Analysis: The Vanity Metrics Trap

    I see this mistake ALL THE TIME. Founders celebrate vanity metrics without realizing they’re heading for disaster!

    Real example – Photo Editing App:

    My client came to me excited:

    • “Hamed, we have 50,000 downloads in 2 months!”
    • “We’re trending on the App Store!”
    • “Our Facebook page has 10,000 likes!”

    My first question: “How many users are ACTIVE after 1 week?”

    His answer: “Uh… I don’t know. Maybe 20%?”

    Reality check: Only 8% were active after 1 week! And only 2% after 1 month!

    Lesson: 50,000 downloads with 2% retention = You have a BROKEN product, not a successful one!

    We stopped all marketing, fixed the core product, and increased 1-month retention to 35%. THEN we focused on growth.


    The Intuit SnapTax Case Study

    Eric Ries uses Intuit (maker of QuickBooks and TurboTax) as a perfect example of how even large companies can use Lean Startup principles. The story of SnapTax is particularly instructive.

    The SnapTax Story

    The Vision: Allow people to file their taxes (Form 1040EZ) in 15 minutes using just their smartphone camera for $14.99.

    The Challenge:

    • Intuit is a $3 billion company – they can’t afford public failures
    • Tax filing is serious business – mistakes could damage their brand
    • They had no idea if customers would trust a phone app for taxes

    The Lean Approach:

    • Launched MVP in California only (not nationwide)
    • Team went to Apple Store in Palo Alto and tested with REAL customers in the store!
    • Every Friday: Review metrics, decide what to test next week
    • Released new version EVERY WEEK via iTunes App Store

    The Results:
    In ONE tax season (4 months), Intuit ran over 500 different experiments!

    Key Learning: Initially, the app sent a paper invoice to customers’ homes for confirmation. Customers HATED this! “Why am I getting paper if I just filed on my phone?”

    They pivoted based on this feedback and removed the paper invoice.

    Intuit’s Three-Step Innovation Accounting Process

    Eric Ries reveals that Intuit followed a rigorous three-step process:

    Step 1: Establish the Baseline

    • Launch MVP to small group (California only)
    • Measure: How many people complete tax filing? How long does it take? What’s the error rate?
    • This gives you a starting point

    Step 2: Tune the Engine

    • Make small improvements every week
    • Test each change with real customers in the Apple Store
    • Measure: Did the improvement move the metrics in the right direction?

    Step 3: Pivot or Persevere

    • After many iterations, decide: Is this working?
    • If metrics are improving β†’ Persevere and scale!
    • If metrics stay flat β†’ Pivot to a new approach

    In Intuit’s case: Metrics improved dramatically, so they expanded SnapTax to more states!

    β˜• Hamed’s Analysis: Why Intuit’s Approach is Brilliant

    Notice what Intuit did differently from most companies:

    Traditional Approach (WRONG):

    • Spend 12 months building the “perfect” product
    • Launch nationwide with big marketing campaign
    • Hope customers love it
    • If it fails, lose millions and damage brand

    Intuit’s Lean Approach (RIGHT):

    • Build MVP in 6 months
    • Test in ONE state with REAL customers
    • Learn what works, iterate weekly
    • Scale only after validation

    Key Insight: Testing in the Apple Store was GENIUS! Why?

    • Immediate feedback from real users
    • Low cost (just send team to store)
    • No risk to brand (small, controlled test)
    • Fast iteration (test every Friday)

    Real example from my work – Online Grocery Delivery:

    My client wanted to launch in 5 cities simultaneously. I said: “Let’s do ONE neighborhood first!”

    What we did:

    • Week 1: Delivered to 20 customers manually
    • Week 2: Asked them: “What would make you order again?”
    • Week 3: Fixed the top 3 complaints
    • Week 4: Delivered to same 20 customers again
    • Result: 15 out of 20 became repeat customers!

    Then we scaled to 2 more neighborhoods, then the whole city.

    Lesson: Start small, learn fast, scale when validated!


    What is Innovation Accounting?

    Eric Ries formally introduces Innovation Accounting as a system to measure progress in conditions of extreme uncertainty. Here’s the framework:

    The Three Stages of Innovation Accounting

    Stage 1: Establish the Baseline (Use MVP to get REAL data)

    You can’t improve what you don’t measure! The first step is to launch your MVP and get a baseline measurement of where you are RIGHT NOW.

    What to measure:

    • How many people signed up?
    • How many completed the core action? (e.g., filed taxes, placed order, shared content)
    • What’s the retention rate after 1 day, 7 days, 30 days?
    • What’s the conversion rate?

    Example – SnapTax Baseline:

    • 100 people downloaded the app
    • 60 completed tax filing (60% completion rate)
    • Average time: 25 minutes (goal was 15 minutes)
    • 15 customers complained about paper invoice

    This baseline tells you: “Here’s where we are NOW.”
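
    A baseline is just a handful of ratios computed over your raw MVP data. A minimal sketch, using the illustrative SnapTax-style numbers above (the function name and field names are my own):

```python
# Turn raw MVP results into baseline metrics (Stage 1 of Innovation Accounting).
def establish_baseline(downloads, completed, avg_minutes, complaints):
    return {
        "completion_rate": completed / downloads,   # fraction who finished filing
        "avg_minutes": avg_minutes,                 # how long it actually takes
        "complaint_rate": complaints / downloads,   # fraction who complained
    }

base = establish_baseline(downloads=100, completed=60, avg_minutes=25, complaints=15)
print(base)  # completion_rate 0.6, 25 minutes vs. the 15-minute goal, 15% complaints
```

    Every later "tune the engine" experiment is then judged against these numbers.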


    Stage 2: Tune the Engine (Optimize incrementally)

    Once you have a baseline, make SMALL improvements and measure their impact. Don’t make 10 changes at once – test ONE thing at a time!

    How to tune:

    • Identify the biggest bottleneck (e.g., only 60% complete filing)
    • Hypothesize: “If we simplify the questions, completion rate will increase”
    • Test: Change the questions and measure
    • Result: Completion rate increases to 75%!

    Keep tuning until you hit diminishing returns!


    Stage 3: Pivot or Persevere (Make the BIG decision)

    After many rounds of tuning, you’ll reach a point where improvements plateau. Now you must decide:

    • Persevere: If metrics are improving and heading toward your goal β†’ Keep going and scale!
    • Pivot: If metrics stay flat despite your best efforts β†’ Your strategy is wrong, change direction!

    For SnapTax: Metrics kept improving, so they PERSEVERED and expanded to more states!

    β˜• Hamed’s Analysis: The Pivot/Persevere Decision

    This is the HARDEST decision for founders! Here’s my framework:

    Signs you should PERSEVERE:

    • Metrics are improving week over week
    • Customers are giving positive feedback
    • Retention rate is increasing
    • You’re getting closer to Product-Market Fit

    Signs you should PIVOT:

    • Metrics are flat or declining for 4+ weeks
    • You’ve tried 10+ improvements and nothing moves the needle
    • Customers say “it’s nice but I don’t need it”
    • Retention rate is below 20% after 1 month

    Real example – Language Learning App:

    My client built a language app targeting working professionals.

    After 8 weeks of tuning:

    • 7-day retention: 18% (started at 15%)
    • 30-day retention: 5% (started at 4%)
    • Despite adding gamification, social features, push notifications

    My advice: “It’s time to pivot. Your product doesn’t solve a real pain for busy professionals.”

    Their pivot: Target high school students preparing for language exams instead.

    Result after pivot:

    • 7-day retention: 55%!
    • 30-day retention: 35%!
    • Same product, different customer = SUCCESS!

    Lesson: Don’t be afraid to pivot if the data says your strategy isn’t working!


    End of Part 1
    In Part 2, we’ll cover: Actionable vs. Vanity Metrics, Cohort Analysis, the Three A’s of Metrics, and how to avoid being misled by your data!

    πŸ“Š Chapter 7: Measure – PART 2


    Actionable Metrics vs. Vanity Metrics

    One of the most important distinctions Eric Ries makes in this chapter is between Actionable Metrics (metrics that actually help you make decisions) and Vanity Metrics (metrics that look good but don’t guide action).

    What Are Vanity Metrics?

    Vanity metrics make you FEEL good but don’t help you make decisions!

    Common vanity metrics:

    • Total number of users: Doesn’t tell you if they’re active or paying
    • Total page views: Doesn’t show if visitors are converting
    • Total social media followers: Doesn’t equal engagement or sales
    • Press mentions: Looks impressive but doesn’t drive growth

    The problem: These numbers always go UP (they’re cumulative), so they make you feel like you’re succeeding even when you’re failing!

    Example: Your app has 100,000 total users. Sounds great! But if only 500 are active this month, you have a DEAD product!

    What Are Actionable Metrics?

    Actionable metrics help you make CLEAR decisions and take action!

    Examples of actionable metrics:

    • Active users THIS week: Shows if your product is gaining or losing momentum
    • Conversion rate (signups β†’ paying customers): Tells you if your funnel is working
    • Retention rate (% who return after 7 days): Shows if you have Product-Market Fit
    • Customer Lifetime Value (LTV): Tells you how much you can spend on acquisition
    • Revenue per customer: Shows if your monetization works

    Why they’re actionable: If conversion rate drops from 10% to 5%, you KNOW you need to fix your onboarding!

    β˜• Hamed’s Analysis: The Vanity Metrics Disaster

    I see SO many founders fooled by vanity metrics! Here’s a real horror story:

    Client: Food Delivery App

    What the founder showed me:

    • “Hamed, we have 80,000 registered users!”
    • “We’ve delivered 50,000 orders total!”
    • “We’re featured on TechCrunch!”

    My response: “That’s nice. Now show me your COHORT ANALYSIS.”

    The REAL data:

    • Only 3,000 users ordered in the last 30 days (96% churn!)
    • Average customer placed 1.2 orders, then never came back
    • Customer Acquisition Cost (CAC): $15
    • Average Revenue per Customer: $8
    • They were LOSING $7 per customer!

    The Brutal Truth: “You don’t have a business. You have a money-burning machine!”

    What we did:

    • Stopped ALL marketing (no point acquiring customers who don’t return)
    • Focused ONLY on retention: Why don’t customers come back?
    • Found the issue: Delivery times were 60+ minutes (competitors were 30 minutes)
    • Fixed logistics and reduced delivery time to 35 minutes
    • Repeat order rate increased from 20% to 65%!

    Lesson: Vanity metrics will LIE to you! Always look at retention and unit economics!


    Cohort Analysis: The Most Powerful Tool

    Eric Ries emphasizes that Cohort Analysis is one of the most important tools for Innovation Accounting. Instead of looking at aggregate data, you analyze groups of users who started using your product at the same time.

    What is Cohort Analysis?

    Definition: Group users by when they signed up, then track their behavior over time.

    Why it matters: Cohort analysis reveals if your product is ACTUALLY getting better!

    Example:

    January Cohort: 1,000 users signed up

    • Day 1: 800 active (80%)
    • Day 7: 200 active (20%)
    • Day 30: 50 active (5%)

    February Cohort: 1,000 users signed up (after you made improvements)

    • Day 1: 850 active (85%)
    • Day 7: 400 active (40%)
    • Day 30: 200 active (20%)

    What this tells you: Your improvements WORKED! February cohort has 4X better retention at Day 30!

    Without cohort analysis: You’d just see “total active users” and might miss this improvement!
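
    The January/February comparison above is easy to automate. A minimal sketch of a cohort retention table (the signup and activity counts are the illustrative ones from the example):

```python
# Build a day-N retention table per signup cohort.
def retention_table(cohorts, days=(1, 7, 30)):
    """cohorts: {name: {"signed_up": int, "active": {day: count}}}"""
    return {
        name: {d: c["active"][d] / c["signed_up"] for d in days}
        for name, c in cohorts.items()
    }

cohorts = {
    "Jan": {"signed_up": 1000, "active": {1: 800, 7: 200, 30: 50}},
    "Feb": {"signed_up": 1000, "active": {1: 850, 7: 400, 30: 200}},
}
table = retention_table(cohorts)
print(table["Feb"][30] / table["Jan"][30])  # Feb's Day-30 retention is 4x Jan's
```

    Aggregate "total active users" would hide this improvement entirely; the per-cohort table makes it obvious.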

    β˜• Hamed’s Analysis: How I Use Cohort Analysis

    Cohort analysis SAVED one of my clients from disaster!

    Client: Fitness App

    The misleading aggregate data:

    • “Total active users” was increasing every month
    • Founder thought everything was great!

    What cohort analysis revealed:

    March Cohort: 5,000 new users

    • Week 1: 3,000 active (60%)
    • Week 4: 500 active (10%)

    April Cohort: 8,000 new users

    • Week 1: 4,800 active (60%)
    • Week 4: 800 active (10%)

    May Cohort: 10,000 new users

    • Week 1: 6,000 active (60%)
    • Week 4: 1,000 active (10%)

    The SHOCKING truth: Retention was FLAT at 10% despite all their “improvements”! They were growing only because they were spending more on ads, NOT because the product was better!

    My advice: “Stop ALL marketing spend until we fix retention!”

    What we fixed:

    • Onboarding was too complicated (reduced from 10 steps to 3)
    • Push notifications were annoying (reduced from 5/day to 1/day)
    • Added personalized workout plans

    Result – June Cohort:

    • Week 1: 70% active (up from 60%!)
    • Week 4: 35% active (up from 10%!)

    Lesson: Cohort analysis shows if you’re ACTUALLY improving or just throwing money at ads!


    The Three A’s of Metrics

    Eric Ries provides a simple framework to evaluate if a metric is truly actionable. A good metric must be:

    1. Actionable

    The metric must clearly demonstrate cause and effect, so you can take action based on it.

    Bad metric: “Total page views increased by 50%”

    • Why? You don’t know WHAT caused the increase!
    • Was it a viral post? A bug? A bot attack?
    • What action should you take? Unclear!

    Good metric: “50% of users who watched the onboarding video completed signup (vs. 10% who didn’t)”

    • Clear cause: Watching video β†’ Higher conversion
    • Clear action: Make MORE users watch the video!

    2. Accessible

    The metric must be easy to understand for everyone on your team, not just data scientists!

    Bad metric: “Our Daily Active Users to Monthly Active Users ratio (DAU/MAU) is 0.23”

    • Most people don’t understand what this means!
    • Is 0.23 good or bad? Who knows!

    Good metric: “23 out of every 100 users open our app daily”

    • Everyone understands this!
    • Easy to compare: “Last month it was 15 out of 100, so we improved!”

    Eric’s advice: Use REAL numbers and percentages, not complex ratios or jargon!


    3. Auditable

    You must be able to verify the metric by checking the underlying data and talking to real customers!

    Why this matters: Sometimes data lies! You need to verify with real people.

    Example: Your data says “50% of users completed the checkout process”

    • Go talk to 10 customers: “Did you find checkout easy?”
    • If they all say “It was confusing!” β†’ Your data might be wrong, or there’s a bug!

    Eric’s story: IMVU once saw a spike in signups from a specific country. They celebrated! But when they investigated, they found it was a bot attack, not real users!

    Lesson: Always verify your metrics with qualitative data (customer interviews)!

    β˜• Hamed’s Analysis: The Three A’s in Practice

    I FORCE every client to follow the Three A’s! Here’s why:

    Real example – SaaS Platform:

    Their original dashboard:

    • “Conversion funnel optimization coefficient: 1.47”
    • “Engagement velocity index: 3.2”
    • “Viral coefficient sigma: 0.89”

    My response: “WHAT DOES ANY OF THIS MEAN?!”

    Nobody on the team could explain it!

    • Not actionable: What should we do differently?
    • Not accessible: Only the data scientist understood it
    • Not auditable: How do we verify this?

    What we did – Rebuilt the dashboard:

    New metrics (following the Three A’s):

    • “15 out of 100 visitors sign up” (Accessible!)
    • “Users who complete the tutorial are 3X more likely to subscribe” (Actionable!)
    • “68% of users say onboarding is ‘easy’ in surveys” (Auditable!)

    Result:

    • Everyone on the team could now understand the metrics!
    • Product team knew what to improve (focus on tutorial completion)
    • We verified metrics by interviewing 20 users every week

    Lesson: If your metrics need a PhD to understand, you’re doing it wrong!


    The Three Engines of Growth

    Eric Ries introduces a critical framework: Every startup grows through ONE of three engines. Understanding which engine drives YOUR growth is essential for measuring the right metrics!

    Engine #1: The Sticky Engine of Growth

    Definition: You grow by keeping customers coming back. High retention is key!

    Who uses this: SaaS products, subscription services, social networks

    Key metric to track: Retention Rate

    The formula for growth:

    • If your new-customer rate > your churn rate β†’ You grow!
    • If new customers per month > customers who leave per month β†’ You grow!

    Example – Netflix:

    • If Netflix keeps 95% of subscribers each month (5% churn)
    • And adds 6% new subscribers each month
    • They grow by 1% per month (6% – 5% = 1% net growth)
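
    The Netflix arithmetic generalizes: under the sticky engine, the subscriber base compounds at (new-subscriber rate βˆ’ churn rate) per month. A minimal sketch, with an assumed starting base of 1 million subscribers:

```python
# Project subscriber growth under the sticky engine:
# each month the base grows by (acquisition_rate - churn_rate).
def project(subscribers, acquisition_rate, churn_rate, months):
    for _ in range(months):
        subscribers *= 1 + acquisition_rate - churn_rate
    return subscribers

# 6% new subscribers, 5% churn -> 1% net monthly growth
print(round(project(1_000_000, 0.06, 0.05, 12)))  # ~1,126,825 after a year
```

    A 1% monthly edge looks small, but it compounds; flip the two rates and the same formula shows steady decline instead.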

    What to optimize:

    • Reduce churn! Fix the reasons people leave
    • Increase engagement (the more people use it, the less likely they’ll cancel)
    • Don’t focus TOO much on new customer acquisition yet

    Key insight: For sticky engine companies, retaining 1 customer is often more valuable than acquiring 2 new ones!

    Engine #2: The Viral Engine of Growth

    Definition: You grow because existing users bring in new users automatically!

    Who uses this: Social networks, communication apps, collaborative tools

    Key metric to track: Viral Coefficient

    The formula:

    • Viral Coefficient = (# of invites per user) Γ— (% of invites that convert)
    • If viral coefficient > 1.0 β†’ EXPONENTIAL GROWTH!
    • If viral coefficient < 1.0 β†’ Growth will eventually plateau

    Example – WhatsApp:

    • Average user invites 5 friends to join
    • 40% of those friends actually join
    • Viral coefficient = 5 Γ— 0.40 = 2.0
    • Each user brings 2 new users β†’ EXPONENTIAL GROWTH!
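
    The formula above can be sketched directly, including how one invite cycle compounds into the next (the WhatsApp-style numbers are illustrative, not actual figures):

```python
# Viral coefficient = invites per user x fraction of invites that convert.
def viral_coefficient(invites_per_user, conversion_rate):
    return invites_per_user * conversion_rate

def users_after_cycles(initial_users, k, cycles):
    """Each invite cycle, every newly joined user brings in k more users."""
    users, new = initial_users, initial_users
    for _ in range(cycles):
        new = new * k    # users recruited this cycle
        users += new
    return users

k = viral_coefficient(5, 0.40)        # 2.0 -> exponential growth
print(users_after_cycles(100, k, 5))  # 100 seed users pass 6,000 in 5 cycles
```

    Run the same loop with k = 0.9 and growth fizzles out: that is why crossing the 1.0 threshold matters so much.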

    What to optimize:

    • Make it EASY for users to invite friends (one-click invites)
    • Give users a REASON to invite friends (the product is better with friends!)
    • Improve the invite conversion rate (make signup frictionless)

    Key insight: For viral products, even a small increase in viral coefficient (from 0.9 to 1.1) creates MASSIVE growth!

    Engine #3: The Paid Engine of Growth

    Definition: You grow by spending money on advertising to acquire customers!

    Who uses this: E-commerce, marketplaces, most traditional businesses

    Key metrics to track:

    • Customer Acquisition Cost (CAC): How much you spend to acquire 1 customer
    • Customer Lifetime Value (LTV): How much revenue 1 customer generates over their lifetime

    The golden rule: LTV must be HIGHER than CAC!

    Example – Online Store:

    • You spend $30 on Facebook ads to acquire 1 customer
    • That customer makes 3 purchases, spending $80 total
    • LTV ($80) > CAC ($30) β†’ You profit $50 per customer!
    • You can reinvest that $50 to acquire MORE customers β†’ GROWTH!
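
    The golden rule reduces to one inequality per customer. A minimal sketch of the unit-economics check (numbers from the online-store example above; the function and field names are my own):

```python
# Paid engine unit economics: growth is only profitable if LTV > CAC.
def unit_economics(cac, ltv):
    return {
        "profit_per_customer": ltv - cac,  # reinvestable per customer acquired
        "ltv_to_cac": ltv / cac,           # ratio > 1.0 means the engine works
        "profitable": ltv > cac,
    }

m = unit_economics(cac=30, ltv=80)
print(m)  # $50 profit per customer, LTV/CAC ~2.67 -> profitable
```

    Run the same check on the food-delivery numbers from earlier (CAC $15, revenue $8) and it flags the money-burning machine immediately.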

    What to optimize:

    • INCREASE LTV: Get customers to buy more often, spend more per order
    • DECREASE CAC: Improve ad targeting, optimize landing pages, increase conversion rates

    Key insight: For paid engine companies, unit economics is EVERYTHING! If LTV < CAC, you’re in trouble!

    β˜• Hamed’s Analysis: Choosing the Right Engine

    Most founders make a CRITICAL mistake: They try to use all three engines at once!

    Rule: Pick ONE engine and master it first!

    Real example – Music Streaming App:

    What the founder wanted:

    • “Let’s run Facebook ads!” (Paid engine)
    • “Let’s add referral bonuses!” (Viral engine)
    • “Let’s focus on retention!” (Sticky engine)

    My response: “You’re trying to do EVERYTHING and succeeding at NOTHING!”

    What we did – Chose ONE engine:

    • We picked the Sticky Engine because music streaming needs high retention
    • Stopped ALL paid ads (saving $10,000/month)
    • Removed referral program (it was distracting)
    • Focused 100% on retention: Why do users stop listening?

    Results after 3 months:

    • 30-day retention increased from 25% to 60%!
    • Once retention was solid, THEN we turned on paid ads
    • Now ads were profitable because users stuck around!

    Lesson: Master ONE engine before adding others!

    How to choose YOUR engine:

    • Sticky Engine: If your product has recurring value (SaaS, subscriptions)
    • Viral Engine: If your product is better with friends (social, communication)
    • Paid Engine: If you have high LTV and can afford ads (e-commerce, marketplaces)

    🎯 Key Takeaways from Chapter 7: Measure

    Essential Lessons:

    1. Use Innovation Accounting, not traditional accounting

    • Establish baseline with MVP
    • Tune the engine incrementally
    • Decide: Pivot or Persevere

    2. Focus on Actionable Metrics, ignore Vanity Metrics

    • Vanity: Total users, page views, social followers
    • Actionable: Retention rate, conversion rate, LTV/CAC

    3. Use Cohort Analysis to see if you’re REALLY improving

    • Don’t be fooled by growing total users!
    • Track each cohort’s retention separately

    4. Apply the Three A’s to all your metrics:

    • Actionable: Clear cause-and-effect
    • Accessible: Everyone understands it
    • Auditable: Can be verified with real customers

    5. Choose ONE Engine of Growth and master it:

    • Sticky: Optimize retention (SaaS, subscriptions)
    • Viral: Optimize viral coefficient (social, communication)
    • Paid: Optimize LTV/CAC ratio (e-commerce, ads)

    The Ultimate Rule: Measure the metrics that will tell you if your leap-of-faith assumptions are TRUE!


    πŸŽ‰ End of Chapter 7: Measure πŸŽ‰

    Next Chapter Preview:
    Chapter 8: Pivot (or Persevere)
    We’ll learn about the different types of pivots, how to know when it’s time to pivot, and real examples of successful (and unsuccessful) pivots!

  • πŸ“– Chapter 6: Test


    The Goal of This Stage: Get Feedback to Make Decisions

    Now that you’ve identified your leap-of-faith assumptions (Chapter 5), it’s time to TEST them systematically. Eric Ries emphasizes that the goal of testing is NOT to prove you’re right – it’s to LEARN whether you’re right or wrong, so you can make informed decisions.

    The Purpose of Testing

    Testing serves ONE primary purpose:

    To gather evidence that helps you decide: Should I PIVOT or PERSEVERE?

    • Pivot: Make a fundamental change to your product, strategy, or target customer
    • Persevere: Continue with your current approach and optimize

    Key insight: Testing is about LEARNING, not about being right!

    β˜• Hamed’s Analysis: The Most Expensive Mistake

    The most expensive mistake I see founders make is this: They test their idea, get negative results, but IGNORE the data because it doesn’t match what they want to hear!

    Real example – Food Delivery for Students:

    My client tested a food delivery service targeting university students. After 3 weeks:

    • Only 12% of students who signed up actually ordered
    • Average order value was too low to be profitable
    • Most students said “it’s too expensive compared to campus food”

    What he wanted to do: Keep marketing to students, maybe offer discounts

    What the data said: PIVOT to a different customer segment!

    Result: We pivoted to busy professionals near universities. Same product, different customer. Within 2 months, order rate jumped to 45% and average order value tripled!

    Lesson: The purpose of testing is to LISTEN to the data, not to confirm your existing beliefs!


    What is a Minimum Viable Product (MVP)?

    Eric Ries defines the MVP as the simplest version of your product that allows you to start the Build-Measure-Learn feedback loop with the minimum effort.

    The Formal Definition of MVP

    “The Minimum Viable Product is that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort.”

    Key components:

    • Minimum: The smallest version that can test your assumption
    • Viable: Functional enough that people can actually use it
    • Product: Something people can interact with (even if it’s just a landing page or a manual service)

    The Three Learning Milestones (Innovation Accounting)

    Eric Ries introduces a framework called Innovation Accounting to measure progress during MVP testing. There are three stages:

    Stage 1: Establish the Baseline

    • Launch your MVP to early adopters
    • Measure current performance (e.g., 5% conversion rate, 20% retention after 1 week)
    • This is your starting point

    Stage 2: Tune the Engine

    • Make small improvements to the product
    • Measure if the changes move the metrics in the right direction
    • Example: Improve onboarding, and retention goes from 20% to 35%

    Stage 3: Pivot or Persevere

    • After tuning, if metrics are improving β†’ Persevere!
    • If metrics remain flat or decline β†’ Pivot!

    β˜• Hamed’s Analysis: What MVP is NOT

    Most founders misunderstand MVP. Let me clarify:

    MVP is NOT:

    • ❌ A crappy version of your final product
    • ❌ A product with bugs that you launch to “test the market”
    • ❌ Something you build in 1 day and hope it works

    MVP IS:

    • βœ… The smallest experiment that tests your riskiest assumption
    • βœ… A learning tool, not a sales tool
    • βœ… Something you iterate on AFTER getting feedback

    Example – Testing Pricing for a SaaS Product:

    My client wanted to launch a project management tool for freelancers. His leap-of-faith assumption: “Freelancers will pay $29/month for this tool.”

    His original plan: Build the full product with all features, then launch and see if people pay.

    Our MVP approach:

    • Created a simple landing page describing the tool
    • Added a “Subscribe Now – $29/month” button
    • When people clicked, they saw: “Thanks for your interest! We’re launching in 2 weeks. Enter your email to get early access.”
    • Tracked: How many people clicked the button?

    Result: Only 3% of visitors clicked the button. This told us: Either the price is too high, OR the value proposition isn’t clear.

    Cost: Just the landing page (1 day of work). Learning: We need to test different price points or improve messaging BEFORE building the product!

    Lesson: MVP is about testing your assumptions, not about launching a “minimum product”!


    Types of MVPs You Can Build

    Eric Ries describes several types of MVPs, each designed to test different assumptions. Here are the most common:

    1. The Video MVP (Dropbox Example)

    What it is: A video demonstrating your product idea before building it.

    Example: Dropbox created a 3-minute video showing how file syncing would work. They posted it on Hacker News.

    Result: Their beta waiting list went from 5,000 to 75,000 people overnight!

    What it tested: Value Hypothesis – Do people want this solution?

    Cost: Less than $1,000 to produce the video.

    2. The Concierge MVP

    What it is: You manually deliver the service to customers instead of automating it.

    Example: Food on the Table (mentioned in the book) – the founder personally met with customers to plan their meals and shopping lists, before building any software.

    What it tested: Value Hypothesis – Is this service valuable enough that people will use it?

    Advantage: You learn EXACTLY what customers need by doing the work yourself!

    3. The Wizard of Oz MVP

    What it is: Customers think they’re using an automated product, but you’re doing the work manually behind the scenes.

    Example: Zappos (mentioned in previous chapters) – they posted shoe photos online, but when someone ordered, Nick Swinmurn bought the shoes from a local store and shipped them himself!

    What it tested: Value Hypothesis – Will people buy shoes online without trying them on?

    Advantage: Customers get a “real” experience, but you haven’t built any technology yet!

    4. The Landing Page MVP

    What it is: A simple one-page website explaining your product with a call-to-action (e.g., “Sign up for early access” or “Pre-order now”).

    What it tested: Value Hypothesis and Growth Hypothesis – Do people want this? Will they take action?

    Example: Tim Ferriss tested the title of his book “The 4-Hour Workweek” by running Google Ads with different titles and measuring click-through rates!

    Advantage: Extremely cheap and fast to build (1-2 days max).

    β˜• Hamed’s Analysis: Which MVP Type Should You Choose?

    Here’s my framework for choosing the right MVP type:

    Use a Landing Page MVP if:

    • You’re testing demand for a NEW idea
    • You want to validate pricing
    • You have no customers yet

    Use a Concierge MVP if:

    • You’re offering a SERVICE (not a product)
    • You need to deeply understand customer needs
    • You want to learn what features matter most

    Use a Wizard of Oz MVP if:

    • You’re building a tech product but want to test demand first
    • You want customers to experience the product as if it were automated
    • You need to validate that the core value proposition works

    Use a Video MVP if:

    • Your product is complex and hard to explain in words
    • You want to go viral (like Dropbox)
    • You’re targeting tech-savvy early adopters

    Example – Meal Prep Service (again):

    Remember my client with the meal prep service? We used a Concierge MVP:

    • Week 1: Posted in Facebook groups offering free meals
    • Week 2: Personally delivered meals to 15 people
    • Week 3: Asked for feedback and iterated
    • Week 4: 12 people said they’d pay!

    Why Concierge? Because we needed to learn EXACTLY what kind of meals people wanted, how much they’d pay, and how often they’d order. We couldn’t learn that from a landing page!

    Lesson: Choose your MVP type based on WHAT you need to learn, not what’s easiest to build!


    How to Design Your First MVP

    The 5-Step MVP Design Process

    Step 1: Identify your riskiest assumption
    Review your leap-of-faith assumptions from Chapter 5. Pick the ONE that, if wrong, would kill your business.

    Step 2: Choose the right MVP type
    Based on what you need to learn, choose: Landing Page, Concierge, Wizard of Oz, or Video.

    Step 3: Define success criteria
    Before you launch, decide: “What result would prove my assumption is correct?” Example: “If 10% of visitors sign up, my assumption is validated.”

    Step 4: Build the MVP in 1-2 weeks MAX
    Don’t overthink it! The faster you test, the faster you learn.

    Step 5: Launch to 10-50 early adopters
    Don’t launch to the world! Start small, learn fast, iterate.
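The decision rule from Steps 3–5 can be sketched as a tiny script. (The 10% threshold and the visitor/signup counts below are illustrative numbers, not from the book — the point is that you commit to the threshold *before* launching.)

```python
def evaluate_mvp(visitors: int, signups: int, success_threshold: float = 0.10) -> str:
    """Compare the measured conversion rate against a pre-committed success threshold."""
    conversion = signups / visitors
    if conversion >= success_threshold:
        return f"validated ({conversion:.0%} >= {success_threshold:.0%})"
    return f"not validated ({conversion:.0%} < {success_threshold:.0%})"

# Example: 500 landing-page visitors, 60 sign-ups -> 12% conversion, above the 10% bar
print(evaluate_mvp(visitors=500, signups=60))  # -> validated (12% >= 10%)
```

Writing the threshold into the code (or your spreadsheet) before launch is what keeps you honest: you can't move the goalposts after seeing the data.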

    Example: Designing an MVP for a Freelance Marketplace

    Let’s say you want to build a marketplace connecting freelance graphic designers with small businesses.

    Step 1: Riskiest assumption
    “Small businesses will pay for professional design work through an online platform.”

    Step 2: Choose MVP type
    Wizard of Oz MVP – Create a simple landing page where businesses can “request a designer,” but you manually match them with freelancers.

    Step 3: Success criteria
    “If 20% of businesses who request a designer actually pay for the work, my assumption is validated.”

    Step 4: Build in 1-2 weeks

    • Week 1: Build landing page with a form
    • Week 2: Manually reach out to 5 freelance designers to partner with

    Step 5: Launch to early adopters
    Post in local business Facebook groups: “Need a logo or flyer designed? We’ll match you with a pro designer in 24 hours!” Target 20 businesses.

    Measure: How many businesses sign up? How many actually pay?


    End of Part 1
    In Part 2, we’ll cover: What to measure in your MVP, how to avoid vanity metrics, and how to use the Three Engines of Growth to guide your testing!

    πŸ“– Chapter 6: Test – PART 2


    What to Measure in Your MVP

    Eric Ries emphasizes that once you launch your MVP, you MUST measure the right things. Measuring the wrong metrics will lead you to make bad decisions!

    Actionable Metrics vs. Vanity Metrics

    Eric Ries makes a critical distinction:

    • Vanity Metrics: Numbers that look good but don’t help you make decisions (e.g., total signups, page views, social media followers)
    • Actionable Metrics: Numbers that directly inform your decisions (e.g., retention rate, conversion rate, customer lifetime value)

    Key principle: If a metric doesn’t change your behavior, it’s a vanity metric!

    Examples of Vanity vs. Actionable Metrics

    Vanity Metric: “We have 10,000 users!”

    • Why it’s useless: Are those users active? Do they pay? Did they come back after the first visit?

    Actionable Metric: “30% of users who sign up are still active after 1 month”

    • Why it’s useful: This tells you if your product is actually valuable! If retention is low, you need to pivot or improve the product.

    Vanity Metric: “We got 50,000 page views last month!”

    • Why it’s useless: Page views don’t equal engagement or revenue. People might visit once and never come back.

    Actionable Metric: “5% of visitors convert to paying customers”

    • Why it’s useful: This tells you if your value proposition is working! If conversion is low, you need to improve messaging or pricing.

    β˜• Hamed’s Analysis: The Retention Rate is King

    In all my years of consulting, I’ve learned one truth: Retention Rate is the most important metric for early-stage startups!

    Why?

    • If people don’t come back, your product doesn’t create value
    • If people don’t come back, no amount of marketing will save you
    • If people don’t come back, you’ll waste money acquiring users who churn

    Real example – Fitness App:

    My client built a workout tracking app. After 1 month:

    • Total signups: 5,000 (looks great!)
    • Active users after 1 week: 800 (16% retention)
    • Active users after 1 month: 150 (3% retention)

    What this told us: The app is broken! People try it once and never come back. No point in spending money on ads to get more signups!

    What we did: Stopped all marketing. Interviewed the 150 active users to understand why THEY stayed. Redesigned the app based on their feedback. After redesign:

    • 1-week retention: 45%
    • 1-month retention: 25%

    Lesson: Focus on retention FIRST, growth SECOND!


    The Three Engines of Growth

    Eric Ries introduces a framework called the Three Engines of Growth. Every startup relies on one (or more) of these engines to grow:

    Engine 1: The Sticky Engine (Retention-Based Growth)

    How it works: You acquire customers, and they stick around for a long time (high retention).

    Key metric: Retention Rate or Churn Rate

    • Retention Rate = Percentage of customers still active after X days/weeks/months
    • Churn Rate = Percentage of customers who stop using your product

    Formula for growth:

$\text{Growth Rate} = \text{New Customer Rate} - \text{Churn Rate}$

    Example: Netflix, Spotify – they grow by keeping customers subscribed for months/years.

    When to use this engine: If your product has recurring use (SaaS, subscription services, social networks).
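Ries's sticky-engine formula is simple enough to check with a one-liner (the monthly rates below are made-up figures for illustration):

```python
def net_growth_rate(new_customer_rate: float, churn_rate: float) -> float:
    """Growth Rate = New Customer Rate - Churn Rate (both as monthly fractions)."""
    return new_customer_rate - churn_rate

# Adding 8% of the customer base per month while 5% churn -> 3% net monthly growth
growth = net_growth_rate(0.08, 0.05)
print(f"{growth:.0%}")  # -> 3%
```

The takeaway: if churn exceeds your new-customer rate, the engine runs in reverse, and no amount of acquisition spend fixes that.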

    Engine 2: The Viral Engine (Word-of-Mouth Growth)

    How it works: Each customer brings in more customers organically (via referrals, sharing, invitations).

    Key metric: Viral Coefficient (K-factor)

    • K-factor = Average number of new users each existing user brings in
    • If K > 1, your product grows exponentially without marketing spend!

    Formula:

    $K = i \times c$

    Where:

    • $i$ = Number of invitations sent per user
    • $c$ = Conversion rate (% of invitations that result in a new user)

    Example: Dropbox, WhatsApp, Instagram – they grew by users inviting friends.

    When to use this engine: If your product becomes more valuable when more people use it (network effects).

    Engine 3: The Paid Engine (Advertising-Based Growth)

    How it works: You spend money to acquire customers, and the revenue from each customer is higher than the cost to acquire them.

    Key metrics:

    • CAC (Customer Acquisition Cost): How much you spend to acquire one customer
    • CLV (Customer Lifetime Value): How much revenue one customer generates over their lifetime

    Formula for profitability:

    $\text{CLV} > \text{CAC}$

    If CLV is higher than CAC, you can scale profitably by spending more on ads!

    Example: E-commerce stores, Amazon, Booking.com – they spend heavily on ads because each customer is worth more than the cost to acquire them.

    When to use this engine: If your product has high margins or repeat purchases.

    β˜• Hamed’s Analysis: Which Engine Should You Focus On?

    Here’s my advice for early-stage startups:

    Start with the Sticky Engine FIRST!

    • If people don’t stick around (low retention), the other engines won’t work
    • Viral growth only works if users love your product enough to share it
    • Paid growth only works if CLV > CAC, which requires high retention

    Example – Social Fitness App (from Chapter 5):

    Remember the social fitness app that failed? Here’s what happened:

    • They tried the Viral Engine first (invite friends to work out together)
    • But 1-week retention was only 10%!
    • Result: Users invited friends, but those friends didn’t stick around either

    What they should have done: Fix retention FIRST (Sticky Engine), then focus on viral growth.


    My framework:

    • Stage 1: Build Sticky Engine (get retention to 40%+)
    • Stage 2: Add Viral or Paid Engine to accelerate growth
    • Stage 3: Optimize all three engines simultaneously

    Lesson: Don’t spend money on growth until retention is high!


    How to Measure Each Engine

    Measuring the Sticky Engine

    What to track:

    • 1-day retention rate
    • 7-day retention rate
    • 30-day retention rate
    • Churn rate (monthly)

    How to calculate retention:

    $\text{Retention Rate} = \frac{\text{Active Users After X Days}}{\text{Total Users Who Signed Up X Days Ago}} \times 100$

    Example:

    • 100 users signed up on January 1
    • 40 users are still active on January 7
    • 7-day retention = $\frac{40}{100} \times 100 = 40\%$
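The retention calculation above, as a minimal sketch using the same cohort numbers:

```python
def retention_rate(signups: int, active_after: int) -> float:
    """Retention Rate = active users after X days / users who signed up X days ago, as a %."""
    return active_after / signups * 100

# 100 users signed up on January 1; 40 still active on January 7 -> 40% 7-day retention
print(retention_rate(signups=100, active_after=40))  # -> 40.0
```

Note that this is cohort-based: you divide by the users who signed up X days ago, not by total signups to date, or fast growth will mask poor retention.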

    Measuring the Viral Engine

    What to track:

    • Number of invitations sent per user ($i$)
    • Conversion rate of invitations ($c$)
    • Viral coefficient ($K = i \times c$)

    Example:

    • Each user sends 5 invitations ($i = 5$)
    • 20% of invitations convert to new users ($c = 0.2$)
    • $K = 5 \times 0.2 = 1.0$

    If $K = 1.0$, each user brings in 1 new user β†’ Steady growth

    If $K > 1.0$, each user brings in MORE than 1 new user β†’ Exponential growth!
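The K-factor math, plus a simple generation-by-generation projection showing why $K > 1$ matters so much, can be sketched like this (the user counts are illustrative, and the model assumes K stays constant across generations — in practice it usually decays):

```python
def viral_coefficient(invites_per_user: float, invite_conversion: float) -> float:
    """K = i * c: new users each existing user brings in."""
    return invites_per_user * invite_conversion

def project_users(initial_users: int, k: float, generations: int) -> float:
    """Total users after N referral generations, assuming a constant K."""
    total = new = float(initial_users)
    for _ in range(generations):
        new = new * k   # each cohort of new users recruits the next cohort
        total += new
    return total

k = viral_coefficient(5, 0.2)     # K = 1.0 -> steady, linear growth
print(project_users(100, k, 3))   # 100 + 100 + 100 + 100 = 400.0

k2 = viral_coefficient(5, 0.3)    # K = 1.5 -> each generation is 1.5x the last
print(project_users(100, k2, 3))  # exponential: grows much faster
```

Even a small bump in conversion rate (0.2 → 0.3) flips the product from linear to exponential growth, which is why viral-engine teams obsess over the invitation flow.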

    Measuring the Paid Engine

    What to track:

    • CAC (Customer Acquisition Cost)
    • CLV (Customer Lifetime Value)
    • Payback period (how long until a customer becomes profitable)

    How to calculate CAC:

    $\text{CAC} = \frac{\text{Total Marketing Spend}}{\text{Number of Customers Acquired}}$

    How to calculate CLV:

    $\text{CLV} = \text{Average Revenue Per Customer} \times \text{Average Customer Lifespan}$

    Example:

    • You spend $1,000 on ads and acquire 50 customers β†’ CAC = $20
    • Each customer pays $10/month and stays for 6 months β†’ CLV = $60
• $\text{CLV} = \$60 > \text{CAC} = \$20$ → You can profitably scale by spending more on ads!
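The unit-economics check above can be sketched as a quick script, using the same example figures ($1,000 ad spend, 50 customers, $10/month for 6 months):

```python
def cac(total_marketing_spend: float, customers_acquired: int) -> float:
    """Customer Acquisition Cost: spend divided by customers acquired."""
    return total_marketing_spend / customers_acquired

def clv(avg_monthly_revenue: float, avg_lifespan_months: float) -> float:
    """Customer Lifetime Value (simple version: revenue per month x months retained)."""
    return avg_monthly_revenue * avg_lifespan_months

acquisition_cost = cac(1000, 50)          # -> $20 per customer
lifetime_value = clv(10, 6)               # -> $60 per customer
print(lifetime_value > acquisition_cost)  # True -> the Paid Engine can scale
```

This is the simplest possible CLV model; it ignores gross margin and payback period, both of which matter before you actually scale ad spend.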
  • πŸ“– Chapter 5: Leap

    The Lean Startup – Identifying Your Leap-of-Faith Assumptions


    Overview: What Are Leap-of-Faith Assumptions?

    In Chapter 4, we learned how to design experiments. But which assumptions should we test FIRST?

    Eric Ries introduces the concept of Leap-of-Faith Assumptions – the riskiest, most critical assumptions your entire business depends on.

    What is a Leap-of-Faith Assumption?

    A leap-of-faith assumption is an unproven hypothesis that, if wrong, will cause your entire business to fail.

    Examples:

    • For Facebook (early days): “College students want to connect with each other online”
    • For Uber: “People will get into cars driven by strangers”
    • For Airbnb: “People will let strangers sleep in their homes”

    These seem obvious NOW, but they were huge leaps of faith at the time!

    “The two most important assumptions entrepreneurs make are what I call the value hypothesis and the growth hypothesis. These give rise to tuning variables that control a startup’s engine of growth.”


    Why Leap-of-Faith Assumptions Matter

    Most startups fail not because of bad execution, but because they built something nobody wanted.

    The Danger of Untested Assumptions

    When you don’t test your leap-of-faith assumptions early:

    • You waste months (or years!) building the wrong product
    • You spend money on features nobody wants
    • You only discover the truth when it’s too late

    The solution: Identify and test your riskiest assumptions FIRST, before investing heavily in building.

    β˜• Hamed’s Analysis: The Most Expensive Mistake

    I’ve seen this mistake SO many times in my consulting work:

    The Pattern:

    • Someone has a “great idea”
    • They spend 6-12 months building it
    • They launch with excitement
    • Nobody uses it

    Why did this happen? They never tested their leap-of-faith assumption!

    Real example – Social Fitness App:

    I consulted for a team building a “social fitness app” where friends could challenge each other to workouts.

    Their untested assumption: “People want to compete with friends for fitness motivation.”

    What actually happened: After building for 8 months, they discovered people felt EMBARRASSED sharing their workouts with friends! The core assumption was wrong.

    Cost: $50,000 and 8 months wasted.

    How to avoid this: Test your leap-of-faith assumption in WEEK 1, not month 12!


    How to Identify Your Leap-of-Faith Assumptions

    Eric Ries provides a framework for finding your most critical assumptions:

    The 3-Question Framework

    Question 1: What must be true about your customers?
    Example: “Restaurant owners want to save time on inventory management”

    Question 2: What must be true about your product?
    Example: “Our automated inventory system will actually save them time”

    Question 3: What must be true about your market?
    Example: “Enough restaurants struggle with inventory to make this a viable business”

    Your leap-of-faith assumptions are the answers that, if FALSE, kill your business!

    Example: Identifying Leap-of-Faith for an Online Tutoring Service

    Let’s say you want to start an online tutoring platform for high school math.

    Potential assumptions:

    • “Parents will pay for online tutoring” β†’ Leap-of-faith? MAYBE
    • “Students will engage with tutors on video calls” β†’ Leap-of-faith? YES!
    • “Online tutoring is as effective as in-person” β†’ Leap-of-faith? YES!
    • “We can recruit qualified tutors” β†’ Leap-of-faith? NO (easy to test)

    The two biggest leaps: Will students actually engage online? Is it effective?

    Test these FIRST before building anything!

    β˜• Hamed’s Analysis: How I Find Leap-of-Faith Assumptions With Clients

    When I work with startups, I use this simple exercise:

    The “What if we’re wrong?” Test

    I ask: “If this assumption is wrong, does your entire business fail?”

    • If YES β†’ That’s a leap-of-faith assumption. Test it NOW!
    • If NO β†’ It’s important but not critical. Test it later.

    Example – My Restaurant Website Client:

    When we discussed their online ordering system, I asked about their assumptions:

    • “Customers want to order online” β†’ If wrong? Business fails. LEAP-OF-FAITH!
    • “They’ll pay by credit card” β†’ If wrong? We can accept cash. Not leap-of-faith.
    • “They want delivery” β†’ If wrong? They can pick up. Not leap-of-faith.

    We tested the FIRST assumption (online ordering demand) by adding a WhatsApp button – validated in 1 week!


    Case Study: Facebook’s Leap of Faith

    Eric Ries discusses Facebook (then called “The Facebook”) as a perfect example of testing leap-of-faith assumptions.

    Facebook’s Early Leap-of-Faith Assumption

    The assumption: “College students want a digital directory to connect with classmates.”

    In 2004, this was NOT obvious! Remember:

    • MySpace focused on music and entertainment
    • Friendster was for meeting new people online
    • The idea of a “digital yearbook” was unproven

    How Mark Zuckerberg Tested It

    Instead of building a global platform, Mark did something brilliant:

    • Step 1: Built a simple version just for Harvard students
    • Step 2: Launched it in one dorm first
    • Step 3: Watched: Did students actually sign up and use it?

    The result: Within 24 hours, over 1,200 Harvard students had signed up!

    Key insight: This validated the leap-of-faith assumption. ONLY THEN did he expand to other schools.

    β˜• Hamed’s Analysis: The Power of Starting Small

    Notice what Mark Zuckerberg did: He started with ONE college, not all colleges!

    Why is this brilliant?

    • If the assumption was WRONG, he’d know in days (not years)
    • Low cost to test (just one school’s worth of effort)
    • Easy to pivot if needed
    • Fast feedback loop

    Lesson: Test your leap-of-faith assumption in the SMALLEST possible market first!

    My example – Women’s Clothing Store Website:

    I worked with a boutique owner who wanted to sell online nationwide. But we started small:

    • Week 1: Posted 10 items on Instagram for local customers only
    • Week 2: Offered local delivery for orders via DM
    • Week 3: Got 23 orders from her existing customer base!

    This validated: “My customers WILL buy online.” THEN we built a proper e-commerce website.

    If it had FAILED? We’d know after investing just 3 weeks – not 6 months building a fancy website nobody would use!


    The Two Critical Leap-of-Faith Assumptions

    While every startup has multiple assumptions, Eric Ries emphasizes that two are almost always the most critical:

    1. The Value Hypothesis (Revisited)

    The question: Do customers find enough value in your product to actually use it?

    NOT just “Would you use this?” but “Will you KEEP using this?”

    How to test:

    • Give your MVP to 50 people
    • Track: How many use it more than 3 times?
    • If less than 40% β†’ Your value hypothesis is likely wrong

    2. The Growth Hypothesis (Revisited)

    The question: How will your product grow? Will customers tell others?

    This determines your entire business model!

    How to test:

    • Give your MVP to your first 100 customers
    • Don’t do ANY marketing
    • Track: Do they naturally tell their friends?
    • If NO β†’ You’ll need to rely on paid marketing (expensive!)

    “The point is not to find the average customer but to find early adopters: the customers who feel the need for the product most acutely. Those customers tend to be more forgiving of mistakes and are especially eager to give feedback.”


    End of Part 1

    In Part 2 we’ll cover:
    β€’ How to find and engage with early adopters
    β€’ The role of analogs and antilogs in identifying leaps of faith
    β€’ Practical exercises for testing your assumptions
    β€’ Five Key Takeaways from Chapter 5

    πŸ“– Chapter 5: Leap – PART 2


    Finding and Engaging Early Adopters

    Eric Ries emphasizes that when testing leap-of-faith assumptions, you should NOT target average customers. Instead, focus on early adopters.

    Who Are Early Adopters?

    Early adopters are customers who:

    • Feel the problem most acutely
    • Are actively looking for solutions RIGHT NOW
    • Are willing to try imperfect products
    • Will give you honest feedback
    • Will forgive early mistakes

    Example: For Dropbox, early adopters were tech-savvy people who already struggled with file syncing across multiple computers.

    How to Find Early Adopters

    Step 1: Identify where people with your problem hang out

    • Online forums and communities
    • Social media groups
    • Industry events
    • Professional networks

    Step 2: Look for people actively complaining about the problem

    • Search Twitter for keywords related to your problem
    • Read Reddit threads where people ask for solutions
    • Join Facebook groups where your target customers gather

    Step 3: Engage directly and offer your MVP

    • Send personal messages
    • Offer free access in exchange for feedback
    • Be transparent that your product is early-stage

    β˜• Hamed’s Analysis: How I Help Clients Find Early Adopters

    Most founders make this mistake: They try to launch to EVERYONE at once.

    Better approach: Find 10-20 people who DESPERATELY need your solution!

    Real example – Meal Prep Service for Busy Professionals:

    My client wanted to launch a healthy meal delivery service. Instead of spending money on ads, we did this:

    • Week 1: Posted in local Facebook groups: “Who struggles to eat healthy because you’re too busy to cook?”
    • Week 2: Got 47 responses! We messaged the 15 most engaged people
    • Week 3: Offered them 1 week of free meals in exchange for honest feedback
    • Week 4: 12 out of 15 said they’d pay for it!

    Cost: Just the food for 15 people for one week. Validation: Confirmed people WOULD pay for this service!

    Lesson: Early adopters are often HAPPY to be your guinea pigs if you’re solving a real problem for them!


    Using Analogs and Antilogs

    Eric Ries introduces a powerful tool for identifying leap-of-faith assumptions: looking at analogs (similar successes) and antilogs (similar failures).

    What Are Analogs?

    Analogs: Other companies or products that succeeded by solving a similar problem.

    How to use them: Study what worked for them and apply those lessons to your startup.

    Example – Airbnb’s Analog:

    • Analog: eBay (proved people will trust strangers in online transactions)
    • Insight: If eBay could build trust for buying/selling, Airbnb could build trust for home rentals
    • Leap-of-faith assumption validated: Trust can be created through ratings and reviews

    What Are Antilogs?

    Antilogs: Companies that failed trying something similar to what you’re attempting.

    How to use them: Study WHY they failed so you can avoid the same mistakes.

    Example – Uber’s Antilog:

    • Antilog: Traditional taxi dispatch services (slow, unreliable, poor customer experience)
    • Insight: The old model failed because it lacked real-time tracking and transparent pricing
    • Leap-of-faith assumption to test: People want to see their driver’s location and know the price upfront

    β˜• Hamed’s Analysis: How I Use Analogs and Antilogs With Clients

    When I work with startups, I always ask two questions:

    Question 1: “Who has succeeded doing something similar?” (Find your analog)

    Question 2: “Who has FAILED doing something similar?” (Find your antilog)

    Example – Online Tutoring Platform:

    My client wanted to launch an online tutoring service for high school students.

    Analogs we studied:

    • Khan Academy (proved students will learn online)
    • Coursera (proved video-based learning works)
    • Duolingo (proved gamification increases engagement)

    Antilogs we studied:

    • Several failed tutoring platforms that tried to be “Uber for tutors” (failed because no quality control)
    • Platforms that relied only on pre-recorded videos (failed because students need live interaction)

    Insight: We needed LIVE tutoring (analog: Khan Academy’s success) WITH quality control (antilog: Uber-for-tutors failures).

    Result: We designed an MVP that focused on live, vetted tutors – and it worked!


    Practical Exercise: Identifying Your Leap-of-Faith Assumptions

    Eric Ries provides a step-by-step process for identifying and testing your leap-of-faith assumptions.

    The 5-Step Process

    Step 1: List all your assumptions
    Write down EVERY assumption your business depends on.

    Step 2: Identify the riskiest ones
    Ask: “If this assumption is wrong, does my business fail?” If YES, it’s a leap-of-faith assumption.

    Step 3: Find analogs and antilogs
    Research similar companies to learn what worked and what didn’t.

    Step 4: Design a test for your riskiest assumption
    Create the simplest possible experiment to validate or invalidate this assumption.

    Step 5: Run the test and learn
    Execute the test, collect data, and decide whether to pivot or persevere.

    Example: Testing a Leap-of-Faith for a Fitness App

    Let’s say you’re building a fitness app that connects users with personal trainers via video calls.

    Step 1: List assumptions

    • People want to work out at home
    • People will pay for virtual training
    • Video call workouts are effective
    • Personal trainers want to work remotely

    Step 2: Identify riskiest assumption

    “Video call workouts are effective” β†’ If this is false, nobody will use your app!

    Step 3: Find analogs and antilogs

    • Analog: Peloton (proved people will work out at home with guidance)
    • Antilog: Many failed “live workout streaming” apps (people got bored without personal interaction)

    Step 4: Design a test

    Offer 10 people free sessions with a personal trainer via Zoom. After 4 weeks, ask: “Was this as effective as in-person training?”

    Step 5: Run the test

    • If 8+ out of 10 say YES β†’ Assumption validated! Build your app.
    • If 5 or fewer say YES β†’ Assumption INVALID. Pivot or abandon.

    β˜• Hamed’s Analysis: The Power of Testing Small

    Notice in the fitness app example: We’re testing with just 10 people!

    Why so few?

    • If your assumption is BADLY wrong, you’ll know it with just 10 people
    • If it’s RIGHT, you’ll see clear signals even with 10 people
    • It’s fast (weeks, not months)
    • It’s cheap (minimal cost to test)

    My rule: If you can’t validate your leap-of-faith assumption with 10-50 people, your assumption is probably wrong!

    Example – E-commerce for Handmade Jewelry:

    My client wanted to sell handmade jewelry online. Her leap-of-faith assumption: “People will buy handmade jewelry without seeing it in person.”

    Our test:

    • Posted 15 pieces on Instagram
    • Offered free shipping for first 10 buyers
    • Tracked: How many people bought? Did they return items?

    Result: 8 people bought, zero returns! Assumption validated with just 8 sales!

    Lesson: You don’t need 1,000 customers to validate a leap-of-faith assumption. Start with 10!


    Common Mistakes When Identifying Leap-of-Faith Assumptions

    Mistake 1: Testing Too Many Assumptions at Once

    The problem: You can’t learn which assumption was wrong if you test them all together.

    Solution: Test ONE leap-of-faith assumption at a time. Once validated, move to the next.

    Mistake 2: Confusing “Nice to Have” with “Must Have”

    The problem: Not all assumptions are equally critical.

    Solution: Focus ONLY on assumptions that, if wrong, would kill your business.

    Example: “Customers want blue buttons instead of green buttons” is NOT a leap-of-faith assumption. “Customers will pay for this solution” IS.

    Mistake 3: Testing With the Wrong People

    The problem: Asking friends and family for feedback leads to false positives.

    Solution: Test with REAL potential customers who don’t know you and have no reason to be polite.

    Mistake 4: Building Before Testing

    The problem: Many founders build a full product FIRST, then try to validate assumptions.

    Solution: Test your leap-of-faith assumptions BEFORE writing a single line of code!


    Five Key Takeaways from Chapter 5

    Takeaway 1: Identify Your Leap-of-Faith Assumptions Early

    Every startup has assumptions that, if wrong, will cause the business to fail. Identify these FIRST before investing time and money.

    Takeaway 2: Focus on the Value Hypothesis and Growth Hypothesis

    These two assumptions are almost always the most critical: Does your product deliver value? Will it grow?

    Takeaway 3: Test with Early Adopters, Not Average Customers

    Early adopters feel the problem most acutely and are more forgiving of imperfect solutions. They’re your best source of honest feedback.

    Takeaway 4: Use Analogs and Antilogs to Learn Faster

    Study similar successes (analogs) and failures (antilogs) to understand what works and what doesn’t.

    Takeaway 5: Test Assumptions Before Building

    Don’t build a full product until you’ve validated your riskiest assumptions. Start small, test fast, learn quickly.


    Action Plan: What To Do Right Now

    Your Next Steps

    Step 1: Write down all your business assumptions
    Spend 30 minutes listing EVERY assumption your business depends on.

    Step 2: Identify your top 3 leap-of-faith assumptions
    Use the question: “If this assumption is wrong, does my business fail?”

    Step 3: Research analogs and antilogs
    Find 3 companies that succeeded with similar ideas and 3 that failed. Learn from both.

    Step 4: Design a test for your riskiest assumption
    Create the simplest possible experiment to validate or invalidate this assumption in the next 2 weeks.

    Step 5: Find 10-20 early adopters
    Identify where your early adopters hang out online and start engaging with them today.

    “The lesson of the MVP is that any additional work beyond what was required to start learning is waste, no matter how important it might have seemed at the time.”


    End of Chapter 5: Leap

    Next up: Chapter 6: Test
    We’ll dive into how to design and run effective MVP tests, measure what matters, and make data-driven decisions about your startup’s future.

  • πŸ“– Chapter 4: Experiment


    The Lean Startup – Turning Ideas into Scientific Experiments


    Overview: Why Experimentation Matters

    In Chapter 3, we learned about Validated Learning. Now in Chapter 4, we dive deeper into HOW to structure that learning through systematic experiments.

    The key insight: Every startup is essentially a grand experiment – an attempt to answer the question: “Can we build a sustainable business around this vision?”

    What is a Startup Experiment?

    A startup experiment is not just “trying something and seeing what happens.”

    It’s a structured test where you:

    • State a clear hypothesis about your customers
    • Design a minimum test to validate or invalidate it
    • Collect real data from real customer behavior
    • Make a decision based on what you learned

    The Two Most Important Hypotheses

    Eric Ries explains that every startup should test two fundamental assumptions first:

    1. The Value Hypothesis

    Question: Does your product actually deliver value to customers?

    Example: “Users will find our meal planning app valuable enough to use it daily.”

    How to test: Build a simple MVP and measure actual usage (not just signups!)

    2. The Growth Hypothesis

    Question: How will new customers discover your product?

    Example: “Users will refer their friends after trying our app.”

    How to test: Track how your first 100 users found you and how many referred others.

    β˜• Hamed’s Analysis: Why These Two Come First

    I always tell my clients: You need to prove these two things before anything else!

    Why? Because:

    • If you can’t deliver value β†’ No one will keep using your product
    • If you can’t grow β†’ You’ll run out of customers

    Real example: I worked with a client building a meditation app. We spent 2 weeks testing the Value Hypothesis first:

    • Built a simple prototype with just 3 guided meditations
    • Gave it to 50 people for free
    • Tracked: Did they use it more than once?

    Result: Only 8 out of 50 used it twice. This told us the VALUE wasn’t there yet – we needed to improve before worrying about growth!


    Case Study: Village Laundry Service

    One of the most powerful examples in this chapter comes from Village Laundry Service – a startup that offers laundry pickup and delivery.

    The Initial Hypothesis

    The founders believed: “Busy professionals will pay for convenient laundry pickup service.”

    But they didn’t know if this was true!

    The Experiment

    Instead of building an app, hiring drivers, and buying equipment, they did something simple:

    • Created a basic landing page explaining the service
    • Put their personal phone number on it
    • Ran a small Facebook ad targeting local professionals
    • When people called, they personally picked up and delivered laundry!

    Cost: About $200 for ads + their time

    The Results

    Within 2 weeks, they had 40 customers who were willing to pay!

    Key learning: Yes, the Value Hypothesis was validated – people DO want this service.

    Next step: Now they could confidently invest in building proper systems.

    β˜• Hamed’s Analysis: The Power of “Doing Things That Don’t Scale”

    This is one of my favorite startup principles: “Do things that don’t scale!”

    The Village Laundry founders could have spent 6 months building:

    • A mobile app for customers
    • A driver scheduling system
    • Payment processing integration
    • Automated notifications

    But instead, they did it ALL manually first!

    Lesson: Don’t automate until you’ve proven people want it manually!

    My example – Restaurant Website:

    A restaurant owner wanted online ordering on their website. Instead of building complex software, we did this:

    • Week 1: Added a WhatsApp button to their existing website
    • Week 2: Manually took orders via WhatsApp chat
    • Week 3: Tracked how many people ordered vs. just browsed the menu

    Result: 60% of people who clicked WhatsApp actually placed orders! NOW we knew it was worth building proper online ordering.


    How to Design Effective Experiments

    Eric Ries provides a framework for designing experiments that actually teach you something:

    The 5-Step Experiment Design Process

    Step 1: Write Down Your Assumption
    Be specific! “Users want healthy food” is too vague. Better: “Office workers aged 25-40 will order healthy lunches if delivered within 30 minutes.”

    Step 2: Identify What Data Would Prove/Disprove It
    What specific number or behavior would tell you if you’re right? Example: “If 30% of people who see our offer actually order, we’re onto something.”

    Step 3: Design the Minimum Test
    What’s the simplest, fastest way to collect that data? Often it’s way simpler than you think!

    Step 4: Determine Success Criteria in Advance
    Decide BEFORE the experiment: “If we get X result, we continue. If we get Y result, we pivot.”

    Step 5: Run the Experiment and Learn
    Actually do it, collect real data, and make a decision based on what you learned.
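    Steps 2–4 boil down to fixing a success threshold before you collect data, then comparing honestly. Here’s a minimal Python sketch of that decision rule (the numbers and the 30% threshold are hypothetical, not from the book):

```python
def decide(conversions: int, visitors: int, success_threshold: float) -> str:
    """Compare the observed conversion rate against a success
    threshold that was fixed BEFORE the experiment ran."""
    if visitors == 0:
        raise ValueError("no data collected yet")
    observed = conversions / visitors
    return "persevere" if observed >= success_threshold else "pivot"

# Hypothetical run: criterion set in advance at 30%
# ("If 30% of people who see our offer actually order, we're onto something")
print(decide(conversions=42, visitors=120, success_threshold=0.30))  # persevere (42/120 = 35%)
```

    The point of the sketch is Step 4: the threshold is an argument you commit to up front, so the data makes the call, not your mood after seeing it.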

    “The goal of a startup is to figure out the right thing to build – the thing customers want and will pay for – as quickly as possible.”

    (The goal isn’t building – it’s learning what to build!)


    End of Part 1

    In Part 2 we’ll cover:
    • More real-world experiment examples
    • How to avoid common experimentation mistakes
    • Five Key Takeaways from Chapter 4

    📖 Chapter 4: Experiment – Part 2


    More Real-World Experiment Examples

    Let’s look at more examples of how successful companies used experiments to validate their ideas:

    Example 1: Dropbox’s Video MVP

    The Challenge: Dropbox needed to test if people wanted file syncing software, but building it would take months.

    The Experiment: Instead of building the product, founder Drew Houston made a 3-minute demo video showing how it would work.

    The Result: The video was posted on a tech forum and overnight, their beta waiting list went from 5,000 to 75,000 people!

    Key Learning: This validated the Value Hypothesis WITHOUT writing complex code.

    Example 2: Zappos’ First Experiment

    The Hypothesis: “People will buy shoes online without trying them on first.”

    The Experiment: Founder Nick Swinmurn didn’t buy inventory. Instead, he:

    • Took photos of shoes in local shoe stores
    • Posted them on a simple website
    • When someone ordered, he’d go buy the shoes from the store and ship them!

    The Result: People DID buy shoes online! This validated the core assumption before investing in warehouses and inventory.

    ☕ Hamed’s Analysis: What These Examples Teach Us

    Notice a pattern? None of these founders built the “real” product first!

    They all found creative ways to test their core assumption with minimal investment:

    • Dropbox → Made a video (cost: $0, time: 1 weekend)
    • Zappos → Manual fulfillment (cost: minimal, time: immediate)
    • Village Laundry → Personal pickup (cost: gas money, time: their evenings)

    Key principle: Test the RISKIEST assumption first with the CHEAPEST method!

    My example – Fitness Coach App:

    I consulted for a fitness coach who wanted an app for workout plans. Instead of building an app, we tested this way:

    • Week 1: Created a Google Form for workout requests
    • Week 2: Sent workout plans as PDF files via email
    • Week 3: Asked: “Would you pay $10/month for this?”

    Result: 40 out of 50 people said YES! Only THEN did we start building the app.


    Common Experimentation Mistakes to Avoid

    Eric Ries warns against several common pitfalls when running startup experiments:

    ❌ Mistake #1: Analysis Paralysis

    The Problem: Spending months planning the “perfect” experiment instead of running an imperfect one.

    The Fix: Done is better than perfect! Run a quick, imperfect test this week rather than a perfect one in 3 months.

    ❌ Mistake #2: Vanity Metrics

    The Problem: Measuring things that make you feel good but don’t indicate real progress.

    Examples of Vanity Metrics:

    • Total registered users (who never come back)
    • Page views (from people who immediately leave)
    • Social media followers (who don’t buy)

    The Fix: Focus on actionable metrics like retention rate, customer lifetime value, and conversion rates.
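    As a rough illustration, actionable metrics like these come from a raw event log rather than from signup totals. A minimal sketch with made-up data – the event names and the “came back on a later day” definition of retention are my own simplifying assumptions:

```python
from datetime import date

# Hypothetical event log: (user_id, event_name, day) tuples
events = [
    ("u1", "signup",   date(2024, 1, 1)),
    ("u1", "purchase", date(2024, 1, 2)),
    ("u2", "signup",   date(2024, 1, 1)),
    ("u3", "signup",   date(2024, 1, 1)),
    ("u3", "visit",    date(2024, 1, 8)),
]

registered = {user for user, event, _ in events if event == "signup"}
buyers = {user for user, event, _ in events if event == "purchase"}

# "Retained" here means: any activity on a later day than first seen
first_seen = {}
returned = set()
for user, _, day in sorted(events, key=lambda t: t[2]):
    if user not in first_seen:
        first_seen[user] = day
    elif day > first_seen[user]:
        returned.add(user)

conversion_rate = len(buyers) / len(registered)   # paid / registered
retention_rate = len(returned) / len(registered)  # came back / registered

print(f"conversion: {conversion_rate:.0%}, retention: {retention_rate:.0%}")
# conversion: 33%, retention: 67%
```

    Note that “total registered users” appears only in the denominator here – on its own it tells you nothing, which is exactly the vanity-metric trap.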

    ❌ Mistake #3: Ignoring Negative Results

    The Problem: Only paying attention to data that confirms what you already believe.

    The Fix: Negative results are GOOD! They save you from wasting months on the wrong path. Embrace them!

    ❌ Mistake #4: Building Instead of Testing

    The Problem: Building features before validating if anyone wants them.

    The Fix: Always ask: “What’s the SMALLEST thing I can do to test this assumption?”

    ☕ Hamed’s Analysis: The Hardest Lesson

    In my experience, the hardest mistake to avoid is #3 – Ignoring Negative Results.

    Why? Because we get emotionally attached to our ideas!

    Real story: I worked on a project for an online course platform. We believed: “People want live interactive courses more than pre-recorded ones.”

    We tested it and guess what? Only 20% of users showed up for live sessions, but 80% watched recordings later!

    The temptation: “Maybe we just need better marketing!” or “The timing was bad!”

    The reality: Our assumption was WRONG. People wanted flexibility, not live interaction.

    Lesson: When data contradicts your belief, believe the data!


    The Scientific Method for Startups

    Eric Ries emphasizes that Lean Startup applies the scientific method to business:

    Traditional Science vs. Startup Experiments

    Traditional Science:

    • Form hypothesis
    • Design controlled experiment
    • Collect data
    • Accept or reject hypothesis
    • Publish results

    Lean Startup Science:

    • Form hypothesis about customers
    • Build minimum viable test
    • Collect data from REAL customers
    • Validated learning: What did we learn?
    • Decide: Persevere or Pivot

    “Experiments are more than just theoretical inquiries. They are the first products. If this or any other experiment is successful, it allows the manager to get started with his or her campaign: enlisting early adopters, adding employees to each further experiment, and eventually starting to build a product. By the time that product is ready to be distributed widely, it will already have established customers. It will have solved real problems and will offer detailed specifications for what needs to be built.”

    (Your experiments ARE your first products!)


    Quick Action Plan: Design Your First Experiment

    Here’s a practical template you can use RIGHT NOW:

    Your Experiment Design Template

    1. My Riskiest Assumption:
    (Example: “Busy parents will pay $20/month for meal planning service”)

    2. How I’ll Test It:
    (Example: “Create a landing page, run Facebook ads, collect emails of interested people”)

    3. Success Metric:
    (Example: “If 15% of page visitors give their email, the assumption is validated”)

    4. Timeline:
    (Example: “Run for 2 weeks with $200 ad budget”)

    5. What I’ll Do With Results:
    (Example: “If validated → Build MVP. If not → Talk to 10 people who visited but didn’t sign up”)
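    If it helps to keep yourself honest, the five template fields can live in a small record you fill in before the test starts. A sketch in Python, populated with the hypothetical meal-planning example:

```python
from dataclasses import dataclass

@dataclass
class ExperimentPlan:
    """The five template fields, captured before the experiment runs."""
    riskiest_assumption: str
    test_method: str
    success_metric: str
    timeline: str
    next_steps: str

# Hypothetical values matching the template example above
plan = ExperimentPlan(
    riskiest_assumption="Busy parents will pay $20/month for meal planning",
    test_method="Landing page + Facebook ads, collect emails of interested people",
    success_metric="15% of page visitors give their email",
    timeline="2 weeks, $200 ad budget",
    next_steps="Validated -> build MVP; not validated -> interview 10 visitors",
)
print(plan.success_metric)
```

    Writing the plan down as data, not prose, makes it harder to quietly move the goalposts after the results come in.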

    ☕ Hamed’s Analysis: Start THIS WEEK

    Don’t wait for the “perfect” experiment design. Here’s what I tell every client:

    Pick ONE assumption and test it THIS WEEK with whatever you have!

    Examples of experiments you can run this week:

    • Post about your idea on social media and count how many people comment asking for more info
    • Create a Google Form describing your product and ask: “Would you pay $X for this?”
    • Call 10 potential customers and describe your idea – count how many say “That’s interesting!”
    • Make a simple landing page on Carrd (free!) and share it in relevant online communities

    Remember: An imperfect experiment THIS WEEK is worth more than a perfect one in 3 months!


    Five Key Takeaways from Chapter 4

    ✅ Key Takeaway #1: Every Startup is an Experiment

    Your startup is not just a business – it’s a scientific experiment to test if you can build something customers want and will pay for.

    ✅ Key Takeaway #2: Test Value BEFORE Growth

    First prove your product delivers value (Value Hypothesis), THEN worry about how to scale it (Growth Hypothesis). Don’t do it backwards!

    ✅ Key Takeaway #3: Do Things That Don’t Scale

    Manual processes are GOOD in the beginning! They let you test assumptions quickly without building complex systems. Automate later!

    ✅ Key Takeaway #4: Design Before You Build

    Before running an experiment, write down: (1) Your assumption, (2) How you’ll test it, (3) What success looks like, (4) What you’ll do with the results.

    ✅ Key Takeaway #5: Embrace Negative Results

    Negative results are NOT failures – they’re valuable learning! They save you from wasting months on the wrong path. Celebrate them!


    🎉 Chapter 4 Complete!

    You now understand how to turn your startup ideas into scientific experiments!

    Next up: Chapter 5 – Leap
    (How to identify and test your leap-of-faith assumptions)

  • πŸ“– Chapter 3: Learn

    📖 Chapter 3: Learn

    The Lean Startup – Validated Learning: The Real Measure of Progress


    Overview: What is Validated Learning?

    In the previous chapters, we learned about Startups and MVPs. But now the critical question is: How do we know we’re making real progress?

    Chapter 3 is all about Validated Learning – learning that is backed by real data and evidence, not just assumptions or gut feelings!

    What is Validated Learning?

    Validated Learning means learning through scientific experimentation – not just building products, but understanding whether customers actually want what we’re building.

    The key difference: Instead of saying “We have 1,000 users,” we should ask: “Are these users actually using our product and getting value from it?”


    Case Study: IMVU’s Painful Lesson

    Eric Ries (the book’s author) shares the real story of IMVU – an avatar-based chat platform he co-founded.

    The Initial Mistake

    The IMVU team spent the first 6 months building a “complete” product – with many features, beautiful design, and everything they thought customers would want.

    Result: When they launched, nobody used it!

    The Key Lesson

    After this failure, they realized they needed to build faster and test sooner. So they built a simple MVP and put it in front of real users.

    Result: They learned what users actually wanted – and it was something they never expected!

    ☕ Hamed’s Analysis: Why This Matters

    I’ve seen this mistake countless times – in consulting projects, clients come and say: “I want to build an app with 20 features!”

    The first question I always ask: “Have you talked to real users? Have you tested these features?”

    Usually the answer is “no”!

    Core lesson: Before you invest time and money, you must test your assumptions – even with just a video, a landing page, or a very simple version.


    The Build-Measure-Learn Feedback Loop

    The heart of Lean Startup is a simple but powerful loop:

    The Build-Measure-Learn Cycle

    1. Build: Create a simple MVP that tests your hypothesis

    2. Measure: See real user reactions – not what they say, but what they actually do!

    3. Learn: Extract insights from the data and decide: continue, pivot, or stop

    Repeat: Keep cycling until you achieve Product-Market Fit!
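    The loop itself can be sketched in a few lines of Python. Everything here is a stand-in: build, measure, and learn are whatever your team actually does, and the stubbed signup rates are invented for illustration:

```python
def build_measure_learn(build, measure, learn, max_cycles=10):
    """Run the loop until learn() returns something other than
    'persevere': each cycle builds an MVP, measures real behavior,
    and decides what to do next."""
    for cycle in range(1, max_cycles + 1):
        mvp = build()
        data = measure(mvp)
        decision = learn(data)
        if decision != "persevere":
            return cycle, decision
    return max_cycles, "persevere"

# Hypothetical stubs: the signup rate improves each cycle until it clears 15%
rates = iter([0.05, 0.09, 0.17])
cycle, decision = build_measure_learn(
    build=lambda: "landing page",      # Build: the simple MVP
    measure=lambda mvp: next(rates),   # Measure: observed signup rate
    learn=lambda r: "persevere" if r < 0.15 else "validated",  # Learn: decide
)
print(cycle, decision)  # 3 validated
```

    The design choice worth noticing: the loop has no notion of features shipped, only of decisions made – which is exactly the chapter’s definition of progress.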

    “The lesson of the MVP is that any additional work beyond what was required to start learning is waste, no matter how important it might have seemed at the time.”

    (Any extra work beyond the MVP is wasted resources – no matter how important it seems!)


    Value vs Waste: How to Measure Real Progress

    One of the biggest challenges for startups is: How do we measure real progress?

    ❌ Wrong Measurement (Vanity Metrics)

    Example: “We have 10,000 registered users!”

    Problem: How many of them actually use the product? How many pay?

    These are Vanity Metrics – numbers that look good on paper but teach you nothing!

    ✅ Right Measurement (Actionable Metrics)

    Better example: “Out of 100 registered users, 30 used it in the first week, and 10 paid for it.”

    These are Actionable Metrics – you can learn from them and make decisions!

    ☕ Hamed’s Analysis: Practical Example – Online Restaurant

    Imagine you want to launch a food delivery app (like UberEats).

    Common mistake: Spend 3 months coding, complete the app, launch it, and find out nobody orders!

    Lean approach:

    • Week 1: Build a simple landing page: “Order healthy food from local restaurants – Coming soon!” + email form
    • Week 2: Run ads and see how many people sign up (if fewer than 50 do, maybe it’s not a good idea!)
    • Week 3: Call those 50 people and manually take food orders for them (yes, manually! No app!)
    • Learning: Now you know if people actually want this – before spending 3 months building!

    Key point: Validated Learning means getting maximum learning with minimum cost!


    End of Part 1

    In Part 2 we’ll cover:
    • How to implement Validated Learning in your business
    • More real examples from successful startups
    • Five Key Takeaways

    📖 Chapter 3: Learn – Part 2


    How to Implement Validated Learning in Your Business

    Now that we understand what Validated Learning is, let’s explore how to actually implement it in real projects.

    The 4-Step Process for Validated Learning

    Step 1: State Your Hypothesis
    Write down what you believe about your customers. Example: “Small business owners need a simple accounting tool.”

    Step 2: Design a Minimum Test
    What’s the smallest thing you can build to test this? Maybe just a landing page + email signup.

    Step 3: Run the Experiment
    Put your MVP in front of real users and observe their actual behavior (not just their words!).

    Step 4: Evaluate and Learn
    Look at the data: Did people behave as you expected? If yes, continue. If no, pivot or stop.


    Real Example: How Zappos Validated Learning

    Let’s look at one of the most famous examples from the book – Zappos!

    The Hypothesis

    Nick Swinmurn (Zappos founder) believed: “People will buy shoes online if they have a good selection and easy returns.”

    But he didn’t know if this was true!

    The MVP Test

    Instead of building a warehouse, hiring staff, and creating complex software, Nick did something brilliant:

    • He went to local shoe stores
    • Took photos of their shoes
    • Posted them on a simple website
    • When someone ordered, he’d go buy the shoes from the store and ship them himself!

    Result: He learned that people WOULD buy shoes online – all with near-zero investment!

    ☕ Hamed’s Analysis: The Power of Manual Testing

    This Zappos example is pure genius! Why?

    Because Nick validated his core assumption BEFORE building anything expensive!

    I use this exact approach with my consulting clients:

    Example – Fitness Coaching App:
    A client wanted to build an app where trainers create workout plans for clients.

    Instead of coding for 4 months, we did this:

    • Week 1: Created a Google Form for trainers to submit plans
    • Week 2: Manually sent these plans to clients via email
    • Week 3: Asked clients: “Would you pay $20/month for this service?”

    Result: 8 out of 10 said yes! NOW we knew it was worth building the app.

    Total cost: $0 (just our time!) vs. $15,000 to build the full app first.


    Vanity Metrics vs. Actionable Metrics

    One of the most important lessons in this chapter is understanding the difference between metrics that look good and metrics that actually teach you something.

    ❌ Vanity Metrics (Don’t Use These!)

    • Total registered users – tells you nothing about engagement
    • Total page views – could be bots or people who left immediately
    • Social media followers – means nothing if they don’t engage

    Why they’re dangerous: They make you feel good but don’t help you make real decisions!

    ✅ Actionable Metrics (Use These Instead!)

    • Active users (not just registered) – shows real engagement
    • Conversion rate – how many visitors actually buy or sign up?
    • Retention rate – do users come back after the first use?
    • Customer Lifetime Value (CLV) – how much revenue does each customer generate?

    Why they’re powerful: You can make real business decisions based on these numbers!

    ☕ Hamed’s Analysis: Real Example from a Clothing Store Project

    I once worked with a client who ran an online clothing store. They were excited because they had:

    “5,000 Instagram followers and 2,000 website visitors per month!”

    But when I dug deeper:

    • Only 50 people actually bought something (2.5% conversion)
    • Of those 50, only 5 came back to buy again (10% retention)

    The real problem: Their products were too expensive, and customer service was slow.

    We learned this by tracking actionable metrics – not just vanity numbers!

    Fix: We tested lower prices with a small group (A/B test) and improved response time.

    Result: Conversion jumped to 8%, retention to 35%!


    The Innovation Accounting Framework

    Eric Ries introduces a concept called Innovation Accounting – a way to measure progress when building something new.

    The 3 Stages of Innovation Accounting

    Stage 1: Establish the Baseline
    Use your MVP to collect real data. Example: “10% of visitors sign up for our email list.”

    Stage 2: Tune the Engine
    Make small improvements and track changes. Example: “After improving the signup form, 15% now sign up.”

    Stage 3: Pivot or Persevere
    If your improvements are working, keep going. If not, it’s time to pivot (change direction).
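    The three stages reduce to one honest comparison: did tuning move the metric meaningfully above the baseline? A minimal sketch – the 2-point minimum lift is an invented threshold for illustration, not something the book prescribes:

```python
def pivot_or_persevere(baseline: float, tuned: float, min_lift: float = 0.02) -> str:
    """Stage 3 sketch: persevere if the tuned metric beats the
    baseline by at least min_lift (absolute), else consider a pivot."""
    return "persevere" if tuned - baseline >= min_lift else "consider pivot"

# The chapter's example numbers: 10% baseline signup rate, 15% after tuning
print(pivot_or_persevere(baseline=0.10, tuned=0.15))  # persevere
```

    Whatever threshold you pick, pick it before Stage 2 – otherwise any tiny wiggle in the metric can be rationalized as “the engine is tuning.”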

    “Learning is the essential unit of progress for startups. The effort that is not absolutely necessary for learning what customers want can be eliminated.”

    (Learning is the only real measure of progress – everything else is waste!)


    Key Takeaways from Chapter 3

    🎯 5 Essential Lessons

    1. Progress = Validated Learning, Not Activity
    Don’t measure success by how much you built, but by how much you learned about your customers!

    2. Build-Measure-Learn is Your Core Loop
    Keep this cycle running as fast as possible – the faster you learn, the faster you succeed.

    3. Test Your Riskiest Assumption First
    Don’t waste time on features – test the one thing that could kill your business if you’re wrong!

    4. Use Actionable Metrics, Not Vanity Metrics
    Track numbers that actually help you make decisions, not just numbers that look impressive.

    5. Manual Testing is OK (and Smart!)
    Like Zappos buying shoes manually – you don’t need perfect systems to test your idea!


    💡 Hamed’s Final Thought

    The biggest mistake I see entrepreneurs make is falling in love with their idea instead of falling in love with learning about their customers.

    Your idea will change – it always does! But if you master Validated Learning, you’ll always know which direction to move next.

    Remember: Every dollar and hour you invest should buy you learning – not just features!


    End of Chapter 3

    Next Chapter: Experiment
    In Chapter 4, we’ll learn how to design and run effective experiments to test your hypotheses!