AI today feels vast—too broad, too powerful—and approaching it can be overwhelming. I certainly feel it whenever I want to start something new, the overwhelm looming in the distance.
The answer I’ve found is this Think, Build, Brand segment of my newsletter.
Here, we look at how actual, high-impact builders and thinkers are using AI in real workflows and life situations. It’s not just a bunch of prompts or no-code tool workflows; it’s philosophies, experiences, and ways of thinking.
I believe that AI can be a force for good, but only if we learn how to use it properly. In this way, the TBB segment provides concrete examples of the Write10x philosophy of mindful AI integration.
What follows are powerful ideas we learned in September, distilled into lessons you can apply right away.
Table of Contents
Think
1. Map & compass before models (purpose > tool)
2. Direct the collaboration (reject the authenticity vs. efficiency trap)
3. Optimize for humans, not scoreboards
4. Vision before action
Build
5. Design for discontinuity (stress-test, don’t trend-chase)
6. The 20/80 reality of AI products
7. Let data diagnose (micro-signals > vanity metrics)
Brand
8. Brand the human outcome (confidence > features)
9. Make “Unpromptability” your moat
10. Ship with a voice check
11. Direction over perfection
Think
Thinking is where every creative process begins.
Before you can build or brand, you need the right mental models, the right questions, and the right perspective. This section captures how our guests are reshaping their thinking with AI—challenging assumptions, clarifying context, and structuring their workflows in smarter ways.
Here are their top insights:
1. Map & compass before models (purpose > tool)
Most people are jumping into AI backwards. They’re starting with the tool and hoping to find a problem to solve with it.
Farida, with her two decades as a business analyst, cuts through this confusion: “Before you even think about using AI, you need a map and a compass.”
Think in systems. Know your “why” and how parts connect before you plug in AI.
The map is your understanding of the current terrain—what’s working, what’s broken, and most importantly, how the pieces fit together. The compass is your purpose, the thing that keeps you oriented when the tech gets shiny and distracting. Without these, you’re just a person with an expensive tool looking for nails to hammer.
Do this next: Run Farida’s Navigation Checklist on your current initiative:
Map Questions:
What’s the current system and how do all the parts connect?
Where are the real bottlenecks versus perceived problems?
Compass Questions:
What’s my actual purpose here (not what I think it should be)?
How will I measure if this AI implementation serves that purpose?
“And” Questions:
How does this AI change the relationships between different parts of my business?
What second- and third-order effects might I be missing?
Discontinuity Questions:
What happens when this AI fails or when the patterns it learned no longer apply?
How do I maintain strategic advantage when competitors have the same tools?
Diagnostic Questions:
What early warning signals will tell me this isn’t working as intended?
How do I separate AI-generated insights from AI-amplified biases?
The questions she provides aren’t comfortable, but that’s the point.
Read more: Why 20 Years of Experience Makes AI Actually Useful
2. Direct the collaboration (reject the authenticity vs. efficiency trap)
The creative world has split into camps. The purists who won’t touch AI. The efficiency zealots who’ve surrendered their voice to the machines. Both are missing the real opportunity.
Daria nails this false choice: “The goal isn’t effortless writing, it’s preserving what makes you irreplaceable while scaling your reach.”
Your job is to stay the author. Let AI accelerate expression—without surrendering voice.
She’s developed a five-phase process that keeps human thinking at the center while using AI strategically. You draft messy and real. AI interrogates your gaps. You provide rich context. AI helps structure. You make the final decisions. It’s collaboration, where you maintain creative control, not abdication where you hand over the keys.
Try Daria’s five-step AI writing approach:
Supercharge ideation by tracking your ideas and clarifying them with AI.
Use powerful AI tools for research (Perplexity, Notebook LM, etc.)
Start with a messy human draft—don’t worry about polish, just get your actual thoughts down.
Then use AI to explore gaps in your logic.
The most important part: finish with Daria’s four tests.
The experience test (could someone without your background write this?).
The voice test (does it sound like you talking?).
The value test (would someone seek out more of your work?).
The action test (can someone implement something specific?).
If it fails any of these, it needs more of you in it.
Read more: How to Use AI for Writing Without Losing Your Voice
3. Optimize for humans, not scoreboards
Claudia learned this the hard way when her language-learning AI suggested “What the f*** are you doing?!” as a common expression to practice with the user’s immigrant mother.
Technically correct. Completely inappropriate.
“Accuracy is just table stakes. You’re building for humans, not scoreboards,” she discovered after three weeks of debugging user flows instead of optimizing models.
Building AI products isn’t just about better algorithms or technical tools. The most important aspects of the user experience are often the ones that are hardest to measure.
Metrics don’t capture trust, cultural sensitivity doesn’t show up in accuracy scores, and user confidence isn’t measured by response correctness. The model can be perfect, and the product can still fail if it doesn’t respect the human context it operates within.
Do this next: Replace one accuracy KPI with a trust metric. Ask questions like:
Would you feel safe letting your grandmother use this?
Does this respect cultural boundaries?
Would you come back tomorrow?
These aren’t technical metrics, but they determine whether your AI product lives or dies in the real world.
Read more: Why Accuracy Isn’t Good Enough for Your AI Product
4. Vision before action
AI needs a North Star, but most people don’t have one.
Jay Corso makes a simple but piercing point: too many builders rush into AI without a vision, and the result is hype-driven projects that don’t last. He says, “Using AI without a vision, or at least a rough idea of what you want to accomplish makes you a slave to AI and erodes critical thinking ability.”
Vision doesn’t mean you need a 10-year roadmap. A general direction is enough: a sense of something you want to explore.
Without vision, AI is just noise amplified; with it, even a rough North Star can keep your builds aligned to purpose.
Actionable step: Write a one-sentence vision for what you want AI to achieve for you—not just what you want it to do. Revisit it each time you add a tool or workflow.
Read more: AI and the Lack of Vision
Read related: Stop Looking for a Niche. Start Looking for a Problem You Can’t Ignore.
Build
Building is where ideas take form.
Once you’ve shaped your thinking, the next challenge is turning it into real assets—tools, products, workflows. This section highlights how today’s creators are learning to build with AI: moving past feature-chasing, orchestrating tools like a team, and creating systems that scale without burning out.
Here’s how our guests are building…
5. Design for discontinuity (stress-test, don’t trend-chase)
AI excels at recognizing patterns. That’s its superpower and its blind spot.
Farida brings her business analyst lens to this: “Use AI to run ‘what-if’ scenarios and stress-test your plans against sudden breaks in the pattern.”
Markets shock. Competitors pivot. Cultural contexts shift. Your AI-powered system that works perfectly in normal conditions becomes a liability when patterns break. The companies that survive aren’t the ones with the best trend analysis—they’re the ones who’ve planned for discontinuity.
AI can help you model these breaks, but only if you ask it to. Most people use AI to extend current patterns into the future. The smart ones use it to find where those patterns might shatter.
Do this next: Run one weekly “what-if” simulation. What if your main traffic source disappeared? What if cultural attitudes shifted? What if your AI vendor changed their policies?
Document your decisions and contingency plans.
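If it helps to make the weekly simulation concrete, here is a minimal sketch in Python. The traffic sources, conversion rate, and revenue figures are all made up for illustration; the point is simply to zero out one assumption and watch how far the break travels downstream.

```python
# A toy "what-if" stress test: illustrative numbers only, not real data.
# Each scenario knocks out one assumption so you can see the downstream impact.

baseline = {
    "newsletter": 4000,   # monthly visits per traffic source
    "search": 2500,
    "social": 1500,
}
conversion_rate = 0.02     # visits -> paying customers
revenue_per_customer = 40  # dollars

def monthly_revenue(traffic: dict) -> float:
    visits = sum(traffic.values())
    return visits * conversion_rate * revenue_per_customer

scenarios = {
    "baseline": baseline,
    "search traffic disappears": {**baseline, "search": 0},
    "social cut in half": {**baseline, "social": baseline["social"] // 2},
}

for name, traffic in scenarios.items():
    rev = monthly_revenue(traffic)
    drop = 1 - rev / monthly_revenue(baseline)
    print(f"{name:>28}: ${rev:,.0f}/mo  ({drop:.0%} drop)")
```

Swap in your own sources and numbers; the exercise matters more than the precision.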
Read more: Why 20 Years of Experience Makes AI Actually Useful
6. The 20/80 reality of AI products
Claudia’s revelation should be a sticky note on every technical founder’s laptop: “Building AI products is 20% AI engineering, 80% everything else.”
She thought she was building a language-learning AI. She ended up building a confidence system for heritage speakers dealing with “English-switching” relatives. The AI was the easy part.
The hard part was everything wrapped around it—user research that revealed the real problem, frontend design that didn’t intimidate beginners, cultural awareness that prevented offensive suggestions, and error handling that maintained trust when things went wrong.
Products solve human problems, and humans are messy, cultural, emotional beings who need more than accurate responses.
Actionable takeaway: Create a non-AI work plan.
Schedule user interviews to understand the problem behind the problem. Design edge-case error flows for when AI says something inappropriate. Build tone guardrails that respect cultural contexts, and anything else you can think of.
That’s what determines if anyone actually uses what you build.
Read more: Why Accuracy Isn’t Good Enough for Your AI Product
Read related: Make This One Thing To Scale Your Thought Leadership
7. Let data diagnose (micro-signals > vanity metrics)
“Failure doesn’t come out of the blue,” Farida warns. “It’s a compilation of tiny, ignored signals.”
Most people use AI to predict future success based on past patterns. Farida suggests flipping this: use AI to diagnose present problems through micro-signals you’re missing.
That slight increase in support tickets about a specific feature. The pattern of users dropping off at a particular step. The correlation between certain phrases in feedback and eventual churn. These tiny denials of your assumptions are trying to tell you something. AI can surface them, but only if you’re looking for diagnosis, not validation.
So, try setting up a diagnostic dashboard that tracks micro-events.
Not the big metrics everyone watches, but the small ones: friction points in user flows, cultural misfires in AI responses, confidence markers in user behavior. Review weekly. Look for patterns in what you’re ignoring. The goal isn’t to predict failure—it’s to catch it while it’s still small enough to fix.
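If you want to prototype this before reaching for a dashboard tool, a small script can do the first pass. The sketch below is a minimal, illustrative example in Python; the event names and the 50% growth threshold are assumptions, not Farida’s actual setup.

```python
# A minimal micro-signal monitor: illustrative event names and thresholds only.
# The idea: count small events weekly and flag anything trending up,
# instead of waiting for the big dashboard metrics to move.

from collections import Counter

# Hypothetical event log: (week number, event name)
events = [
    (37, "checkout_step_abandoned"), (37, "support_ticket_feature_x"),
    (38, "checkout_step_abandoned"), (38, "checkout_step_abandoned"),
    (38, "support_ticket_feature_x"), (38, "ai_reply_flagged_off_tone"),
    (39, "checkout_step_abandoned"), (39, "checkout_step_abandoned"),
    (39, "checkout_step_abandoned"), (39, "ai_reply_flagged_off_tone"),
]

GROWTH_ALERT = 1.5  # flag any signal growing 50%+ week over week

counts = {}  # week -> Counter of event names
for week, name in events:
    counts.setdefault(week, Counter())[name] += 1

weeks = sorted(counts)
for prev, curr in zip(weeks, weeks[1:]):
    for name, n in counts[curr].items():
        before = counts[prev].get(name, 0)
        if before and n / before >= GROWTH_ALERT:
            print(f"week {curr}: '{name}' up {n / before:.1f}x ({before} -> {n})")
        elif not before and n >= 2:
            print(f"week {curr}: new signal '{name}' appeared {n} times")
```

Even something this crude forces the weekly review, which is the real habit Farida is pointing at.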
Read more: Why 20 Years of Experience Makes AI Actually Useful
Brand
Branding is how your work is perceived by the world.
Once you’ve clarified your thinking and built systems that work, the next challenge is making them resonate with others. This section shows how creators are using AI not just to polish their image but to tell real stories, share the messy middle, and connect with audiences through authenticity.
Here’s what our guests reveal…
8. Brand the human outcome (confidence > features)
Claudia’s repositioning teaches us something vital about branding in the AI age. She went from “Data scientist with enterprise ML experience” to “Data scientist learning to build AI products people actually want to use.”
Notice what she’s not talking about? Model architectures. Technical specifications. Feature lists.
Instead, she’s sharing user insights, product decisions, the gap between impressive demos and sticky products. She’s not selling AI capabilities, but documenting the journey of making AI useful for actual humans.
Do this next: Publish weekly build notes focused on user insights and product choices, not just model tricks.
Share the human side: the conversation where a user explained what they really needed, the decision to prioritize cultural sensitivity over response speed. Show the iteration that made the product safer, not smarter.
This transparency builds trust in a way that technical specifications never will.
Read more: Why Accuracy Isn’t Good Enough for Your AI Product
9. Make “Unpromptability” your moat
“Nobody else has lived exactly what you’ve lived,” Daria reminds us. “That accumulated wisdom is your competitive moat.”
This connects directly to the Write10x philosophy and its emphasis on Unpromptability. AI is powerful, and with it, someone can copy your tone, ideas, or approach without even knowing who you are. If that happens, you’ve become promptable.
How to prevent this? Lean on your human side.
Your lived experience is the defensible core.
AI should amplify it, not replace it. The stories only you can tell. The failures only you’ve navigated. The insights only you’ve earned. These can’t be prompted into existence, no matter how sophisticated the model.
To implement, add an Unpromptable QA step to your content process.
Before publishing, ask: Would this piece still sound like me without my name on it? If the answer is no, you need to add more story, more stakes, more specificity. Share the client conversation that changed your perspective. Include the failure that taught you this lesson.
Make it so distinctly yours that no prompt could replicate it.
Read more: How to Use AI for Writing Without Losing Your Voice
Read related: 5 Rules for Mindful AI Writing
10. Ship with a voice check
Daria’s final filter is elegant: “Make the final piece authentically yours.”
But she doesn’t leave this to chance. She’s systematized authenticity through her four tests, creating a repeatable process for ensuring her voice survives the collaboration with AI.
This isn’t about rejecting AI assistance. It’s about maintaining creative control throughout the process. You can use AI to explore, structure, and polish—but the final decision about what ships is always human.
Do this next: Paste your draft into your LLM and ask it to self-evaluate against your voice tests using past writing as reference. Does it match your typical rhythm and tone? Does it include the kind of examples you’d naturally use? Does it reflect your actual expertise? Let AI help you diagnose where you’ve lost yourself, then revise before you publish.
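If you’d rather script this step than paste drafts in by hand, here is a minimal sketch using the OpenAI Python SDK. The model name, rubric wording, and file names are placeholders; this is one possible way to automate the check, not Daria’s own tooling.

```python
# A minimal sketch of scripting the voice check with the OpenAI Python SDK.
# Assumptions: OPENAI_API_KEY is set in your environment; the model name and
# rubric wording are placeholders, not Daria's actual process.

from openai import OpenAI

client = OpenAI()

VOICE_TESTS = """Evaluate the DRAFT against the REFERENCE samples on four tests:
1. Experience: could someone without this author's background have written it?
2. Voice: does it sound like the author talking?
3. Value: would a reader seek out more of this author's work?
4. Action: can a reader implement something specific?
For each test, answer pass/fail and quote the weakest passage."""

def voice_check(draft: str, reference_samples: list[str]) -> str:
    reference = "\n\n---\n\n".join(reference_samples)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whatever model you prefer
        messages=[
            {"role": "system", "content": VOICE_TESTS},
            {"role": "user", "content": f"REFERENCE:\n{reference}\n\nDRAFT:\n{draft}"},
        ],
    )
    return response.choices[0].message.content

# Example: compare a new draft against two past posts you wrote yourself.
# print(voice_check(open("draft.md").read(),
#                   [open("post1.md").read(), open("post2.md").read()]))
```

The reference samples matter more than the prompt: feed it writing you produced without AI so the comparison has something honest to anchor on.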
Read more: How to Use AI for Writing Without Losing Your Voice
11. Direction over perfection
Jay’s writing points to a paradox: you need vision to guide your AI work, but you don’t need the full map to begin.
He admits he only has a general direction—photography effects with AI—but that’s enough to keep him moving. The clarity comes from the walk, not from standing still. For creators, this is liberating: you don’t need a perfect map of every step, just a compass pointing forward.
Let the work reveal the terrain as you go. The journey itself will shape the destination.
Do this: Write down your current direction in one line. Even if it’s rough, let that guide you, and refine it as your project unfolds.
Read more: AI and the Lack of Vision
Cross-cutting trends we’re seeing
These are patterns that cut across all the voices we’ve heard this month. They show how these insights all weave together. Think of this section as a distillation of the deeper shifts happening right now, bonus lessons that I believe will help you see the road ahead more clearly.
From accuracy to trust
The old model was simple: make AI more accurate, success follows. September’s builders demolished this assumption.
Claudia discovered this viscerally when her technically correct AI suggested profanity for family conversation practice. “Accuracy is just table stakes,” she realized, after watching users abandon a linguistically perfect but culturally tone-deaf product. Her pivot wasn’t toward better language models—it was toward understanding why people felt their relatives were “English-switching” them, why confidence mattered more than conjugation.
Daria codified this into her four-test framework: experience, voice, value, and action. Each test moves beyond correctness toward something harder to measure but impossible to fake—an authentic connection. Meanwhile, Jay warns that without this foundation, it “makes you a slave to AI.”
Trust, then, isn’t just accuracy plus features. It’s accuracy filtered through cultural awareness, shaped by genuine purpose, and delivered with respect for human context.
From tool-chasing to purpose-driven systems
The gravitational pull of new AI tools is strong. Every week brings another model, another capability, another shiny object. But September’s contributors resist this pull through disciplined focus on purpose.
Farida’s “map and compass” framework isn’t metaphorical—it’s operational. Before using any AI tool, she recommends mapping your current system (where are the actual bottlenecks?) and calibrating your compass (what’s your true purpose?). She shares how businesses fail not from picking the wrong AI, but from not understanding the “and”—those critical connections between system components that no tool can automatically fix.
Jay echoes this through his evolving vision for AI-enhanced photography. “I am being guided to AI from an idea of creating entertainment that makes you a participant, rather than a mindless spectator.”
Notice the sequence: vision comes first, followed by tool selection. Even when his direction is admittedly vague, it’s still direction—a rough idea strong enough to resist the tool-first mentality.
This perfectly demonstrates Write10x’s Mission → Marketing → Community → Assets → Impact chain. The mission shapes everything downstream.
From binary (AI vs human) to directed collaboration
The exhausting debate—will AI replace human creativity?—misses the point entirely.
September’s builders show a third way: directed collaboration where humans maintain creative control while AI amplifies execution.
Daria’s five-phase writing process is the blueprint. She starts with messy human drafts, uses AI to identify gaps, provides rich context for collaboration, allows AI to help structure, and then makes final human decisions.
“The goal isn’t effortless writing, it’s preserving what makes you irreplaceable while scaling your reach.” The human remains the author; AI accelerates expression.
Jay reinforces this through his distinction between using AI as an enhancer versus a crutch. He describes AI as an exoskeleton—it amplifies human strength but requires human direction. Even in his experimental vibe coding projects, where he doesn’t know the exact outcome, he maintains a creative vision.
The collaboration requires constant human presence, not occasional human oversight.
From hidden work to building-in-public
The old instinct was to hide AI usage, as if it diminished creative legitimacy. September’s builders flip this, making their process transparent and their learning public.
Claudia completely reframed her professional identity: from “Data scientist with enterprise ML experience” to “Data scientist learning to build AI products people actually want to use.”
She’s not hiding her struggles or pretending expertise she hasn’t earned. Instead, she shares the unglamorous reality. This transparency about the messy middle builds more credibility than any polished case study.
Jay demonstrates similar vulnerability, openly admitting his AI vision remains unclear: “I don’t know yet what that is, but I continue to get involved in conversations with AI as the subject.”
Rather than waiting for perfect clarity, he’s documenting the exploration itself. This willingness to share incomplete journeys proves that process has become the new proof of expertise.
From speed to resilience
The race to ship fast with AI is seductive. September’s builders choose a different race: building systems that survive when patterns break.
Farida’s framework explicitly designs for discontinuity. Her approach treats resilience as a design principle, not an afterthought. Every system should be tested against market shocks, cultural shifts, and competitive pivots.
Jay adds another dimension to resilience: allowing vision to evolve through practice. “Whether I succeed in creating this app I imagine or not is not relevant. I am going to do something with AI that will either give me clarity, or point me to a possibility I haven’t seen.”
This isn’t failure tolerance—it’s evolution by design. The vision guides but doesn’t constrain, adapts but doesn’t abandon purpose.
Speed without resilience is just faster failure.
The work starts now
The September TBB set a clear hierarchy: purpose first, human outcomes second, tools third. In the age of AI, that hierarchy is a practical necessity.
The builders who shared their insights aren’t theorizing about AI—they’re using it daily, learning from failures, adjusting their approach based on honest user feedback.
So, what can you do with these deep insights?
Adopt one move this week, then the next.
Notice how your work becomes harder to copy because it’s rooted in your specific experience. It becomes easier to love because it solves real human problems. It becomes built to last because you’ve stress-tested it against discontinuity.
The builders in September’s roundup aren’t just using AI.
They’re shaping how we think about it, build with it, and brand ourselves authentically alongside it. They’re proving that the future isn’t AI replacing human creativity—it’s thoughtful collaboration, one where humans direct and AI amplifies.
If there’s one thing you take away from September, it’s this: In the age of AI, being human isn’t a limitation. It’s your competitive advantage.
Make your work irreplaceably yours, intelligently amplified.