Really enjoyed your approach to moving beyond prompt wrestling; the Four Foundations framework is much more sophisticated than the usual "better prompting" advice floating around.
I'm curious about the longer-term viability of this model, especially with infrastructure costs scaling up so dramatically. When you're advising people to upload their entire corpus and build these deep feedback loops, how are you thinking about the compute economics behind personalized AI assistants?
Jensen Huang from Nvidia mentioned we'll need 100x more compute power for next-gen reasoning models. Given that data centers already account for 44% of new electricity demand growth, I'm wondering when you think truly personal AI assistants will be available that aren't just API access to cloud LLMs with rising usage costs.
Your training methodology is excellent for creating sophisticated AI relationships, but I'm trying to understand the business model sustainability when the underlying compute becomes way more expensive. Are you seeing this as a premium service tier, or do you think local inference will make personal assistants economically accessible for individual creators?
The irony is that the better we get at training these systems (using methods like yours), the more compute intensive and expensive they become to run. Curious how you're thinking about that trajectory for your subscribers.
Would love your take on the economics side.
Really appreciate the depth of this question—you're zeroing in on a conversation we need to have more often.
I do believe compute costs will keep rising in the short term, but like any tech curve, maturation tends to create stratified solutions. More efficient inference, cheaper distillations, and creative architectural shifts will emerge to serve different customer segments. I feel like that's inevitable.
There will be premium, full-stack AI co-pilots. Then there will be leaner models built for speed and budget. Either way, I believe these methodologies can be adapted to whatever capabilities are available to you.
But even if that doesn’t happen? You can always opt out. AI is powerful, but it’s still just a tool. I don’t believe in dependence, I believe in augmentation. At Write10x, I’ve always advocated for sharpening your human edge alongside AI fluency.
Creators who can write, think, edit, and also prompt are the ones who will thrive no matter where compute costs land.
When I read your introduction, I screamed, “that’s so me”!
I would like a copy of your ebook but not sure the link is directing me to the right place.
Oh no! These articles were created pre-launch, so they're not linked to the product page yet. I'll still have to modify the CTAs here :)
You can find the ebook on this link: presbitero.gumroad.com/l/wfnnnf
It's currently at 50% off as a promotional price.
Thanks. I will get on this now.
Hope it helps!
I like how you broke down the four foundations. It’s such a practical way to make AI work for us, not against us!
Thank you! When you ground your process in clarity, context, and constraints, AI stops being a guessing game and starts becoming a trusted collaborator.
James, fantastic post! I resonate with your approach of building personalized AI assistants instead of endlessly wrestling with prompts.
Over the past months, I've been exploring a complementary method called Reflective Prompting, which emphasizes intentionality, meta-cognition, humility, and iterative alignment in prompt interactions. Your idea of training an AI assistant to deeply know your voice, style, and frameworks is a perfect complement to Reflective Prompting.
A trained assistant can simplify and shorten the prompts needed, while Reflective Prompting helps maintain clarity, alignment, and creativity in complex or creative interactions.
Combining both methods seems like the ideal way forward: thoughtful training upfront combined with reflective alignment throughout. I'd love to hear your thoughts, do you see potential in integrating these two approaches?
Definitely! The Reflective Prompting method seems to align really well with my philosophy of mindful AI use. I love the idea of Reflective Prompting as the meta-layer that keeps us intentional throughout the interaction, not just at the start. When you combine that with a well-trained assistant that understands your voice and patterns, the result isn't just faster, it's richer, more grounded, more you.
James, I like the idea of a compounding creative asset. Creating iterative feedback loops is a cool idea, and I use it from time to time. But do you have additional ideas on how to improve the system or the AI's performance (assuming the same underlying model)? Given that AI memory is a limitation, I have found that improvements are either non-existent or don't stick for me.
By "improving performance," what exactly do you mean? The fix depends heavily on what you hope to achieve. As for persistent memory, that's exactly what a custom GPT solves, in my opinion. It front-loads all the context (including core memories, non-negotiables, processes, improvements, etc.) so you won't have to add it to the prompt every time.
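The front-loading idea can be sketched roughly like this. This is a minimal illustration, not any particular vendor's API; the context categories and wording are made up for the example:

```python
# Sketch: front-load standing context so every session starts with it,
# keeping the per-message prompt short. The categories below are
# illustrative placeholders, not a real product's schema.

PERSISTENT_CONTEXT = """
Voice: conversational, direct, minimal jargon.
Non-negotiables: always end posts with a question for readers.
Process: draft -> self-critique -> revise once -> final output.
"""

def build_messages(user_prompt: str) -> list[dict]:
    """Compose a chat payload with the standing context up front,
    so the user prompt itself stays short."""
    return [
        {"role": "system", "content": PERSISTENT_CONTEXT.strip()},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Rewrite this intro in my voice: ...")
```

A custom GPT's instruction field plays the same role as the `system` message here: the context is supplied once, up front, instead of being repeated in every prompt.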
I understand. What I meant is: my preferences, any changes I make to a script, and any AI idiosyncrasies in its behavior that I identify — all of that, once I tweak it or give feedback, should be known to the AI assistant going forward. I don't think that's solved even by tweaking the custom GPT prompt, though yes, there are some gains and better performance.
Ah yeah, fair enough. There'll always be imperfections. My personal philosophy is to get it as close to usable as possible, but never expect perfection. It's a lesser reflection of me with some other abilities, and heck, I myself am not perfect, so I expect my AI assistant to be all the more imperfect. But making it usable already gives a lot of advantages.