Discussion about this post

Attention Maps/// Mr A

Really enjoyed your approach to moving beyond prompt wrestling; the Four Foundations framework is much more sophisticated than the usual "better prompting" advice floating around.

I'm curious about the longer-term viability of this model, especially with infrastructure costs scaling up so dramatically. When you're advising people to upload their entire corpus and build these deep feedback loops, how are you thinking about the compute economics behind personalized AI assistants?

Jensen Huang of Nvidia mentioned we'll need 100x more compute power for next-gen reasoning models. Given that data centers already account for 44% of new electricity demand growth, I'm wondering when you think truly personal AI assistants will be available that aren't just API access to cloud LLMs with rising usage costs.

Your training methodology is excellent for creating sophisticated AI relationships, but I'm trying to understand the business model's sustainability when the underlying compute becomes far more expensive. Are you seeing this as a premium service tier, or do you think local inference will make personal assistants economically accessible for individual creators?

The irony is that the better we get at training these systems (using methods like yours), the more compute-intensive and expensive they become to run. Curious how you're thinking about that trajectory for your subscribers.

Would love your take on the economics side.

Tope Olofin

When I read your introduction, I screamed, “That’s so me!”

I would like a copy of your ebook, but I'm not sure the link is directing me to the right place.
