3 hours has never flown by so fast in my life 😳 Using these prompts and engaging in AI this way is so exciting and fun when you're using it to expand your thinking.
Yes, time compression is a thing
Will be following along and hope to join this soon as I venture out from my monk mode :)
While scratching the surface with AI, I was wondering: before I set up my agent and commit, do you have a recommended LLM? thanks!
They’re good for different things. What I might suggest is trying the free versions so you get a sense of what each is good at. The problem, though, is that the models are changing constantly - for example, OpenAI's GPT-4o is quite different from 4.5, and the latest version of Gemini is loads more engaging than the previous one. So it’s a bunch of moving targets.
Also, you can use a service like TypingMind, which is a UI that gives you access to all of them (including the free ones) via the API…
Mike, again thanks for the Meraki-filled notes here. As it pertains to using AI to create your "Genius Mind", as I think you put it: would an app like TypingMind allow us to curate and build those deeper relationships to the AI knowledge base that you speak of in your first two articles, or are you using one AI model to build those deeper connections to get to the gold? cheers, dakota
I had to look Meraki up - it's what I'm attempting here, so thank you!
Yes, TypingMind would be great for this. It has many more features than ChatGPT or Claude. Something to watch out for, though...
One of the downsides of using TypingMind is that you're paying for (and tracking) every token that goes in and comes out. In my case this is good, because I'm learning much more about how it works "under the hood".
It's not much money, but if you have a load of files and a large Genius File, the number of "input" tokens is vast every time you send a message.
For example, I was planning my final Strategic Evolutions workshop (of the 1st group). I had all the transcripts of the previous 2-3 hour sessions in a TypingMind project, along with my Genius File. With those files attached, every message I sent was 150k tokens! That was okay, because I only needed it to create a summary of what had happened; once I had the summary, I could delete the full transcripts (it then had the "knowledge" of the entire workshop).
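To make that arithmetic concrete, here's a rough sketch of how you could estimate the input bill before sending anything. It uses OpenAI's tiktoken tokenizer as an approximation (Claude tokenizes a bit differently, so treat it as ballpark), and the file names are just placeholders for your own transcripts and Genius File.

```python
# Rough estimate of how many input tokens a project's files add to EVERY message.
# Uses tiktoken (OpenAI's tokenizer) as an approximation -- other models tokenize
# slightly differently, so treat the result as a ballpark figure.
from pathlib import Path

import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# Placeholder file names -- swap in your own transcripts / Genius File.
project_files = ["genius_file.md", "session_1_transcript.txt", "session_2_transcript.txt"]

total = 0
for name in project_files:
    text = Path(name).read_text(encoding="utf-8")
    n = len(enc.encode(text))
    print(f"{name}: ~{n:,} tokens")
    total += n

print(f"Baseline input tokens sent with every single message: ~{total:,}")
```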
BUT different models have different usage limits through the API. For example, Claude 3.7 Sonnet (my preferred model for workshop planning) has a limit of 40k input tokens per minute and 16k output tokens. I was well within the output limit but blasted through the input one, so I only got a couple of messages in before I had to wait (but I knew this - so I'd planned for it).
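For anyone calling the API directly rather than through TypingMind, that limit shows up as a rate-limit error you can catch and wait out. Here's a minimal sketch with the Anthropic Python SDK - the model alias and retry timing are assumptions, and TypingMind handles this for you, so it's only relevant if you roll your own:

```python
# Minimal wait-and-retry around an Anthropic API call, for when a big project
# context blows through the input-tokens-per-minute limit.
# Assumes the official `anthropic` SDK and an ANTHROPIC_API_KEY in the environment.
import time

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def send(prompt: str, context: str) -> str:
    for attempt in range(5):
        try:
            response = client.messages.create(
                model="claude-3-7-sonnet-latest",  # assumed alias; check the current model list
                max_tokens=4000,
                messages=[{"role": "user", "content": context + "\n\n" + prompt}],
            )
            return response.content[0].text
        except anthropic.RateLimitError:
            # Blew through the per-minute input limit -- wait for the window to reset.
            wait = 60
            print(f"Rate limited, waiting {wait}s (attempt {attempt + 1}/5)...")
            time.sleep(wait)
    raise RuntimeError("Still rate limited after 5 attempts")
```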
Now, there is another way of creating context with TypingMind - Retrieval-Augmented Generation (RAG) - which I don't believe would have the same large-file issue. But it's more technical to implement, so it's taking a while to get my head around it. That might solve this problem, although I'd still have to keep the Genius File in projects, as it's constantly changing - AFAIK RAG is where you keep "static" information the LLM might need.
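For the curious, the core idea of RAG is small enough to sketch: you chunk and embed your static documents once, then at question time you retrieve only the few most relevant chunks to send as context, instead of the whole library. The sketch below uses OpenAI's embeddings endpoint with plain cosine similarity; the file name and naive chunking are placeholders, and real setups (TypingMind's included) layer a proper vector store on top.

```python
# Bare-bones retrieval-augmented generation: embed static chunks once, then at
# question time send ONLY the most relevant chunks as context instead of everything.
# Uses the OpenAI embeddings endpoint; chunking here is naive (split on blank lines).
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# 1) Index the "static" knowledge once (placeholder file name).
chunks = Path("workshop_notes.md").read_text(encoding="utf-8").split("\n\n") if (Path := __import__("pathlib").Path) else []
chunk_vectors = embed(chunks)

# 2) At question time, retrieve the top-k most similar chunks.
def retrieve(question: str, k: int = 3) -> list[str]:
    q = embed([question])[0]
    sims = chunk_vectors @ q / (
        np.linalg.norm(chunk_vectors, axis=1) * np.linalg.norm(q)
    )
    return [chunks[i] for i in np.argsort(sims)[::-1][:k]]

context = "\n\n".join(retrieve("What did we agree about the workshop structure?"))
# `context` is now a few hundred tokens of relevant material, not 150k tokens of everything.
```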
So you can probably tell - with the consumer-facing apps (ChatGPT and Claude), you can be more relaxed about the number of project files and their size, because both of them tell you when you've got too much. Plus I presume they have something in place in the app itself to deal with this situation. And I guess that with these apps, heavy token users like me are being subsidised by other users (who use them like Google) and by VCs.
So if all you want is to use this method, I might suggest either ChatGPT or Claude. Although even on Claude's paid plan, while you're not tracking/paying for each token, you'll run out of usage quite quickly with longer chats. So if you're expecting to have long conversations (which is where I get the most value), go for ChatGPT. While Claude's output is "better" (imo), the usage limits make it much less useful.
Thanks, Mike, for the thoughtful response. I'm leaning into what you are saying. As a story wayfinder and multimedia creative, I'm just getting under the hood, as you say, and will certainly be wanting more long-form chats as I create, so I'll have to navigate this usage - since I just learned that with TypingMind you still need an API key for each model when using their app (if I have that right) and then pay for usage. There is a lot to learn, and as someone who is looking for the personal mastermind group that Napoleon Hill, I believe, spoke of, what you are tapping into with the Genius Well of inspiration and insight using AI is resonating with me. Thank you.
Yes - it’s very easy to get the API key, and the TypingMind docs are very comprehensive. What you’re doing with TypingMind is paying as you go for what you use, rather than paying a subscription no matter how much you use.
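To give a feel for what pay-as-you-go means in practice, here's a back-of-the-envelope cost sketch. The per-million-token prices are placeholders - check the current pricing pages, as they change - but the shape of the calculation is the point: input tokens dominate once every message carries a big project context.

```python
# Back-of-the-envelope API cost estimate. The prices below are PLACEHOLDERS --
# check the provider's current pricing page, as rates change frequently.
PRICE_PER_MILLION_INPUT = 3.00    # USD, assumed
PRICE_PER_MILLION_OUTPUT = 15.00  # USD, assumed

def message_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens / 1_000_000) * PRICE_PER_MILLION_INPUT + \
           (output_tokens / 1_000_000) * PRICE_PER_MILLION_OUTPUT

# e.g. the 150k-token project context described above, with a 2k-token reply:
print(f"One message: ${message_cost(150_000, 2_000):.2f}")
print(f"A 20-message planning session: ${20 * message_cost(150_000, 2_000):.2f}")
```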
Brilliant, thanks Mike! I will check out TypingMind.
same
Great post Mike! Super exciting and intriguing!!!!
Thanks James, I’d love to hear how it works for you!