
RE: LeoThread 2024-07-28 00:50

in LeoFinance · 10 months ago

I tried the 8B version running locally, but the results weren't great. Currently installing the 70B version of Llama 3 on my system - 39 gigabytes. Very excited to see if it will even run.

The nice thing about running them locally is that I can add as much context as I want - including the entire codebase for the application I'm building - as opposed to on Groq. Something like the sketch below is what I mean.
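A rough sketch of how that could look, assuming Ollama is serving llama3:70b on its default local port; the project folder, file extensions, and prompt are just placeholders:

```python
# Sketch: stuff an entire local codebase into the prompt for a locally served model.
# Assumes Ollama is running at its default endpoint (http://localhost:11434)
# with the llama3:70b model pulled; paths and extensions are illustrative.
from pathlib import Path
import requests

def gather_codebase(root: str, exts=(".py", ".js", ".html")) -> str:
    """Concatenate matching source files into one big context string."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            parts.append(f"### {path}\n{path.read_text(errors='ignore')}")
    return "\n\n".join(parts)

context = gather_codebase("./my_game")  # hypothetical project folder
prompt = (
    "Here is my entire codebase:\n\n"
    + context
    + "\n\nSuggest how to implement the next feature."
)

# Ollama's /api/generate endpoint returns a JSON object with a "response" field
# when streaming is disabled.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3:70b", "prompt": prompt, "stream": False},
    timeout=600,
)
print(resp.json()["response"])
```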


Guess that idea went out the window based upon some of your other threads. Too big. LOL

Yep. I tried GPT-4o, Llama 3.1 70B and Google Gemini Premium yesterday to continue developing the code, but none of them compared to Claude 3.5. The closest I got to success was with Llama 3.1, and even that was miles behind Claude - in this particular case, at least.

Using Claude, I managed to make significant progress on the game's development. Going to continue this afternoon with refreshed tokens. I'm quite excited about it.