GPT is a lot better than Claude. You still need to be a competent dev to do anything useful and 99% of people will just create garbage with it.
100%. And so many tools are cheaper (with the same, close-to-the-same, or better performance). MiniMax M2.5/2.7 is very good and costs $10 a month for 1,500 calls per 5 hours (in practice it's more like 15k, since their call counting is off), Qwen is good, GPT like you said, same game, and the list is endless.
Swarm them, paperclip or whatever.
The worst thing is to pay for Claude instead of an alternative (at $10/month it could be wasteful in 99% of cases and still be OK). But the $200 plans? IDK.
IMO a "project founder" should be able to do basic research into alternatives. 3spk people, same game. They have no idea about the alternatives.
How is this possible?
I use MiniMax locally; their cloud sucks. They charge 2x for high speed, but it doesn't even hit the speeds advertised for standard. I'm using their cloud right now until the M2.7 weights drop, and it's very disappointing: 34 t/s on standard and 44 t/s on high speed, when it should be 50/100.
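For what it's worth, throughput claims like 50/100 t/s are easy to sanity-check yourself: count the tokens you get back from a streamed response and divide by wall-clock time. A minimal sketch (the 1,700-token / 50-second figures below are made up for illustration, not a real measurement):

```python
def tokens_per_sec(n_tokens: int, elapsed_s: float) -> float:
    """Decode throughput: generated tokens divided by wall-clock seconds."""
    return n_tokens / elapsed_s

# Time a streamed completion with time.perf_counter(), count the chunks,
# then divide. E.g. 1,700 tokens in 50 s gives the 34 t/s "standard" figure:
rate = tokens_per_sec(1700, 50.0)  # 34.0
```

Measure on a long generation so the time-to-first-token doesn't skew the average.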
Interesting. Do you use it via minimax io or Alibaba? They changed the 500 thing too (three weeks ago it was a different usage maximum, higher, but I can't remember the details).
I really wonder why your token speed is low; I've never experienced that lmao.
I like to run local Qwen quant versions for different tasks. But the big ones, ofc.
Qwen 3.5 27B is probably the best choice for local. It will be slow, but it's a solid model with much lower memory demands than most.
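The lower memory demand is mostly quantization arithmetic: weight memory ≈ parameter count × bits per weight ÷ 8, plus some overhead for cache and activations. A back-of-the-envelope sketch (the 1.2× overhead factor is an assumption, not a measured figure):

```python
def quant_vram_gb(n_params_billions: float, bits_per_weight: float,
                  overhead: float = 1.2) -> float:
    """Rough VRAM estimate: weight bytes times an overhead fudge factor."""
    weight_gb = n_params_billions * bits_per_weight / 8  # billions of params * bytes each
    return weight_gb * overhead

# A 27B model: ~54 GB at fp16, but only ~13.5 GB of weights at 4-bit --
# roughly 16 GB with overhead, small enough for a single consumer GPU.
fp16 = quant_vram_gb(27, 16, overhead=1.0)  # 54.0
q4 = quant_vram_gb(27, 4)                   # ~16.2
```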
and 1M context window
The 27B only has a 262K context window.
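Worth noting that a big context window costs real memory at inference time, since the KV cache grows linearly with sequence length. A rough sketch — the layer/head dimensions below are hypothetical placeholders, not the actual 27B config:

```python
def kv_cache_gb(seq_len: int, n_layers: int, n_kv_heads: int,
                head_dim: int, bytes_per_elem: int = 2) -> float:
    """KV cache size: keys and values for every layer at every position."""
    return 2 * n_layers * seq_len * n_kv_heads * head_dim * bytes_per_elem / 1e9

# Hypothetical dims: 48 layers, 8 KV heads (GQA), head_dim 128, fp16 cache.
# A full 262K context would then need ~51.5 GB for the KV cache alone --
# one reason huge context windows are expensive to serve.
full_ctx = kv_cache_gb(262_144, 48, 8, 128)
```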
I use it via minimax direct. I signed up when I heard they're open-sourcing the m2.7 weights, something that wasn't looking likely.
The token speed is low because of demand. They promise 100+ tokens/sec on high speed, but it's barely faster than standard. You do get a lot of usage compared to others, though, and it's a good model. I'm waiting for the m2.7 weights to drop so I can run it on my RTX 6000 Pros. Should be any day now.