The funny thing to me is that it’s still basically machine learning, the same tech we’ve had since the mid-2000s; we just have fancier hardware now.
So much of the modern Microsoft/ChatGPT project is effectively brute-forcing intelligence from accumulated raw data. That’s why they need phenomenal amounts of electricity, processing power, and physical space to make the project work.
There are other - arguably better, but definitely more sophisticated - approaches to developing genetic algorithms and machine learning techniques. If any of them prove out, they have the potential to render a great deal of Microsoft’s original investment worthless by doing what Microsoft is doing far faster and more efficiently than the Sam Altman “Give me all the electricity and money to hit the AI problem with a very big hammer” solution.
It takes a lot of energy to train the models in the first place, but very little once you have them. I run mixture of agents on my laptop, and it outperforms anything OpenAI has released on pretty much every benchmark, maybe even every single one. I run it quite a bit and have noticed no change in my electricity bill. I imagine inference on GPT-4 must already be very efficient; if not, they should just switch to serving people open-source LLMs run through MoA.
Are you saying you have a local agent that is better than anything OpenAI has released? Where did this agent come from? Did you make it from scratch? How are you not worth billions if you can outperform them on “every benchmark”?
My dude, no, I’m not the creator, settle down. Mixture of agents is free and open to anyone to use. Here is a demo of it by Matthew Berman. It isn’t hard to set up.
https://youtu.be/aoikSxHXBYw
Believe it or not, OpenAI is no longer making the best models. Claude 3.5 Sonnet beats OpenAI’s best models by a considerable margin.
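For anyone curious what “mixture of agents” actually means in the thread above: the basic pattern is to send the same prompt to several proposer models and then have an aggregator model synthesize their answers into one response. Here is a minimal, runnable sketch of that flow in Python. The proposer “models” are stand-in functions (real setups would call locally hosted open-source LLMs), and the aggregator is a simple placeholder, so everything here is illustrative, not a production implementation.

```python
# Toy sketch of the Mixture-of-Agents (MoA) pattern.
# Layer 1: several "proposer" models each answer the prompt.
# Layer 2: an "aggregator" synthesizes the candidates into one answer.
# The models below are stand-in functions for demonstration only.

def propose(prompt, proposers):
    """Collect one candidate answer from each proposer model."""
    return [p(prompt) for p in proposers]

def aggregate(prompt, candidates):
    """Stand-in aggregator. A real MoA setup feeds the candidate
    answers back to a strong LLM with instructions to combine them."""
    combined = "\n".join(f"- {c}" for c in candidates)
    return f"Synthesized answer to {prompt!r} from:\n{combined}"

# Hypothetical proposers; in practice these would wrap local LLM calls.
proposers = [
    lambda q: "Answer A: " + q.upper(),
    lambda q: "Answer B: " + q.lower(),
    lambda q: "Answer C: " + q[::-1],
]

candidates = propose("What is MoA?", proposers)
print(aggregate("What is MoA?", candidates))
```

The point of the layered design is that aggregation costs one extra inference pass, while the proposers can all be small open-source models, which is why the whole thing can run on a laptop.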