There's a name and a logo. "Hubris" feels slightly beggared. https://en.m.wikipedia.org/wiki/The_Metamorphosis_of_Prime_I...
This looks like a startup company. Why shouldn't it have a name and logo?
Their point is that the name and logo are clearly drawing from the Metamorphosis of Prime Intellect, with all the potential baggage that comes with it. It's an interesting choice.
The novel was the first popular codifier of the concepts of strongly superhuman ASI and hard-takeoff singularity, literally the work that introduced these ideas to the then quasi-New Atheist hangers-on among the kuro5hin crowd who became the initial core of what would develop into the follower base for singularitarianism. It was quite well written for that purpose, with enough sex and action to paper over the slow parts, and a real grasp of what it feels like when time contracts and dilates at once in those dolly-zoom moments where the universe is different forever and nothing outwardly changes. Combined with the seductive appeal and literally universal scope of the ideas that power its plot, it is no wonder the novel should have left so strong an impression on a few.
Someone intentionally invoking that history is interesting indeed. Someone doing it by accident might be more so. But I already gave that choice the name I judge it deserves.
Maybe torment nexus was taken
I guess I'm bearish?
So they didn't train a new model; they took an existing model and RL'd it a bit?
The scores are very close to QwQ-32B, and at the end:
"Overall, as QwQ-32B was already extensively trained with RL, it was difficult to obtain huge amounts of generalized improvement on benchmarks beyond our improvements on the training dataset. To see stronger improvements, it is likely that better base models such as the now available Qwen3, or higher quality datasets and RL environments are needed."
The interesting delta here is that this proves we can distribute the training and still get a functioning model. The pool of compute you can tap this way scales far beyond any single datacenter.
But does that mean much when the training that produced the original model was not distributed?
The RL, not the training. No?
RL is still training. Just like pretraining is still training. SFT is also training. This is how I look at it. Model weights are being updated in all cases.
Simplifying it down to "adjusting any weights is training, ipso facto this is meaningful" obscures more than it illuminates (as they noted, the RL doesn't get you very far, at all).
Third-party fine-tuned open-weight LLMs tend to do well on a handful of benchmarks but sit at parity or below the original model on the rest. There are some exceptions like Nvidia's Nemotron series, but the differences are generally so small as to be imperceptible. DeepSeek released fine-tunes of several Qwen and Llama models alongside R1, and while they were better in some select domains (mostly math and coding), fine-tuning introduced enough problems that they never overtook the original models in usage.
Seems that's mostly a byproduct from working on the core business idea, GPU arbitrage.
It’s interesting that it does something useful (training an LLM) without trust and in a decentralized way.
Maybe this could be used as proof of work? To stop wasting computing resources in crypto currencies and get something useful as a byproduct.
I read an argument that proof of work needs to be useless and wasteful: if it produced value in itself, it would make 51% attacks more economical and thus the currency less secure.
Sure. The whole point of "proof of work" is to show (prove) you've lost energy to heat (work). That's what makes it costly and thus an honest signal.
The model breaks where work can be counterfeited (usually impossible) or where energy prices go to zero, which is why "bitcoin colonialism" was briefly a thing last decade. Much of bitcoin's design, this aspect also, is intended to protect against the bare-fanged, red-eyed money weasels it was also designed to attract.
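For concreteness, a minimal sketch of the classic hash-puzzle flavor of proof of work (toy difficulty, nothing like Bitcoin's real parameters): producing a valid nonce costs many hash evaluations, while checking one costs a single hash.

    import hashlib
    import itertools

    DIFFICULTY = 5  # leading zero hex digits required (toy value)

    def mine(block_data: str) -> int:
        """Burn cycles until a nonce's hash meets the difficulty target."""
        for nonce in itertools.count():
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith("0" * DIFFICULTY):
                return nonce

    def verify(block_data: str, nonce: int) -> bool:
        """Anyone can check the claimed work with a single hash evaluation."""
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        return digest.startswith("0" * DIFFICULTY)

    nonce = mine("example block")                  # expensive: ~16^5 hashes on average
    print(nonce, verify("example block", nonce))   # cheap: one hash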
It needs to not have economic value but it doesn't necessarily need to be useless and wasteful.
If it improves the economic value of something else, it has economic value, just not a discrete value of its own.
Wrappers on candy don’t have value intrinsically but improve the quality of the candy.
For instance if the end product, in this case the LLM, is made available to anyone, publicly...
I’ve seen an argument that military power and credible threat are the proof of work mechanism for fiat currencies. That is also useless, but it does throw off secondary useful effects like inventions.
Not totally convinced the analogy maps but interesting.
Somebody spilled bong water on that before it got to you, I feel like. What backs the credible threat of military force is that the threat is credible, which is why the United States maintains a dozen carrier strike groups and does not want to have any kind of conversation at all about hypersonic weapons and especially hypersonic anti-shipping missiles.
That's why I said the analogy doesn't map perfectly.
Still I do think there's some validity to the comparison. Fiat currencies are not backed by "nothing." They are backed by a state. Some percentage of the cost of operating a state is therefore "work" done to back the currency's value.
The question is: if we had a cryptocurrency backed by digital PoW that scaled to the level of fiat currencies (millions of transactions per second) and had some of their other desirable characteristics, would the state be able to proportionally shrink? That's what I'm not convinced of, but it'd be an interesting experiment if we could spin up another universe and try it.
Hadn't thought of it in that way, but there's some merit to that if you include government, police & power in general. Law enforcement needed really high penalties on counterfeiting money and check fraude to make cash and checks work. And I guess some of that is still the case with credit card fraude.
"Fraud," and there is no historicity to the idea that counterfeiting and adulteration only became a problem with the introduction of paper instruments. Indeed those replaced specie in considerable part to reduce opportunities for chicanery! Gold is gold, after all.
Military is certainly proof of burn...
> Maybe this could be used as proof of work
There's nothing provable here. Crypto proof of work is easily verified (does the hash of this value look the way I expect?). How do you prove in ~O(1) time that someone did some operation with their GPU? You don't. You don't even know what the thing is that you're training (without a trained model you don't have the ability to know whether the model that was allegedly trained learned the thing you want it to learn).
> How do you prove in ~O(1) time that someone did some operation with their GPU? You don't.
The work in this case could be that the weights after the work was done have lower loss than the input weights. Applying the new weights to the input to check that the loss is lower is much cheaper than calculating the weights, which is the same trend as proof of work (not sure the gap in difficulty is big enough to replace proof of work, though).
That's far from O(1). Now you need to transfer the weights back and test them.
> That's far from O(1). Now you need to transfer the weights back and test them.
I think what matters most is that the verification is much, much cheaper than the calculation that proves the work was done; it doesn't strictly have to be O(1). The gap just has to exceed a certain threshold to make proof of work viable.
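A toy sketch of the asymmetry I mean (numpy, hypothetical setup, nothing to do with this project's actual pipeline): the worker pays for many gradient steps, while the verifier pays for a single loss evaluation on a held-out batch and accepts the new weights only if the loss went down.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1024, 32))                         # toy dataset
    y = X @ rng.normal(size=32) + 0.1 * rng.normal(size=1024)

    def loss(w, X, y):
        return float(np.mean((X @ w - y) ** 2))

    def do_work(w, X, y, steps=5000, lr=0.01):
        """Expensive part: many gradient steps (stand-in for fine-tuning)."""
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w = w - lr * grad
        return w

    def verify(w_old, w_new, X_holdout, y_holdout):
        """Cheap part: two forward passes, accept only if loss actually dropped."""
        return loss(w_new, X_holdout, y_holdout) < loss(w_old, X_holdout, y_holdout)

    w0 = rng.normal(size=32)
    w1 = do_work(w0, X[:768], y[:768])           # worker's job
    print(verify(w0, w1, X[768:], y[768:]))      # verifier's job -> True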
Trying again, apologies:
- Minimizing loss could be a useful heuristic on a base model. Here, we expect the distribution to be different as we are only doing RL. Measuring loss means we're measuring the difference against the base model inputs: a non-goal, we expect reasoning post RL-training to look quite different from a web scrape.
Let's set that aside. Let's say lower loss = model improved.
- Checking the loss requires the entire dataset used to train the base model + forward pass. That’s O(N·d) where N is samples, d is model size. This takes us from "cool demo of RL can be done on the edge with little benefit" to "we're shipping around terabytes of data constantly among clients"
- Proof of work as a technical term is different from proof of work as a colloquial term: the former is a cryptographic puzzle whose solution is universally and instantly checkable, while the latter just means “I can show I did something,” with no strict guarantee or uniqueness. Randomly perturbing one parameter could pass as "proof of work" without the work we actually wanted ever being done (see the sketch after this list).
- Early in base model training, shaving 0.01 off the loss is easy; later it's nearly impossible. And in an RL environment we expect some rollouts to go badly. Under the interpretation "loss decrease means the model is better means you did work," those rollouts would show up as a loss increase, even though that is exactly how learning in an RL environment works. A rising loss does not mean no work was done.
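To make the perturbation point concrete, a toy numpy sketch (purely hypothetical setup) of how a lazy worker could grind cheap random tweaks against a known verification batch instead of doing any real training:

    import numpy as np

    rng = np.random.default_rng(1)
    X_check = rng.normal(size=(64, 32))          # small, known verification batch
    y_check = X_check @ rng.normal(size=32)

    def loss(w):
        return float(np.mean((X_check @ w - y_check) ** 2))

    w_old = rng.normal(size=32)
    base = loss(w_old)

    # "Grinding": try cheap random perturbations until one happens to beat the
    # old loss on the verification batch. No gradients, no dataset, no real work.
    for _ in range(10_000):
        w_cheat = w_old + 0.01 * rng.normal(size=32)
        if loss(w_cheat) < base:
            break

    print(loss(w_cheat) < base)   # True: passes a naive "lower loss == work" check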
No, this process doesn't produce "proof of work", i.e. verifiable proofs that energy has been used.
New weights that have a lower loss than the input weights are proof that work has been done.
The emphasis is indeed on "without trust" – as far as I can tell this project is unable to verify whether the decentralized training nodes are contributing productively.
Without the ability to validate that training compute is heading in the globally desired direction, it is unlikely you could use it as the foundation of a (sound) cryptocurrency.
The reward model could be used as a validation/reward for the client. Give the same nodes the same inferences to make, and the one with the highest reward (those could be short, or even partially calculated long-term) will also get the "currency" reward.
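A sketch of what I mean (all names hypothetical, and it inherits every problem raised below): fan the same prompt out to several untrusted nodes, score the completions with the reward model, and pay whoever scores highest.

    from typing import Callable, Dict

    def settle(prompt: str,
               nodes: Dict[str, Callable[[str], str]],          # node_id -> inference fn
               reward_model: Callable[[str, str], float]) -> str:
        """Give every node the same prompt, reward the best completion."""
        completions = {node_id: run(prompt) for node_id, run in nodes.items()}
        scores = {node_id: reward_model(prompt, text)
                  for node_id, text in completions.items()}
        return max(scores, key=scores.get)       # this node gets the "currency" reward

    # Toy stand-ins: real nodes would run the LLM, the reward model would be learned.
    nodes = {
        "node_a": lambda p: p + " ... a short answer",
        "node_b": lambda p: p + " ... a longer, more detailed answer",
    }
    reward_model = lambda prompt, text: len(text)   # hypothetical scoring rule
    print(settle("Explain proof of work.", nodes, reward_model))  # -> node_b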
Arguably that's worse than crypto proof of work: inference is extremely expensive and you're multiplying every operation by N. Which means the cost is multiplied by N.
And like, what are you doing? You've managed to find a use case where you don't care that you're doing compute on some untrusted servers online (and no, there's no magic AI homomorphic encryption) but at the same time you're willing to accept the latency of doing the work multiple times AND it's probably all low end 4090s doing the work AND you're willing to pay for the wasted compute? I'm here shuddering at the thought of model setup times when one node in a cluster goes down and you're facing that on... well, probably most inferences? If you're not administering the infra, you get the lowest common denominator of performance.
That sounds like it'll lead to human-driven reward hacking [0]?
[0]: https://en.wikipedia.org/wiki/Reward_hacking
There could be merit to this. Generating proofs is generally computationally hard while verifying them is cheap, so it's possible a currency could be built by quantifying that verification.
That would indeed be a very promising way of FINALLY making cryptocurrency useful!
Arweave and Filecoin use PoW algorithms that prove something useful.
> To stop wasting computing resources in crypto currencies and get something useful as a byproduct.
Bitcoin is the only major cryptocurrency that still uses proof of work today (the others have moved to proof of stake or are "Layer 2" chains), and due to its (relative lack of) governance structure, it's very unlikely to ever change.
This is rather exciting! I can see a future of co-op models, built by communities of experts in a specific field, that would still be competitive with the "AI monopolies". Maybe not all hope is lost!
Summary: We've used the most complexest, buzzwordiest training infrastructure to increase the performance of our base model by a whopping 0.5% (±1%).
But this isn’t about the performance, the infrastructure is the product here.
Indeed, the most reliable way to make money in a gold rush is to sell shovels.
Does this have anything to do with The Metamorphosis Of Prime Intellect, or did they just abuse the name and the cover art?
Prime Intellect is a grabby AI :)
I made some GGUFs at https://huggingface.co/unsloth/INTELLECT-2-GGUF
./llama.cpp/llama-cli -hf unsloth/INTELLECT-2-GGUF:Q4_K_XL -ngl 99
Also it's best to read https://docs.unsloth.ai/basics/tutorial-how-to-run-qwq-32b-e... on sampling issues for QwQ based models.
Or TLDR, use the below settings:
./llama.cpp/llama-cli -hf unsloth/INTELLECT-2-GGUF:Q4_K_XL -ngl 99 --temp 0.6 --repeat-penalty 1.1 --dry-multiplier 0.5 --min-p 0.00 --top-k 40 --top-p 0.95 --samplers "top_k;top_p;min_p;temperature;dry;typ_p;xtc"
How are they ensuring robustness against adversarial responses?
From the article, seems like TOPLOC:
> based on top of novel components such as TOPLOC, which verifies rollouts from untrusted inference workers
https://github.com/PrimeIntellect-ai/toploc
Can an expert explain how this protects against adversarial actors?
At a glance it looks like something akin to computing a checksum that's locality sensitive, so it's robust to floating point errors, etc.
What's to stop someone from sending bad data + a matching bad checksum?
The validation procedure is described on page 8 of the TOPLOC paper: https://arxiv.org/abs/2501.16007
The checksum is validated by redoing the computation, but making use of the fact that you already have the entire response to enable greater parallelism than when generating it one token at a time.
TOPLOC attempts to detect model substitution, i.e. responses being generated by a different model than requested. It comes with certain caveats, and as far as I can tell the TOPLOC paper considers verifiable learning / training out of scope.
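Not TOPLOC itself, just a toy of the general shape (fake "model", made-up tolerances): the worker ships a compact fingerprint of its activations (top-k positions and values per token), and the verifier, who already has the full response, recomputes everything in one batched pass instead of token by token and compares within a floating-point tolerance.

    import numpy as np

    rng = np.random.default_rng(0)
    VOCAB, DIM, K = 1000, 64, 8
    W = rng.normal(size=(VOCAB, DIM)).astype(np.float32)   # stand-in for "the model"

    def activations(tokens):
        """One batched pass over all positions at once (the parallel re-check)."""
        return W[np.array(tokens)]                           # (seq_len, DIM)

    def fingerprint(acts):
        """Keep only top-k indices and values per position: small, cheap to compare."""
        idx = np.argsort(-acts, axis=1)[:, :K]
        vals = np.take_along_axis(acts, idx, axis=1)
        return idx, vals

    def verify(tokens, claimed_fp, atol=1e-3):
        idx, vals = fingerprint(activations(tokens))
        claimed_idx, claimed_vals = claimed_fp
        return (np.array_equal(idx, claimed_idx)
                and np.allclose(vals, claimed_vals, atol=atol))  # tolerate float noise

    prompt_plus_response = [3, 17, 256, 999, 42]             # token ids (hypothetical)
    fp = fingerprint(activations(prompt_plus_response))      # worker's side
    print(verify(prompt_plus_response, fp))                  # verifier's side -> True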
I used to have an idea, inspired by science fiction novels, that artificial intelligence could aggregate computing power over the network to perform ultra-large-scale calculations and thereby achieve strong artificial intelligence. It's very interesting that reality is developing in this direction.
Awesome work this team is doing. Globally distributed MoE could have real legs
The most interesting thing I see is the productization of the diloco work done here [1]. If someone can make this scale, then we can say goodbye to expensive backend networking and mainframe-like AI training machinery.
[1] https://arxiv.org/abs/2311.08105
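For anyone who hasn't read [1], a rough toy sketch of the DiLoCo-style loop (numpy, plain SGD inside and simple momentum outside, where the paper uses AdamW as the inner and Nesterov momentum as the outer optimizer): workers take many cheap local steps, and only small weight deltas cross the slow network once per round.

    import numpy as np

    rng = np.random.default_rng(0)
    DIM, WORKERS, ROUNDS, LOCAL_STEPS = 32, 4, 30, 50
    INNER_LR, OUTER_LR, MOM = 0.02, 0.7, 0.5

    # Each worker holds its own shard of a toy least-squares problem.
    w_true = rng.normal(size=DIM)
    Xs = [rng.normal(size=(256, DIM)) for _ in range(WORKERS)]
    shards = [(X, X @ w_true) for X in Xs]

    def local_steps(w, X, y):
        """Inner loop: many cheap steps, entirely on the worker, no communication."""
        for _ in range(LOCAL_STEPS):
            w = w - INNER_LR * (2 * X.T @ (X @ w - y) / len(y))
        return w

    w_global = rng.normal(size=DIM)
    velocity = np.zeros(DIM)
    print("start distance:", float(np.linalg.norm(w_global - w_true)))

    for _ in range(ROUNDS):
        # Each worker starts from the shared weights and trains locally.
        deltas = [w_global - local_steps(w_global.copy(), X, y) for X, y in shards]
        # Outer step: only the averaged delta ("pseudo-gradient") crosses the network.
        pseudo_grad = np.mean(deltas, axis=0)
        velocity = MOM * velocity + pseudo_grad
        w_global = w_global - OUTER_LR * velocity

    print("final distance:", float(np.linalg.norm(w_global - w_true)))  # much smaller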
Wonder what the privacy story is like. Enterprises don't usually like broadcasting their private data across a freely accessible network.
A strong use case here for quantum-safe encryption.
Why? Quantum-safe cryptography is mostly interesting right now in the context of defending against store-now, decrypt-later attacks. That doesn't seem helpful here, because they'll still need to decrypt the data for training. Did you mean homomorphic encryption?
I wonder why they randomly noted a torch-compile vs non torch-compile figure where torch-compile degraded model performance. What made it degrade? It seems to only appear in one figure and nowhere else.
Pretty badass
very cool work!
Congrats to the team on the launch!
Personal story time: I met a couple of their engineers at an event a few months back. They mentioned they were building a distributed training system for LLMs.
I asked them how they were building it and they mentioned Python. I said something along the lines of “not to be the typical internet commenter guy, but why aren’t you using something like Rust for the distributed system parts?”
They mumbled something about Python as the base for all current LLMs, and then kinda just walked away…
From their article: > “Rust-based orchestrator and discovery service coordinate permissionless workers”
Glad to see that I wasn’t entirely off-base :)
The technical underpinning has nothing to do with the language. It is a different way of optimizing parameters called DiLoCo. I agree, though, that Python is an abomination for systems-service components when there are languages like Rust.
Given the latencies at play, Python probably made more sense though.