About 25% faster than a 2080Ti. But apparently none of the apps are making use of the second FP32 pipeline in the Ampere architecture. Running two tasks at once just doubles the crunch time per task, as in previous generations.
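A quick back-of-the-envelope check of that claim: if the second FP32 pipeline were being exploited, two concurrent tasks should finish in noticeably less than twice the single-task time. A minimal sketch of the arithmetic, with hypothetical placeholder timings rather than measured values:

```python
# Throughput check: does running two tasks at once actually gain anything?
# Timings are hypothetical placeholders, not measured values.

single_task_min = 10.0    # minutes for one task running alone
doubled_task_min = 20.0   # minutes per task with two running concurrently

throughput_1x = 60.0 / single_task_min        # tasks/hour, one at a time
throughput_2x = 2 * 60.0 / doubled_task_min   # tasks/hour, two at a time

print(f"1x: {throughput_1x:.1f} tasks/hr, 2x: {throughput_2x:.1f} tasks/hr")
# If per-task time exactly doubles, throughput is identical (6.0 vs 6.0),
# i.e. the second FP32 pipeline is contributing nothing extra.
```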
A card that good just begs for a CUDA app. That means GPUGrid, or else Folding, which is just coming out with one. I got my first Folding CUDA work unit today.
I would only use AMD here; I am just trying to figure out what their new RDNA2 series will look like. I need something more in the 120 watt range rather than the big ones.
Have fun.
Yes, the 3080 is twice as fast on the PrimeGrid PPSieve CUDA application compared to the 2080Ti. But it still won't run tasks on both of the FP32 pipelines.
Needs a new app to fully use the card.
Jim1348 wrote: I need something more in the 120 watt range rather than the big ones.
With modest throttling, the 5700 cards (non-XT) fit under that tidily and produce a lot of Einstein output per watt. The 5600 would probably save on purchase cost and let you meet that power goal with little or no throttling.
They lose to the previous-generation 570 cards on current purchase price per unit of output, but win considerably on power consumption per unit of output and on Einstein production per system (the sketch after this post runs the numbers). I've got several perfectly good 570 cards sitting on my shelf that I replaced with 5700 cards.
Some of the 5700 cards are quite large physically (all three dimensions), so a bit of fit caution is in order.
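To make that trade-off concrete, here is the comparison as a tiny script. All prices, wattages, and relative-output figures are hypothetical placeholders, not the poster's measurements:

```python
# Comparing cards on two figures of merit: purchase price per unit of
# output and watts per unit of output. Numbers are placeholders only.

cards = {
    #            price ($), power (W), relative Einstein output
    "RX 570":  (130.0,     150.0,     1.0),
    "RX 5700": (330.0,     160.0,     2.0),
}

for name, (price, watts, output) in cards.items():
    print(f"{name}: {price / output:7.1f} $/output, "
          f"{watts / output:6.1f} W/output")

# With numbers like these, the 570 wins on $/output while the 5700 wins
# on W/output and on total output per PCIe slot -- the trade-off the
# post describes.
```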
Yes, the 5700 cards look good, but we are close to the next generation. If AMD's promise of 50% more efficiency holds (if that was the promise), then the new ones will look better. The GW work units here are also limited by CPU power, so you would need a significant increase in GPU power to make an upgrade worthwhile. The 570s are actually good enough for the moment.
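That CPU-bound point is essentially Amdahl's law: if a fixed fraction of each GW task is CPU work, a faster GPU only speeds up the remainder. A minimal sketch, where the CPU fraction is an assumed placeholder rather than a measured Einstein value:

```python
# Amdahl-style bound on GW task speedup when part of each task is CPU work.
# cpu_fraction is an assumed placeholder, not a measured Einstein value.

def task_speedup(cpu_fraction: float, gpu_speedup: float) -> float:
    """Overall task speedup when only the GPU portion gets faster."""
    return 1.0 / (cpu_fraction + (1.0 - cpu_fraction) / gpu_speedup)

cpu_fraction = 0.4  # say 40% of the task's wall time is CPU-bound
for gpu_speedup in (1.5, 2.0, 4.0):
    speedup = task_speedup(cpu_fraction, gpu_speedup)
    print(f"GPU {gpu_speedup:.1f}x faster -> task {speedup:.2f}x faster")

# Even an infinitely fast GPU caps out at 1/0.4 = 2.5x here, which is
# why a modest GPU upgrade may not be worthwhile on CPU-limited work.
```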
I should have specified that my 5700 vs. 570 observations were specific to Einstein GRP work. I do run GW on one of my four 5700s, but I have no comparison with any other configuration running that work, which is in any case tricky to compare since the work units vary quite a lot, not just in memory demand but also in CPU and GPU computation.
I shall be quite surprised if AMD manages a new generation so soon after the 5700 that gives a 50% power efficiency improvement at the same point in the price range and performance range. That would be wonderful, but I rather doubt it.
Your curiosity will be answered on October 28 when the new RX 6000 cards launch.
AMD would have to screw things up a bunch to not see some improvements.
They are guessing that Big Navi is going to fall somewhere between a 3070 and a 3080.
Expect to see NVIDIA launch a Ti or Super card soon to counter AMD.
The 5700 is good, but it's not the compute card that the RVII is. I wonder if AMD will give us another compute card or just aim at gamers.
There are a few stories going around about how Ampere is a bit of both compute and gaming, and a few cries of "why can't they just make a pure gaming GPU?"
Anyhoo..... looking forward to seeing how they go. I bet there are still some burnt fingers from the 5700 XT release. They were no good at BOINC for a while until driver updates fixed the issue (I think it was drivers).
Even though I have never run an AMD GPU, I still thank them for participating in the marketplace. It makes for good competition and generational improvements from the vendors.
If I only ran double-precision FP64 compute projects like Milkyway, I would probably run an AMD card. But the Nvidia cards are the jack of all trades, not best at anything but good enough for everything.
And the few projects that are CUDA only are logical candidates for choosing Nvidia.
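For context on that FP64 point: consumer GPUs expose only a fraction of their FP32 rate at double precision, and the fraction varies by vendor and product line. A rough sketch using commonly cited peak figures and ratios; treat them as assumptions and verify for your exact model before buying:

```python
# Effective FP64 throughput = FP32 peak * FP64:FP32 ratio.
# Peaks and ratios below are commonly cited figures, not verified specs.

cards = {
    #               FP32 peak (TFLOPS), FP64:FP32 ratio
    "RTX 2080 Ti": (13.4, 1 / 32),
    "RTX 3080":    (29.8, 1 / 64),   # dual-FP32 pipelines counted in peak
    "Radeon VII":  (13.8, 1 / 4),
    "RX 5700":     (7.9,  1 / 16),
}

for name, (fp32, ratio) in cards.items():
    print(f"{name}: {fp32 * ratio:.2f} FP64 TFLOPS")

# The Radeon VII's 1:4 ratio is why it stands out on FP64 projects like
# Milkyway, despite an FP32 peak similar to the 2080 Ti's.
```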
https://einsteinathome.org/goto/comment/180099
And for comparison, I ran a few on my RX 570 (Win7 64-bit) and did not see an obvious improvement in efficiency for the new card.
https://einsteinathome.org/host/12799653/tasks/0/0