Re B580 power use, it's interesting that your card is using a lot less power, and similarly seems to be running slower than the sample earlier in this thread from Mumak. I wonder if he was overclocking or anything?
He seemed to be getting runtimes of less than 500s while drawing ~111W.
I noticed that too and was very curious about it. Can't say I have any explanation as of yet. Going to investigate tomorrow.
Also, I can't remember if the A4500 GPUs are running 2x or 3x.. too many numbers in my head..
Idle wattage is ~7w.
No concrete conclusions about the power usage being so different. I did a simple overclock based on what others were doing/posting online for these GPUs, and I saw a ~10% increase in wattage (peaking around 80w). I know I could push the speeds further, but this is what others were commonly overclocking to. I am not trying to overclock when running E@H; I am trying to match the wattage reported earlier in the thread.
What is interesting is that when I run a GPU benchmark, it will hit 130w easily, and the benchmark is not even using the "compute" function of the GPU, just rendering. I then bumped the MeerKAT work to 2x and it still hovered around ~73w.
I am running a few different benchmarks so I can see how the wattage fluctuates. The GPU has two 8-pin power connectors. I wonder if one of them is not working? Would the GPU power on with one 8-pin instead of two? I know an 8-pin connector is rated for 150 watts, so this crossed my mind.
At least some cards, for at least some connectors, actively report during boot-up if a power connection is not active.
I rather doubt the card would silently cut power consumption because one of two 8-pin connectors failed to connect, but I can't be sure.
I don't spot obvious reporting for 2X runs in the tasks list. Maybe those have not been reported yet?
Mumak declined to try 2X on the grounds that the reported GPU utilization at 1X was so extremely high as to make it unlikely 2X would add production over 1X.
We've seen the improvement (or lack of improvement) from higher multiplicity depend heavily on both the GPU chip and the host CPU's capability. We'll all be interested to see what happens on your card for 2X on BRP7 running an OpenCL app under Windows.
You might find it interesting to check GPU utilization. As it may vary all over the place in the short term, I'd suggest using a monitor which can average between button clicks, for example GPU-Z or HWiNFO (which, by the way, is Mumak's work).
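The averaging idea above can be sketched in a few lines. Note this is only an illustration: `read_utilization` is a hypothetical placeholder for whatever query your monitoring tool exposes (GPU-Z and HWiNFO do this kind of averaging for you already):

```python
import time

def read_utilization():
    """Hypothetical sampler: replace with a real query for your card,
    e.g. parsing the output of a vendor monitoring tool.
    Should return instantaneous GPU load as a percentage (0-100)."""
    raise NotImplementedError

def average_utilization(seconds=30, interval=0.5, sampler=read_utilization):
    """Average many short samples, since instantaneous GPU load on
    these tasks can swing all over the place in the short term."""
    samples = []
    deadline = time.monotonic() + seconds
    while time.monotonic() < deadline:
        samples.append(sampler())
        time.sleep(interval)
    return sum(samples) / len(samples)
```

A long averaging window (30s or more) smooths out the dips you would otherwise see between kernel launches.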
Definite improvement running 2x. They are starting to show up and validate on the host (I sometimes temporarily disconnect a host so I can just look at completed work units/times in BOINC when trying something new or different). The GPU shows consistent 100% utilization when running 2x, versus the drops in utilization when running 1x. I highly doubt 3x will show any improvement, but I will be trying it tomorrow.
Edit: 3x crashed the GPU... plenty of VRAM left and temperature headroom, so it must have been driver related. It didn't like that at all. However, 2x was far superior to 1x.
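For anyone wanting to reproduce the 2x experiment: in BOINC, task multiplicity is normally set with an app_config.xml file in the project's directory, where running two tasks per GPU corresponds to a gpu_usage of 0.5. A minimal sketch follows; the app name below is an assumption, so confirm it against the app names shown in your own task list before using it:

```xml
<app_config>
  <app>
    <!-- App name is an assumption; check your client's task list -->
    <name>einsteinbinary_BRP7</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>  <!-- 0.5 = two tasks share one GPU -->
      <cpu_usage>1.0</cpu_usage>  <!-- one CPU core reserved per task -->
    </gpu_versions>
  </app>
</app_config>
```

After saving, tell the client to re-read config files (or restart it) for the change to take effect.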
While the Arc B580 has been getting good reviews as a strong value proposition in the moderate price GPU market, mostly based on price and game performance, I've recently seen comments which might or might not be relevant to Einstein folks.
Apparently, when B580 gaming results on recent high-end CPUs are compared to those observed on older, less capable CPUs, the B580 suffers a larger performance drop than comparison products do.
As both "new" and "high-end" are somewhat non-specific categories, I don't have a good guess on which actual attribute of the host CPU drives this variation.
But I'd urge anyone contributing B580 Einstein observations in this thread to document their host arrangements (both CPU and motherboard) just in case those turn out to matter here.
One other point: apparently initial reviews are positive enough, and Intel's willingness to invest in producing rollout quantities was cautious enough, that B580 GPUs have ever since rollout been reported at:
Arc B580 NowinStock
as going into stock (usually just for preorders, but also sometimes for actual orders) only briefly, before reverting to out-of-stock status with new orders not being accepted. NowinStock certainly does not survey all retailers, but it does track Amazon, NewEgg, and B&H. So this is an indication that the GPUs produced so far have been finding a ready market for some use.
We plan to provide a report here about this GPU very soon, perhaps by the end of the week. What has been interesting is the number of driver updates (almost weekly), so things are continuing to evolve. Stay tuned!
Our experience with the Intel Arc B580 has been good, so far. First of all, we wanted to thank everyone here who played a role in getting the B580 up and running. Here is a log of what we did to get it up and running along with observations along the way.
GPU: Intel Arc B580 Steel Legend 12GB OC
Initially, we installed the B580 into our Intel i9 14900KS system with an Asus ROG Maximus Z790 Apex Encore motherboard running Linux Mint. The GPU would generate a video output, but we could not get the Intel Arc drivers to work properly. We attempted every install method we could find online (including the instructions on the Intel website for Ubuntu), but it never would recognize the GPU and its compute capabilities. There are/were three potential issues:
- I am not great with Ubuntu/Linux, so attempting to get the drivers installed was based mostly on others' experiences with posted instructions, which were limited in scope.
- Driver compatibility with the hardware (unlikely)
- Driver compatibility with Linux Mint (possible)
We decided to move on to the next system, which was:
Dell Precision system
CPU: Dual Intel Xeon E5-2650 v3 (10 cores, 20 threads, 25MB cache, 2.3GHz Turbo) = 20 cores, 40 threads
Chassis: Dell Precision Tower 7810
RAM: 64GB (8x8GB) 2133MHz DDR4 RDIMM ECC
OS: Windows 10
We were fully aware that this old Precision would probably not be compatible due to “resizable bar” issues, but we believed that a change in the system’s BIOS might resolve the issue. It did not. We attempted to trick the system by installing a working GPU (AMD FirePro) as video output and using the B580 as a secondary GPU so we might be able to install the Intel drivers. This also did not work. Potential issues:
- Old hardware that was in no way designed for an Intel Arc; the drivers would not recognize the GPU, even as a secondary card.
We decided to try one of our only modern Windows systems next:
CPU: AMD Ryzen Threadripper PRO 5965WX (128 MB cache, 24 cores, 48 threads, 3.8GHz to 4.5GHz)
Chassis: Dell Precision 7865 Tower
RAM: 64GB 8x8GB DDR4 3200MHz RDIMM ECC
Storage: 2TB, M.2, PCIe NVMe, SSD, Class 40
OS: Windows 11 Pro
This system is DEFINITELY not intended to house the B580, and Dell would probably not be happy about this (or, they might think it's cool). Initially, it would not work, but then we disabled almost all virtualization options in the BIOS based on other users' reports of using Arcs in Dell systems. This fixed the issue. The system booted normally and the Intel drivers installed with zero issues. The B580 compute capabilities were enabled.
Now, to get this GPU to work for E@H work in Windows:
- Change the coproc_info file: set the contents of the <name> tag to "Intel(R) HD Graphics 5500 pretending to be HD Graphics 599" TWICE in the file (you will see the first <name> close to the top of the file, and then again seven lines down from the first one).
- Save the file.
- Then, change the file to "read only" in "properties".
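The hand-edit described above can also be scripted. This is a minimal sketch of the same steps, not the method from the post; the function and its path handling are our own, and you would point it at the coproc_info.xml in your BOINC data directory:

```python
import os
import re
import stat

# The spoofed string quoted in the post above
SPOOFED_NAME = "Intel(R) HD Graphics 5500 pretending to be HD Graphics 599"

def spoof_gpu_name(path, new_name=SPOOFED_NAME, occurrences=2):
    """Replace the contents of the first two <name> tags in
    coproc_info.xml, then mark the file read-only so the BOINC
    client does not regenerate it on restart."""
    with open(path, "r", encoding="utf-8") as f:
        xml = f.read()
    # The post notes the second <name> sits about seven lines below
    # the first; replacing the first two occurrences covers both.
    xml = re.sub(r"<name>.*?</name>", f"<name>{new_name}</name>",
                 xml, count=occurrences)
    with open(path, "w", encoding="utf-8") as f:
        f.write(xml)
    os.chmod(path, stat.S_IREAD)  # same effect as the "read only" property
```

Run it once with BOINC stopped, then start the client and check that the spoofed name shows up in the event log.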
The “spoofed name” GPU was recognized and was sent work. It has been crunching ever since.
Observations about its usage on this host:
Running 1x has never been an issue; tasks run without any problems.
Running 2x has run into problems where the driver appears to hang for one of the work units. A system restart fixes the issue.
The drivers are updated almost weekly, so I am attempting to run 2x again to see if there is any improvement to stability.
Usage info running 2x MeerKAT:
Power draw: ~72 watts
Temperature: holds around 55C
"Compute" usage: 100%
I hope this can help whoever comes here next trying to get the B series Arc working. It was a great experience with LOTS of valuable troubleshooting. Thank you all again for your support!
Good job BRCHS! Keep up the great work! I wish I had you as a teacher when I was in HS, but... that was like 55 yrs ago.
As a sidebar: have you ever considered teaching your students the differences between all of the Linux versions out there? The Good, the Bad, and the Not-So-Good?
Just thought I'd ask.
Proud member of the Old Farts Association
That's GREAT info thanks!!
One question: you haven't tried running the O3AS tasks yet; do you plan to at some point?
I like the 1000-second tasks you are doing on MeerKAT, though!!!