|New benchmark, 13hr run (I think the previous was 24hrs).|
Again I won't post the whole list, as I don't want this score in the chart either.
Q6600 @3.35 GHz, 372 MHz FSB, 466.5 MHz RAM, 5-5-5, tRD 10.
(Btw overall CAS latency time is almost identical).
1213 Kpt/s (run averaged), that's 4.5% faster than before.
|An overnight run with the Atom N270 1.6GHz resulted in:|
...that's it! Maybe I'll need to leave it at home over a whole weekend to get a reliable reading...
|not weekend but week|
|lol, well looking at an old chart it's about as fast as an Athlon 1.67 GHz |
Btw I never did find out how 'fast' my Celeron 366 @566 was, because even though the results file was updated it reset to zero after I restarted it. Any ideas as to why it would do that?
On my Q6600 rig I've managed to lower tRD to 7, currently stability testing with OCCT; after that I'll do another DPAD benchmark.
|Here's a whole weekend's worth.|
Being as fast as an Athlon 1.67GHz isn't bad if I just want the machine to run XP - doing Muon1 in the background is of course a bonus.
Got my RAM speed wrong earlier, should have been 465 MHz.
This was a 19hr run, again I won't post the whole list yet as I've got further tweaking to do. Gonna try boosting RAM speed some more, & see if I can run tRD 6 without boosting NBv much.
Score with tRD lowered from 10 to 7. (Wouldn't boot with tRD 6 on default NBv)
Q6600 @3.35 GHz, 372 MHz FSB, 465 MHz RAM, 5-5-5-15, tRD 7.
1267 Kpt/s (run averaged), that's again a 4.5% improvement.
|Well this is weird, I've tried the following settings thinking the higher FSB would boost speeds, but it didn't |
Q6600 @3.34 GHz, 417 MHz FSB, 834 MHz RAM 4-4-4-15 timings, tRD 8.
After a 21hr run it scored 1258 Kpts/s, worse than the 372:930 settings!
I guess the memory bandwidth counts as well as the FSB speed.
Well I've got some DDR2 1066 RAM to try now, so more benchmarks with that to come.
|Oops, using 2 different RAM speed units, I meant 417 MHz RAM for the above.|
|First tests of the i7 at 4 GHz are quite nice, about 750 kpts/sec per core, which should lead to approx 3000 kpts/sec altogether!!|
Due to less than 100% stability I fiddled around with some multipliers, so I'll test it again once I know it's stable.
|Cool. RAM is crucial I think. If you have 1600 MHz it should be no problem.|
If you have an ASUS MB just rewrite from:
|Small teaser with muon on auto threads (makes 8 threads which don't use 100% of the CPU cycles).|
Uptime (secs),Mpts in file,Estimate kpts/sec
tomaz, thanks for the article. It gave me some more insight into those voltages. I use the MSI Eclipse because it has a better slot layout for when I want to add a second video card. I also need a third PCI-E slot for a RAID controller. Quite a nice board.
|Wow! :ekk:, those i7s are just awesome at DPAD!|
You might well find that running 2 clients on 4 threads is the fastest combination.
As for my rig I've yet to find the max stable speed for my DDR2 1066 RAM.
Also trying to re-benchmark my old Sempron64 3100 @2.5GHz before I retire it. This current client's infrequent flushing is very annoying! I've been running it since about 7.30am and it's STILL showing no change in file size!
|I guess it'll be time to redraw the graph when Haiya-Dragon's i7 benchmark is through?|
|Here it is!|
2 clients of 4 threads on a quadcore Core i7 965 running at 4.0 GHz
Very close to 3k kpts/sec
Uptime (secs),Mpts in file,Estimate kpts/sec
Uptime (secs),Mpts in file,Estimate kpts/sec
|That's odd, why's the 2nd client got a higher score?|
Awesome score anyway!
|Oh yeah, & new scores for my rigs:|
Q6600 @3.35 GHz, 372 MHz FSB, 495 MHz RAM, 5-5-5-18, tRD 7.
Average score over a 21.2 hr run was 1278.1 Kpts/s, ~1% faster than the previous best.
Sempron64 3100 @2.5GHz, 278 MHz FSB, 227.5 MHz RAM, 3-3-3-11. Same setup as before, just benchmarked with the current client v4.44d.
Uptime (secs),Mpts in file,Estimate kpts/sec
Ignoring the 1st line, that's an average of 236 Kpt/s. Oh yeah, & despite that being a 15.6 hr run, those were the only results shown!!
|I hope that's still valid? Is it the number of lines that counts for a stable score, or the hours run?|
|Well I'm getting varying results from the benchmark. Earlier I got 1278 Kpt/s from my Q6600 @3.35 GHz, 372:495 FSB:RAM; today I got 1256.4 Kpt/s from the same setup. I know that's only ~2% less, but I was hoping it would be more accurate so I could measure the changes I'm making.|
Is that sort of variance normal?
Oh btw, previously I called my Sempron a 'Sempron64 3100' because it can do 64bit; however, I noticed the other day that 'Sempron64' is the title given to AM2 Semprons. Mine isn't AM2, it's S754.
|Forgot new benchmark.|
Q6600 @3.34 GHz, 417 MHz FSB, 523 MHz RAM, 5-5-5-18, tRD 8.
Average score was 1272.9 Kpts/s (ignoring the 1st 3 lines).
I may well tweak some more, but the score isn't going to go up by much, so count this as a score for the next graph.
Oh & Stephen, throw out the scores from previous clients.
|I've built another system and "just for testing" it runs DPAD.|
It's an AMD Phenom II 920 @ 3360MHz (14x240).
Uptime (secs),Mpts in file,Estimate kpts/sec
Total average 1600 kpts/s, but the first part was while I was working on it (installing software etc). Averaging the latter part, 1620 kpts/s is a better indication of its speed.
|Cool, UC. I was just about to ask if somebody can bench a Phenom II!|
It seems like AMD is catching up a bit... Your machine has 120 kpts/Mclock/core, which is very close to the i7 (128) with HT off. But HT does miracles on the i7, and with HT on kpts/Mclock/core jumps to 175+!
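For anyone who wants to reproduce these per-clock figures, here's a rough sketch. It assumes "kpts/Mclock/core" means kpts/s per GHz per core, which is my reading of the numbers in this thread:

```python
# Hypothetical helper: normalise a DPAD score to one core at 1 GHz.
# The unit "kpts/Mclock/core" is assumed to mean kpts/s per GHz per core.

def kpts_per_gclock_per_core(total_kpts_s, cores, clock_mhz):
    """Throughput per core, per GHz of clock."""
    per_core = total_kpts_s / cores
    return per_core / (clock_mhz / 1000.0)

# Phenom II 920: 4 cores @ 3360 MHz, ~1620 kpts/s total -> ~120
phenom = kpts_per_gclock_per_core(1620, 4, 3360)

# i7 965 with HT: 4 cores @ 4000 MHz, ~3000 kpts/s total -> ~187
i7_ht = kpts_per_gclock_per_core(3000, 4, 4000)
```

The 175+ figure quoted above presumably comes from a measured total a bit below the round 3000 kpts/s teaser.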
|Haiya-Dragon, did you burn your 965 or what?|
According to various articles it should go to 4.5 GHz easily; there are reports of 5 and even 5.5 GHz (on air).
Nice score! It smokes my Q6600 at virtually the same speed!
Btw what are your RAM specs & timings?
The RAM timings are OCZ-standard (5,5,5...), I don't know exactly, but the first 3 are the most influential (I thought).
Unfortunately, the machine isn't mine, so after the weekend it has to go to its new owner. Maybe the new owner wants to keep it running DPAD. The previous Phenom system I built still runs DPAD (not 24/7, but at least 6 hours a day) at its new owner's.
|Aww, pity it's not your own rig |
Btw it's 23% faster than my Q6600 & that's only running 20MHz slower than the PhII.
|Maybe I'll build a real new system of my own (just having an old dual Athlon MP which doesn't work anymore, and using a laptop with a T5500 CPU 24/7).|
I want a new dual socket system. Maybe a lower speed Intel Xeon based on the new i7 technology. Undervolting the system to the max, I mean to the min... and it's ready for 24/7.
|Why would you want to under-volt it??|
|At default speed the cpu runs at a certain voltage, let's say 1.4V. Then I lower the voltage until it just runs stable (it has to be stable of course). Because P=U*U/R (P=power, U=voltage, R=resistance), the cpu uses much less power but runs at the same speed. This doesn't only result in a lower bill, but also less noise from the fans.|
Even when I overclock the machine I will try to use as low a voltage as possible.
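As a rough illustration of the P=U*U/R point above (CPU dynamic power actually scales more like C*V^2*f, but the voltage-squared term is the same either way), here's the saving the T5500 numbers from this thread imply:

```python
# Voltage-squared power scaling: fraction of the original CPU power left
# after undervolting at the same clock speed. This covers only the
# dynamic power term, so real-world savings will differ somewhat.

def power_ratio(v_old, v_new):
    """Power at v_new as a fraction of power at v_old, same clock."""
    return (v_new / v_old) ** 2

# T5500 from the thread: 1.30 V stock down to 1.06 V undervolted.
ratio = power_ratio(1.30, 1.06)        # ~0.66
saving_pct = (1.0 - ratio) * 100.0     # roughly a third of CPU power saved
```

So a seemingly modest 0.24 V drop cuts roughly a third off the CPU's power draw at the same speed, which is why undervolters bother.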
|Yea I know what under-volting is & that less voltage equals less power used, I just wondered why you did that rather than o/cing to the best MHz with the default vcore, especially being a DCer.|
I doubt you'll be able to lower the voltage much, but I guess any saving is useful.
What sort of vcore drops have you been able to do on previous systems?
|Well, my current system is a laptop (T5500 cpu) and normally runs at 1.3V. But it now runs at 1.06V (1.66GHz full load). The Phenom II normally runs at 1.35V; the lowest stable voltage it could run at 2.8GHz full load was 1.175V.|
|I just opened my new Q6600 system today, and am benchmarking now (also heating it up some, to see the noise). You'll never see me overclocking though, because I know better.|
Seriously, I value my equipment too much to drastically shorten its lifespan, and introduce calculation errors, for a performance boost (which is what happens with OCing).
The better way to improve performance is to reduce the code-load. Efficient, sensible coding is the key. I've used systems around 1 GHz for the last 9 years; at the same time I used a 1 MHz (approx) system constantly since 1986, only stopping because I moved stateside, and it's got an integrated PSU.
Software conspires to make computers just as fast now as in the past. While hardware tends to obey Moore's law, software tends to obey a negative, or inverse, Moore's law: software HALVES in speed on the same hardware every 18 months.
|I don't get your philosophy about not overclocking because of the software getting less efficient every 18 months. Bad software needs a fast cpu, so overclock the thing.|
Also, overclocking does not affect the lifespan of a cpu; heat does (as with almost everything, heat is the killer). OK, overclocking results in higher temperatures, but effective overclocking needs lower temperatures, so overclockers try to lower the temperatures. Most overclockers push their system until it's getting unstable, then they lower the clock a bit, resulting in a perfectly stable and fast(er) system.
And if all this overclocking kills your cpu earlier, so be it. Instead of a 25 year lifespan the cpu will "only" work for 10-15 years. By that time, the cpu of a "gameboy" is as fast as your 10 year old desktop.
|"Also, overclocking does not affect the lifespan of a cpu, heat does (as with almost everything, heat is the killer)."|
Better go tell that to the chip fabs. Sorry, but our survey says *eh EH*.
It's hard to explain, but there are two things that can kill a chip, and heat's only one. The other is a short developing, and that can happen via 'electron migration'. It's hard to explain, and it's been more than ten years since it was explained to me, so I don't remember how it was done, but let's see...
Ever seen a river with a stagnant loop (sometimes called an oxbow lake)? At one time it meandered in an S shape, with big loops, and the banks on the outside got more and more worn, right? At some point a short occurs. Here's a graphical illustration - http://www.geocaching.com/seek/cache_details.aspx?wp=gcqv5m - but bear in mind, this is just an illustration. A similar sort of thing happens with electronics, where electrons migrate through circuit pathways, and eventually create a short into another area. Overclocking makes this a lot more likely to happen.
I don't say this with no basis either; part of my degree covered these topics, and I spent an instructive time at ARM (www.arm.com) at their Blackburn location, working on just this thing. I can also tell you that no matter how much you cool it, even if you keep it at 16C while running (thus taking heat out of the equation), overclocked chips will die.
It should also be noted that, due to 'minimum feature size', no two chips are ever identical. Sometimes, adjacent chips will be rated differently, because the flaws or differences in their makeup, while extremely minor, make migration more likely on one, so it gets a lower rating. That's just how electronics is. It's also why, for instance, resistors are banded ±5% or 10%: if we can't even accurately and consistently make simple resistors, how much hope is there for CPUs?
As far as software goes, I prefer to make software faster, streamline it, and set it up better, rather than damage hardware. My main system (until yesterday) used RAM chips that have been in near constant use for 10 years, switched between two mobos (one an MSI-based dual P3 system, and the other a 1 GHz Athlon-based Compaq). Both run just fine, but the lack of USB2 ports, and a need for high def video encoding and decoding (and the associated firewire ports), meant a new system was essential. (You're looking at someone that used the same computer for his databases from 1987 to 2001, mainly because going to a modern system and database software would be no quicker.)
|I was familiar with the electron-migration issue; I thought that problem was also what made Intel and AMD produce on SOI or strained silicon (correct me if I'm wrong). I just didn't realise that electron-migration could "kill" cpu's; I thought it just made them more unstable at certain clock rates.|
But still, software is what you get; you can only decide to make your hardware faster. Yes, I know that you can tweak your software within some limits, but that's it.
You also know that a lot of cpu's are clocked at lower speeds by Intel/AMD, or even have cores disabled on purpose (AMD does this with the triple cores). So there's room for a little overclock while still having a perfectly stable system.
|"You also know that a lot of cpu's are clocked at lower speed by Intel/AMD, or even disable cores on purpose (AMD does this with the triple cores)."|
yeah, I know, and I know why. I even said why in my last post.
"Sometimes, adjacent chips will be rated differently, because the flaws or differences in their makeup, while extremely minor, make migration more likely on one, so it gets a lower rating."
They have a certain level of reliability they aim for, and if a chip doesn't make the reliability spec, they lower the rating until it does. On rare occasions though, they just don't have enough lower-rated chips, so they rate good chips lower; this is the only time it's 'safe' to overclock, and then only to its original rating.
Also, what looks like a 'perfectly stable' system to you can often not be. Years and years ago, I was helping out with distributed.net (I was one of the ones that pointed out the flaw with their OGR-25 implementation, leading to their second run), and when RC5-64 was at an end, the key wasn't found. One of the most likely reasons was that the key, when issued for the first time, went to an overclocked machine that mis-calculated, and sent back a non-decrypted answer. There was also a number 1 design a few lattices ago that no-one could come near. At the time, I voiced to Stephen an observation that it probably came from an OC'd machine.
Short of using a chip analysis machine, you can't tell how stable it is. Observational evidence is impossible to judge from, unless you do a lot of calculations with known results. (This, Stephen, is another point for a set benchmarking subsystem.)
|K'Tetch, I think you will find that cosmic radiation induced bit-flip errors would also explain erroneous results (we don't all run servers/workstations with ECC memory).|
>>>Seriously, I value my equipment too much to drastically shorten its lifespan, and introduce calculation errors, for a performance boost (which is what happened with OCing)<<<
Overclocking does NOT drastically shorten lifespan! lol, as long as excessive voltages are avoided (typically, & approximately, a max of 10% over default) and temperatures are kept within safe limits. I have a 9 1/2 yr old Celeron 366 o/ced to 550MHz & it still works & is still stable.
Calculation errors are sought out & eliminated by extensive stress testing; this is fairly basic overclocking knowledge btw.
>>>Software conspires to make computers just as fast now, as in the past<<<
Lol, well this might be true for MS OSs, but seriously, for games at least, though the h/w requirements definitely go up over time, so does the visual quality & the (potential) gameplay. And you of all people should know that DC benefits from faster h/w. I do agree though, programs should still be efficient.
>>>It's hard to explain, but there's two things that can kill a chip, and heats only one. The other is a short developing, and that can happen via 'electron migration'. It's hard to explain, and It's been more than ten years since it was explained to be, so i don't remember how it was done so, but lets see...<<<
No, there's 3, voltage plays a significant role in electro-migration too.
>>> I can also tell you that no matter how much you cool it, even if you keep it to 16C while running (thus taking the heat from the equasion) overclocked chips will die.<<<
So will non-overclocked chips; it's just a matter of when. It is true that a higher clock speed will shorten the lifespan, as will higher voltage & higher temps (which are more controllable & can be lowered), but bear in mind there is a very high tolerance range, so that a CPU at default clock speeds in good conditions will f-a-r outlive its useful life.
Overclocking does eat into that life, but as UC said, even if the CPU 'only' lasts 10yrs (or even 7yrs) it has very limited usage by then, & also 2nd hand replacements will be dirt cheap by that point (assuming you really needed to fix an old PC).
>>>this is the only time its 'safe' to overclock, and then only to its original rating.<<<<
Utter rubbish; if you want to play it ultra safe & overclock you can: just simply don't raise the vcore at all & get a HSF which keeps your temps below what the retail HSF can manage, & that will be a perfectly safe o/c for a very long time.
As far as DC goes, maybe early versions of RC5 were susceptible to h/w errors (of which most would likely be down to unstable overclocks, but ALSO down to failing h/w at default speeds) & SETI classic certainly was, but I can tell you that SETI BOINC has triple checking by giving out any one WU 3 times; the result isn't counted until it is verified by at least 2 different machines.
F@H also has some form of internal checking: if a WU is corrupted by h/w errors it terminates the WU early (so-called EUEs). In fact F@H is so stringent that in itself it makes a good stress testing program.
As far as DPAD goes, I take it you haven't seen my recent thread about that. In that thread I'm asking what sort of error checking occurs; here is Stephen's answer:-
>>>If it's a rare single bit error, it will only affect one particle in the beam and produce a very small yield error. The most dangerous error condition would be one where it produced a false high yield, ----> but these are rerun with the quarantine mechanism and the highest and lowest of 5 runs discarded.<<<
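To make the quoted quarantine mechanism concrete, here's a toy sketch. The real check is server-side in DPAD; the function name and numbers here are made up purely for illustration:

```python
# Toy model of "rerun 5 times, discard the highest and lowest of 5 runs":
# one bogus high yield from a flaky machine barely moves the result.

def quarantined_yield(yields):
    """Mean of the middle three of exactly five rerun yields."""
    assert len(yields) == 5
    middle = sorted(yields)[1:-1]   # drop the single min and max
    return sum(middle) / len(middle)

# Four honest runs plus one false high (9.999) from a bad machine:
runs = [2.031, 2.029, 9.999, 2.030, 2.028]
print(quarantined_yield(runs))      # ~2.030, the false high is discarded
```

Which is exactly why a single miscalculating overclocked client can't poison a design's yield on its own.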
>>>At the time, i voiced to stephen an observation that it probably came from an OC'd machine.<<<
Pure speculation, could be right, could be wrong.
It seems that a significant proportion of your argument against overclocking is based on many misunderstandings, despite your technical education.
>>>Also, what looks like a 'perfectly stable' system to you, can often not be.<<<
This is where extensive stress testing comes in, & then close observation afterwards. If any programs have problems then simply drop the CPU's speed or raise vcore; most (if not all now) DC projects have error checking, so the odd error doesn't cause any real problems.
Btw I'm not trying to get you to o/c, I'm just trying to get the right picture painted here.
|Also non-overclocked cpu's make calculation mistakes, but they occur in very limited conditions. Sometimes they make too many mistakes and then need to go back to Intel/AMD, just like the recent TLB-bug in i7-cpu's. Years ago, you could buy Pentium key chain holders because they had a serious calculation bug.|
The default clockspeed of a cpu is a safe speed in (almost) every condition. The cpu needs to work in extreme cold or heat, because Intel/AMD want satisfied buyers all around the world. The cpu's need to work in laptops/thin clients with rubbish cooling, resulting in temperatures of 70 degrees Celsius (or sometimes higher). This means that cpu's have reasonable margins, so in optimal conditions they can run faster and be as stable as a cpu running in not-so-optimal conditions like a laptop/thin client.
Every product has its margins; the expert knows by experiment where the "still safe" limit is. Just like chip-tuning of cars, or making structural changes to a house.
|The TLB bug was in AMD's Phenom, not Intel's i7.|
|Sorry, AMD of course. I read an article about problems of the i7 with TLB. But those were only rumours, because Intel had already solved those problems with the Core2 cpu's.|
|"As far as DC goes, maybe early versions of RC5 were suseptable to h/w errors (of which most would likely be down to unstable overclocks ,but ALSO down to failing h/w at default speeds) & SETI classic certainly was, but I can tell you that SETI BOINC has triple checking by giving out any 1WU 3 times, the result isn't counted until it is verefied by at least 2 different machines."|
The classic SETI@home did that too, pre-BOINC. In fact, I remember the 'month of 30', when for a month, right when the project started, they only gave the same 30 work units out to over 100,000 participants. I think I did each unit at least twice.
"No, there's 3, voltage plays a significant role in electro-migration too."
Yes, voltage increases the chances of permanent electron migration AND increases heat, but unless you start ramping the voltage WAY up, it's not going to do it on its own; it will be a contributing factor to the development of BOTH the factors listed that kill a chip.
"Overclocking does eat into that life, but as UC said even if the CPU 'only' lasts 10yrs (or even 7yrs) it has a very limited useage by then"
I've just pushed back a system I've been using for the last 9 years. It was my main system (and you can see it on the earlier graphs - the dual P3-550). Ran XP great, ran everything fine, maybe not the most modern games, but since 95% are crap, that's no loss. I've even been working on repairing my Duron 850-based laptop as a cheap writing machine, even though yesterday I just bought a QL-62 based system (got a really good deal - $320). They've only got a limited usage if you're REALLY bad about system setup, or mainly care about games (especially the latest, greatest version of Quake).
"Calculation errors are sought out & eliminated by extensive stress testing, this is fairly basic overclocking knowledge btw"
Yeah, I know. When I was still active in the field, there was a project that asked for people with stable overclocks to test their systems. It used a very expensive chip analyser which took the chip and emulated the motherboard, giving pre-programmed inputs and seeing the output. Very nice, and it found that in maybe 40% of the 'stable, extensively stress tested' cases, the chips were producing errors that went unspotted, mainly because they were in areas that didn't get used often, or had redundancy, or were intermittent.
To put it in perspective, the 'extensive stress testing' is like checking a car's running ok by listening to the sound of the engine - you can only detect the most obvious errors. What I'm talking about is like plugging in a diagnostics box, and sticking a gas probe up the exhaust and actually measuring, not going by humanly perceptible observations. Of course, most overclocking is more of a placebo effect anyway; even a 10% speedup is usually unnoticeable unless you're specifically watching the workrate (or framerate etc).
|OK, first time I've added benchmark data since the thread started.|
Q6600-based HP 6535c, factory stock, on Vista 64
124697 , 106213.5 , 645.15
126197 , 107349.2 , 647.61
126497 , 107394.7 , 645.44
127697 , 108222.7 , 646.20
130097 , 109924.0 , 648.29
130397 , 109948.2 , 645.94
132798 , 111728.7 , 649.02
134598 , 112877.4 , 648.76
added in some spaces to make it a little more readable.
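If anyone wants a single run-averaged figure out of those lines rather than eyeballing the estimate column, one simple approach is the Mpts gained between the first and last samples over the elapsed uptime. Column meanings are assumed from the "Uptime (secs), Mpts in file, Estimate kpts/sec" header used in this thread, and note the interval rate needn't match the client's own running estimate:

```python
# Interval rate from "Uptime (secs), Mpts in file" pairs: points gained
# between the first and last samples, divided by the elapsed time.

def run_average_kpts(samples):
    """samples: list of (uptime_secs, mpts_in_file) tuples, oldest first."""
    t0, m0 = samples[0]
    t1, m1 = samples[-1]
    return (m1 - m0) * 1000.0 / (t1 - t0)   # Mpts -> kpts

# First and last rows of the Q6600 table above:
rate = run_average_kpts([(124697, 106213.5), (134598, 112877.4)])
# ~673 kpts/s over the logged window
```

Working from the endpoints like this ignores whatever the client did before the first logged sample, which is why it can come out higher than the per-line estimates.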
I'll give my new QL-62 based laptop a try in the next few days.
|actually, the next few entries today have been higher - 134598,112877.4,648.76|
|K'Tetch, you know there are specially coded stress tests for overclocking that calculate algorithms with a known result to a very high accuracy. If the result is out... the CPU gives errors. It's very simple. You seem to be talking about equipment that is very old; maybe that is where most of your experience lies. The placebo effect is irrelevant: if you overclock while running a CPU-bound application, you get more results and faster execution time. Of course this only makes sense. |
As for your anecdotal example of stress testing the 'stable' cpus: it's likely that many errors are a result of the volumes of errata that come with every CPU.
|I suppose that's why Intel now actively support overclocking sites and events.|
|Intel supporting OC events... yeah, because I can see NO reason at all they wouldn't. After all, if a chip blows: 'not our fault, you did it, and went outside the spec'. It's just more business for them.|
The software you talk about has been around for a long time as well, longer than chip analysis machines even. While it's more 'detailed' than "well, it seems to run GIMPS/superPi/muon1 ok", it's still less detailed than plugging a chip into a system that's purpose-designed to systematically test a cpu, by checking each bit of the chip in turn.
I've heard most of these pro-overclock arguments before. Most are the computing equivalent of "eat your crusts, it'll make your hair grow curly".
|Yes, Intel support overclocking events and are getting heavily involved in the enthusiast market. AMD were always somewhat involved, and sponsored the recent liquid helium AMD effort with Phenom at CES this year.|
It's rare that a CPU blows. It takes a death wish on the CPU owner's behalf, or someone incredibly stupid. Incredibly stupid. The only other times CPUs can 'blow' unexpectedly is with ancient CPUs, which it seems you have a lot of experience with. Like the Northwood 'Sudden Death' for example.
You can tell a good overclocking CPU. Many of the out-of-spec CPUs that the manufacturers throw away are perfect. The hot ones with high leakage currents are perfect; they clock higher with very good cooling.
I've heard most of these anti-overclock arguments before. Most are the computing equivalent of "Don't get married before you're forty".
|Yes riptide, because the laws of physics and the materials used to make CPUs have radically changed in the last 10 years. Oh no, wait, it's the same materials, the same physics, and smaller circuits, but for some reason you believe the principles just don't apply.|
I ain't going to change your mind, and you're not going to convince me that you know better than a lot of really experienced CPU designers. Or, with your "Many of the out of spec CPU's that the Manufactureres throw away are perfect", you apparently know better than the very people that designed the things, which makes me wonder how many years YOU have been designing CPUs, and what your formal education in this is. Or hell, let's even just start with how many fab plants you've even BEEN in.
|Listen buddy. The forum I'm with is laced with everybody from fab engineers to system architects. I think I know what I'm talking about. I've picked a few brains along the way, along with folk I personally know from the local Intel fab here (24 / 24-2). You may have the olde ARM tech blinkers on. Can't help there. The principles apply; your frame of reference does not.|
|I think K'Tetch has the wrong end of the stick here too. As long as the cooling is sufficient to keep the chip temperature down (including hotspots), and the time-critical paths on the chip can complete before the next clock cycle, overclocking shouldn't risk "burning out" your chip.|
Wires can quite happily carry signals of extremely high frequency; it's not the frequency that kills the chips, it's heat.