Benchmarking with Mpts thread
[TA]Assimilator1
2009-01-08 20:07:05
New benchmark, 13hr run (I think the previous was 24hrs).
Again I won't post the whole list, as I don't want this score in the chart either.

Q6600 @3.35 GHz, 372 MHz FSB, 466.5 MHz RAM, 5-5-5, tRD 10.
(Btw overall CAS latency time is almost identical).

1213 Kpt/s (run averaged), that's 4.5% faster than before.
Stephen Brooks
2009-01-09 10:05:58
An overnight run with the Atom N270 1.6GHz resulted in:

Uptime (secs),Mpts in file,Estimate kpts/sec
224564,2654756.1,0.00
233878,2655692.5,100.53

...that's it!  Maybe I'll need to leave it at home over a whole weekend to get a reliable reading...
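
The 'Estimate kpts/sec' column appears to be just the Mpts gained since the first sample, divided by the elapsed uptime (with Mpts converted to kpts).  A minimal Python sketch, assuming that interpretation (the actual client code isn't shown here):

def estimate_kpts_per_sec(samples):
    # samples: (uptime_secs, mpts_in_file) pairs; the estimate is measured
    # relative to the first sample, which is why the first row reads 0.00
    t0, m0 = samples[0]
    return [(m - m0) * 1000.0 / (t - t0) for t, m in samples[1:]]

# The Atom N270 rows above reproduce the logged 100.53:
print(estimate_kpts_per_sec([(224564, 2654756.1), (233878, 2655692.5)]))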
tomaz
2009-01-09 11:16:56
Not a weekend but a week
[TA]Assimilator1
2009-01-10 13:32:05
lol, well looking at an old chart it's about as fast as an Athlon 1.67 GHz.

Btw I never did find out how 'fast' my Celeron 366 @566 was, because even though the results file was updated, it reset to zero after I restarted it.  Any ideas as to why it would do that?

On my Q6600 rig I've managed to lower tRD to 7; currently stability testing with OCCT, after that I'll do another DPAD benchmark.
Stephen Brooks
2009-01-12 12:34:43
Here's a whole weekend's worth.

Uptime (secs),Mpts in file,Estimate kpts/sec
308132,2661672.8,0.00
319250,2662874.8,108.12
320452,2663001.6,107.86
372436,2668273.8,102.65
427123,2674272.9,105.89
437940,2675474.4,106.32
480607,2680163.8,107.21
491725,2681365.9,107.27


Being as fast as an Athlon 1.67GHz isn't bad if I just want the machine to run XP - doing Muon1 in the background is of course a bonus.
[TA]Assimilator1
2009-01-12 19:37:20


Got my RAM speed wrong earlier, it should have been 465 MHz.

This was a 19hr run; again I won't post the whole list yet as I've got further tweaking to do.  Gonna try boosting RAM speed some more, & see if I can run tRD 6 without boosting NBv much.
Score below is with tRD lowered from 10 to 7. (It wouldn't boot with tRD 6 on default NBv.)

Q6600 @3.35 GHz, 372 MHz FSB, 465 MHz RAM, 5-5-5-15, tRD 7.

1267 Kpt/s (run averaged), that's again a 4.5% improvement.
[TA]Assimilator1
2009-01-16 19:53:48
Well this is weird, I've tried the following settings thinking the higher FSB would boost speeds, but it didn't.

Q6600 @3.34 GHz, 417 MHz FSB, 834 MHz RAM 4-4-4-15 timings, tRD 8.

After a 21hr run it scored 1258 Kpts/s, worse than the 372:930 settings!
I guess the memory bandwidth counts as well as the FSB speed.
Well I've got some DDR2 1066 RAM to try now, so more benchmarks to come with that.
[TA]Assimilator1
2009-01-16 19:55:22
Oops, I was using 2 different RAM speed units, I meant 417 MHz RAM for the above.
Haiya-Dragon
2009-01-18 22:42:30
First tests of the i7 at 4 GHz are quite nice, about 750 kpts/sec per core, which should lead to approx 3000 kpts/sec altogether!!
Due to less than 100% stability I fiddled around with some multipliers, so I'll test it again once I know it's stable.
tomaz
2009-01-19 01:21:39
Cool.  RAM is crucial I think.  If you have 1600 MHz it should be no problem.
If you have an ASUS MB, just copy the settings from:
http://www.bit-tech.net/hardware/cpus/2008/11/06/overclocking-intel-core-i7-920/1
Haiya-Dragon
2009-01-19 16:43:10
Small teaser with Muon1 on auto threads (it makes 8 threads, which don't use 100% of the CPU cycles).
Uptime (secs),Mpts in file,Estimate kpts/sec
32598,100033.5,2843.41
32898,101277.6,2856.31
34998,107468.7,2862.26
37098,113474.3,2862.10
37698,114636.1,2846.27
37998,115518.8,2847.08
38298,116729.5,2857.07

tomaz, thanks for the article.  It gave me some more insight into those voltages.  I use the MSI Eclipse because it has a better slot layout for when I want to add a second video card.  I also need a third PCI-E slot for a RAID controller.  Quite a nice board.
[TA]Assimilator1
2009-01-19 18:40:52
Wow! Those i7s are just awesome at DPAD!
You might well find that running 2 clients on 4 threads each is the fastest combination.

As for my rig I've yet to find the max stable speed for my DDR2 1066 RAM.

Also I'm trying to re-benchmark my old Sempron64 3100 @2.5GHz before I retire it; this current client's infrequent flushing is very annoying! I've been running it since about 7.30am and it's STILL showing no change in file size!
Stephen Brooks
2009-01-21 09:55:05
I guess it'll be time to redraw the graph when Haiya-Dragon's i7 benchmark is through?
Haiya-Dragon
2009-01-22 16:49:40
Here it is!
2 clients of 4 threads each on a quad-core Core i7 965 running at 4.0 GHz.

Very close to 3000 kpts/sec.

Client 1:
Uptime (secs),Mpts in file,Estimate kpts/sec
1831,87374.3,0.00
2731,88563.5,1321.22
3331,89732.1,1571.73
4231,91079.9,1543.87
4531,91111.6,1384.06
5131,92253.4,1478.39
5431,92450.3,1409.88
6031,93594.7,1480.92
6331,93683.2,1401.86
7231,94972.8,1407.01
7831,95992.6,1436.26
8731,97207.2,1424.94
9331,98174.9,1439.96
9931,99019.6,1437.57
10231,99379.3,1429.04
11131,100683.8,1431.01
12031,101876.6,1421.67
12931,103112.4,1417.73
17432,109528.7,1420.03
18032,110586.5,1432.73
22532,117067.7,1434.34
23432,118358.8,1434.35
23732,118744.3,1432.30
24632,120035.5,1432.39
25533,121356.6,1433.73
25833,121410.9,1418.07
26733,122713.6,1419.13
27333,123942.6,1433.93
27633,123995.7,1419.31
28533,125377.1,1423.21
29133,126450.3,1431.23
30033,127736.8,1431.17
30333,127791.8,1418.04
30933,128912.5,1427.31
31833,130154.2,1425.88
32733,131427.6,1425.55
33633,132823.4,1429.10
34533,134244.2,1433.21

Client 2:
Uptime (secs),Mpts in file,Estimate kpts/sec
2431,618644.5,0.00
3031,619498.2,1422.71
3931,620557.5,1275.22
4531,621769.4,1487.92
8731,627776.5,1449.40
12632,633787.2,1484.45
12932,634228.5,1484.06
13532,635458.5,1514.64
17732,641458.7,1491.00
18332,642513.4,1501.06
18932,643549.9,1509.29
22833,649560.6,1515.36
23733,650841.2,1511.45
27933,657281.5,1515.05
28233,657658.1,1512.02
32433,664011.0,1512.09
33033,664886.8,1511.06
33333,664978.6,1499.36
[TA]Assimilator1
2009-01-22 19:36:54
That's odd, why's the 2nd client got a higher score?

Awesome score anyway! 
[TA]Assimilator1
2009-01-22 20:04:14
Oh yeah, & new scores for my rigs:

Q6600 @3.35 GHz, 372 MHz FSB, 495 MHz RAM, 5-5-5-18, tRD 7.

Average score over a 21.2 hr run was 1278.1 Kpts/s, ~1% faster than the previous best.

Sempron64 3100 @2.5GHz, 278 MHz FSB, 227.5 MHz RAM, 3-3-3-11. Same setup as before, just benchmarked with the current client v4.44d.

Uptime (secs),Mpts in file,Estimate kpts/sec
25259,796120.9,0.00
50819,802131.1,235.14
51420,802273.8,235.19
56231,803483.3,237.71

Ignoring the 1st line, that's an average of 236 Kpt/s. Oh yeah, & despite that being a 15.6 hr run, those were the only results shown!!
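
The 'run averaged' scores quoted in this thread appear to be the mean of the estimate column with the warm-up lines dropped.  A rough Python sketch, assuming that reading:

def run_average(estimates, skip=1):
    # drop the first `skip` lines (the 0.00 baseline and any
    # not-yet-settled entries), then average the rest
    tail = estimates[skip:]
    return sum(tail) / len(tail)

print(run_average([0.00, 235.14, 235.19, 237.71]))  # ~236.0, as quoted above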
[TA]Assimilator1
2009-01-22 20:05:11
I hope that's still valid?  Is it the number of lines that counts for a stable score, or the hours run?
[TA]Assimilator1
2009-01-29 19:34:24
Well I'm getting varying results from the benchmark: earlier I got 1278 Kpt/s from my Q6600 @3.35 GHz, 372:495 FSB:RAM, today I got 1256.4 Kpt/s from the same setup. I know that's only ~2% less, but I was hoping it would be more accurate so I could measure the changes I'm making.

Is that sort of variance normal?


Oh btw, previously I called my Sempron a 'Sempron64 3100' because it can do 64-bit; however I noticed the other day that 'Sempron64' is the name given to AM2 Semprons, and mine isn't one, it's S754.
[TA]Assimilator1
2009-01-29 19:42:53
Forgot new benchmark.

Q6600 @3.34 GHz, 417 MHz FSB, 523 MHz RAM, 5-5-5-18, tRD 8.

Average score was 1272.9 Kpts/s (ignoring the 1st 3 lines).
I may well tweak some more, but the score isn't going to go up by much, so count this as a score for the next graph.

Oh & Stephen, throw out the scores from previous clients.

Uptime (secs),Mpts in file,Estimate kpts/sec
1219,16844363.6,0.00
2124,16845374.4,1117.23
3028,16846582.3,1226.16
3330,16847067.9,1281.02
4235,16848296.1,1303.97
9060,16854288.2,1265.72
9663,16855192.5,1282.40
10568,16856009.6,1245.70
11473,16857495.4,1280.69
16298,16863487.5,1268.25
16600,16863996.1,1276.45
17504,16865183.5,1278.45
17806,16865285.5,1261.35
22330,16871296.3,1275.79
23234,16872506.0,1278.31
24139,16873461.5,1269.54
28964,16879468.2,1265.25
29869,16880674.0,1267.38
30774,16881819.4,1267.34
31678,16883029.8,1269.43
36202,16889040.6,1277.10
41027,16895024.7,1272.62
41932,16896234.4,1274.05
42535,16897181.7,1278.38
47361,16903192.5,1274.96
52186,16909244.9,1273.01
53091,16910462.3,1274.27
53392,16910685.4,1271.18
53995,16911433.9,1270.84
54599,16912223.7,1271.27
55202,16913138.1,1274.01
56107,16914349.3,1275.07
57011,16915551.1,1275.94
61837,16921543.2,1273.22
62741,16922742.9,1274.00
63344,16923540.3,1274.46
63948,16924055.6,1270.42
64551,16925034.9,1273.79
69376,16931045.7,1271.80
69979,16931959.9,1273.94
70884,16933146.6,1274.43
72090,16934349.8,1269.71
72694,16935189.8,1270.75
72995,16935706.5,1272.61
73900,16936916.7,1273.42
74805,16937856.8,1270.53
75709,16939013.6,1270.63
76313,16939873.7,1271.88
76614,16940177.8,1270.83
77519,16941389.2,1271.63
77821,16941840.0,1272.51
78725,16943047.0,1273.23
Universal Creations
2009-02-05 21:40:57
I've built another system and "just for testing" it runs DPAD.
It's an AMD Phenom II 920 @ 3360 MHz (14x240).

Uptime (secs),Mpts in file,Estimate kpts/sec
664,19263.0,0.00
4264,25270.5,1668.74
4864,25914.3,1583.63
6305,27067.7,1383.50
9905,33072.7,1494.35
13505,39088.5,1543.89
17105,45113.0,1572.26
20705,51105.1,1588.82
21305,51958.1,1583.96
24905,57952.5,1596.01
28505,63963.3,1605.54
32405,70042.1,1599.78
32705,70865.0,1610.48
33005,70937.3,1597.78
33605,72076.3,1603.25
34206,73097.3,1605.01
34806,74158.7,1607.89
38406,80163.0,1613.61
39306,81342.9,1606.56
39606,82069.8,1612.85
43506,88095.0,1606.67
44106,89306.4,1612.36
44706,90517.0,1617.89
45306,91143.3,1610.17
46206,92571.1,1609.70
49806,98560.2,1613.65
50106,99317.4,1619.18
53706,105309.5,1622.25
57606,111320.3,1616.70
58206,112555.7,1621.31
61806,118569.9,1624.21
62706,119868.9,1621.59
63306,120993.3,1624.01
67206,127051.0,1619.86
67506,127796.0,1623.74
68106,128691.3,1622.57
69006,129902.7,1618.92

Total average 1600 kpts/sec, but the first part was while I was working on it (installing software etc).  Averaging the latter part, 1620 kpts/sec is a better indication of its speed.
tomaz
2009-02-05 22:22:57
Cool, UC.  I was just about to ask if somebody could bench a Phenom II!
It seems like AMD is catching up a bit... Your machine gets 120 kpts/Mclock/core, which is very close to the i7 (128) with HT off.  But HT works miracles on the i7, and with HT on, kpts/Mclock/core jumps to 175+!
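
That figure of merit works out numerically as throughput divided by clock speed and core count, i.e. kpts/s per GHz per core.  A quick Python sketch, assuming that definition:

def kpts_per_clock_per_core(kpts_per_sec, ghz, cores):
    # normalise throughput by clock and physical core count (HT threads not counted)
    return kpts_per_sec / (ghz * cores)

print(kpts_per_clock_per_core(1620, 3.36, 4))  # Phenom II 920: ~120
print(kpts_per_clock_per_core(2843, 4.0, 4))   # i7 965 with HT on: ~178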
tomaz
2009-02-05 22:29:15
Haiya-Dragon, did you burn your 965 or what?
According to various articles it should go to 4.5 easily; there are reports of 5 and even 5.5 GHz (on air).
[TA]Assimilator1
2009-02-06 20:00:14
UC
Nice score!  It smokes my Q6600 at virtually the same speed!

Btw what are your RAM specs & timings?
Universal Creations
2009-02-06 21:10:16
Thanks!

The RAM timings are OCZ standard (5,5,5....), I don't know exactly, but the first 3 are the most influential (I thought).
Unfortunately, the machine isn't mine, so after the weekend it has to go to its new owner.  Maybe the new owner will want to keep it running DPAD.  The previous Phenom system I built still runs DPAD (not 24/7, but at least 6 hours a day) at its new owner's.
[TA]Assimilator1
2009-02-11 19:52:28
Aww, pity it's not your own rig.
Btw it's 23% faster than my Q6600, & that's only running 20 MHz slower than the PhII.
Universal Creations
2009-02-11 20:30:51
Maybe I'll build a new real system of my own (I just have an old dual Athlon MP which doesn't work anymore, and I'm using a laptop with a T5500 CPU 24/7).
I want a new dual-socket system.  Maybe a lower-speed Intel Xeon based on the new i7 technology.  Undervolt the system to the max, I mean to the min... and it's ready for 24/7.
[TA]Assimilator1
2009-02-13 00:32:42
Why would you want to under-volt it??
Universal Creations
2009-02-13 14:46:27
At default speed the CPU runs at a certain voltage, let's say 1.4 V. Then I lower the voltage until it just runs stable (it has to be stable, of course).  Because P = U*U/R (P = power, U = voltage, R = resistance), the CPU uses much less power but runs at the same speed.  This not only results in a lower bill, but also less noise from the fans.
Even when I overclock the machine I will try to use as low a voltage as possible.
[TA]Assimilator1
2009-02-19 00:23:49
Yeah, I know what under-volting is & that less voltage equals less power used, I just wondered why you did that rather than o/cing to the best MHz at default vcore, especially being a DCer.
I doubt you'll be able to lower the voltage much, but I guess any saving is useful.
What sort of vcore drops have you been able to do on previous systems?
Universal Creations
2009-02-19 23:23:01
Well, my current system is a laptop (T5500 CPU) which normally runs at 1.3 V, but it now runs at 1.06 V (1.66 GHz full load).  The Phenom II normally runs at 1.35 V; the lowest stable voltage it could run at 2.8 GHz full load was 1.175 V.
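
Plugging those numbers into the quadratic voltage dependence from the post above (P ~ V^2 at a fixed clock; this ignores leakage, so it's only a rough estimate):

def power_ratio(v_new, v_old):
    # approximate dynamic-power ratio after an undervolt at fixed clock,
    # using the P ~ V^2 relation quoted earlier in the thread
    return (v_new / v_old) ** 2

print(power_ratio(1.06, 1.30))   # T5500: ~0.66, roughly a third less power
print(power_ratio(1.175, 1.35))  # Phenom II: ~0.76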
K`Tetch
2009-02-22 23:18:03
I just opened my new Q6600 system today, and am benchmarking now (also heating it up some, to see the noise).  You'll never see me overclocking though, because I know better.

Seriously, I value my equipment too much to drastically shorten its lifespan, and introduce calculation errors, for a performance boost (which is what happened with OCing).

The better way to improve performance is to reduce the code-load.  Efficient, sensible coding is the key.  I've used systems around 1 GHz for the last 9 years; at the same time I used a 1 MHz (approx) system constantly since 1986, only stopping because I moved stateside, and it's got an integrated PSU.

Software conspires to make computers just as fast now as in the past.  While hardware tends to obey Moore's law, software tends to obey a negative, or inverse, Moore's law: software HALVES in speed on the same hardware every 18 months.
Universal Creations
2009-02-22 23:40:54
I don't get your philosophy about not overclocking because the software gets less efficient every 18 months.  Bad software needs a fast CPU, so overclock the thing.

Also, overclocking does not affect the lifespan of a CPU, heat does (as with almost everything, heat is the killer).  OK, overclocking results in higher temperatures, but effective overclocking needs lower temperatures, so overclockers try to lower the temperatures.  Most overclockers push their system until it's getting unstable, then they lower the clock a bit, resulting in a perfectly stable and fast(er) system.
And if all this overclocking kills your CPU earlier, so be it.  Instead of a 25-year lifespan the CPU will "only" work for 10-15 years.  By that time, the CPU of a "Gameboy" will be as fast as your 10-year-old desktop.
K`Tetch
2009-02-23 16:27:33
"Also, overclocking does not affect the lifespan of a cpu, heat does (as with almost everything, heat is the killer)."

Better go tell that to chip fabs.  Sorry, but our survay says *eh EH"

It's hard to explain, but there's two things that can kill a chip, and heats only one.  The other is a short developing, and that can happen via 'electron migration'. It's hard to explain, and It's been more than ten years since it was explained to be, so i don't remember how it was done so, but lets see...

Ever seen a river with a stagnant loop (sometimes called an oxbow lake)?  At one time it meandered in an S shape, with big loops, and the banks on the outside got more and more worn, right?  At some point at short occurs.  here's a graphical illustration - http://www.geocaching.com/seek/cache_details.aspx?wp=gcqv5m - but bear in mind, this is just an illustration.  A simmilar sort of thing happens with electronics, where electrons migrate thtough circuit pathways, and eventually create a short into another area.  Overclocking makes this a lot more liekly to happen.

I don't say this with no basis either, part of my degree covered these topics, and i spent an instructive time at ARM (www.arm.com) at their blackburn location, working on just this thing.  I can also tell you that no matter how much you cool it, even if you keep it to 16C while running (thus taking the heat from the equasion) overclocked chips will die.

It should also be noted that, due to 'minimum feature size', no two chips are ever identical.  Sometimes, adjacent chips will be rated differently, because the flaws or differences in their makeup, while extremely minor, make migration more likely on one, so it gets a lower rating.  Thats just how electronics is.  It's also why, for instance, resistors are banded +-5% or 10%, because we can't even accurately and consistently make simple resistors.  how much hope is there for CPUs?.

As far as software goes, I prefer to make software faster, streamline, and set it up better, rather than damage hardware.  My main system (until yesterday) used ram chips that have been in near constant use for 10 years, switched between two mobos (one a MSI based dual P3 system, and the other a 1ghz athlon based compaq).  Both run just fine, but the lack of USB2 ports, and a need for high def video encoding and decoding (and the associated firewire ports) meant a new system was essential.  (yu're looking at someone, that used the same computer for his databases from 1987 to 2001, mainly because going to a modern system and database software, would be no quicker.
Universal Creations
2009-02-24 00:54:44
I was familiar with the electron-migration issue; I thought that problem was also what made Intel and AMD produce on SOI or strained silicon (correct me if I'm wrong).  I just didn't realise that electron migration could "kill" CPUs; I thought it just made them more unstable at certain clock rates.

But still, software is what you get; you can only decide to make your hardware faster.  Yes, I know that you can tweak your software within some limits, but that's it.

You also know that a lot of CPUs are clocked at a lower speed by Intel/AMD, or even have cores disabled on purpose (AMD does this with the triple-cores).  So there's room for a little overclock while still having a perfectly stable system.
K`Tetch
2009-02-25 15:16:15
"You also know that a lot of cpu's are clocked at lower speed by Intel/AMD, or even disable cores on purpose (AMD does this with the triple cores)."

yeah, I know, and I know why.  I even said why in my last post.
"Sometimes, adjacent chips will be rated differently, because the flaws or differences in their makeup, while extremely minor, make migration more likely on one, so it gets a lower rating."
They have a certain level of reliability they aim for, and if it doesn't make the relability spec, they lower the rating until it does.  On rare occasions though, they just don't have enough lower-rated chips, and rate them lower; this is the only time its 'safe' to overclock, and then only to its original rating.

Also, what looks like a 'perfectly stable' system to you, can often not be.  Years and years ago, I was helping out with distributed.net (I was one of the ones that pointed out the flaw with their OGR-25 implimentation, leading to their second run) and when RC5-64 was at end, the key wasn't found.  one of the most likely reasons was that the key, when issued for the first time, went to an overclocked machine, that mis-calculated, and sent back a non-decrypted answer.  There was also a number1 design a few lattices ago, that no-one could come near.  At the time, i voiced to stephen an observation that it probably came from an OC'd machine.

Short of using a chip analysis machine, you can't tell how stable it is.  Observational evidence is impossible to tell, unless you do a lot of calcualtions with known results.  (This, Stephen, is another point for a set benchmarking subsystem)
RGtx
2009-02-25 15:53:06
K'Tetch, I think you will find that cosmic-radiation-induced bit-flip errors would also explain erroneous results (we don't all run servers/workstations with ECC memory).
[TA]Assimilator1
2009-02-27 00:19:40
K`Tetch

>>>Seriously, I value my equipment too much to drastically shorten its lifespan, and introduce calculation errors, for a performance boost (which is what happened with OCing)<<<

Overclocking does NOT drastically shorten lifespan!  lol, as long as excessive voltages are avoided (typically, & approximately, a max of 10% over default) and temperatures are kept within safe limits.  I have a 9 1/2 yr old Celeron 366 o/ced to 550 MHz & it still works & is still stable.
Calculation errors are sought out & eliminated by extensive stress testing, this is fairly basic overclocking knowledge btw.

>>>Software conspires to make computers just as fast now as in the past<<<

Lol, well this might be true for MS OSs, but seriously, for games at least, though the h/w requirements definitely go up over time, so do the visual quality & the (potential) gameplay.  And you of all people should know that DC benefits from faster h/w.  I do agree though that programs should still be efficient.

>>>It's hard to explain, but there are two things that can kill a chip, and heat's only one.  The other is a short developing, and that can happen via 'electron migration'.  It's hard to explain, and it's been more than ten years since it was explained to me, so I don't remember how it was done, but let's see...<<<

No, there are 3; voltage plays a significant role in electro-migration too.

>>>I can also tell you that no matter how much you cool it, even if you keep it to 16C while running (thus taking the heat from the equation), overclocked chips will die.<<<

So will non-overclocked chips, it's just a matter of when.  It is true that a higher clock speed will shorten the lifespan, as will higher voltage & higher temps (which are more controllable & can be lowered), but bear in mind there is a very high tolerance range, so a CPU at default clock speeds in good conditions will f-a-r outlive its useful life.
Overclocking does eat into that life, but as UC said even if the CPU 'only' lasts 10yrs (or even 7yrs) it has a very limited usage by then, & also 2nd-hand replacements will be dirt cheap by that point (assuming you really needed to fix an old PC).

>>>this is the only time it's 'safe' to overclock, and then only to the chip's original rating.<<<

Utter rubbish; if you want to play it ultra-safe & overclock, you can: simply don't raise the vcore at all & get an HSF which keeps your temps below what the retail HSF can manage, & that will be a perfectly safe o/c for a very long time.

As far as DC goes, maybe early versions of RC5 were susceptible to h/w errors (of which most would likely be down to unstable overclocks, but ALSO down to failing h/w at default speeds), & SETI classic certainly was, but I can tell you that SETI BOINC has triple checking by giving out any 1 WU 3 times; the result isn't counted until it is verified by at least 2 different machines.

F@H also has some form of internal checking: if a WU is corrupted by h/w errors it terminates the WU early (so-called EUEs); in fact F@H is so stringent that it in itself makes a good stress-testing program.
As far as DPAD goes, I take it you haven't seen my recent thread about that; in that thread I asked what sort of error checking occurs. Here is Stephen's answer:-

>>>If it's a rare single bit error, it will only affect one particle in the beam and produce a very small yield error.  The most dangerous error condition would be one where it produced a false high yield, ----> but these are rerun with the quarantine mechanism and the highest and lowest of 5 runs discarded.<<<

>>>At the time, I voiced to Stephen an observation that it probably came from an OC'd machine.<<<

Pure speculation; could be right, could be wrong.
It seems that a significant proportion of your argument against overclocking is based on many misunderstandings, despite your technical education.

>>>Also, what looks like a 'perfectly stable' system to you can often not be.<<<

True, sometimes.
This is where extensive stress testing comes in, & then close observation afterwards.  If any programs have problems then simply drop the CPU's speed or raise the vcore; most (if not all now) DC projects have error checking, so the odd error doesn't cause any real problems.

Btw I'm not trying to get you to o/c, I'm just trying to get the right picture painted here.
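
The quarantine mechanism quoted above amounts to a trimmed average over repeated runs.  A toy Python sketch of that idea (hypothetical function name; the real Muon1 logic isn't shown here):

def quarantined_yield(yields):
    # drop the highest and lowest of 5 repeated runs and average the
    # middle 3, so a single false high yield (e.g. from a flaky
    # overclocked machine) can't win on its own
    assert len(yields) == 5
    middle = sorted(yields)[1:-1]
    return sum(middle) / len(middle)

print(quarantined_yield([2.31, 2.30, 2.33, 9.99, 2.29]))  # false high discarded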
Universal Creations
2009-02-27 09:05:40
Also non-overclocked CPUs make calculation mistakes, but they occur in very limited conditions.  Sometimes they make too many mistakes and then need to go back to Intel/AMD, just like the recent TLB bug in i7 CPUs.  Years ago, you could buy Pentium keychain holders because the chips had a serious calculation bug.

The default clock speed of a CPU is a safe speed in (almost) every condition.  The CPU needs to work in extreme cold or heat, because Intel/AMD want satisfied buyers all around the world.  CPUs need to work in laptops/thin clients with rubbish cooling, resulting in temperatures of 70 degrees Celsius (or sometimes higher).  This means that CPUs have reasonable margins, so in optimal conditions they can run faster and be as stable as a CPU running in not-so-optimal conditions like a laptop/thin client.

Every product has its margins; the expert knows by experiment where the "still safe" limit is.  Just like chip-tuning of cars or making structural changes in a house.
[TA]Assimilator1
2009-02-27 18:25:57
The TLB bug was in AMD's Phenom, not Intel's i7.
Universal Creations
2009-02-28 00:12:16
Sorry, AMD of course.  I read an article about problems of the i7 with the TLB.  But those were only rumours, because Intel had already solved those problems with the Core 2 CPUs.
K`Tetch
2009-03-03 20:15:11
"As far as DC goes, maybe early versions of RC5 were suseptable to h/w errors (of which most would likely be down to unstable overclocks ,but ALSO down to failing h/w at default speeds) & SETI classic certainly was, but I can tell you that SETI BOINC has triple checking by giving out any 1WU 3 times, the result isn't counted until it is verefied by at least 2 different machines."

The classic SETI@home did that too, pre-BOINC.  In fact, I remember the 'month of 30', when for a month, right when the project started, they only gave the same 30 work units out to over 100,000 participants.  I think I did each unit at least twice.

"No, there's 3, voltage plays a significant role in electro-migration too."
yes, voltage increases the chances of perminant electron migration, AND increases heat.  but unless you start ramping the voltage WAY up, its not going to do it on its own, but will be a contributing factor to the development of BOTH the factors listed to kill a chip.

"Overclocking does eat into that life, but as UC said even if the CPU 'only' lasts 10yrs (or even 7yrs) it has a very limited useage by then"

I've just pushed back a system I've been using for the last 9 years.  It was my main system (and you can see it on the earlier graphs - the dual P3-550).  It ran XP great, ran everything fine, maybe not the most modern games, but since 95% are crap, that's no loss.  I've even been working on repairing my Duron 850-based laptop as a cheap writing machine, even though yesterday I just bought a QL-62 based system (got a really good deal - $320).  They've only got a limited usage if you're REALLY bad about system setup, or mainly care about games (especially the latest, greatest version of Quake).

"Calculation errors are sought out & eliminated by extensive stress testing, this is fairly basic overclocking knowledge btw"
Yeah, I know.  When I was still active in the field, there was a project that asked for people with stable Overclocks to test their systems.  It used a very expensive chip analyser which took the chip, and emulated the motherboard, giving pre-programed inputs and seeing the output.  Very nice, and it found that in maybe 40% of the 'stable, extensively stress tested' cases, the chips were producing errors that went unspotted, mainly because they were areas that didn't get used often, or had redundency, or were intermittant.

To put it in perspective, the 'extensive stress testing' is like checking a car's running OK by listening to the sound of the engine - you can only detect the most obvious errors.  What I'm talking about is like plugging in a diagnostics box and sticking a gas probe up the exhaust and actually measuring, not going by humanly perceptible observations.  Of course, most overclocking is more of a placebo effect anyway; even a 10% speedup is usually unnoticeable unless you're specifically watching the work rate (or framerate etc).
K`Tetch
2009-03-04 17:26:49
OK, first time I've added benchmark data since the thread started.
Q6600-based HP 6535c, factory stock, on Vista 64.

124697 , 106213.5 , 645.15
126197 , 107349.2 , 647.61
126497 , 107394.7 , 645.44
127697 , 108222.7 , 646.20
130097 , 109924.0 , 648.29
130397 , 109948.2 , 645.94
132798 , 111728.7 , 649.02
134598 , 112877.4 , 648.76

Added in some spaces to make it a little more readable.

I'll give my new QL-62 based laptop a try in the next few days.
K`Tetch
2009-03-05 01:29:20
Actually, the next few entries today have been higher:
134598,112877.4,648.76
146899,121270.8,653.41
148099,122102.7,653.94
149899,123268.4,653.82
162500,131717.9,655.83
163700,132414.5,654.98
[XS]riptide
2009-03-06 17:23:56
K'Tetch, you know there are specially coded stress tests for overclocking, that calculate algorithms with a known result to a very high accuracy.  If the result is out... the CPU gives errors.  It's very simple.  You seem to be talking about equipment that is very old; maybe that is where most of your experience lies.  The placebo effect is irrelevant: if you overclock while running a CPU-bound application, you get more results and faster execution time.  Of course this only makes sense.

As for your anecdotal example of stress testing the 'stable' CPUs: it's likely that many of those errors are a result of the volumes of errata that come with every CPU.
[XS]riptide
2009-03-06 17:31:20
I suppose that's why Intel now actively supports overclocking sites and events.
K`Tetch
2009-03-06 20:14:08
Intel supporting OC events... yeah, because I can see NO reason at all why they wouldn't.  After all, if a chip blows: 'not our fault, you did it, and went outside the spec'.  It's just more business for them.

The software you talk about has been around for a long time as well, longer than chip analysis machines even.  While it's more 'detailed' than "well, it seems to run GIMPS/SuperPi/Muon1 OK", it's still less detailed than plugging a chip into a system that's purpose-designed to systematically test a CPU by checking each bit of the chip in turn.

I've heard most of these pro-overclock arguments before.  Most are the computing equivalent of "eat your crusts, it'll make your hair grow curly".
[XS]riptide
2009-03-06 21:36:58
Yes, Intel supports overclocking events and is getting heavily involved in the enthusiast market.  AMD was always somewhat involved, and sponsored the recent liquid-helium effort with the Phenom at CES this year.

It's rare that a CPU blows.  It takes a death wish on the CPU owner's behalf, or someone incredibly stupid.  Incredibly stupid.  The only other time CPUs can 'blow' unexpectedly is with ancient CPUs, with which it seems you have a lot of experience.  Like the Northwood 'Sudden Death' for example.
You can tell a good overclocking CPU.  Many of the out-of-spec CPUs that the manufacturers throw away are perfect.  The hot ones with high leakage currents are perfect: they clock higher with very good cooling.

I've heard most of these anti-overclock arguments before.  Most are the computing equivalent of "Don't get married before you're forty".
K`Tetch
2009-03-09 12:03:40
Yes riptide, because the laws of physics and the materials used to make CPUs have radically changed in the last 10 years.  Oh no, wait, it's the same materials, the same physics, and smaller circuits.  But for some reason, you believe the principles just don't apply.

I ain't going to change your mind, and you're not going to convince me that you know better than a lot of really experienced CPU designers.  Or, with your "Many of the out-of-spec CPUs that the manufacturers throw away are perfect", you apparently know better than the very people that designed the things, which makes me wonder how many years YOU have been designing CPUs, and what your formal education in this is.  Or hell, let's even just start with how many fab plants you've even BEEN in.
[XS]riptide
2009-03-10 13:31:17
Listen, buddy.  The forum I'm with is laced with everybody from fab engineers to system architects.  I think I know what I'm talking about.  I've picked a few brains along the way, along with folk I personally know from the local Intel fab here (24 / 24-2).  You may have the olde ARM tech blinkers on; can't help there.  The principles apply, your frame of reference does not.
Stephen Brooks
2009-03-10 17:29:01
I think K`Tetch has the wrong end of the stick here too.  As long as the cooling is sufficient to keep the chip temperature down (including hotspots), and the time-critical paths on the chip can complete before the next clock cycle, overclocking shouldn't risk "burning out" your chip.

Wires can quite happily carry signals of extremely high frequency; it's not the frequency that kills the chips, it's heat.
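
That 'time-critical paths' point can be put in numbers: the maximum stable clock is set by the slowest path on the chip needing to settle within one period.  A rough Python illustration with a made-up path delay:

def max_stable_ghz(critical_path_ns):
    # a chip stays logically correct only while its slowest path settles
    # within one clock period: f_max = 1 / t_critical
    return 1.0 / critical_path_ns

print(max_stable_ghz(0.25))  # a (made-up) 0.25 ns critical path allows ~4 GHz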