
GW2 and Overclocking



It's well known that GW2 is "sensitive" to overclocking, but what makes GW2 sensitive, and why does it have issues with some overclocked hardware and not others? For example, I have one machine with an eVGA nVidia GTX 670 SC, and the game will hard crash the system if I do not downclock the video card. In general the system is fine with most other games and passes all hardware bench tests and stress tests. On another machine, I have an eVGA nVidia GTX 1080 SC and GW2 is fine. In both cases the CPU (i7-3930k/i7-7700k) is not overclocked, only the GPU, which is factory overclocked. I guess, at the end of the day, the biggest question is, "While it has been recognized as an issue, why, in 7 years, has the issue never been resolved?"


How on earth do you expect them to "fix" a problem caused by your computer and its misuse? Overclocking is not a good thing. You void your warranty on the chip; why would you expect another company to "fix" a problem that only exists on one specific PC, when even the manufacturer warns against it?


@"Daddicus.6128" said:How on earth do you expect them to "fix" a problem caused by your computer and its misuse? Overclocking is not a good thing. You void your warranty on the chip; why would you expect another company to "fix" a problem that only exists on one specific PC, when even the manufacturer warns against it?

Details... sheesh...


@"Daddicus.6128" said:How on earth do you expect them to "fix" a problem caused by your computer and its misuse? Overclocking is not a good thing. You void your warranty on the chip; why would you expect another company to "fix" a problem that only exists on one specific PC, when even the manufacturer warns against it?

It's not an uncommon issue with GW2, actually. Common enough that ANet has officially stated on the forums that GW2 is "sensitive" to overclocking. Is it really caused by my computer when it only consistently happens in a single application? I guess you missed the part where I said both GPUs are "factory overclocked"...not overclocked by me. The eVGA GTX 670 SC (Superclock) was a legitimate retail version of the card, as is the eVGA GTX 1080 SC. ...and you also must have missed the part where I said the PC "passes all hardware bench tests and stress tests" and "is fine with most other games"...it's an application problem...always has been.

Did you only reply based on the title? :p


There is nothing wrong with overclocking as long as you have thoroughly tested that your PC is stable and heat is in an acceptable range for your hardware at max load over an extended test (1 hour is a good start). I've overclocked my CPU and graphics card since day 1 of GW2, and never had problems. I suspect what support means is that GW2 leans heavily on your CPU, so any issue is magnified. As for CPU overclocking, I do it through my BIOS and switch off boosting. The graphics card I do through an app.

I would go as far as saying a stable overclock gives a lot of value in GW2 because of this dependency; if I recall, I get another 5 fps out of it.

On the train just now so I can't recall which test apps I use at the moment, but I basically run a GPU stress test and a CPU stress test at the same time for an hour. If I get an error, I crank it back 1% and try again. If it's stable, I crank it back one more percent just to give a little breathing space and call it good.
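For what it's worth, here's a minimal sketch of that back-off loop in Python. apply_core_offset() and run_stress_pass() are hypothetical stand-ins for whatever overclocking utility and stress tests you actually use; the logic just mirrors the "drop 1% on failure, then one extra step for headroom" approach above.

```python
def apply_core_offset(offset_mhz: float) -> None:
    # Hypothetical hook: call whatever tool you actually use to set the clock offset here.
    print(f"(would apply a +{offset_mhz:.0f} MHz core offset)")

def run_stress_pass(hours: float) -> bool:
    # Hypothetical hook: run your combined CPU+GPU stress test, return True if it ran clean.
    print(f"(would stress test for {hours} h)")
    return True

def find_stable_offset(start_offset_mhz: float, step: float = 0.01) -> float:
    """Lower the offset by 1% per failed pass, then keep one extra step of headroom."""
    offset = start_offset_mhz
    while offset > 0:
        apply_core_offset(offset)
        if run_stress_pass(hours=1):
            return offset * (1 - step)   # stable: back off one more step and call it good
        offset *= (1 - step)             # error: crank it back 1% and try again
    return 0.0

print(find_stable_offset(150.0))
```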


@"vesica tempestas.1563" said:There is nothing wrong with overclocking as long as you have thoroughly tested that your pc is stable and heat is in acceptable range for your hardware. I've overclock Ed my cpu and gfx card since day 1 of gw2, and never had problems. I suspect what support means is that gw2 leans heavily on your cp, so any issue is magnified. Re cpu over clocking I do it through my bios and switch off boosting. Gfx card I do through an app.

I would go as far as saying a stable overclock gives a lot of value in GW2 because of this dependency; if I recall, I get another 5 fps out of it.

I don't typically overclock beyond what's delivered from the factory. It's great when it works, like with my 1080: I get a pretty consistent 70-75 fps at 2K with nearly max details. The problem is that when problems do arise there is very little in the way of help with debugging it. It's one of those "curse" situations where there's an obvious issue but no apparent cause. ANet says it's the card/configuration; the vendor and the card and chipset manufacturers blame the game, since the card remains stable on the bench and in stress applications such as FurMark and Superposition. So the only options you're really left with are replacing hardware at your own cost or trying things like downclocking. After 3 years (the length of the warranty) of fighting with vendors and manufacturers, I just downclock now. Thankfully that machine with the 670 is soon to be retired. It's about 7 years old, so it has served a long life. Hopefully, when I get the new one, it will be stable.


Regarding the factories: roughly speaking, they batch CPUs in terms of quality, but there is always a variance; if you get lucky you basically have free performance that you can tap into for no cost. For example, my graphics card is a model that is factory clocked higher, but it is known to clock up on average another 0-200 MHz. I got another 170 MHz out of it, which was nearly 20% faster while still at the bottom end of the recommended temperature limit (50°C at load).


@"Leamas.5803" said:It's well known that GW2 is "sensitive" to overclocking, but what makes GW2 sensitive and why does it have issues with some overclocked hardware and not others? For example. I have one machine with an eVGA nVidia GTX 670 SC and the game will hard crash the system if I do not downclock the video card. In general the system is fine with most other games and passes all hardware bench tests and stress tests. On another machine, I have an eVGA nVidia GTX 1080 SC and GW2 is fine. In both cases the CPU (i7-3930k/i7-7700k) is not overclocked, only the GPU, which is factory overclocked. I guess, at the end of the day, the biggest question is, "While it has been recognized as an issue, why, in 7 years, has the issue never been resolved?"

When you overclock you should run stability tests to make sure you OC correctly.


@anduriell.6280 said:

When you overclock you should run stability tests to make sure you OC correctly.

Which I did extensively myself, and it was in to the shop where I bought it twice for bench testing. They kept it on the bench for 9 days the second time I brought it in. As I said...I didn't do any overclocking. It was a retail card, an eVGA GTX 670 SC. I like to think eVGA knows what they're doing.


@Leamas.5803 said:

@"vesica tempestas.1563" said:There is nothing wrong with overclocking as long as you have thoroughly tested that your pc is stable and heat is in acceptable range for your hardware. I've overclock Ed my cpu and gfx card since day 1 of gw2, and never had problems. I suspect what support means is that gw2 leans heavily on your cp, so any issue is magnified. Re cpu over clocking I do it through my bios and switch off boosting. Gfx card I do through an app.

I would go as far as saying a stable overclock gives a lot of value in GW2 because of this dependency; if I recall, I get another 5 fps out of it.

I don't typically overclock beyond what's delivered from the factory. It's great when it works, like with my 1080: I get a pretty consistent 70-75 fps at 2K with nearly max details. The problem is that when problems do arise there is very little in the way of help with debugging it. It's one of those "curse" situations where there's an obvious issue but no apparent cause. ANet says it's the card/configuration; the vendor and the card and chipset manufacturers blame the game, since the card remains stable on the bench and in stress applications such as FurMark and Superposition. So the only options you're really left with are replacing hardware at your own cost or trying things like downclocking. After 3 years (the length of the warranty) of fighting with vendors and manufacturers, I just downclock now. Thankfully that machine with the 670 is soon to be retired. It's about 7 years old, so it has served a long life. Hopefully, when I get the new one, it will be stable.

Both of those applications seem to be GPU-oriented, but even if a benchmark can cover both, it is important to make sure it can stress both the CPU and GPU simultaneously for an accurate test. Otherwise you can end up in a situation where your cooling is enough for a heavily stressed CPU or a heavily stressed GPU, but not both; a similar scenario would apply to power as well.
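A rough Python sketch of that idea, purely to illustrate running both loads at once: the CPU burn is real, but "./gpu_stress_tool" is a hypothetical placeholder for whatever GPU burner you actually run (FurMark, Superposition, etc.), since their command lines vary.

```python
import multiprocessing as mp
import subprocess
import time

def burn_cpu(seconds: float) -> None:
    """Busy-loop doing floating-point math until the deadline, to load one core fully."""
    deadline = time.time() + seconds
    x = 0.0001
    while time.time() < deadline:
        x = (x * x + 1.0) % 97.0

if __name__ == "__main__":
    duration = 60 * 60                                   # one hour, as suggested above
    gpu = subprocess.Popen(["./gpu_stress_tool"])        # hypothetical external GPU burner
    workers = [mp.Process(target=burn_cpu, args=(duration,)) for _ in range(mp.cpu_count())]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    gpu.terminate()                                      # stop the GPU load when the CPU burn ends
```

With both loads running at once, the cooling and the power supply see the combined worst case rather than one component at a time.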


Don't overclock your machine unless you really know what you're doing.

Even if it passes stability tests, there's a lot that can go wrong under the surface. Computers are very complex, and there are a lot of factors involved in their proper operation, which is only made possible by a ton of regulated design, shielding, and error correction to begin with, since the world is analog, not digital; and that progress took nearly a century to perfect, too.

For example your computer has to deal with cosmic rays that flip bits in its RAM and caches, and even the quality of the power from your AC outlet can affect stability if your PSU isn't high end. Did you know that even in developed countries, greedy power companies can feed consumers "dirty" power to save on maintenance costs, with out-of-spec fluctuating voltages and excessive line noise? It's not noticeable on something like a light, but the PSUs in a computer have to struggle to clean it up enough to be suitable for your fragile PC components.

So even going by factory spec and professional bench testing doesn't guarantee your system is going to be stable.

The reason GW2 is sensitive to overclocking is that, ever since GW1, ArenaNet has run a stability test in the main game loop, so that any time you ask for support they can make sure you don't have unstable or broken hardware and weed out invalid bug reports.
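(Purely illustrative, not their actual code, but the idea of an in-loop consistency check can be as simple as periodically re-hashing a known buffer and flagging the session if the result ever changes:)

```python
import zlib

PATTERN = bytes(range(256)) * 4096            # 1 MiB of known data
EXPECTED_CRC = zlib.crc32(PATTERN)            # reference checksum computed once at startup

def hardware_sanity_check() -> bool:
    """Recompute the checksum over a fresh copy; a mismatch hints at flaky RAM/CPU."""
    return zlib.crc32(bytes(PATTERN)) == EXPECTED_CRC

def game_loop_iteration(frame: int) -> None:
    # ...normal per-frame update/render work would go here...
    if frame % 10_000 == 0 and not hardware_sanity_check():
        print("consistency check failed; marking this session's reports as unreliable")
```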


@Leamas.5803 said:

When you overclock you should run stability tests to make sure you OC correctly.

Which I did extensively myself, and it was in to the shop where I bought it twice for bench testing. They kept it on the bench for 9 days the second time I brought it in. As I said...I didn't do any overclocking. It was a retail card, an eVGA GTX 670 SC. I like to think eVGA knows what they're doing.

Yeah, you have a dodgy bit of hardware; overclocking is irrelevant. A CPU will have a max speed before it goes unstable. A game is not sensitive to the speed a CPU runs at, but it can be more or less sensitive to faults generated by dodgy hardware or errors from poorly tested overclocks.


The first thing to understand is that benchmarks are fundamentally synthetic, NOT "real world", because of the need for a control sample. The way benchmarking typically works is by artificially loading the hardware with something easily repeatable and stable, because that's the only effective way to recognize anomalies. But the flaw in this is the stability of the loop itself, which is compounded by all the testing software using similar, if not identical, methods.

The real world is a lot less consistent. The irony here is that well-optimized software can either exacerbate a hardware flaw or completely cover it up, depending on how the flaw behaves. But one thing most under-optimized games tend to have in common is the low-efficiency, sometimes roundabout way they accomplish their tasks. A memory leak is a situation where a program fails to free memory when it's no longer used, but it's entirely possible for a game to use a ton of memory poorly and not have a memory leak. That conceptual difference is critical to understanding what I'm about to say next.
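To make that difference concrete, here's a tiny Python illustration (my own, not from any game): the first function leaks because it keeps references forever, while the second allocates heavily but lets everything be reclaimed.

```python
_leaked = []   # module-level list that never lets go of its contents

def leaky_step(data: bytes) -> None:
    _leaked.append(bytes(data))        # a true leak: copies pile up and are never freed

def heavy_but_freed_step(data: bytes) -> int:
    scratch = bytes(data) * 64         # big, wasteful temporary allocation
    return len(scratch)                # scratch goes out of scope and can be reclaimed
```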

Programs, games in particular, do have a habit of loading a system in a consistent way during execution. But two different programs typically won't load the same hardware in the same patterns. In the past I've played 5 different games (4 of which were MMOs) that operated in such an inefficient manner that they could consistently find hardware flaws that get past benchmarking tools. Even compound issues could be figured out through trial and error. For instance: a memory stability issue persists when swapping channels, but swapping the stick order can clear it up; swap back to the original order, in the other channel, and it breaks again. And this does make sense, since the OS is more likely to allocate lower memory space first, and eventually the game eats its way into the higher memory space through demand. Since the flaw only manifests when the game uses those flawed blocks in a given way, unless you isolate the block and test it more thoroughly, it's hard to understand what exactly is wrong and what the game is doing to trigger it. But the fact that putting in a different stick clears the problem, and putting the old one back consistently leads to the problem, is a pretty clear indicator that something is wrong with the stick (or the stick in conjunction with other hardware elements).

One of those games could also suss out bad PCI bus clocks (especially bad overclocks) and noisy power supply voltages, and it created a lot of exposure on CPU parking behavior. But the reason a lot of people have trouble understanding and accepting issues is largely due to this stigma of a singular source of fault: "This only happens with this one game, so it HAS to be the game's fault", despite the reality of the situation being completely different. What about one game having the same issue across multiple computers that are all different? Well, that's still a crap shoot. There is a very good reason a lot of support forums ask for an extensive list of hardware when looking into widespread issues. People tend to focus on the CPU, RAM amount, GPU, and hard drive size as being the only things that matter, since those are "over the minimum requirements".

But I have seen multiple cases where "widespread issues" actually ended up having a common factor (that's not the game), and it would probably never have been found if not for ONE person having gone stupidly deep into troubleshooting. Planetside 2 had an on/off issue with a "No damage bug" that only developed over long play sessions (1-2 hours) but seemed to have no common factor. The devs were of no help, since they could barely observe it and couldn't figure out how to recreate it. Moreover, it seemed to be a client-side issue that was impossible to get metrics on, since they had no idea what to look for. Everyone first assumed it might be a memory leak (because of the long play sessions)... but dev telemetry didn't support that hypothesis. And because it seemingly appeared and disappeared between patches, they hadn't been able to figure out what they were touching that set it off.

It wasn't until a player worked his way into one of the most insane theories I had ever seen up to that point that the common factor was stumbled on, almost by accident. In hindsight, I wish I had documented it for posterity, since finding all the scattered posts between the forums and reddit is painfully difficult now. In fact, I'm mostly going off memory from around 6 years ago.

The Cliff's Notes version is this: essentially, when the game sends data packets to the server, it timestamps everything to help account for latency and secure them against man-in-the-middle hacks (which was an unrelated exploit to this story). Damage is validated client side, and the client literally tells the server if it's hitting things. Damage packets get priority in the data stream, so they move to the top of the queue to be sent to the server. Incoming damage is below that, and movement updates below that. Interaction (using terminals) is somewhere near the bottom. The game also has an internal 500ms delay built into the event resolution process... but I can't remember if that was implemented before or after this bug. Anyway, all of the game's event handling and timestamping operates off of an internal time counter hooked into a system clock source. Windows has multiple API hooks to access one of many timing and time services, which are driven by 4 or more clock sources available from hardware. The one we're dealing with here is HPET (https://en.wikipedia.org/wiki/High_Precision_Event_Timer). When HPET is enabled on the system, and used as a time source for the game, it eventually suffers from serious enough clock drift that damage packets start expiring and get dropped by the system. Since this all happens client side, it does lend credibility to why it's nigh impossible to capture in server metrics, but it also raised the question of why "only" damage packets were affected. Some theories were proposed by the community, but (as far as I know) the devs never weighed in on it.
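To give a feel for the math (my own toy numbers, not PS2's actual values): even a small, constant drift rate eventually pushes every timestamp outside a fixed validity window, which is why the bug only showed up after long sessions.

```python
def dropped_packets(drift_ppm: float, max_age_ms: float, session_s: int) -> int:
    """Count packets whose drifted timestamp looks older than the allowed window."""
    dropped = 0
    for t in range(session_s):                                   # one packet per second
        stamped_ms = t * 1000 * (1 + drift_ppm / 1_000_000)      # client's drifted clock
        if abs(stamped_ms - t * 1000) > max_age_ms:              # stale relative to true time
            dropped += 1
    return dropped

# With 100 ppm of drift and a 500 ms window, drops start roughly 83 minutes in:
print(dropped_packets(drift_ppm=100, max_age_ms=500, session_s=2 * 3600))
```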

The way this guy found this was by noticing PCI bus errors on his system around the same time the bug was active on his client. Don't ask me how he thought to look, or why he was looking... but I don't think anyone would have thought to even check that. He narrowed the issue down to HPET and found a way to check for drift issues using WinTimerTester (which I think came from an audio enthusiast forum trying to figure out DSP timing problems for high-end audio systems). HPET has been kind of notorious ever since, as I've seen it come up occasionally in other game forums dealing with FPS issues. Disabling it leads to an FPS drop... but in PS2's case, it also mitigates the No Damage bug and other desync issues almost completely. Not all systems were affected by this, but a number of popular motherboards turned up as a trend in this issue, though not entirely consistently. Prior to this discovery, there was community-led research that suggested overclocks may be contributing to the rate of decay. But since underclocking wasn't enough to extend playtime by a significant amount, and affected FPS on some systems, it wasn't considered viable as a workaround. In hindsight, since HPET works off a chip in the south bridge, it sort of makes sense... but it wouldn't be particularly useful.
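If you want to eyeball this yourself, a rough Python approximation of what WinTimerTester does is to compare two clock sources over an interval. On Windows, time.perf_counter() is backed by QueryPerformanceCounter (which may use HPET or the TSC), while time.time() follows the system wall clock, so a growing gap hints at timer drift; note that NTP adjustments can also move the wall clock, so treat this as a rough check only.

```python
import time

def measure_drift(seconds: float = 60.0) -> float:
    """Return how far perf_counter and the wall clock diverge over the interval, in ms."""
    start_pc, start_wall = time.perf_counter(), time.time()
    time.sleep(seconds)
    elapsed_pc = time.perf_counter() - start_pc
    elapsed_wall = time.time() - start_wall
    return (elapsed_pc - elapsed_wall) * 1000.0

print(f"drift over 60 s: {measure_drift():.3f} ms")
```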

Keep in mind that popular benchmarking software never came close to discovering any of these kinds of anomalies under their most brutal stress modes, since they probably aren't even looking for them. On top of which, not that long ago it was discovered that GPU manufacturers were doctoring drivers to give better benchmark scores, something that compromises the integrity of benchmarks and methodology. Benchmarking and gaming performance were never easy to begin with... but with the level of trust in and quality from once highly regarded studios completely tanking, even seasoned gamers are having a very hard time navigating the seemingly endless stream of issues to get a proper handle on what's going on with the hobby. Gaming as an industry has become worth too much money, and it's attracted all the problems that come with it.


I build my own computers, and I STILL don't overclock them. You really have to know what you're doing to overclock, and the people who know what they're doing ... don't.

Overclocking by an OEM is really not acceptable, either. They only do it to meet some crazy benchmark test specs, not because it's right. I would look twice at any card that advertised itself as overclocked. But, since it was the OEM who did it, I apologize for being so blunt earlier.


As described above, testing software can only simulate and approximate a real game load; it's all about finding a spot where there is a strong probability that your system is stable, and making sure there isn't a bit of confirmation bias going on. Especially with old systems, getting a good chip and taking advantage of (for example) a 20% boost in speed is worth it. On newer, higher-performing systems, the extra performance may not be necessary. I always overclock (carefully) as I find it fun, and in 7 years I have not seen issues in GW2, and I play for hours on end at times, so it's certainly possible.

To give an idea, at default speeds I'm at 30 fps in Lion's Arch on my old system, 34-35 when overclocked. At that fps it matters, or at least it feels better :)



@"Daddicus.6128" said:How on earth do you expect them to "fix" a problem caused by your computer and its misuse? Overclocking is not a good thing. You void your warranty on the chip; why would you expect another company to "fix" a problem that only exists on one specific PC, when even the manufacturer warns against it?

Overclocking is not a bad thing at all if you know what you're doing. He mentioned his gfx card was factory overclocked, so it doesn't void his warranty, as the company itself applied it. The same goes for CPUs; Intel K processors are almost meant to be overclocked, as they are 'unlocked' chips, which Intel charges a premium for :).

