Is Anet ever going to adequately address the poor optimization of the game engine?



I've rolled with the 30-40fps average in LA for years now, and I've always assumed it was because of my older i7-3770K (which was new when GW2 launched, and the game still ran poorly even then). Now that I've upgraded to an 8th-gen i7-8700K, I'm sad to say there is literally no difference in framerate. Getting 36fps on state-of-the-art hardware, particularly in a CPU-intensive game, is inexcusable. I really like the direction the new game director has taken since PoF's launch, and I'm excited for the new LS changes, but it really takes me out of the immersion when I get constant stuttering and a terrible framerate while playing through them. Can anyone point me to a formal response from an ANet rep on this issue? It seems like it's just kept quiet.



@Blockhead Magee.3092 said:
I've been on an i5 and a GTX 670 for a number of years and I get double your frame rate. You probably have something else going on if you're not pulling better than 30fps.

What resolution are you running at? I have a very hard time believing you pull more than a 40fps average in Lion's Arch. I run at 2560x1440 and my GPU is an EVGA GTX 970 4GB, so my hardware most certainly is not the problem.


1920x1200

Yes, you should be getting better FPS. I completely agree. However, the 'something else' could be a number of things unrelated to the specs of your hardware (especially the two parts you listed). I don't know what in-game settings you have on, what other programs you're running, or what operating system or other hardware you have.

I'm sure someone else will be along to point out their superior FPS on 'inferior' hardware. You can discount us if you like, but you're more likely to see an improvement by spending time with your machine than by waiting (in vain) for Anet to make a change.


@Blockhead Magee.3092 said:1920X1200

Yes you should be getting better FPS. I completely agree. However, the 'something else' could be a number of things unrelated to the specs of your hardware (especially the two parts you listed). I don't know what settings in the game you have on. I don't know what other programs you're running nor what operating system or other hardware you have.

I'm sure someone else will be along to point out their superior FPS on 'inferior' hardware. You can discount us if you like, but you're more likely to see an improvement if you spend time with your machine than waiting (in vain) for Anet to make a change.

I've been playing this game since launch across multiple sets of hardware, and the issue persists. In map chat in LA I've also seen other people with newer 6th- and 7th-gen i7s reporting the same issue. If ANet could at least give us some insight into which settings to change to make the game run better, that would be great. Turning off AA and supersampling makes no difference.


Ultra settings? You're not going to see high FPS when you have a hundred people on screen with maximum model limit/quality, ultra shadows, and supersampling. If it's not that, check the bus speed of the GPU, using GPU-Z for example. Although it won't affect a lot of other games, if it's being limited to 1x, it'll more than halve your FPS here.


@Healix.5819 said:
Ultra settings? You're not going to see high FPS when you have a hundred people on screen with maximum model limit/quality, ultra shadows and supersampling. If not that, check the bus speed of the GPU, using GPU-Z for example. Although it won't affect a lot of other games, if it's being limited to 1x, it'll more than halve your FPS here.

A lot of this is just the fact that there are tons of people around. I have the same problem in the Heart of the Mists.


@Wander.5780 said:

@Blockhead Magee.3092 said:1920X1200

Yes you should be getting better FPS. I completely agree. However, the 'something else' could be a number of things unrelated to the specs of your hardware (especially the two parts you listed). I don't know what settings in the game you have on. I don't know what other programs you're running nor what operating system or other hardware you have.

I'm sure someone else will be along to point out their superior FPS on 'inferior' hardware. You can discount us if you like, but you're more likely to see an improvement if you spend time with your machine than waiting (in vain) for Anet to make a change.

I've been playing this game since launch and across multiple sets of hardware, the issue persists. In map chat in LA I've also had other people with newer 6th and 7th gen i7's having the same issue. If Anet could at least give us some insight on what to try to change in our settings to make the game run better, that would be great. If I turn off AA, Supersampling, it makes no difference.

You wouldn't need to guess if you ran a performance analysis while the game is running. Do you have any such data to share with us? If you don't have this information, then you're assuming an awful lot.


I think the elephant in the room here is that, aside from this being a DX9 game in 2018, I get much higher FPS in more graphically intensive games with way more going on at once, like ESO. That alone points to this game having optimization issues. It's shocking that there isn't at least a DX11 client for this game yet... DX9 is 16 years old and should not have been the API to build a game on in 2012.


@Wander.5780 said:I think the elephant in the room here, is that aside from being a DX9 game in 2018

This is irrelevant. I know the persistent meme that bigger numbers magically mean better performance is everywhere, but the truth is, it is no more true now than it was back when people were upset about DX9 clients being replaced by DX11 ones and didn't want that change.

I get much higher FPS in more graphically intensive games with way more going on at once like I do in ESO. That alone points to this game having optimization issues.

This is a known problem. The developers at ANet have explicitly acknowledged it, and have been actively working on improving things. Which has happened: a few years back the game would use two cores; now it pushes four, thanks to improvements to the engine.

Anyway, if you believe that DX9 is in any way related to this, you are directly contradicting the stated position of the developers. They agree that this is, in fact, an issue of too much work being stuck on a single thread. That makes the character model count one of the biggest contributors to performance limits, and single-threaded CPU performance the hard blocker once every other bit of hardware is fast enough to stop slowing things down.


@Leablo.2651 said:

@"Blockhead Magee.3092" said:1920X1200

Yes you should be getting better FPS. I completely agree. However, the 'something else' could be a number of things unrelated to the specs of your hardware (especially the two parts you listed). I don't know what settings in the game you have on. I don't know what other programs you're running nor what operating system or other hardware you have.

I'm sure someone else will be along to point out their superior FPS on 'inferior' hardware. You can discount us if you like, but you're more likely to see an improvement if you spend time with your machine than waiting (in vain) for Anet to make a change.

I've been playing this game since launch and across multiple sets of hardware, the issue persists. In map chat in LA I've also had other people with newer 6th and 7th gen i7's having the same issue. If Anet could at least give us some insight on what to try to change in our settings to make the game run better, that would be great. If I turn off AA, Supersampling, it makes no difference.

You wouldn't need to guess if you were running a performance analysis while the game is running. Do you have any such data to share with us? If you don't have this information, then you're assuming an awful lot.

The answer is simple: the single biggest impact comes from turning down the character model limit. A bunch of processing related to those models is still stuck on that one busy "main" thread, so turn them down and performance increases.

Most everything else is more or less GPU-specific, in the sense that it depends on the entire graphics stack, including the vendor-specific parts of the DirectX layer; NVIDIA is likely to still perform better than AMD there, for example, because their DX9 driver stack includes parallel-processing optimizations that AMD never shipped.

Anyway, ANet have officially confirmed that, yeah, that single "main" thread is going to be the bottleneck once you eliminate everything else. So that is about as formal an answer as you are going to get.


@SlippyCheeze.5483 said:

@Wander.5780 said:I think the elephant in the room here, is that aside from being a DX9 game in 2018

This is irrelevant. I know, the persistent meme that bigger numbers magically mean better performance is everywhere, but the truth is, it is no more true now than when people were crying because DX9 was being replaced by DX11 clients, and people didn't want that to change.

I get much higher FPS in more graphically intensive games with way more going on at once like I do in ESO. That alone points to this game having optimization issues.

This is a known problem. The developers at ANet have explicitly acknowledged it, and have been actively working on improving things. Which has happened; a few years back it would use two cores, now it pushes four, because of improvements to the engine.

Anyway, if you believe that DX9 is in any way related to this, you are directly contradicting the stated position of the developers, who agree that this is, in fact, an issue with too much work being stuck on a single thread, making the character model count one of the biggest contributors to performance limits, and single threaded CPU performance the hard blocker if you put every other bit of hardware at a level it stops slowing things down.

Are you really going to sit here and tell me that DX9 is fine and not a problem? It's a piece of crap API. Try playing any DX9 game full screen, then tab out and back in, and time it. The Source engine, which Valve has managed to drag out for 14 years, was built on DX9 and still has issues that have to be patched from time to time. The devs deflecting blame from the API is not an indication that it's without fault; it's just them defending their project, because they know what a monumental task it would be to move it to something more modern. Hence why the game should have been built on the latest Microsoft API.


@Wander.5780 said:

@Wander.5780 said:I think the elephant in the room here, is that aside from being a DX9 game in 2018

This is irrelevant. I know, the persistent meme that bigger numbers magically mean better performance is everywhere, but the truth is, it is no more true now than when people were crying because DX9 was being replaced by DX11 clients, and people didn't want that to change.

I get much higher FPS in more graphically intensive games with way more going on at once like I do in ESO. That alone points to this game having optimization issues.

This is a known problem. The developers at ANet have explicitly acknowledged it, and have been actively working on improving things. Which has happened; a few years back it would use two cores, now it pushes four, because of improvements to the engine.

Anyway, if you believe that DX9 is in any way related to this, you are directly contradicting the stated position of the developers, who agree that this is, in fact, an issue with too much work being stuck on a single thread, making the character model count one of the biggest contributors to performance limits, and single threaded CPU performance the hard blocker if you put every other bit of hardware at a level it stops slowing things down.

Are you really going to sit here and tell me that DX9 is fine and not a problem? It's a piece of crap API. Try playing any DX9 game full screen, then tab out and back in, and time it. The Source engine, which Valve has managed to drag out for 14 years, was built on DX9 and still has issues that have to be patched from time to time. The devs deflecting blame from the API is not an indication that it's without fault; it's just them defending their project, because they know what a monumental task it would be to move it to something more modern. Hence why the game should have been built on the latest Microsoft API.

Devs have already stated that the issue has nothing to do with DX.

Oh, and I'm also getting 70-90 FPS in LA depending on where I am. I'm getting about 70 around the MF and the bank on max settings.


@Wander.5780 said:

@Wander.5780 said:I think the elephant in the room here, is that aside from being a DX9 game in 2018

This is irrelevant. I know, the persistent meme that bigger numbers magically mean better performance is everywhere, but the truth is, it is no more true now than when people were crying because DX9 was being replaced by DX11 clients, and people didn't want that to change.

I get much higher FPS in more graphically intensive games with way more going on at once like I do in ESO. That alone points to this game having optimization issues.

This is a known problem. The developers at ANet have explicitly acknowledged it, and have been actively working on improving things. Which has happened; a few years back it would use two cores, now it pushes four, because of improvements to the engine.

Anyway, if you believe that DX9 is in any way related to this, you are directly contradicting the stated position of the developers, who agree that this is, in fact, an issue with too much work being stuck on a single thread, making the character model count one of the biggest contributors to performance limits, and single threaded CPU performance the hard blocker if you put every other bit of hardware at a level it stops slowing things down.

Are you really going to sit here and tell me that DX9 is fine and not a problem?

I am going to refer you to the post by ANet on the subject and tell you that, without any other changes, DX9 vs. DX11, DX12, or Vulkan is not the problem with GW2 at this time.

That carries more caveats than your statement, because it is a conditional thing. However, it is also true that simply moving to DX11 offers no significant benefit to the parts of the code that are the problem, and fixing those parts requires changes that can just as well be made while staying on DX9.

So I guess the answer to your question is "kind of", because you made it more general than my statement was.

It’s a piece of crap API.

Software isn't a cabbage: DX9 hasn't "rotted away" in the back of the fridge. It is still as good as it ever was, which is to say it isn't the horrible burden you seem to imagine, though I wouldn't advise anyone to build new software on top of it.

Try playing any DX9 game full screen, and tab out and back in, and time it. The Source engine, which Valve has managed to drag out for 14 years, was built on DX9, and still has issues that have to be patched from time to time.

I'm pretty confident that one or two of the DX11 games I have here have gotten the odd patch or two...

As to the time it takes to switch to or from full-screen mode, I'm not exactly sure what you are trying to prove with that, but slowness there isn't proof of any other shortfall in actual gameplay. If you really want to be convincing, it isn't that hard:

Show proof, any proof, of your assertion that DX9 is responsible for the performance loss. Show the high cost of draw calls, or the client blocking on the thread-safety lock inside DX9, or anything, really, that disproves what the ANet developers literally said in writing, and which also matches observed reality: that DX9 is not the heart of the problem at this time.

Fix that over-busy thread and this may change; DX9 might start being the bottleneck. That'd be great, and that is the day I'll start advocating for changing the version of DirectX in use. Today, however, is still not that day.

The devs deflecting blame from the API is not an indication that it's without fault; it's just them defending their project, because they know what a monumental task it would be to move it to something more modern. Hence why the game should have been built on the latest Microsoft API.

DX11 released a year or so before the launch of GW2, and I don't know how your memory is, but mine is clear: at the time, people were mad about it. Super mad. Like, "I'll boycott your game, don't use DX11, it is evil and ate my kitten" mad about people using that new API. People griped about the terrible performance. They hated it.

DX10, welp, that was just... nothing. Nobody cared: not people playing games, not people building them. A nothing release that went nowhere. Games just sat on DX9 at the time, because why would you move when you gain nothing but performance-sapping bugs?

So, writing to the "latest Microsoft API" would have required ANet to either (a) build their software on a five-year-old release nobody liked, DX10, or (b) migrate to DX11, which arrived only a year or so before launch and, if you are familiar with game development timelines, well after the core of the engine was written.

It is easy to look backwards in time and tell people what they should have done. If I had been building a game on GW2's release schedule, I'd have chosen DX9 as well, and players would have thanked me for that decision. Now, not so much, but back then? Even a year after launch, DX11 still had a bad rep and wasn't much loved.

Anyway, ultimately, none of this is about "deflecting blame". The ANet post doesn't deflect blame. It straight up says: "hey, this thread is the bottleneck, because we made regrettable life choices in the past. It sucks." That is accepting ownership of the problem, the main thread being the bottleneck, and it is exactly the opposite of deflecting.

In fact, the only part of the discussion here that could deflect attention from the real problem to an imaginary one is your push to see the DirectX version changed, when that is not the bottleneck. It is, thankfully, unlikely to have any effect, but it remains a misplaced request that would not help.


Most MMOs share this same issue; the game being heavily single-threaded just exacerbates it. It will not be fixed.

It's a shame that the newer maps perform worse and worse, but if you look at how the maps are populated with assets, the number of models that aren't even characters has gone way up, so at this rate it will only continue to get worse, imo.

Overclock your 8700K to 5GHz, buy a kit of fast DDR4 (3600+), and/or turn off shadows and reduce the model limit to low. Those are the only real workarounds for the poor performance.


MMOs tend to cater to the widest array of systems, not just the newest, most powerful rigs. My i5 runs GW2 on max settings with decent frame rates. Arguing the wisdom of DX9 five years later seems like wasted effort. GW2 runs decently. Not perfectly, but good enough, imo. Visual clutter seems like more of an issue than optimization.


One thing that might help is more dynamic quality adjustment. If I'm out doing PvE largely by myself, I can run on pretty high quality without any issues. However, when I get into a big boss fight, things slow down significantly. I can go in and switch the settings to 'best performance' in that case, but then I have to go back and hit 'autodetect' once the fight is over.

What if the settings you chose were treated as goals, and if the framerate dropped below some value, the game started turning things off temporarily, then turned them back on once it rose above some value again? That would remove the bother of adjusting them back and forth, and might be an easier solution than rewriting the engine. Or even if I could just hot-key between two settings profiles, that would also work. If I'm in some big fight with 50 other PCs, I really don't care much about seeing all the shinies at that point anyway.
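That two-threshold (hysteresis) idea can be sketched in a few lines. Everything below is hypothetical: the tier names and FPS thresholds are made up for illustration, and GW2 exposes no such settings API. It just shows why two separate thresholds matter:

```python
# Hypothetical sketch of a dynamic-quality controller with hysteresis.
# Tier names and thresholds are invented; GW2 has no such API.

TIERS = ["best_performance", "balanced", "best_appearance"]

class QualityController:
    def __init__(self, low_fps=25, high_fps=50, tier=2):
        self.low_fps = low_fps    # drop one tier below this framerate
        self.high_fps = high_fps  # raise one tier above this framerate
        self.tier = tier          # current index into TIERS

    def update(self, fps):
        """Move one tier at a time; the gap between the two thresholds
        prevents rapid flip-flopping around a single cutoff value."""
        if fps < self.low_fps and self.tier > 0:
            self.tier -= 1
        elif fps > self.high_fps and self.tier < len(TIERS) - 1:
            self.tier += 1
        return TIERS[self.tier]
```

The gap between the thresholds is the important part: with a single cutoff around, say, 40fps, a framerate hovering near 40 would toggle settings every few frames, which looks worse than either setting.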


I think the elephant in the room here is that, aside from this being a DX9 game in 2018, I get much higher FPS in more graphically intensive games with way more going on at once, like ESO. That alone points to this game having optimization issues. It's shocking that there isn't at least a DX11 client for this game yet... DX9 is 16 years old and should not have been the API to build a game on in 2012.

It's not an elephant in the room, since ANet is well aware that the game is based on DX9. They make no secret of it. But they also point out that moving to DX11 wouldn't bring the performance gain players think it would.

The OP also ignores the fact that ANet has made incremental and sometimes major improvements, so the game runs much more smoothly even though its graphics demands have increased.


Here are some related comments by ANet devs:


GW2 stays on DX9 and a 32-bit client, which kinda.. lags a lot, and sometimes even crashes on huge battles/world bosses (i see you Tequatl).

ANet Johan wrote:
For the vast majority of players, the lag is unrelated to both of these.

All software has things called threads. You can think of each thread as a highway lane: they run in parallel, each performing a different task simultaneously. Threads are what applications use to scale across multiple CPU cores, as each thread can only run on one core at a time.

Each application has a thread known as the main thread. For games that thread is usually the thread that's in the driver's seat of the frame. It determines what to process and when on a higher level, such as "process OS messages", "update the game state", "process animations", "send state to the render thread", etc. All the different things that go into a game frame. The majority of game engines do some of these on a different thread, but in many cases the main thread still determines when it should happen.
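As a generic illustration of the frame loop described above (nothing here is GW2's actual code, just the shape of the pattern), a main thread "in the driver's seat" decides what runs and in what order each frame:

```python
# Illustrative sketch of a main-thread game loop. The phase names
# mirror the examples in the post; none of this is GW2's real code.

def run_frames(n_frames):
    log = []  # records the order in which frame phases execute

    def process_os_messages(): log.append("os")
    def update_game_state():   log.append("update")
    def process_animations():  log.append("anim")
    def submit_to_renderer():  log.append("render")  # hand-off point

    for _ in range(n_frames):
        # The main thread drives every phase, in a fixed order, even
        # if some phases internally farm work out to other threads.
        process_os_messages()
        update_game_state()
        process_animations()
        submit_to_renderer()
    return log
```

The key property is the fixed serial order per frame: however fast the other phases are, the frame cannot finish before the main thread has walked through all of them.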

So since threads are useful for scaling, you'd think you could simply create more of them and get more work done. It is true that more threads give you more computing power, but threads also have downsides. For one, you cannot modify data in memory while another thread is reading that same data. To do this, one thread has to wait for the other to stop using the data, meaning the work is done serially even though the code is running on multiple threads. To make matters worse, the OS is not guaranteed to put a waiting thread back to work the moment the other thread finishes; it can actually take a long-ish time (long being a very relative term). Because of this, software engineers have to design their applications carefully to work well in parallel, and doing that after the fact usually ranges from non-trivial to very hard.
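The serialization the dev is describing shows up in any threaded program; here is a tiny stand-in using Python's `threading` module (a generic illustration, not GW2 code). The threads are nominally concurrent, but every touch of the shared data happens one thread at a time behind the lock:

```python
import threading

# Several threads, one shared structure, one lock. The lock keeps the
# result correct, but it also forces the threads to take turns on the
# shared data, so that portion of the work is effectively serial.

def parallel_increments(n_threads=4, n_iters=10_000):
    shared_state = {"count": 0}
    lock = threading.Lock()

    def worker():
        for _ in range(n_iters):
            with lock:  # wait here for exclusive access; others stall
                shared_state["count"] += 1

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return shared_state["count"]
```

Without the lock, concurrent `+=` on shared data is a race; with it, the answer is always exact, at the price of the waiting the post describes.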

Which brings us to GW2. GW2 does a lot of processing, and much of it is done on the main thread. That is also where its bottleneck tends to be: the main thread. There are conscious efforts in moving things off the main thread and onto other threads (every now and then a patch goes out that does just this), but due to how multi-threading works it's a non-trivial thing that takes a lot of effort to do. In a perfect world, we could say "Hey main thread, give the other threads some stuff to do if you're too busy", but sadly this is not that world.
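The limit being described here is essentially Amdahl's law: if some fraction of each frame must run on the main thread, extra cores only speed up the rest. A quick back-of-the-envelope helper (the fractions below are illustrative, not GW2 measurements):

```python
def amdahl_speedup(serial_fraction, n_cores):
    """Upper bound on speedup when `serial_fraction` of the work is
    stuck on a single thread (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

# If half of each frame lives on the main thread, four cores give at
# most 1/(0.5 + 0.5/4) = 1.6x, and even infinite cores only 2x.
```

This is why "moving things off the main thread", as the post puts it, matters far more than adding cores or switching graphics APIs: it shrinks the serial fraction itself.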

As for DX9 and 32-bit: Moving off of DX9 wouldn't buy us a whole lot performance-wise, as all interaction with DirectX happens on the render thread, which is generally not the bottleneck. Moving from 32-bit to 64-bit also does not buy us much performance-wise. There are some optimizations the compiler can do with 64-bit that it can't do otherwise, but the actual FPS gain is minimal at best.

And about crashing on Tequatl: here's one case where a 64-bit client could actually help. Many of the crashes happening on Tequatl (which are still quite few, mind you) are caused by memory fragmentation. The bigger address space of a 64-bit app could help prevent that. This becomes more of a problem the longer you keep your client running.


I have an Intel i3; my friend has an i7, which is waaaaaaaaaaaay better than mine. Guess who gets more screen spikes? The i7. Yes.

Without knowing more about your systems (and what software is running alongside GW2) I really can't even guess at the cause of this. All things being equal it should not be the case (though I'm not saying that it isn't).

also i heard that GW2 tends to use CPU, not GPU. What the..? :x

The CPU and GPU are good at different things. There's no such thing going on as us using the CPU rather than the GPU. We use both, for the different things they're good at. In simple terms, the GPU just happens to finish its work before the CPU does, causing it to wait for the CPU to catch up.


Archived

This topic is now archived and is closed to further replies.
