
Claw of Jormag - R5 1600X & RX 470


Recommended Posts

Would be interesting to see the results at Tequatl (I run that almost daily, but never Jormag's Claw). I run similar graphics settings (just with character model limit on medium), so I could compare it to my i7-8700k. On the other hand, comparing results in an MMO rarely works well, because the players around you are always changing :/ But for large crowds like these it might average out, I hope. Also, don't forget that simply recording costs some FPS unless you use extra hardware for the process.

EDIT: I just tested recording in South Sun Cove with Fraps, and the FPS went from 100+ to just 50. So even on a high-end computer it costs tons of performance without separate recording hardware.


I'm pretty sure that RAM speed and timings matter more to GW2 performance than the CPU and GPU. That's not to say the CPU and GPU don't matter, but anything that isn't a Celeron or an FX-series chip should handle the game fine.

@Malediktus.9250 said: EDIT: I just tested recording in South Sun Cove with Fraps, and the FPS went from 100+ to just 50. So even on a high-end computer it costs tons of performance without separate recording hardware.

Fraps should not be causing that much of a performance hit. You aren't, by any chance, recording to the same drive GW2 is installed on?

Also, when I record, I use the GPU to do the encoding rather than the CPU, as my GPU is under-utilized in most titles.


@Crinn.7864 said: I'm pretty sure that RAM speed and timings matter more to GW2 performance than the CPU and GPU. That's not to say the CPU and GPU don't matter, but anything that isn't a Celeron or an FX-series chip should handle the game fine.

Actually, the engine GW2 is built on is driven primarily by the CPU. As for RAM and timings: 2933@16-18-18-38 is what I am running, which is pretty good for Ryzen. (corrected timings)


@Crinn.7864 said: Fraps should not be causing that much of a performance hit. You aren't, by any chance, recording to the same drive GW2 is installed on?

Also, when I record, I use the GPU to do the encoding rather than the CPU, as my GPU is under-utilized in most titles.

Yeah, a 50 FPS loss from Fraps seems wrong. It's been a while since I used it (Shadowplay with 5-minute back-in-time recording on a RAM disk, saved to an NVMe drive, is vastly superior, with zero performance loss and zero recording stutter), but it was only a small performance loss on a worse machine than I use now. Or maybe Fraps is just shit now, I don't know.

Either way, performance testing PvE bosses is going to be completely random. What drags down performance is the number of people around, and that isn't going to be the same between runs. And generally, once you get below 30 FPS it doesn't really matter: if my 8700K gets 25 FPS versus 18 FPS on an 1800X, then "omg, it's 40% faster!!!" Yeah, but neither is smooth, because they are both slow.

You would have to find a place that's static and the same for everyone... and we'll probably all get 60+ FPS there for exactly that reason.


All in all, I have to say I am happy with the overall performance I get from my setup. I've been watching the RX Vega cards, but so far I am not that impressed with their performance. Last year I considered going with an i7 7700K instead of my Ryzen 1600X, but at the time a Z270 motherboard cost almost $100 more than an X370. That is why I stayed with AMD.


This does sound reasonable. I have a 1700 and get slightly more than that with a 470. Zergs drag down CPU performance because of all the people around you that the main thread needs to keep track of. There's no way a 470 would bottleneck this game unless you were running really high resolutions.

@moonstarmac.4603 said: Last year I considered going with an i7 7700K instead of my Ryzen 1600X, but at the time a Z270 motherboard cost almost $100 more than an X370. That is why I stayed with AMD.

Plus, with AMD you get the added benefit of socket compatibility until 2020.


@moonstarmac.4603 said:

@Crinn.7864 said: I'm pretty sure that RAM speed and timings matter more to GW2 performance than the CPU and GPU. That's not to say the CPU and GPU don't matter, but anything that isn't a Celeron or an FX-series chip should handle the game fine.

Actually, the engine GW2 is built on is driven primarily by the CPU. As for RAM and timings: 2933@16-18-18-38 is what I am running, which is pretty good for Ryzen. (corrected timings)

Neither of those statements is ... entirely wrong, but they are pretty misleading, and don't really give the complete picture.

GW2 uses both the GPU and multiple CPU cores heavily. RAM timings make nowhere near the performance difference most people like to imagine, though faster memory can help keep a CPU fed with work. The gap between CPU cache and main memory is still big enough that more CPU cache is going to matter more than more memory ... assuming the memory shortage isn't forcing any disk access. (See http://norvig.com/21-days.html#answers for more specific numbers.)

Anyway, that digression aside: GW2 bottlenecks on single-threaded CPU performance once you eliminate everything else. Single-threaded performance is the one thing you can't just "buy a faster X" for to keep improving performance. (This is mostly because every modern CPU comes with at least 4 cores, and GW2 can fill 4 cores fully and take advantage of a couple extra while the OS does its other work on them, so those 6-core systems help, but only a little.)

So, the executive summary: with a current, or current-minus-one generation, dedicated GPU, you should have enough horsepower to eliminate the GPU as a bottleneck. Your CPU comes with enough cores to eliminate that too. So, at the very top end, you are going to bottleneck on single-core performance.

Regarding the RAM-timing comments: the differences there mostly disappear into the noise. You already have plenty of slop because, honestly, if the data wasn't in cache, you lose so much performance that the timing difference hardly matters. Modern CPU branch and memory prediction is good enough, though, that the latency largely vanishes: data is prefetched before it is required, minimizing the effect.

(e.g.: it isn't nothing, but it mostly doesn't matter, because the difference between a 14x and a 15x slowdown isn't really noticeable, especially when it happens in the background while the CPU does other work anyway.)
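(Tangent for the curious: you can see that prefetching effect on your own machine with a trivial microbenchmark. This is just a sketch, nothing GW2-specific, and the numbers will vary by system; it walks the same buffer once in sequential order, where the hardware prefetcher hides DRAM latency, and once in random order, where it can't.)

```cpp
// Minimal sketch: sequential vs. random traversal of the same buffer, using
// fully dependent loads so each access must finish before the next begins.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

int main() {
    const std::size_t n = std::size_t{1} << 24; // 16M entries, far bigger than L3

    std::vector<std::size_t> next(n);
    auto chase = [&](const char* label) {
        const auto t0 = std::chrono::steady_clock::now();
        std::size_t i = 0;
        for (std::size_t step = 0; step < n; ++step) i = next[i]; // dependent loads
        const auto t1 = std::chrono::steady_clock::now();
        std::printf("%-10s %8.0f ms (end=%zu)\n", label,
                    std::chrono::duration<double, std::milli>(t1 - t0).count(), i);
    };

    // Sequential chain: next[i] = i + 1. The access pattern is predictable,
    // so the prefetcher hides most of the DRAM latency.
    std::iota(next.begin(), next.end(), std::size_t{1});
    next[n - 1] = 0;
    chase("sequential");

    // Random chain: one cycle through a shuffled permutation. Every load is
    // an unpredictable address, so each one pays the full DRAM latency.
    std::vector<std::size_t> order(n);
    std::iota(order.begin(), order.end(), std::size_t{0});
    std::shuffle(order.begin(), order.end(), std::mt19937_64{42});
    for (std::size_t k = 0; k < n; ++k) next[order[k]] = order[(k + 1) % n];
    chase("random");
}
```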

Saying that the engine is "driven primarily by the CPU" is also misleading; if that were true, performance on an "integrated" graphics card would be comparable to a "dedicated" graphics card. Very little forum research is required to confirm that this is ... not entirely true, shall we say, to put it mildly.

GW2 can happily use 70 percent of the power of a 1080 Ti on maximum settings when fed by a significantly overclocked, fastest-single-thread CPU; that gives you a rough benchmark for a GPU "fast enough" to stop being the bottleneck. If you don't hit that mark, it doesn't matter how fast your CPU is: it will feed the GPU faster than the GPU can keep up with, holding down overall performance.

Now, that is definitely an achievable target for many players in terms of GPU performance (especially if they reduce GPU-heavy settings), but it is risky to assume that all players hit it.

Hope that helps clarify the underlying reasons here. Unfortunately for y'all, the AMD CPUs are great for lots of cores, but less effective in single-threaded performance than comparable Intel CPUs. This means that, for GW2, they are a little more likely to be the bottleneck.


@"moonstarmac.4603" said:Here's the Teq fight. I wonder if upgrading to a 580 or the upcoming 580X 8G model would help with FPS, but honestly it plays much better than my FX 6300 + R7 270X combo did.

Interesting, you get roughly a third of the FPS I get with my Intel build. You could download a program like GPU-Z or MSI Afterburner to monitor GPU usage. But of course FX was junk; Ryzen can do about 50% more instructions per cycle. For reference, I am using an i7-8700k @ 5 GHz (4.8 GHz cache), 4266-17-17-17-28 RAM with hand-tuned subtimings, and a 1080 Ti @ 2 GHz with 6200 MHz effective memory speed.

@Crinn.7864 said: I'm pretty sure that RAM speed and timings matter more to GW2 performance than the CPU and GPU. That's not to say the CPU and GPU don't matter, but anything that isn't a Celeron or an FX-series chip should handle the game fine.

Yes, RAM speed is important, because the CPU usage Windows shows is only a rough estimate. The OS gives applications slices of CPU time (10 ms, I think; might be more or less), but the OS does not know how that CPU time is actually used. If the CPU is waiting for data the whole time, it could show 100% usage while only 1% of the CPU's capability is being used. A better way to measure CPU usage would be to monitor power consumption (with hardware tools; software, again, only estimates it).

@Crinn.7864 said:

@Malediktus.9250 said: EDIT: I just tested recording in South Sun Cove with Fraps, and the FPS went from 100+ to just 50. So even on a high-end computer it costs tons of performance without separate recording hardware.

Fraps should not be causing that much of a performance hit. You aren't, by any chance, recording to the same drive GW2 is installed on?

Also, when I record, I use the GPU to do the encoding rather than the CPU, as my GPU is under-utilized in most titles.

No, I am recording to a different storage device (a regular hard disk with about 170 MB/s write speed). I tried recording with Dxtory now, and it is only a 5-10% FPS loss, so it seems to be a problem with Fraps. But with Dxtory I currently have the problem that it won't record any sound.

I wonder how the new generation of Ryzen compares to this. Benchmarks show that AMD managed to improve the memory and cache latencies by a good margin for a refresh.

Also, I hope the new CPU microarchitecture Intel is working on uses VISC. It would allow multiple cores to work as one and essentially deliver really good single-core performance. It is so unsatisfying that nothing can run this game at a steady 165 Hz :(


@Malediktus.9250 said: Interesting, you get roughly a third of the FPS I get with my Intel build. You could download a program like GPU-Z or MSI Afterburner to monitor GPU usage. But of course FX was junk; Ryzen can do about 50% more instructions per cycle. For reference, I am using an i7-8700k @ 5 GHz (4.8 GHz cache), 4266-17-17-17-28 RAM with hand-tuned subtimings, and a 1080 Ti @ 2 GHz with 6200 MHz effective memory speed. I wonder how the new generation of Ryzen compares to this. Benchmarks show that AMD managed to improve the memory and cache latencies by a good margin for a refresh.

The 8700K does score 28% higher in single-core benchmarks. As for the Ryzen 2600X and the X470 boards, they fixed many issues with RAM speeds and chip overclocking. My CPU tends to fail if I try to OC it, which I think is partly due to Gen1 drawbacks. The Gen2 chips also boost across all cores with Turbo, versus only 2 cores on Gen1.

@SlippyCheeze.5483 said: the AMD CPUs are great for lots of cores, but less effective in single-threaded performance than comparable Intel CPUs. This means that, for GW2, they are a little more likely to be the bottleneck.

This is sadly something AMD users have been experiencing for a while now. It is why I was considering an i7 7700K at the time, but the motherboard cost ultimately drew me away: where the CPUs were the same price at the time, the Intel motherboard would have cost me around $100 more. Of course, within a month or so of getting my Ryzen build, Intel's 8th gen launched and would have cost about the same to build as the Ryzen did. All in all, I will do what you suggested and run GPU-Z and CPUID in the background to record my stats while I play. The RX 470 isn't that old a card and shouldn't be a bottleneck, so I am betting it is a single-core performance issue.

I do know that if I turn Character Model Limit to Low, my FPS increases by about 20. But overall, even at 15 FPS during those massive zergs, the gameplay feels smooth to me, with no sense of lag.


@moonstarmac.4603 said:

@"SlippyCheeze.5483" said:the AMD CPUs are great for lots of cores, but less effective for single threaded performance than similar Intel CPUs. This means that, for GW2, they are a little more likely to be the bottleneck.

This is sadly something AMD users have been experiencing for a while now.

The most annoying part being, of course, that this is mostly an issue with older game engines, dating back to before four to six cores was normal. What worked fine in the two-core days, when single-threaded clock speeds just got higher every year ... just stopped, suddenly, when we - and by "we" I mean "Intel and competitors" - ran into the laws of physics, hard. Hence: more cores, no significant per-core speed improvements, and suddenly my whole industry has to learn a whole new way of doing things...

Anyway, point being: this is something that is just going to hurt. Not just here, but everywhere, except for the biggest studios, or for things built on a platform designed for the new world order of more cores and less per-core speed. (Thankfully, this finally seems to be happening, with, e.g., Unreal Engine and CryEngine showing up all over the shop...)

So, if you don't play GW2 exclusively, an AMD CPU is much better value for money than Intel in most cases. shrug Such is life. The ANet team have done amazing work moving stuff into additional threads, and I have no doubt they feel this even more painfully than we do. (I also know that making those changes is literally the hardest possible thing for a software developer to do. Source: I've done it. Ouch.)

Link to comment
Share on other sites

@moonstarmac.4603 said:

@Malediktus.9250 said: Interesting, you get roughly a third of the FPS I get with my Intel build. You could download a program like GPU-Z or MSI Afterburner to monitor GPU usage. But of course FX was junk; Ryzen can do about 50% more instructions per cycle. For reference, I am using an i7-8700k @ 5 GHz (4.8 GHz cache), 4266-17-17-17-28 RAM with hand-tuned subtimings, and a 1080 Ti @ 2 GHz with 6200 MHz effective memory speed. I wonder how the new generation of Ryzen compares to this. Benchmarks show that AMD managed to improve the memory and cache latencies by a good margin for a refresh.

The 8700K does score 28% higher in single-core benchmarks. As for the Ryzen 2600X and the X470 boards, they fixed many issues with RAM speeds and chip overclocking. My CPU tends to fail if I try to OC it, which I think is partly due to Gen1 drawbacks. The Gen2 chips also boost across all cores with Turbo, versus only 2 cores on Gen1.

Which is irrelevant for GW2. No desktop processor made in the last two years lacks the single-core performance necessary for GW2. It's also worth noting that the 28% number requires some seriously unrealistic workloads; in any typical workload, GW2 included, the 2700X and the 8700K are going to be trading blows. An 8700K can pull ahead due to higher overclocking headroom, but getting that headroom requires investing some $$$$ in cooling and pricey motherboards. (Pretty much every value-priced Z370 motherboard has VRMs made out of potatoes, and it's debatable whether any Z370 motherboard qualifies as "value" anyway.)

The primary bottleneck for GW2 is the CPU's ability to get data from RAM, not the CPU itself. The reason your CPU reports only moderate usage when running GW2 is that the threads are all stalled waiting on RAM, or on data from storage. (MMORPGs have far more disk utilization than other titles because the game has to load character models on the fly.)

@SlippyCheeze.5483 said: the AMD CPUs are great for lots of cores, but less effective in single-threaded performance than comparable Intel CPUs. This means that, for GW2, they are a little more likely to be the bottleneck.

No Ryzen CPU will bottleneck GW2, and neither will any modern Intel CPU.

Realistically, the only reason to buy anything other than a 2nd-gen Ryzen right now is that only X470 chipset motherboards are available, and for most users the X470 isn't worth the price. Once B450 motherboards come out, there will be literally no reason to use Intel outside of extreme overclocking, at least until Intel gets its next generation of CPUs out late this year.


So I am sitting waiting for Teq again, but this time with MSI Afterburner giving me a full readout. My GPU is sitting at 100% while my CPU is barely scratching 40%. I will upload the video once Teq is down... should be viewable by 9 PM Eastern.

...or not... my mouse disabled itself and the recording stopped... will try something else.


@"Crinn.7864" said:

The 8700K does score 28% higher in single-core benchmarks. As for the Ryzen 2600X and the X470 boards, they fixed many issues with RAM speeds and chip overclocking. My CPU tends to fail if I try to OC it, which I think is partly due to Gen1 drawbacks. The Gen2 chips also boost across all cores with Turbo, versus only 2 cores on Gen1.

Which is irrelevant for GW2. No desktop processor made in the last two years lacks the single-core performance necessary for GW2. It's also worth noting that the 28% number requires some seriously unrealistic workloads; in any typical workload, GW2 included, the 2700X and the 8700K are going to be trading blows. An 8700K can pull ahead due to higher overclocking headroom, but getting that headroom requires investing some $$$$ in cooling and pricey motherboards. (Pretty much every value-priced Z370 motherboard has VRMs made out of potatoes, and it's debatable whether any Z370 motherboard qualifies as "value" anyway.)

The primary bottleneck for GW2 is the CPU's ability to get data from RAM, not the CPU itself. The reason your CPU reports only moderate usage when running GW2 is that the threads are all stalled waiting on RAM, or on data from storage. (MMORPGs have far more disk utilization than other titles because the game has to load character models on the fly.)

As I said earlier, the OS does not know if the CPU is waiting for data. It simply assigns CPU time to applications whenever the app requests it; if I recall correctly, in batches of 10 ms. Whether the CPU does any kind of work during that time is unknown to the OS. At 5 GHz a CPU cycle takes 0.2 ns, so 10 ms means a core is reserved for 50,000,000 cycles for that app, but the usage of those cycles is completely unknown to the OS.

Now we come to the latencies. Data taken from my build:

L1 cache: 0.8 ns (4 CPU cycles)
L2 cache: 2.4 ns (12 CPU cycles)
L3 cache: 9.8 ns (49 CPU cycles)
RAM: 36.7 ns (184 CPU cycles)

So it can take quite a long time (from the perspective of the CPU) to get the data it needs. Sadly, I am not able to achieve much better latencies. I could OC the CPU to 5.2 GHz core / 5 GHz cache, but figured it would not be worth the extra voltage. I could improve cooling on the RAM, but the current settings already require the RAM to stay below 45°C, which I could only achieve by putting a cooler on it; higher temps start throwing errors in Memtest. Maybe if I invested in a better-binned CPU, better memory, and watercooled RAM, I could reach 33 ns latency if lucky. But I doubt that investment would pay off.
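(For reference, the cycle counts quoted above follow directly from latency multiplied by clock speed; here is a minimal sketch of the arithmetic. The 5 GHz clock and the latency figures are the ones from the build above, not general values.)

```cpp
// Convert measured latencies to cycles at a given core clock, and show what
// fraction of peak throughput a fully RAM-latency-bound loop could reach.
#include <cstdio>

int main() {
    const double ghz = 5.0; // core clock: one cycle = 1/5 ns = 0.2 ns
    const struct { const char* level; double ns; } lat[] = {
        {"L1 ", 0.8}, {"L2 ", 2.4}, {"L3 ", 9.8}, {"RAM", 36.7},
    };
    for (const auto& l : lat)
        std::printf("%s: %5.1f ns = %5.1f cycles\n", l.level, l.ns, l.ns * ghz);

    // If every instruction had to wait on RAM, the core would retire roughly
    // one instruction per ~184 cycles: about 0.5% of its peak, while still
    // showing up as "100% busy" in Task Manager.
    std::printf("worst case: %.2f%% of peak\n", 100.0 / (36.7 * ghz));
}
```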


As for DDR5, I am not very hopeful. DDR5 will likely be slower than my OCed DDR4 RAM at first. It was the same with DDR4: DDR4 was slower than high-end DDR3 for quite a while. Yes, DDR5 will probably start at 4400 MT/s, but the rumored main timing of CL42 (https://www.anandtech.com/show/12710/cadence-micron-demo-ddr5-subsystem) sounds like a disaster to me. And you can already buy DDR4-4600 CL19 with DDR5 at least a year away (though good luck running that without a pre-binned CPU and a high-end motherboard; the silicon lottery applies everywhere).


Typical performance at a world boss... bottlenecked by the CPU.

Turn your Character Model Limit to Lowest to display fewer visible avatars on screen, and you will get better FPS.

@"Amineo.8951" said:Direct X9 is outdated.

another day, another wannabe-smart person complaining about GW2 not running on the newer DX11/12/Vulkan APIs

DX11/12/Vulkan are all primarily graphics APIs; updating GW2 would provide little FPS benefit, while it would take ANet months, if not years, to migrate to and stabilize on a new API. https://semiaccurate.com/2016/03/01/investigating-directx-12-cpu-scaling/

So the investment-versus-return is not financially feasible, while continually pushing the current engine and pushing out new content = revenue.

The hurdle the majority of programmers need to overcome is multi-threading. Unlike synthetic benchmarks or scientific calculations, games are difficult to multithread; the difficulty stems from the unpredictability of user interaction, especially with the number of players in an MMO. When too many unpredicted instructions and outcomes hit in parallel, the game becomes unstable and crashes often. http://archive.fortune.com/2008/08/13/technology/microchips_copeland.fortune/index.htm



@Crinn.7864 said:

@Malediktus.9250 said: Interesting, you get roughly a third of the FPS I get with my Intel build. You could download a program like GPU-Z or MSI Afterburner to monitor GPU usage. But of course FX was junk; Ryzen can do about 50% more instructions per cycle. For reference, I am using an i7-8700k @ 5 GHz (4.8 GHz cache), 4266-17-17-17-28 RAM with hand-tuned subtimings, and a 1080 Ti @ 2 GHz with 6200 MHz effective memory speed. I wonder how the new generation of Ryzen compares to this. Benchmarks show that AMD managed to improve the memory and cache latencies by a good margin for a refresh.

The 8700K does score 28% higher in single-core benchmarks. As for the Ryzen 2600X and the X470 boards, they fixed many issues with RAM speeds and chip overclocking. My CPU tends to fail if I try to OC it, which I think is partly due to Gen1 drawbacks. The Gen2 chips also boost across all cores with Turbo, versus only 2 cores on Gen1.

Which is irrelevant for GW2. No desktop processor made in the last two years lacks the single-core performance necessary for GW2. It's also worth noting that the 28% number requires some seriously unrealistic workloads; in any typical workload, GW2 included, the 2700X and the 8700K are going to be trading blows. An 8700K can pull ahead due to higher overclocking headroom, but getting that headroom requires investing some $$$$ in cooling and pricey motherboards. (Pretty much every value-priced Z370 motherboard has VRMs made out of potatoes, and it's debatable whether any Z370 motherboard qualifies as "value" anyway.)

O_o I'm ... going to have to disagree with you there, but that's OK, because so do the ANet developers. Nothing lacks the CPU power to be able to play GW2, yes, but ... none of 'em are gonna deliver a nice, steady 144 FPS, right?

So, when I say that is the limiting factor, I don't mean "you will not be able to play because..."; I mean "you will not be able to make it go faster because...", and that comes with the specific caveat that you will hit that limit if, and only if, you eliminate every other bottleneck.


@Malediktus.9250 said:

@"Crinn.7864" said:

The 8700K does score 28% higher in single-core benchmarks. As for the Ryzen 2600X and the X470 boards, they fixed many issues with RAM speeds and chip overclocking. My CPU tends to fail if I try to OC it, which I think is partly due to Gen1 drawbacks. The Gen2 chips also boost across all cores with Turbo, versus only 2 cores on Gen1.

Which is irrelevant for GW2. No desktop processor made in the last two years lacks the single-core performance necessary for GW2. It's also worth noting that the 28% number requires some seriously unrealistic workloads; in any typical workload, GW2 included, the 2700X and the 8700K are going to be trading blows. An 8700K can pull ahead due to higher overclocking headroom, but getting that headroom requires investing some $$$$ in cooling and pricey motherboards. (Pretty much every value-priced Z370 motherboard has VRMs made out of potatoes, and it's debatable whether any Z370 motherboard qualifies as "value" anyway.)

The primary bottleneck for GW2 is the CPU's ability to get data from RAM, not the CPU itself. The reason your CPU reports only moderate usage when running GW2 is that the threads are all stalled waiting on RAM, or on data from storage. (MMORPGs have far more disk utilization than other titles because the game has to load character models on the fly.)

As I said earlier, the OS does not know if the CPU is waiting for data. It simply assigns CPU time to applications whenever the app requests it; if I recall correctly, in batches of 10 ms. Whether the CPU does any kind of work during that time is unknown to the OS.

Part of this is true; part of it is ... not really true. Ultimately, though, this is exactly what hyper-threading attacks, by allowing more work to be done during those otherwise stalled periods. The 10 ms you cite is also (most likely) the longest period a purely CPU-bound workload will run before being preempted so something else can run, if it wants to. (I suspect, but have not bothered to check, that this is longer for something flagged as a "game" by Windows 10 "game mode", by the way, as that can help deliver frames faster.)

At 5 GHz a CPU cycle takes 0.2 ns, so 10 ms means a core is reserved for 50,000,000 cycles for that app, but the usage of those cycles is completely unknown to the OS. Now we come to the latencies.

So it can take quite a long time (from the perspective of the CPU) to get the data it needs. Sadly, I am not able to achieve much better latencies.

You are using a static picture of performance to draw conclusions about dynamic behaviour: how much data is effectively prefetched into that faster cache rather than causing a stall? How frequently is a second thread able to consume the execution units?

It is absolutely true that completely random memory read performance is terrible, just like random read performance on storage devices.

It is also absolutely true that if you look at those "bulk read" benchmarks, the data can come in an awful lot faster when it doesn't waste time, no? The same is true of memory.

Performance is ... hard to understand. This is, however, not a useful benchmark to compare. Sure, main memory performance versus cache is now comparable to what memory versus disk used to be, and disk ... well, feels like tape used to. shrug That's life, but it doesn't mean that everything moves at the speed of the slowest element involved. There are plenty of performance tricks out there that help hide the latency.

(Which, incidentally, is where the Meltdown and Spectre vulnerabilities came from: Intel, and AMD, gave up security for performance, cutting some corners to deal with the fact that memory is slooow, to deliver better performance per cycle.)


@moonstarmac.4603 said: Okay, so here's my readout on the Svanir Shaman fight, with MSI Afterburner capturing GPU and CPU performance. I did notice at times my GPU would drop to 0%; not sure if it was a bug or what.

I'd strongly suspect an issue capturing performance accurately, yeah. None of that is especially shocking; it mirrors the shape of performance I see on my system, which in turn mirrors the expected shape from what ANet have said about where the engine's performance limits lie.

If I had to guess, I'd guess that you are actually capping out on GPU performance there, with those 100-percent-busy periods. If that is the case, it would explain the 80-ish rather than 90-ish percent utilization on your busiest cores. :)


@crepuscular.9047 said: Typical performance at a world boss... bottlenecked by the CPU. Turn your Character Model Limit to Lowest to display fewer visible avatars on screen, and you will get better FPS.

Yup. They are the thing that most bottlenecks the main thread right now in the engine.

@"Amineo.8951" said:Direct X9 is outdated.

another day, another wannabe-smart person complaining about GW2 not running on the newer DX11/12/Vulkan APIs

DX11/12/Vulkan are all primarily graphics APIs; updating GW2 would provide little FPS benefit, while it would take ANet months, if not years, to migrate to and stabilize on a new API. So the investment-versus-return is not financially feasible, while continually pushing the current engine and pushing out new content = revenue.

It is also worth noting that they really only improve CPU scaling if, and only if, the bottleneck is parallel scene setup, which doesn't seem to be the case in GW2: it isn't spending forever waiting on locks, it is spending forever crunching through all those models and other geometry prior to scene submission.

So, yeah, DX9 versus the others? No benefit. Better threading with DX9? Benefit. Exactly as you say.

The hurdle the majority of programmers need to overcome is multi-threading. Unlike synthetic benchmarks or scientific calculations, games are difficult to multithread; the difficulty stems from the unpredictability of user interaction, especially with the number of players in an MMO. When too many unpredicted instructions and outcomes hit in parallel, the game becomes unstable and crashes often.

This isn't a strictly accurate statement, though your conclusion is more or less correct: there is nothing inherent to multithreading that causes the game, or anything else, to become unstable or crash.

In the real world, though, it is the most common cause of that sort of thing, because people just ... don't think it through, and because it is super easy to mess up and cause a crash a second later, in unrelated code, apparently at random. So, while multithreading itself isn't inherently unstable, it is a cause of bugs because it is very, very hard to get right.

The only thing worse than writing multithreaded code, in fact, is making previously single-threaded code multithreaded. You get to discover just how many assumptions of the form "X happens before Y" people can make, assumptions that are invisible until Y can happen before X... (a minimal sketch of what I mean follows.)
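(Here is a minimal sketch of that kind of invisible ordering assumption. All the names are hypothetical, not from any real engine, and the atomic "ready" flag shown is one standard way to restore the ordering once a second thread enters the picture.)

```cpp
// Minimal sketch of an "X happens before Y" assumption breaking under threads.
// In a single-threaded program, init() always ran before use(); once init()
// moves to a second thread, use() could observe 'config' half-written.
#include <atomic>
#include <string>
#include <thread>

struct Config { std::string path; int level = 0; };

Config config;                      // shared, unsynchronized state
std::atomic<bool> ready{false};     // publication flag: release/acquire

void init() {
    config.path = "assets/models";  // hypothetical fields, for illustration
    config.level = 3;
    ready.store(true, std::memory_order_release); // orders the writes above
}

int use() {
    while (!ready.load(std::memory_order_acquire)) { /* spin (demo only) */ }
    return config.level;            // safe: init()'s writes happen-before this
}

int main() {
    std::thread t(init);
    int level = use();              // without 'ready', this would be a data race
    t.join();
    return level == 3 ? 0 : 1;
}
```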

And worse even than that is when you also have the latency of a slow network between client and server to take into account, on top of everything else... ;)


All I can say is that even though my FPS is 15-20 during these fights, the game still plays really smoothly. Right after the fight, I pop back up to 30+ FPS. I normally find myself soloing zones, so I pull even higher frames in the long run.

I guess, all in all, my system does great for what I want it to do.

