
Gw2 DX12pxy needs to be implemented directly



7 hours ago, Infusion.7149 said:

What I wrote is not hypothetical; you're theorizing 10% per thread when that isn't the case, as seen with the applications mentioned in the study, which have far more financial resources at their disposal. Anyone who overclocks (Intel or AMD) knows you will hit lower clocks on AVX2 than on SSE or AVX. In fact, Intel CPUs stuck on the 14nm process are rather bad at this: if your cooling is stock or not water-based, you will drop over 10% of your clocks. That's why we have AVX offsets.

see https://www.techpowerup.com/review/intel-core-i9-9900k/18.html
https://www.techpowerup.com/review/intel-core-i7-10700k/20.html
https://www.techpowerup.com/review/intel-core-i7-11700kf/23.html

You do know that a 10% gain is basically not worthwhile for them to implement, right? It's 10% total, not per core. Don't conflate threading with vectorization, because your gain is mostly going to come from threading.
A good example is the Last of Us Remaster engine rework: https://www.gdcvault.com/play/1022186/Parallelizing-the-Naughty-Dog-Engine

Threading is something they've been working on for a while, but some things are inherently serial in nature. Due to Amdahl's law, it is highly unlikely to scale well past six cores as a result.

Well, I simply said you can do it the lazy way; there isn't much work to be done if the compiler agrees with you.

 

The point is the problem isn't new and is well known, so there are automatic solutions at this point in time.

 

If people want to know how far you can really drive this, see this (all 3 parts):

 


The point is: hypothetical (more theory) vs. actual.

Do you know that even last year, when CERN (the research centre running the Large Hadron Collider) queried the CPUs in use, only 61.5% had the avx2 flag?
https://indico.cern.ch/event/883512/contributions/3722989/attachments/1978627/3294056/20200130-WLCGOpsCoord-AVX2collection_jschovan.pdf

https://twiki.cern.ch/twiki/bin/view/LCG/WLCGOpsMinutes200130

 

Quote

 

The Worldwide LHC Computing Grid (WLCG) project is a global collaboration of around 170 computing centres in more than 40 countries, linking up national and international grid infrastructures.  The mission of the WLCG project is to provide global computing resources to store, distribute and analyse the ~50-70 Petabytes of data expected every year of operations from the Large Hadron Collider (LHC) at CERN on the Franco-Swiss border.

 


Meanwhile, you're proposing they implement it on a 9-year-old game?

17 hours ago, Zuldari.3940 said:

So wait, is this for Windows or Linux? Because I thought DXVK didn't work on Win10.

 

It works fabulously (and better and faster than d912pxy) on my Windows 10 computer.

  • Like 1
  • Thanks 1
17 hours ago, Zuldari.3940 said:

So wait, is this for Windows or Linux? Because I thought DXVK didn't work on Win10.


It's not officially supported on Windows, and Linux uses Fossilize to prebuild the shader cache. Other than that, it is usable and still faster and more stable than d912pxy.

  • Like 1
  • 4 weeks later...

Our whining paid off, everyone!

https://www.guildwars2.com/en/news/arenanet-studio-update-july-2021/

 

Quote

 

Client Performance Optimization

We’ve heard you loud and clear—Guild Wars 2 needs better frame rates. We agree. We’re actively working on upgrading our engine to DirectX 11, and we expect to be able to roll it out in an opt-in beta later this year. An important note is that the upgrade to DX11 itself isn’t a magical fix for frame rates on its own. Some players may not notice a difference at all. However, upgrading to DX11 opens a lot of doors for improving performance—CPU multithreading for instance. It also paves the way for some potential graphics upgrades down the road.

We’re investing in our infrastructure, engine, and graphics because we’re looking to the future. This is a long-term effort.

 

 

  • Like 2
  • Confused 1

I mean, anything is an upgrade from DX9, but why not take the performance benefits of DX12 over DX11 and recompile for that API instead? ESPECIALLY for AMD Radeon users: I realize Nvidia doesn't get much of a bump from 11 to 12 due to their fantastically optimized DX11 driver, but AMD's cards favor DX12.

 

They can always just offer a selectable DX9/DX12 mode like WoW (which even supports Metal on Mac, lol) if they're that afraid of losing pre-Win 10 customers.

 

Heck, Win 11 is supposedly coming out in a few months. If this is really such a huge undertaking, why not go for the latest API, since it's been out for 5-6 years now?

 

Or, better yet, use Vulkan.

  • Like 1
2 minutes ago, blppt.6042 said:

I mean, anything is an upgrade from DX9, but why not take the performance benefits of DX12 over DX11 and recompile for that API instead? ESPECIALLY for AMD Radeon users: I realize Nvidia doesn't get much of a bump from 11 to 12 due to their fantastically optimized DX11 driver, but AMD's cards favor DX12.

 

They can always just offer a selectable DX9/DX12 mode like WoW (which even supports Metal on Mac, lol) if they're that afraid of losing pre-Win 10 customers.

 

Heck, Win 11 is supposedly coming out in a few months. If this is really such a huge undertaking, why not go for the latest API, since it's been out for 5-6 years now?

 

Or, better yet, use Vulkan.

IKR; as an AMD user, we can dream, can't we?

  • Haha 1
Quote

It's not officially supported on Windows, and Linux uses Fossilize to prebuild the shader cache. Other than that, it is usable and still faster and more stable than d912pxy.

 

Just out of curiosity, are you using an Nvidia or an AMD video card? In my experience, DXVK on Windows for GW2 on my AMD card keeps giving the stupid "client crashed" error in WvW, although, amusingly, it still runs.

  • Like 1
1 minute ago, blppt.6042 said:

 

Just out of curiosity, are you using an Nvidia or an AMD video card? In my experience, DXVK on Windows for GW2 on my AMD card keeps giving the stupid "client crashed" error in WvW, although, amusingly, it still runs.

I get that too.

  • Like 1

The problem is that even after I move that client-crash window off screen, there starts to be some really annoying hitching in mobs/zergs in WvW. My guess is that the shader cache isn't caching the player models properly and has to keep loading them from scratch, and it never seems to improve with the warm-up time that most DX12/Vulkan wrappers require.

 

Are you using AMD or Nvidia? I'd use d912pxy, but it has a strange issue after a few days of use where it kills overlays for other games and won't let Windows shut down or reboot without a hard reset. I'd love to make DXVK usable on Win 10.

