Gw2 DX12pxy needs to be implemented directly


7 hours ago, Infusion.7149 said:

What I wrote is not hypothetical. You're theorizing a 10% gain per thread, which isn't what the applications mentioned in the study actually achieve, and those projects have far more financial resources at their disposal. Anyone who overclocks (Intel or AMD) knows you hit lower clocks under AVX2 than under SSE or AVX. Intel CPUs stuck on the 14nm process are particularly bad here: on stock or non-water cooling, clocks drop by over 10%. That's why AVX offsets exist.

see https://www.techpowerup.com/review/intel-core-i9-9900k/18.html
https://www.techpowerup.com/review/intel-core-i7-10700k/20.html
https://www.techpowerup.com/review/intel-core-i7-11700kf/23.html

You do know that a 10% gain is basically not worth the implementation effort, right? It's 10% in total, not per core. Don't conflate threading with vectorization; most of your gain is going to come from threading.
A good example is the engine rework for The Last of Us Remastered: https://www.gdcvault.com/play/1022186/Parallelizing-the-Naughty-Dog-Engine

Threading is something they've been working on for a while, but some things are inherently serial in nature. By Amdahl's law, it is highly unlikely to scale well past six cores as a result.
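
As an aside on the quote above: the distinction matters because threading scales with core count while vectorization is a fixed per-core multiplier. Below is a minimal sketch of the threading side; the function names are illustrative, not ArenaNet's code, and the Naughty Dog talk linked above actually uses a fiber-based job system rather than raw threads.

```cpp
// Minimal sketch: splitting independent iterations across threads.
// transform_chunk and transform_parallel are hypothetical stand-ins
// for engine work; a real engine would use a job system, not raw threads.
#include <cstddef>
#include <thread>
#include <vector>

static void transform_chunk(float* data, std::size_t begin, std::size_t end) {
    for (std::size_t i = begin; i < end; ++i)
        data[i] = data[i] * 2.0f + 1.0f;  // each core may ALSO vectorize this loop
}

void transform_parallel(float* data, std::size_t n, unsigned num_threads) {
    std::vector<std::thread> workers;
    const std::size_t chunk = n / num_threads;
    for (unsigned t = 0; t < num_threads; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t + 1 == num_threads) ? n : begin + chunk;
        workers.emplace_back(transform_chunk, data, begin, end);
    }
    for (auto& w : workers)
        w.join();  // threading gain grows with core count; SIMD gain does not
}
```

The six-core ceiling follows directly from Amdahl's law, S(n) = 1 / ((1 − p) + p/n): with an illustrative parallel fraction of p = 0.85 (not a measured GW2 figure), S(6) ≈ 3.4, S(12) ≈ 4.5, and S(∞) ≈ 6.7, so doubling the cores past six buys barely 30% more.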

Well, I simply said you can do it the lazy way; there is not much work to be done if the compiler agrees with you.

 

The point is that the problem isn't new and is well understood, so automatic solutions exist at this point in time.
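
A minimal sketch of that lazy path, assuming GCC or Clang (the function name is illustrative): write the hot loop so the auto-vectorizer can handle it, then build with -O3 -mavx2.

```cpp
// Minimal sketch: a loop shaped for auto-vectorization.
// No aliasing (__restrict), no branches, unit stride: built with
// -O3 -mavx2, GCC/Clang can emit AVX2 code here without intrinsics.
#include <cstddef>

void scale_positions(float* __restrict out,
                     const float* __restrict in,
                     float scale, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i] * scale;
}
```

Whether the compiler actually "agreed" can be checked with -fopt-info-vec on GCC or -Rpass=loop-vectorize on Clang.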

 

If people want to know how far you can really drive this, see here (all 3 parts):

 


The point is: hypothetical (more theory) vs. actual.

Did you know that even last year, when CERN (the research centre operating the Large Hadron Collider) queried the CPUs in use across its computing grid, only 61.5% reported the avx2 flag?
https://indico.cern.ch/event/883512/contributions/3722989/attachments/1978627/3294056/20200130-WLCGOpsCoord-AVX2collection_jschovan.pdf

https://twiki.cern.ch/twiki/bin/view/LCG/WLCGOpsMinutes200130

 

Quote

 

The Worldwide LHC Computing Grid (WLCG) project is a global collaboration of around 170 computing centres in more than 40 countries, linking up national and international grid infrastructures.  The mission of the WLCG project is to provide global computing resources to store, distribute and analyse the ~50-70 Petabytes of data expected every year of operations from the Large Hadron Collider (LHC) at CERN on the Franco-Swiss border.

 


Meanwhile, you're proposing they implement it in a 9-year-old game?
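
For context on why that hardware spread matters: if an engine does ship AVX2 code, it generally has to gate it at runtime so the roughly 38% of CPUs without the flag still run. A minimal sketch, assuming GCC or Clang (__builtin_cpu_supports checks CPUID at runtime; MSVC would use __cpuid instead; the transform functions are hypothetical stand-ins):

```cpp
// Minimal sketch: runtime dispatch between baseline and AVX2 paths.
#include <cstddef>

static void transform_scalar(float* out, const float* in, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i] * 2.0f + 1.0f;
}

#if defined(__GNUC__) || defined(__clang__)
// The target attribute lets the compiler emit AVX2 instructions for this
// one function while the rest of the binary stays baseline x86-64.
__attribute__((target("avx2")))
static void transform_avx2(float* out, const float* in, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i] * 2.0f + 1.0f;
}
#endif

void transform(float* out, const float* in, std::size_t n) {
#if defined(__GNUC__) || defined(__clang__)
    if (__builtin_cpu_supports("avx2")) {  // CPUID check at runtime
        transform_avx2(out, in, n);
        return;
    }
#endif
    transform_scalar(out, in, n);  // fallback for CPUs without AVX2
}
```

So the AVX2 path is an addition on top of the baseline, not a replacement, which is part of why the maintenance cost rarely pays for a ~10% win.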
