
Where Did Prot Holo Come From?


Recommended Posts

guys wtf..? 

this is a game forum. Not a masters degree in philosophy or competition on who can appear the most intelligent .. 

all these posts with fancy words and obvious intent on looking intellectually superior are cringe af. 

no one is gonna hand you the Nobel Prize for any of this

 

pls stop urselves

Edited by toxic.3648

31 minutes ago, Crozame.4098 said:

But you were talking about relations between elements in some previous posts, and now you are saying simple addition would work... 

 

You need to ask yourself this question: how long does it take to solve a problem? This is the primordial essence of what complexity theory is and why it exists at all as a serious field. If you are a computer and you had to solve a problem, how long would that problem take to solve? It's going to be based on the number of elements, how well those elements are connected (by a function), and the function those elements are obeying...

 

So again it just boils down to a math problem, and you have to be thinking in terms of how hard a problem is to solve. That's what complexity is (formally).

 

If you have an addition problem that has 1000 variables, where each variable is just a unary skill value like Empowering Flame... x+y+z+a+b+c+d+e and so on... that's a big problem, but it's not that hard... you can solve it pretty easily, especially when you are only comparing two elements at a time: x+y = 10, y+z = 15, and so on... this is not complex.

 

In contrast, you can imagine a math problem that has only 4 variables, again where each variable is just a skill (x, y, z, a), but the function they obey is polynomial or something... it would be really hard to actually solve that equation, and so the relationship between those 4 variables is complex.
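To make the contrast concrete, here's a toy Python sketch (the function name and numbers are my own invention, nothing from the game): a chain of pairwise sums like x+y = 10, y+z = 15 can be unwound with one subtraction per variable, so even 1000 variables stay easy.

```python
# Toy illustration: a chain of pairwise sums is solvable in linear time.
# Given x+y = 10, y+z = 15, ... and one known value, each remaining
# variable follows from a single subtraction -- O(n) work for n variables.

def solve_chain(pair_sums, first_value):
    """pair_sums[i] is the sum of variable i and variable i+1."""
    values = [first_value]
    for s in pair_sums:
        values.append(s - values[-1])
    return values

# x = 4, x+y = 10, y+z = 15  ->  y = 6, z = 9
print(solve_chain([10, 15], 4))
```

A coupled polynomial system over the same variables offers no such one-pass unwind, which is the sense in which four entangled variables can be harder than a thousand loosely connected ones.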

 

In gw2, the problem being solved is an optimality problem: is skill A better than skill B, is build A the best build to play, and so on. If the problem is easy, the meta will reflect that (people are running around playing the best build). If the problem is hard, the meta should reflect that too (people haven't figured it out yet... people are running around with a variety of builds).

 

Gw2 sits in the area between: there are too many elements with mediocre relationships. It's got the "bad" kind of complexity and not the "good" kind of complexity.


9 hours ago, toxic.3648 said:

guys wtf..? 

this is a game forum. Not a masters degree in philosophy or competition on who can appear the most intelligent .. 

all these posts with fancy words and obvious intent on looking intellectually superior are cringe af. 

no one is gonna hand you the Nobel Prize for any of this

 

pls stop urselves

Nobody forces you to read it.

Edited by razaelll.8324

There's a place for this kind of discussion. It's a welcome change of pace after a year of "Nerf X. Nerf Y". These guys are attempting to suggest alternate methods of balancing the game. Simply deleting builds/options from the game did not work as well in practice as it did in theory, so that leaves us with the question, "What is the best method of balancing Gw2"?

 

Normally I'd make an attempt to break down the discussion into layman's terms so more of us can follow the discussion, but once we start getting into data sets, functions, variables, and complex equations, I get PTSD from AP calc and my brain shuts down. 

 

Regardless, they are staying on topic. Discussions about how to develop a meta tie into how Prot Holo in particular became strong in the first place and what should be done about it. These guys seem to understand each other just fine. If using fancy language is the easiest way for them to get their point across to one another, more power to them.


4 hours ago, Kuma.1503 said:

Normally I'd make an attempt to break down the discussion into layman's terms so more of us can follow the discussion, but once we start getting into data sets, functions, variables, and complex equations, I get PTSD from AP calc and my brain shuts down. 

 

Lol, unfortunately the conversation isn't over just yet, because I still have... really important questions to ask Kolzar. But I can maybe summarize the most important aspects of what was said over the past 5-ish pages into a few paragraphs:

 

 

Kolzar presented a feasible model of gw2. From that, the fixed point (equilibrium) and its size (the number of builds it contains) serve as a description of the diversity of the game... Not only do I agree with that, but it is valid mathematically in terms of describing gw2 as a system that evolves toward a meta. However, he mentioned that complex computation has nothing to do with the size of this fixed point, and this bothers me for a number of reasons. If Kolzar is right, then his model as it stands would work, and it would mean that the following must be true: that the size of the equilibrium point determines the diversity of gw2, and that there are only 2 solutions that have the "largest size": A) a completely homogenized game, or B) a completely rock-paper-scissors game.

 

The above is true regardless, actually... with or without complexity time involved. However, without complexity time involved, the two options above are the only options we have, and both of them are dead ends, meaning that there is no real way to balance a game like gw2... no real answer to the diversity problem. The best compromise is to go for a fixed point somewhere in the middle... it's just that this means some builds will be destined to be good, and others will not. This is essentially the equivalent of being stripped of free will... you are no longer free to play the build you want... because in essence, you will eventually play the builds that are in the fixed-point solution; any other solution is in the "non-viable" section by design.
 

Currently, in my free time, I'm doing research on everything Kolzar has said thus far, and I'm going to see if there is a way to express this model to account for complex computation time, because so far I have not encountered any notion of why that is not the case.


29 minutes ago, JusticeRetroHunter.7684 said:

 

That the size of the equilibrium point determines the diversity of gw2, and that there are only 2 solutions that have the "largest size": A) a completely homogenized game, or B) a completely rock-paper-scissors game.

 

The above is true regardless, actually... with or without complexity time involved. However, without complexity time involved, the two options above are the only options we have, and both of them are dead ends, meaning that there is no real way to balance a game like gw2... no real answer to the diversity problem. The best compromise is to go for a fixed point somewhere in the middle... it's just that this means some builds will be destined to be good, and others will not. This is essentially the equivalent of being stripped of free will... you are no longer free to play the build you want... because in essence, you will eventually play the builds that are in the fixed-point solution; any other solution is in the "non-viable" section by design.

 

 

As far as I remember, I said the same thing a couple of pages ago: basically you have 2 types of balance.

 

1. Homogeneous, where everything is the same;

2. Heterogeneous, or in other words diverse, with everything having a counter (rock, paper, scissors).

 

Higher complexity only increases the time needed to find the optimal builds. At least that is my understanding and opinion on the topic.

 

The problem with the second type comes when 1 build counters many others (an overpowered build).

Edited by razaelll.8324

36 minutes ago, razaelll.8324 said:

As far as I remember, I said the same thing a couple of pages ago: basically you have 2 types of balance.

 

1. Homogeneous, where everything is the same;

2. Heterogeneous, or in other words diverse, with everything having a counter (rock, paper, scissors).

 

Higher complexity only increases the time needed to find the optimal builds. At least that is my understanding and opinion on the topic.

 

The problem with the second type comes when 1 build counters many others (an overpowered build).

 

The last one isn't as bad as the first one... it's the more ideal solution if one were to pick between the two, because that's kind of what we want in the end (heterogeneity). But at the same time, when you think about it, it's not possible to truly attain. It would have to be a perfect rock-paper-scissors game... like real rock paper scissors... otherwise you just get fixed points that are in between, and you fall victim to the above loss of free will... where some classes are destined to be bad or good by design.

 

There are also inconsistencies with the model... like the behavior of the fixed point not truly describing what happens at the extreme ends of the model. If you had a perfect rock-paper-scissors game and you removed one build that countered someone, then you have one build that has no counter, and everyone technically should flock to that build... but the fixed point "remains" large when it should be small, and the model doesn't properly describe that kind of behavior.

 

 

Quote

Higher complexity only increases the time needed to find the optimal builds. At least that is my understanding and opinion on the topic.

And ya, this is my position. It should be possible to just chop the system into discrete time intervals; then each time interval is a computation, and this defines a notion of complexity time. I've yet to come across a reason why that's not possible to do in the math, but I'm working through it.
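To sketch what I mean (toy numbers, not Kolzar's actual model): treat the meta as a map f applied once per discrete time step, and count the steps until f(x) = x.

```python
# Hypothetical sketch: discrete time steps, each step one application of f.
# The number of steps needed to reach the fixed point is the "complexity time".

def steps_to_fixed_point(f, x, tol=1e-9, max_steps=100_000):
    for step in range(max_steps):
        nxt = f(x)
        if abs(nxt - x) < tol:  # effectively f(x) = x: equilibrium reached
            return step
        x = nxt
    return max_steps

# An arbitrary contraction with its fixed point at 100 (purely illustrative):
f = lambda x: x + 0.5 * (100 - x)
print(steps_to_fixed_point(f, 0.0))  # a few dozen steps from x = 0
```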

Edited by JusticeRetroHunter.7684

Right now, as far as I've come along here with the fixed-point theorems, this is @Kolzar.9567's fixed point model:

 

https://i.imgur.com/ZgkbNND.png

 

The vector x is just one build, where f(x) is the vector for the "next" build. The smaller fixed point is akin to a completely homogeneous game, while the larger fixed point is akin to a rock-paper-scissors game.

 

One doesn't need to have vectors within vectors for this either, because you can imagine that the vector x is just the core Necromancer build, while f(x) is the core Necromancer build with one skill swapped for another. So instead of having set S with subsets inside, you can just have one very large set S that encompasses every possible configuration of all possible builds.

 

Looking at this model, you can imagine how one would make it discrete, by just making each function a derivative with respect to time (dx/dt); boom, you have discrete time, and then you can just count how much time it takes to reach the fixed point.

 

Now I understand that f: S -> S has to be continuous to satisfy an equilibrium solution, but again you can make discrete time infinitely precise to recover the continuous solution... so I'm at a loss here for why such a model doesn't affect computation time in relation to the size of S.

Edited by JusticeRetroHunter.7684

If it comes down to one or the other, I believe heterogeneity is the better choice

 

This got me thinking back to my time playing other competitive games. A very important lesson I've learned from each is that even when something seems like it comes down to simple counterpicks and counter-strategies (rock, paper, scissors), the actual interaction between players is a lot more nuanced than that.

 

I'll use League of Legends as an example, since it's a popular game and it's been around for over a decade. If you've played League for a few years, you probably remember (or have heard of) the era of Teemo toplane and how notoriously hated he was. Teemo was paper, while your typical low-mobility melee toplaner was rock.

 

If you tried to gap-close onto him, he would run away with Move Quick. If you actually managed to catch him and fought him in melee range, a scenario where you'd think "this is when I finally win", he'd blind you with Blinding Dart and laugh in your face. Think you can ask for assistance from your jungler? He turns the river into a minefield of shrooms and either runs away when the jungler approaches or 1v2s you.

 

That was the theory behind most matchups. In practice, Teemo had flaws just like any other champion. His scaling was atrocious. Simply going even with him, or even not falling too far behind him in terms of resources, was considered winning. Eventually you would be more useful than him. In this matchup you were the ticking time bomb, and the Teemo needed to contain it for as long as possible, hopefully long enough for his team to win.

 

There were holes in his defenses. When his Blinding Dart was on cooldown, you could go for an all-in with Flash and 100-0 him.

For some champions (I frequently played Volibear in this matchup), the simple act of landing your crowd control on him was enough to secure the kill, even if he burned his own Flash.

 

You could manipulate the minion wave to move closer to the bush so you could dip in and out of cover. The act of manipulating a wave has its own set of rules and principles that players need to follow. It's a deep topic that I could write an entire novel on. For the purpose of this post, all you need to know is that if you know how to manipulate a wave better than your opponent, it gives you a substantial advantage over them.

 

When a game has elements that require a high level of execution and/or knowledge to perform optimally, it prevents us from reaching a true state of rock paper scissors, because the counterpick only exists when both players perform optimally. The harder it is to achieve optimal play, the less relevant the counterpick becomes. 

 

With that in mind, it would seem that heterogeneity is the better of the two choices, since there are ways to preserve player agency ("free will") depending on how you design the combat of your game. It's simply a matter of giving players options: means of gaining an advantage over the other, either through superior tactics or execution. The end result should be the better player emerging victorious, even in a theoretical counter matchup.

 

 

Edited by Kuma.1503

3 hours ago, Kuma.1503 said:

If it comes down to one or the other, I believe heterogeneity is the better choice

 

This got me thinking back to my time playing other competitive games. A very important lesson I've learned from each is that even when something seems like it comes down to simple counterpicks and counter-strategies (rock, paper, scissors), the actual interaction between players is a lot more nuanced than that.

 

I'll use League of Legends as an example, since it's a popular game and it's been around for over a decade. If you've played League for a few years, you probably remember (or have heard of) the era of Teemo toplane and how notoriously hated he was. Teemo was paper, while your typical low-mobility melee toplaner was rock.

 

If you tried to gap-close onto him, he would run away with Move Quick. If you actually managed to catch him and fought him in melee range, a scenario where you'd think "this is when I finally win", he'd blind you with Blinding Dart and laugh in your face. Think you can ask for assistance from your jungler? He turns the river into a minefield of shrooms and either runs away when the jungler approaches or 1v2s you.

 

That was the theory behind most matchups. In practice, Teemo had flaws just like any other champion. His scaling was atrocious. Simply going even with him, or even not falling too far behind him in terms of resources, was considered winning. Eventually you would be more useful than him. In this matchup you were the ticking time bomb, and the Teemo needed to contain it for as long as possible, hopefully long enough for his team to win.

 

There were holes in his defenses. When his Blinding Dart was on cooldown, you could go for an all-in with Flash and 100-0 him.

For some champions (I frequently played Volibear in this matchup), the simple act of landing your crowd control on him was enough to secure the kill, even if he burned his own Flash.

 

You could manipulate the minion wave to move closer to the bush so you could dip in and out of cover. The act of manipulating a wave has its own set of rules and principles that players need to follow. It's a deep topic that I could write an entire novel on. For the purpose of this post, all you need to know is that if you know how to manipulate a wave better than your opponent, it gives you a substantial advantage over them.

 

When a game has elements that require a high level of execution and/or knowledge to perform optimally, it prevents us from reaching a true state of rock paper scissors, because the counterpick only exists when both players perform optimally. The harder it is to achieve optimal play, the less relevant the counterpick becomes. 

 

With that in mind, it would seem that heterogeneity is the better of the two choices, since there are ways to preserve player agency ("free will") depending on how you design the combat of your game. It's simply a matter of giving players options: means of gaining an advantage over the other, either through superior tactics or execution. The end result should be the better player emerging victorious, even in a theoretical counter matchup.

 

 

 

Interesting comment. I have a couple of things to add to it here.

 

Technically speaking, the fixed point for a completely homogeneous game is the smallest (because it contains one build), and heterogeneity is the largest fixed point (because it contains the largest number of builds). This means that in basically all cases, heterogeneity is the only operation that will increase the size of the fixed point. So for us diversity people, it's a win-win situation, even if it's not the most ideal solution. Death to homogeneity!

 

I still maintain the same thought process as you about the existence of counters as well. The more counters that exist, the more counters will exist to other counters, and by that kind of anthropic reasoning, there should exist a counter somewhere in the build space to any OP build that becomes popular. This helps sedate outlier builds, and that's a balancing mechanism.
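As a toy illustration of that balancing mechanism (the payoff numbers are invented, not from gw2): in a perfect rock-paper-scissors population, whatever build grows popular feeds its own counter, so no build's share ever collapses.

```python
# Replicator-style dynamics on rock-paper-scissors (illustrative numbers).
# A build's share grows when its payoff against the current population is
# positive; its counter then grows in response, sedating the outlier.

def step(shares, payoff, rate=0.05):
    fitness = [sum(p * s for p, s in zip(row, shares)) for row in payoff]
    new = [s * (1 + rate * f) for s, f in zip(shares, fitness)]
    total = sum(new)
    return [s / total for s in new]  # renormalize so shares sum to 1

payoff = [
    [0, -1, 1],   # rock:     loses to paper, beats scissors
    [1, 0, -1],   # paper:    beats rock, loses to scissors
    [-1, 1, 0],   # scissors: beats paper, loses to rock
]

shares = [0.5, 0.3, 0.2]  # a temporarily popular "rock" meta
for _ in range(200):
    shares = step(shares, payoff)

print(shares)  # all three builds keep a meaningful share
```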

 

Quote

When a game has elements that require a high level of execution and/or knowledge to perform optimally, it prevents us from reaching a true state of rock paper scissors, because the counterpick only exists when both players perform optimally. The harder it is to achieve optimal play, the less relevant the counterpick becomes. 

I'll point out that this is a good analysis of the skill floor and skill ceiling and how it adds another dimension of analysis to the problem (gw2 has many problems lol). The equilibrium solution more than likely has a different solution for every person playing the game, because of their unique level of knowledge and execution of skills in combat. If people think hard about it, the skill floor/ceiling is one of the things that gives us diversity in options too (like you said). A bad build at high tier might be good at low tier, and vice versa: a good build at high tier might be bad at low tier.

 

That information isn't in Kolzar's model, since the model is an analysis of just the builds in their purest numerical form rather than an analysis of the agents that play them, but it's a very valid point to bring up, and worthy of discussion too... or really, I think we should all be smart enough here to get past debating it, because what you said should be unequivocally known without debate!

 

So ya, every time a "nerf X or Y" thread pops up, there's another level of analysis to apply: a build that's strong at the bottom level might be weak at the top level, or vice versa... but those builds might actually be perfectly balanced, and what isn't balanced is the level of knowledge of the build itself.

Edited by JusticeRetroHunter.7684

10 hours ago, JusticeRetroHunter.7684 said:

 

The last one isn't as bad as the first one... it's the more ideal solution if one were to pick between the two, because that's kind of what we want in the end (heterogeneity). But at the same time, when you think about it, it's not possible to truly attain. It would have to be a perfect rock-paper-scissors game... like real rock paper scissors... otherwise you just get fixed points that are in between, and you fall victim to the above loss of free will... where some classes are destined to be bad or good by design.

 

There are also inconsistencies with the model... like the behavior of the fixed point not truly describing what happens at the extreme ends of the model. If you had a perfect rock-paper-scissors game and you removed one build that countered someone, then you have one build that has no counter, and everyone technically should flock to that build... but the fixed point "remains" large when it should be small, and the model doesn't properly describe that kind of behavior.

 


Exactly, this is not an easy task, especially for classes which are not very interactive, like thief for example. The more uninteractive a class is, the harder it is to make a counter to it. That's why I said previously that it is not good to combine stealth, frontloaded burst and mobility in the same class. Please don't get me wrong, I am not saying that thief is OP; I am saying that such classes are hard to balance properly, because they don't have many interactions with others. Also, the synergies between classes have to be taken into consideration, and also the mode you are playing, because they reflect a lot on what the meta (optimal setup) is. So the connections and interactions between different builds are very important, which makes the balance task even harder and more complex. (For example, scourge on its own is not that big of a problem, but when you combine scourge with another support it becomes dominant.)

 

Quote

And ya, this is my position. It should be possible to just chop the system into discrete time intervals; then each time interval is a computation, and this defines a notion of complexity time. I've yet to come across a reason why that's not possible to do in the math, but I'm working through it.

I am not sure that I understand this quote correctly; can you please give me a bit more info about what you mean? Thank you in advance!

Edited by razaelll.8324

1 hour ago, razaelll.8324 said:

I am not sure that I understand this quote correctly; can you please give me a bit more info about what you mean? Thank you in advance!

 

Kolzar presented a model of gw2 that models meta evolution. He has stated explicitly that computational complexity is not a factor in how fast the system reaches a fixed point. The fixed point is a notion of how much diversity the system has.

 

His model as far as I've worked it out is this https://i.imgur.com/ZgkbNND.png

 

Now, my position is that computational complexity must have an impact on the time it takes to reach the fixed point... but he's stated reasons for why it's not the case...I'm trying to work that out but it requires some research on my end.

 

Based on my knowledge of the model so far: if you were to imagine hopping from one build to the next build to the next, each step takes some finite time. If each step takes some finite time, then the total time should scale with the size of set S. If set S is a million possible configurations of builds, and each step takes the playerbase 1 second to compute, then if there is only 1 build in the fixed point, it should take time comparable to a million steps to reach that build.
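A quick Monte Carlo sketch of that scaling claim (with the numbers shrunk from a million to a thousand so it runs instantly; everything here is invented for illustration): blind hopping through N builds with a single optimum takes about N steps on average.

```python
import random

# Blind search: hop to a uniformly random build each step until we land on
# the one optimal build. Averaged over many runs, this takes ~N steps.

def steps_until_found(n_builds, target, rng):
    steps = 1
    while rng.randrange(n_builds) != target:
        steps += 1
    return steps

rng = random.Random(42)  # fixed seed for reproducibility
N = 1_000
runs = [steps_until_found(N, target=0, rng=rng) for _ in range(5_000)]
print(sum(runs) / len(runs))  # hovers near N = 1000
```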

 

This turns out to be basically the same derivation that was made by Susskind in the lecture I linked earlier, and you and I had a previous thread dedicated to that derivation.

 

Anyway, I have to do more research into Kolzar's model to see where he's getting the notion that complexity time doesn't factor into it. I've yet to come across such a notion.


4 hours ago, JusticeRetroHunter.7684 said:

 

Kolzar presented a model of gw2 that models meta evolution. He has stated explicitly that computational complexity is not a factor in how fast the system reaches a fixed point. The fixed point is a notion of how much diversity the system has.

 

His model as far as I've worked it out is this https://i.imgur.com/ZgkbNND.png

 

Now, my position is that computational complexity must have an impact on the time it takes to reach the fixed point... but he's stated reasons for why it's not the case...I'm trying to work that out but it requires some research on my end.

 

Based on my knowledge of the model so far: if you were to imagine hopping from one build to the next build to the next, each step takes some finite time. If each step takes some finite time, then the total time should scale with the size of set S. If set S is a million possible configurations of builds, and each step takes the playerbase 1 second to compute, then if there is only 1 build in the fixed point, it should take time comparable to a million steps to reach that build.

 

This turns out to be basically the same derivation that was made by Susskind in the lecture I linked earlier, and you and I had a previous thread dedicated to that derivation.

 

Anyway, I have to do more research into Kolzar's model to see where he's getting the notion that complexity time doesn't factor into it. I've yet to come across such a notion.

Got it, thanks for the explanation.

To be honest, I agree with his statement, and the reason is the following. From our conversation: going from State A to State F requires a fixed number of computations (operations) depending on the complexity, while changing a build is not like that, because you can find the optimal state on the first try or on the n-th try. So the number of tries to find the optimal build of a specific class is not fixed; it can be a random number between 1 try and the maximum number of changes possible to the build.

To be a bit more clear on what I mean: let's say that for Necro, build F is the best option (the optimal build) and you are currently using build A. You can change randomly to build F on the first try (because you can make more than 1 operation between tries/tests). So the number of tries (tests) to reach the optimal build is not linear in the complexity.


I have also seen this in another test I was working on a few years ago. I used a bacterial evolution algorithm to optimize the performance of a line-tracing robot car on a specific race track. The output of the algorithm was a few parameters (settings) which needed to be set up in the car. So I made a simulator, a physical model of the car and a map of the track, and let the algorithm give me the best settings for the car on that specific track. What the algorithm does is simple: generate settings, put them into the simulator, run a simulation and see how fast the car finishes the track, then compute some changes, generate new settings, and so on. I started the algorithm 5 times with the same initial settings, and every time it gave me the same optimal output at the end, but the time needed to find that output differed between runs, because of randomized changes between simulations (it could do more than 1 simple operation between those simulations).

So maybe the conclusion here is that finding the optimal builds is more like an evolutionary process than a complexity computation. I was thinking about that after our last discussion: the process of finding the best/optimal builds can be modeled as an evolutionary process. At least to me, the more I think of it, the more it looks like an evolutionary process rather than a complexity computation.
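In that spirit, here is a minimal evolutionary-search sketch: a (1+1) mutate-and-select loop on an invented one-dimensional fitness function (nothing to do with the actual car experiment). Repeated runs find the same optimum, but the number of generations varies run to run because the mutations are random.

```python
import random

# Minimal (1+1) evolutionary search: mutate, keep the child if it is fitter.
# The fitness landscape is invented, with its single optimum at x = 7.

def evolve(rng, start=0.0, tol=1e-3, max_gens=100_000):
    fitness = lambda x: -(x - 7.0) ** 2
    x = start
    for gen in range(max_gens):
        child = x + rng.gauss(0, 0.5)    # random mutation
        if fitness(child) > fitness(x):  # selection
            x = child
        if abs(x - 7.0) < tol:           # close enough to the optimum
            return x, gen
    return x, max_gens

rng = random.Random(1)
results = [evolve(rng) for _ in range(5)]
print([round(x, 3) for x, _ in results])  # all runs end up near 7
print([g for _, g in results])            # generation counts (typically different)
```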

I am interested in your thoughts about this; I would also be happy to see Kuma's, Kolzar's and Crozame's thoughts.

 

Edited by razaelll.8324

21 hours ago, Kuma.1503 said:

There's a place for this kind of discussion. It's a welcome change of pace after a year of "Nerf X. Nerf Y". These guys are attempting to suggest alternate methods of balancing the game. Simply deleting builds/options from the game did not work as well in practice as it did in theory, so that leaves us with the question, "What is the best method of balancing Gw2"?

 

Normally I'd make an attempt to break down the discussion into layman's terms so more of us can follow the discussion, but once we start getting into data sets, functions, variables, and complex equations, I get PTSD from AP calc and my brain shuts down. 

 

Regardless, they are staying on topic. Discussions about how to develop a meta tie into how Prot Holo in particular became strong in the first place and what should be done about it. These guys seem to understand each other just fine. If using fancy language is the easiest way for them to get their point across to one another, more power to them.

There is a place for this kind of discussion, agreed. However, when @JusticeRetroHunter.7684 started insulting the intelligence of others, that's where they went too far. They are just a kitten, and while I was interested in the discussion at first, at that point I stopped taking him seriously.


14 minutes ago, Kodama.6453 said:

There is a place for this kind of discussion, agreed. However, when @JusticeRetroHunter.7684 started insulting the intelligence of others, that's where they went too far. They are just a kitten, and while I was interested in the discussion at first, at that point I stopped taking him seriously.

He is a bit too aggressive sometimes, but he has some interesting points, and the discussion itself is a very interesting one in my opinion.


3 hours ago, razaelll.8324 said:

Got it thanks for the explanation. 

To be honest, I agree with his statement, and the reason is the following. From our conversation: going from State A to State F requires a fixed number of computations (operations) depending on the complexity, while changing a build is not like that, because you can find the optimal state on the first try or on the n-th try. So the number of tries to find the optimal build of a specific class is not fixed; it can be a random number between 1 try and the maximum number of changes possible to the build.

To be a bit more clear on what I mean: let's say that for Necro, build F is the best option (the optimal build) and you are currently using build A. You can change randomly to build F on the first try (because you can make more than 1 operation between tries/tests). So the number of tries (tests) to reach the optimal build is not linear in the complexity.

 

Although this is true, with complex computation, when we talk about things that are like lotteries, you have to talk about them in terms of statistics. You can get lucky on your first try when buying a lottery ticket, but it's highly unlikely...

 

For example, if you have a build space with 100 trillion builds that has 1 equilibrium point, and you run one test with an algorithm that picks a build at random, and you just happen to find the fixed point on the first go, that doesn't mean the algorithm is fast... it means it just got lucky... so you run it a couple thousand times... on average it will take a time comparable to 100 trillion steps.

 

You can also treat it as if it weren't a lottery, where you simply pick the fixed point that is furthest away from your starting point. This is how it's normally done in computational complexity, where you take the worst-case scenario (big O) when analyzing an algorithm.
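The lottery point can be sketched in a few lines of Python. Everything here is made up for illustration: the build space is scaled down from 100 trillion to 1,000 so it runs quickly, and "finding the optimum" is just hitting one randomly chosen index.

```python
import random

random.seed(42)

N = 1_000       # toy build-space size; the post imagines 100 trillion
TRIALS = 2_000  # how many times we replay the "lottery"

def tries_until_found(n):
    """Pick builds uniformly at random (with replacement) until the one optimum is hit."""
    optimum = random.randrange(n)
    tries = 1
    while random.randrange(n) != optimum:
        tries += 1
    return tries

results = [tries_until_found(N) for _ in range(TRIALS)]
print(min(results))           # getting lucky early is possible...
print(sum(results) / TRIALS)  # ...but the average cost is on the order of N tries
```

The minimum over many runs can be a single-digit number of tries, but the average stays close to N, which is the statistical point being made above.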

Edited by JusticeRetroHunter.7684

15 minutes ago, JusticeRetroHunter.7684 said:

 

Although this is true, when we talk about complex computation for things that are like lotteries, we have to talk about them in terms of statistics. You can get lucky on your first try when buying a lottery ticket, but it's highly unlikely...

 

For example, if you have a build space with 100 trillion builds that has 1 equilibrium point, and you run one test with an algorithm that picks a build at random, and you just happen to find the fixed point on the first go, that doesn't mean the algorithm is fast...it means it just got lucky. So you run it a couple thousand times...on average it will take a number of tries comparable to 100 trillion.

 

You can also treat it as if it weren't a lottery, where you simply pick the fixed point that is furthest away from your starting point. This is how it's normally done in computational complexity, where you take the worst-case scenario (big O) when analyzing an algorithm.

Exactly. Or you can also use an evolutionary algorithm to find the best build, as in the example I gave in my previous post

 

"I also saw this in another test I was working on a few years ago. I used a bacterial evolution algorithm to optimize the performance of a line-tracing robot car on a specific race track. The output of this algorithm was a few parameters (settings) that needed to be set in the car. So I made a simulator, a physical model of the car, and a map of the track, and let the algorithm give me the best settings for the car for that specific track. What the algorithm does is simple: generate the settings, put them into the simulator, run a simulation and see how fast the car finishes the track, then compute some changes, generate new settings, and so on. I started the algorithm 5 times with the same initial settings, and every time it gave me the same optimal output at the end, but the time needed to find that output differed between runs, because of some randomized changes between simulations, and because it could do more than one simple operation between simulations. So maybe the conclusion here is that finding the optimal builds is more like an evolutionary process than a complexity computation. I was thinking about that after our last discussion, namely: the process of finding the best/optimal builds can be modeled as an evolutionary process. At least to me, the more I think of it, the more it looks like an evolutionary process than a complexity-computation one."
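A minimal sketch of the kind of evolutionary loop described above. Everything in it is invented for illustration: the three "ideal" coefficients stand in for the car settings, and the quadratic fitness function stands in for the lap-time penalty the real simulator would report.

```python
import random

random.seed(0)

TARGET = (0.5, -1.2, 3.0)  # hypothetical "ideal" coefficients for the track

def fitness(coeffs):
    """Lower is better: stands in for the lap-time penalty from the simulator."""
    return sum((c - t) ** 2 for c, t in zip(coeffs, TARGET))

def mutate(coeffs, scale=0.1):
    """Small randomized changes between simulations, as in the bacterial algorithm."""
    return tuple(c + random.gauss(0, scale) for c in coeffs)

# (1+9) evolution strategy: keep the best settings, spawn 9 mutants per generation.
best = (0.0, 0.0, 0.0)
for _ in range(200):
    candidates = [best] + [mutate(best) for _ in range(9)]
    best = min(candidates, key=fitness)

print(round(fitness(best), 3))  # converges close to 0
```

Run it with different seeds and it lands at (essentially) the same optimum each time, but after a different number of improving mutations — the same behavior described for the 5 robot-car runs.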

Edited by razaelll.8324

2 minutes ago, razaelll.8324 said:

Exactly. Or you can also use an evolutionary algorithm to find the best build, as in the example I gave in my previous post

 

Quote

So maybe the conclusion here is that finding the optimal builds is more like an evolutionary process than a complexity computation. I was thinking about that after our last discussion, namely: the process of finding the best/optimal builds can be modeled as an evolutionary process. At least to me, the more I think of it, the more it looks like an evolutionary process than a complexity-computation one.

 

 

I see it just slightly differently...evolutionary processes are algorithms for a complicated problem. You can kinda see this when you look at the gw2 problem as a decision-tree problem. If the best build is somewhere at the end of a very long tree, you have to explore all the paths of this tree. If you are just one person, it's gonna be hard for you to explore all the paths...but with a thousand people each taking a path, you are parallel-processing that problem, solving it 1000 times faster. 

Edited by JusticeRetroHunter.7684

5 minutes ago, JusticeRetroHunter.7684 said:

 

 

 

I see it just slightly differently...evolutionary processes are algorithms for a complicated problem. You can kinda see this when you look at the gw2 problem as a decision-tree problem. If the best build is somewhere at the end of a very long tree, you have to explore all the paths of this tree. If you are just one person, it's gonna be hard for you to explore all the paths...but with a thousand people each taking a path, you are parallel-processing that problem, solving it 1000 times faster. 

Not necessarily. In my case I used it to optimize 3 coefficients, which is a much simpler task than finding the optimal builds in a game like gw2, so in my opinion it can be used without any problem. 


14 minutes ago, razaelll.8324 said:

Not necessarily. In my case I used it to optimize 3 coefficients, which is a much simpler task than finding the optimal builds in a game like gw2, so in my opinion it can be used without any problem. 

 

I think there's just a little confusion about what a problem is here. A problem is just a question you want to find an answer to. All questions have answers, and computational complexity is just a measure of how much time it takes you to answer a question. You could have asked any question, and you could have used any algorithm to answer it.

 

So even if your question is simple, that doesn't mean it doesn't have a computation time...it just means that computation time will be small. You can have an addition problem like 15+10+83 = ? That's a simple problem that takes a few steps...but it still takes steps. The more steps there are, the harder the problem is and the longer it will take you to solve it. 6+19+1+89+54+16+25+15+14+1+2+5+97... I think you get the idea.
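The step-counting idea can be sketched directly, using the two addition problems from the post. Here a "step" is simply one addition into a running total:

```python
def sum_with_steps(numbers):
    """Add a list of numbers, counting each addition into the running total as one step."""
    total, steps = 0, 0
    for n in numbers:
        total += n
        steps += 1
    return total, steps

print(sum_with_steps([15, 10, 83]))  # (108, 3)
print(sum_with_steps([6, 19, 1, 89, 54, 16, 25, 15, 14, 1, 2, 5, 97]))  # (344, 13)
```

Both problems are "easy" (linear in the number of terms), but the second still takes more steps than the first, which is all the post is claiming.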

 

 

Edited by JusticeRetroHunter.7684

4 minutes ago, JusticeRetroHunter.7684 said:

 

I think there's just a little confusion about what a problem is here. A problem is just a question you want to find an answer to. All questions have answers, and computational complexity is just a measure of how much time it takes you to answer a question. You could have asked any question, and you could have used any algorithm to answer it.

 

So even if your question is simple, that doesn't mean it doesn't have a computation time...it just means that computation time will be small. You can have an addition problem like 15+10+83 = ? That's a simple problem that takes a few steps...but it still takes steps. The more steps there are, the harder the problem is. 6+19+1+89+54+16+25+15+14+1+2+5+97... I think you get the idea.

 

 

Yes, I understand the idea; what I fail to understand is how that is related to this:

 

Quote

I was thinking about that after our last discussion, namely: the process of finding the best/optimal builds can be modeled as an evolutionary process. At least to me, the more I think of it, the more it looks like an evolutionary process than a complexity-computation one.



 

Edited by razaelll.8324

42 minutes ago, razaelll.8324 said:

Yes, I understand the idea; what I fail to understand is how that is related to this: "I was thinking about that after our last discussion, namely: the process of finding the best/optimal builds can be modeled as an evolutionary process. At least to me, the more I think of it, the more it looks like an evolutionary process than a complexity-computation one."

 

 

I can show you what I mean in the form of a picture, using the same example above.

 

https://i.imgur.com/gd8ZZob.png

In this image, I took one of the paths to the end of the tree (marked red), each line representing a simple addition of the values in the boxes. I tell you now that the highest number in this tree game is 45...and now you are asked to verify my answer.

 

The complexity of this problem is just the total computation size of the game...the number of boxes, and the operations taken between each box. 

 

If you were one person, you would have to travel down all the paths to verify that my answer is either true or false. Your algorithm in this case is a slow one (a brute-force algorithm) that takes you at least 11 steps and at most 28 steps.

 

If you had 7 people with you to go down the other paths, then you could verify the answer I gave in a much shorter number of steps (4 of them). This is why increasing the number of people acts as an algorithm: it solves the problem faster. This is the notion of what computational time is and how it changes from algorithm to algorithm.

 

The problem itself is the tree (which has a definite complexity size). The algorithm here is the number of people, each traveling a different path of the tree. When you think about evolution in terms of biology, these other people on the paths of this tree are the other animals in the animal kingdom...they verify with each other what the "highest number" is by eating each other, competing for resources, and reproducing. The animals that survive are the "highest number" until they get verified by something else.
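The tree-verification idea can be sketched with a toy depth-2 version of the tree in the picture. The node values are invented, and a thread pool stands in for the extra people, one per root-to-leaf path:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

# Toy "tree game": each node holds a number, and a path's score is the sum of
# the numbers along it. Node values are made up for illustration.
values = {
    (): 5,
    (0,): 3, (1,): 7,
    (0, 0): 9, (0, 1): 2, (1, 0): 4, (1, 1): 8,
}

def path_sum(leaf):
    """Walk one root-to-leaf path, adding up every box along the way."""
    return sum(values[leaf[:i]] for i in range(len(leaf) + 1))

leaves = list(product((0, 1), repeat=2))  # 4 root-to-leaf paths

# Alone, you walk all 4 paths one after another (a brute-force search).
best_alone = max(path_sum(leaf) for leaf in leaves)

# With one person per path, each walks a single path at the same time, and you
# then compare the 4 answers: the same result in fewer wall-clock steps.
with ThreadPoolExecutor(max_workers=len(leaves)) as pool:
    best_together = max(pool.map(path_sum, leaves))

print(best_alone, best_together)  # 20 20
```

Both searches visit the same boxes and agree on the answer; the parallel one just spreads the per-path work across workers, which is the speedup being described.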

Edited by JusticeRetroHunter.7684