
Unreal Engine 5 MMOs: hard times for GW2?


Recommended Posts

Leave the ultra-high-res graphics for cutscenes, and even then, don't package them with the game. Allow those cutscenes to stream over the internet as needed.

As for UE5 being a game changer ... it's probably really uncommon for people to choose an MMO to dedicate their time to simply because of the engine. These are still games, after all; if the underlying game sucks, the engine you use is irrelevant. GW2 'keeps up' simply as a function of its content and how the game works.

Edited by Obtena.7952

50 minutes ago, Cuks.8241 said:

To be honest, the author is talking about AI-generated content, not the graphics engine. Which is not a new concept anyway. Some games will greatly benefit from advances in the field; dungeon-crawler-type RPGs, for example.

If that's the case, I don't see how GW2 would be affected by the AI developments they are talking about. GW2 doesn't have randomly generated quests like other MMOs do. ANet would have to implement a quest-generation system in GW2 just to take advantage of that kind of AI development, and I don't see why they would. If anything, the leveling experience, where most people would benefit from a quest-generation system, is already covered by the basic premise of leveling up through map completion.

 

Edited by Obtena.7952

On 5/20/2023 at 12:30 AM, sigmundf.7523 said:

What I feel is that ArenaNet, as a company, is focusing its GW2 revenue on a small, niche community and withdrawing from major competition in the MMO market. No matter which big MMO releases major content, or which publisher or developer launches a new MMO, there is almost no motivation for ANet to bring GW2 into the competition. Just look at the recent WoW expansion and big patch and compare that with what ANet did for GW2: almost nothing has been done to make GW2 present even a sense of competing in the MMO market. There would have to be a strong enough internal driving force in the company, either from the upper echelons of ANet or from NCSoft, to make a substantial change in GW2, for better or worse.

Compete with whom? Tell me what other MMO has a better combat system than GW2. I'll wait.

Edited by TheRunningSquire.3621

On 5/21/2023 at 11:47 AM, VAHNeunzehnsechundsiebzig. said:

Also, when will the AI bubble burst? This year? Next? I mean, so far every AI bubble has burst. Are we at iteration 5 or 6?

This time is different. You might want to do some research.

For most of the last 70 years AI has been like usable fusion power... always optimistically just around the corner. Consistently AI scientists have speculated that they will have X level of AI in 10 years and consistently they have been wrong with very disappointing results.

Not any more. For the last 5 years it's been flipped on its head. Now AI scientists think they'll have something in a few years, and it takes just a few months. The pace of advance has become very, very fast. This isn't entirely unexpected; AI is something capable of self-acceleration.

The top scientists in the field have admitted they don't entirely understand how today's top AI does everything it can do, which is one reason it's racing ahead of their expectations. It's still pretty dumb in some ways: ChatGPT has a lot of knowledge and is extremely good at applying that knowledge in very flexible ways, but its critical reasoning capability is very low.

We can now see a future where AI gains that reasoning capability, exceeding human capabilities, and we don't know whether it's a few months or 10 years away. We just don't know, but we can see it is coming. And we should start thinking very seriously about what it means and how we as a species should use it.

A bit off topic, I know, but people shouldn't assume that AI is just a bubble that will burst; it won't this time.

 

Edited by Mistwraithe.3106

1 hour ago, Mistwraithe.3106 said:

This time is different. You might want to do some research. [...] people shouldn't assume that AI is just a bubble that will burst; it won't this time.

 

You aren't going to exceed millions of years of evolution with an AI. Our brains run at a very low operating frequency (individual neurons fire at most a few hundred times per second), but have the equivalent of billions of cores running simultaneously. All of the computing power in the world right now barely matches a handful of brains.
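To put rough numbers on that comparison (a back-of-the-envelope sketch only; the neuron count, synapses per neuron, and firing rate below are commonly cited ballpark estimates, not measurements):

```python
# Back-of-the-envelope: synaptic events per second in a human brain
# vs. the raw throughput of silicon. All figures are rough estimates.
NEURONS = 86e9            # ~86 billion neurons (commonly cited estimate)
SYNAPSES_PER_NEURON = 7_000
AVG_FIRING_HZ = 10        # sustained average firing rate, order of magnitude

brain_events_per_sec = NEURONS * SYNAPSES_PER_NEURON * AVG_FIRING_HZ

CPU_1995_HZ = 100e6       # ~100 MHz desktop CPU of the mid-90s
GPU_2023_FLOPS = 100e12   # ~100 TFLOPS for a high-end accelerator

print(f"brain:       ~{brain_events_per_sec:.0e} synaptic events/s")
print(f"vs 90s CPU:  ~{brain_events_per_sec / CPU_1995_HZ:.0e}x")
print(f"vs one GPU:  ~{brain_events_per_sec / GPU_2023_FLOPS:.0f}x")
```

Swap in different estimates and only the exponent moves; the underlying point, massive parallelism at very low clock rates, survives any reasonable choice of numbers.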

 

Quantum computing could do it, but it's decades, if not a century, away from being as usable at home or even in the workplace as regular computers are now. We've been building mechanical computers for 2,000 years and electronic computers for over a century, and we've barely scratched the surface of the performance needed to recreate a living being without thousands of abstraction layers and simplifications.

 

Existing AI, even with the most complex learning algorithms, is essentially just that: batches of simplifications that produce human-like behavior. To put it simply, all that makes it smart is that advanced intelligence already exists and is fed into it. Without us it would be completely useless and couldn't do anything, and it's not going to "self-learn" because there's no more advanced AI for it to learn from, which means even at its best (which we aren't even close to) it'll top out at human-level intelligence.

 

This is shown by the fact that humans advanced for one main reason: the existence of writing and the storing of our knowledge, effectively creating a millennia-spanning racial memory, except that instead of our brains storing the actual knowledge, they store the information on how to record and access that knowledge, because however advanced we are, knowing everything we've learned is still far beyond our mental capabilities.

 

People really need to study some real science and not pseudoscience. As advanced as our current level of technology is, it's nothing compared to what we're theoretically capable of.

Edited by SoftFootpaws.9134

2 hours ago, Arianth Moonlight.6453 said:

I don't want photorealism in my cartoons, I don't want photorealism in my CGI movies and I don't want photorealism in my games.

I'm the opposite: I don't want cartoons in my photorealism. I've personally outgrown cartoons.


4 hours ago, Mistwraithe.3106 said:

This time is different. You might want to do some research. [...] people shouldn't assume that AI is just a bubble that will burst; it won't this time.

 

You might want to learn what the term "AI" actually means in the context of things like ChatGPT, because it is something very different from what you think it is. When the "AI" label is thrown around nowadays, it's not in connection with any real artificial-intelligence system. ChatGPT is a system of data analysis and collation, nothing more. It's exactly as intelligent (or as stupid) as the data it's fed. While it's capable of change by analyzing more and more data, this is not really growth; depending on what data it gets access to, it can actually become dumber. And as for gaining a reasoning ability, you can forget about it: it's not part of the design. The "AI"-labelled systems, as I mentioned, are not meant to be true AI and, as such, are not capable of creating anything new; they can only replicate knowledge.

To better explain it: a ChatGPT-like "AI" can never solve a problem no one has solved before. It can only search through its databanks for an already-known solution that might be the best fit.
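A deliberately crude toy sketch of that "find the best-fitting known answer" behavior (this is not how LLMs actually work internally; the databank contents and word-overlap matching rule here are made up purely for illustration):

```python
# Toy "databank" retrieval: the system can only return a stored answer
# whose question shares the most words with the query. It never composes
# a new solution. (Contents are invented for this example.)
DATABANK = {
    "how do i reverse a list in python": "use list.reverse() or slicing [::-1]",
    "how do i sort a list in python": "use sorted() or list.sort()",
    "what is the capital of france": "Paris",
}

def best_match(query: str) -> str:
    """Return the stored answer for the question with the most word overlap."""
    query_words = set(query.lower().split())

    def overlap(question: str) -> int:
        return len(query_words & set(question.split()))

    return DATABANK[max(DATABANK, key=overlap)]

print(best_match("how can i reverse a python list"))
```

Ask it anything outside the databank and it still returns the nearest stored answer, confidently; it has no mechanism for producing a solution nobody gave it.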

Edited by Astralporing.1957

jfc, there is *so* much uninformed speculation being spewed in this thread. Wow.

First off, AI is not only able to make "randomly generated quests". Think textures. Bump maps. Mesh polishing. Hell, UE5's Nanite essentially decides what you can and can't see before the render engine does, so it can tell the renderer to skip work (gross oversimplification, but eh). And then there's QA, stress-testing, code review. Even if you still hold to the "true creativity is the realm of the human mind" belief, the benefits AI offers in game production are enormous.

Also, quantum computing? It is here. We're not talking "centuries away" from it being an impact-generating force. There is a retail model in serial production right now; $5,000 and it's yours.


4 minutes ago, Astralporing.1957 said:

The "AI"-labelled systems, as I mentioned, are not meant to be true AI and, as such, are not capable of creating anything new; they can only replicate knowledge.

The AI that's "incapable of generating anything new" is currently generating new kitten treatments, diabetes drugs, and Alzheimer's medication. Protein folding alone has surpassed all the gains of the past two decades of Folding@Home in, like, a year.


11 minutes ago, The Boz.2038 said:

The AI that's incapable of generating anything new is currently generating new kitten treatments, diabetes-curing drugs, and alzheimer's medication. Protein folding alone has surpassed all the gains from Folding@Home from the past two decades in, like, a year.

It's faster than humans at analyzing already-known data, yes. Fast calculation has always been an advantage of the von Neumann architecture. We may be talking about a change of scale compared to the old days, but it's not really a qualitative innovation.

Notice that all the results of AI work must still be verified by humans, because this "AI" is very capable of making glaring mistakes that would be obvious to most of us at first glance. And it's a design fault that will not be eliminated just by iterating on current "AI" designs, because it is inherent in them.

Also, notice how the results you mentioned are being produced by AI, but the methods used to obtain those results were still devised by humans.

Edited by Astralporing.1957

1 minute ago, Gravitron.7982 said:

I don't see how; I just don't care much for cartoons or that kind of stuff these days. I prefer realism now.

Oh, that's completely fine - beauty is in the eye of the beholder, and all that. It's the notion that you have "outgrown" them that is childish. You have simply changed your tastes, nothing more.


7 minutes ago, Astralporing.1957 said:

Notice that all the results of AI work must still be verified by humans, because this "AI" is very capable of making glaring mistakes that would be obvious to most of us at first glance. And it's a design fault that will not be eliminated just by iterating on current "AI" designs, because it is inherent in them.

Notice that all the results of human work must still be verified by humans, because these "humans" are very capable of making glaring mistakes that would be blah blah, etc.


18 minutes ago, Astralporing.1957 said:

Also, notice how the results you mentioned are being produced by AI, but the methods used to obtain those results were still devised by humans.

Wait...

what?

Is this, like, some sort of rapid sprint of the goalposts to make sure any definition of AI fails until it is literally, physically self-replicating on the entire logistics chain?!


15 minutes ago, The Boz.2038 said:

Wait...

what?

Is this, like, some sort of rapid sprint of the goalposts to make sure any definition of AI fails until it is literally, physically self-replicating on the entire logistics chain?!

...no? It's just another illustration of the fact that what is being called "AI" nowadays has nothing to do with the original meaning of the term. It does not cover systems capable of independent thinking and reasoning, merely systems able to arrive at solutions by collating data.

Let's go again: given two bad choices, a natural response for any human (and for many animals, by the way) is to look for a third solution outside the scope of the original choice. The AI systems we're talking about won't do that unless the data they have access to shows someone else did it before. AIs can only retread known paths; they cannot create new ones.

The examples you gave earlier were just like that - sure, the so-called AI systems can go further (or rather faster) down some paths than humans could, but those are still paths someone else created and chose for them to follow. There's no intelligence in that at all.

Let me give you a recent example from where I live. In a science centre in my city (a sort of new-style museum of old and new technologies and sciences), an AI booth was added, with a system that was supposed to answer visitors' scientific questions. On the first day it suffered a spectacular failure: someone asked it for the value of pi. After a few hours of it reciting digits, someone from the staff had to come and shut it down, then write a patch to make the system round its results. The system would never have arrived at that solution on its own, because on its own it would never have known there was an issue. That's not a failure of that specific system; the inability to look outside its starting parameters and coded limitations is inherent in the general design of all current "AI" systems.
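For what it's worth, the fix described, cutting the digit stream off at a fixed precision instead of letting it recite forever, is a one-liner once someone knows to add it. A minimal sketch using Gibbons' well-known unbounded spigot algorithm for pi (the booth's actual software is unknown; this is purely illustrative):

```python
from itertools import islice

def pi_digits():
    """Gibbons' unbounded spigot algorithm: yields decimal digits of pi forever."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # next digit is settled; without a cap this never stops
            q, r, n = (10 * q, 10 * (r - n * t),
                       (10 * (3 * q + r)) // t - 10 * n)
        else:
            q, r, t, n, k, l = (q * k, (2 * q + r) * l, t * l,
                                (q * (7 * k + 2) + r * l) // (t * l),
                                k + 1, l + 2)

# The "patch": cap the otherwise-infinite stream at a fixed precision.
MAX_DIGITS = 15
d = list(islice(pi_digits(), MAX_DIGITS))
print(f"pi ≈ {d[0]}.{''.join(map(str, d[1:]))}...")
```

The spigot itself has no notion that endless recitation is a problem; the cap has to be imposed from outside it, which is exactly the point being made.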

Edited by Astralporing.1957

8 minutes ago, The Boz.2038 said:

I fail to see how any of that is at all relevant to the definition of AI that is going to have an impact on the gaming industry.

...it's not. It is, however, very relevant to the post I was originally responding to, which claimed that the AI systems we're talking about here are capable of "self-accelerating" and are heading towards gaining critical reasoning and thinking. Which, no, they aren't.

Sometimes it's really worth checking the context before responding to someone.

Edited by Astralporing.1957
