Nanite technology in Lumberyard engine

Today the demo of the new Unreal Engine 5 came out, and in it we were shown high-poly cinematic assets with billions of polygons in the scene. I have to say I'm surprised: this looks like a breakthrough in the gaming industry! I don't know how this is implemented, but it seems that making low-poly models, baking normal maps, etc. is already becoming a thing of the past. In this regard, I have a question: can we expect this technology in our amazing engine in the near future, in a year or two?
You must add this to our engine!
I would also like to say that something needs to be done with global illumination … right now it is of little use.
That would be great

Hey @didzey, this is for cinematics for a reason. You won't be able to use it for games even with Unreal, because it just can't run on a lot of systems when you have multiple copies of the object. If you plan on making a game for consoles, forget about using something like Nanite: its sole purpose is real-time cinematics, not 200 fps gameplay. It would be nice to see in Lumberyard if Lumberyard wants to go for both cinematics and games.

The Nanite technology looks incredible, for sure! :slight_smile: This is not just for cinematics though; it is going to be a different approach to things, the same as we had virtual texturing, but now for geometry. The exact implementation details still have to be provided. We have to see how much it depends on, say, disk speed and the required graphics hardware (or CPU, as I read part of it uses software rasterization).
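The virtual-texturing analogy suggests the renderer only streams in and draws as much geometric detail as the camera can actually resolve. Below is a minimal sketch of that general idea, assuming a hypothetical cluster hierarchy where each node carries a geometric error and a camera distance. This is purely an illustration of screen-space-error-driven LOD selection, not Nanite's actual algorithm, and all names here are made up:

```python
import math
from dataclasses import dataclass, field

@dataclass
class Cluster:
    # Hypothetical mesh cluster: a coarse LOD plus finer children.
    error: float                      # geometric error of this LOD (world units)
    distance: float                   # distance from the camera (world units)
    children: list = field(default_factory=list)

def projected_error_px(cluster, fov_y=math.radians(60), screen_h=1080):
    """Approximate how many screen pixels the cluster's geometric error covers."""
    return (cluster.error / cluster.distance) * (screen_h / (2 * math.tan(fov_y / 2)))

def select_lods(cluster, threshold_px=1.0, out=None):
    """Recurse until the projected error is below ~1 pixel, then render that LOD."""
    if out is None:
        out = []
    if projected_error_px(cluster) <= threshold_px or not cluster.children:
        out.append(cluster)          # fine enough (or a leaf): draw this cluster
    else:
        for child in cluster.children:
            select_lods(child, threshold_px, out)
    return out

# The same cluster is drawn coarse when far away and refined when up close.
far  = Cluster(error=0.05, distance=100.0,
               children=[Cluster(error=0.01, distance=100.0)])
near = Cluster(error=0.05, distance=2.0,
               children=[Cluster(error=0.01, distance=2.0)])
print([c.error for c in select_lods(far)], [c.error for c in select_lods(near)])
# → [0.05] [0.01]
```

The appeal of driving everything off screen-space error is that detail cost scales with what the screen can show, not with how dense the source asset is.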

I am sure the Lumberyard graphics team is closely monitoring this. Even though nothing can be promised, I am sure this will eventually make its way into Lumberyard as well if it really transforms the industry.

Definitely exciting times! :slight_smile:


Seems to be a type of REYES rendering.
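For context, REYES (Pixar's classic offline rendering architecture) dices geometry into micropolygons roughly a pixel in size before shading, which is why Nanite's pixel-sized triangles invite the comparison. A toy sketch of the dicing criterion, using a 1D "patch" with a known screen footprint (purely illustrative, not the real 2D split):

```python
def dice(screen_size_px, max_px=1.0):
    """REYES-style dicing: split until each piece is at most ~1 pixel wide.

    Returns the number of micropolygons a patch with the given screen
    footprint would be diced into (1D toy version of the real 2D split).
    """
    if screen_size_px <= max_px:
        return 1                           # already micropolygon-sized
    # Otherwise split in half and dice each half recursively.
    return dice(screen_size_px / 2, max_px) + dice(screen_size_px / 2, max_px)

print(dice(8.0))   # an 8-pixel-wide patch dices into 8 micropolygons
print(dice(0.5))   # a sub-pixel patch is not split further
```

The key property is that dicing depth follows screen coverage, so cost is bounded by resolution rather than source-model complexity.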

Have to bear in mind that Unreal also now caters to the archviz and special effects markets, where this could be a big benefit to their workflow. It’s impressive that they’ve got this running in realtime on a PS5, but I’d be surprised if it’s a practical choice for games anytime soon. It’s a very expensive way of doing things.

There has been some work in this area before. I think the main engineer of Nanite also mentioned this in a post he did. He was looking at Geometry Images: http://hhoppe.com/proj/gim/

Perhaps there are some similarities to this, adjusted to modern hardware.
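For reference, the core idea of Geometry Images is to resample a mesh onto a completely regular 2D grid of (x, y, z) samples, so connectivity becomes implicit and ordinary image compression can be applied to the geometry. A minimal sketch of reconstructing triangles from such a grid (illustrative only; the hard part, the parameterization that produces the grid, is what the paper is about):

```python
import numpy as np

def geometry_image_to_mesh(gim):
    """Turn an (H, W, 3) geometry image into vertices plus implicit triangles.

    Each pixel stores an (x, y, z) surface sample; connectivity is implied
    by the grid, so no index data needs to be stored on disk.
    """
    h, w, _ = gim.shape
    vertices = gim.reshape(-1, 3)
    tris = []
    for r in range(h - 1):
        for c in range(w - 1):
            i = r * w + c
            # Two triangles per grid cell.
            tris.append((i, i + 1, i + w))
            tris.append((i + 1, i + w + 1, i + w))
    return vertices, np.array(tris)

# Toy 3x3 geometry image: a flat patch sampled on a regular grid.
ys, xs = np.mgrid[0:3, 0:3]
gim = np.dstack([xs, ys, np.zeros_like(xs)]).astype(float)
verts, tris = geometry_image_to_mesh(gim)
print(verts.shape, tris.shape)  # → (9, 3) (8, 3)
```

Because the samples live in a regular image, the on-disk representation can ride on existing image codecs, which is exactly the storage question raised below.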
We'll have to see what hardware requirements there are. Also, does it work only on static geometry, or also on deformable objects like characters? It might also complicate some things like foot IK and placement (which they also did work on), and perhaps even collision detection.

And what is the size of the models on disk? Will games now suddenly be 10x larger, as they have way higher poly counts? Or is it using the images and compressing them, as in the link above?

And what's the required disk speed? Did they choose the PS5 because it has a special SSD with compression support, which other platforms don't have?

It will be interesting to get to know more details. Also, I think they aren't shipping it officially until the end of 2021, with a possible preview at the end of this year.


The next gen is going to be interesting, particularly in regard to the streaming speed of the console SSDs, and how game designs could change. I could imagine that it will actually be PCs holding back cross-platform games in that regard: it will be difficult to rely on the PC user base having a fast enough storage solution, due to all sorts of different configurations.

I suspect it will be only in PS5 exclusives that we really see that potential pushed.
