GameLift service process using excessive memory

I’m seeing the GameLift process taking up about 500 MB on my servers, and I’d like to know why and what I can do to reduce it.

For context, I’m spawning 8 game servers and I do not specify any log files to ProcessReady.
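
For reference, here’s a minimal sketch of what calling ProcessReady without log parameters looks like, assuming the C++ Server SDK with a 4.x-style API; the exact ProcessParameters constructor and callback signatures vary between SDK versions, and the function names here are only illustrative:

```cpp
// Minimal sketch of registering a game server process without log parameters.
// Assumes the GameLift C++ Server SDK (4.x-style API); names are illustrative.
#include <aws/gamelift/server/GameLiftServerAPI.h>
#include <string>
#include <vector>

void RegisterWithGameLift(int listenPort)
{
    Aws::GameLift::Server::InitSDK();

    Aws::GameLift::Server::ProcessParameters params(
        [](Aws::GameLift::Server::Model::GameSession /*session*/)
        {
            // Called when GameLift assigns a game session to this process.
            Aws::GameLift::Server::ActivateGameSession();
        },
        []() { Aws::GameLift::Server::ProcessEnding(); },  // onProcessTerminate
        []() { return true; },                             // onHealthCheck
        listenPort,
        Aws::GameLift::Server::LogParameters(std::vector<std::string>{}));  // no log files

    Aws::GameLift::Server::ProcessReady(params);
}
```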

Here’s the GameLift process I’m referring to:

//bin/java -Djava.util.logging.config.file=/local/whitewater/AuxProxy/Tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Dsun.net.inetaddr.ttl=60 -Dsun.net.inetaddr.negative.ttl=10 -Dnetworkaddress.cache.ttl=60 -Djava.net.preferIPv4Stack=true -Dspring.profiles.active=linux -Djdk.tls.ephemeralDHKeySize=2048 -Djava.endorsed.dirs=/local/whitewater/AuxProxy/Tomcat/endorsed -classpath /local/whitewater/AuxProxy/Tomcat/bin/bootstrap.jar:/local/whitewater/AuxProxy/Tomcat/bin/tomcat-juli.jar -Dcatalina.base=/local/whitewater/AuxProxy/Tomcat -Dcatalina.home=/local/whitewater/AuxProxy/Tomcat -Djava.io.tmpdir=/local/whitewater/AuxProxy/Tomcat/temp org.apache.catalina.startup.Bootstrap start
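
Assuming the ~500 MB figure is the resident set size (RSS) of that java process, here’s a minimal sketch of one way to check it on the instance by reading /proc (pass the PID of the process above):

```cpp
// Minimal sketch: print the resident set size (VmRSS) of a process from /proc/<pid>/status.
// Build and run on the instance:  ./rss <pid-of-the-java-process-above>
#include <fstream>
#include <iostream>
#include <string>

int main(int argc, char** argv)
{
    if (argc < 2)
    {
        std::cerr << "usage: rss <pid>\n";
        return 1;
    }

    std::ifstream status("/proc/" + std::string(argv[1]) + "/status");
    std::string line;
    while (std::getline(status, line))
    {
        if (line.rfind("VmRSS:", 0) == 0)  // e.g. "VmRSS:    524288 kB"
        {
            std::cout << line << "\n";
            return 0;
        }
    }
    std::cerr << "VmRSS not found (is the PID valid?)\n";
    return 1;
}
```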

Can you provide a fleet-id or instance-id plus region? With that I can get the GameLift team to take a look.

I don’t believe there’s any public-facing documentation about the GameLift process.

fleet-7e0204f8-8c8c-4506-89ae-d56c19871084

Do you have the region/home region as well? That will make it much faster to find the fleet. Thanks in advance.

I’ve notified the GameLift service team of your question, and hopefully someone will respond soon.

us-west-2. Thanks Pip :slight_smile:

Hi!

If possible, could you also provide the instance you were monitoring? Additionally, at what timestamps are you seeing the high memory overhead (please also include the time zone)?

We can then take a look to understand what’s happening there and where the overhead is coming from!

Easy. It’s omnipresent: all instances, all fleets, all the time. I just checked my latest fleet (fleet-df84041e-16a1-4541-95e5-7979e036f31b in us-west-2, with only one instance) and it’s sitting at 530 MB now, having never even spawned a single game session.

@AlexE-aws got any more info on this?

Sorry about the delay here! We’ve taken a look, and it seems that footprint is about what’s expected for the on-box process at the moment, even without active game servers present. We’re always looking to optimize that process’s footprint, but please let us know if the amount starts to increase over time and we can take another look to see what has changed.

I’m trying to budget out server instances, and at this rate this single process takes nearly 2x as much memory as my game server and accounts for about 20% of my total memory usage.

  • Should I expect the memory usage to grow with each server process or stay flat around 500 MB?
  • Is there really nothing I can do to lower this? Or at least something that will assure me it doesn’t go over some limit (and start causing OOMs on my servers)?

  • Should I expect the memory usage to grow with each server process or stay flat around 500 MB?

You should expect the process to use about the same amount of memory. The memory size of the process may increase or decrease by small amounts as it’s running, but it shouldn’t grow as your process count increases.

  • Is there really nothing I can do to lower this? Or at least something that will assure me it doesn’t go over some limit (and start causing OOMs on my servers)?

At this point, there’s no immediate action you can take to decrease the process size. A significant increase in the process’s memory usage would be indicative of a problem on our end, so please let us know or contact support if that happens. You shouldn’t need to put any guardrails in place to prevent the process from growing without bound or leaking memory; you should be able to budget memory usage to account for the process size (with a small buffer) and not have to worry about it causing OOMs.
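
As a rough illustration of that budgeting approach, here’s a minimal sketch; every number in it is either pulled from this thread or an outright assumption (instance size, OS overhead, per-server footprint), not a service guarantee:

```cpp
// Rough memory-budget sketch: how many game server processes fit on an instance
// after reserving room for the GameLift on-box process plus a safety buffer.
// All figures are illustrative placeholders; substitute your own measurements.
#include <iostream>

int main()
{
    const double instanceRamMb  = 4096.0; // assumed 4 GB instance type
    const double osOverheadMb   = 512.0;  // assumed OS + other daemons
    const double gameliftProcMb = 530.0;  // on-box process footprint observed in this thread
    const double bufferMb       = 256.0;  // headroom so a spike doesn't trigger the OOM killer
    const double perServerMb    = 265.0;  // one game server (~half the on-box process here)

    const double availableMb = instanceRamMb - osOverheadMb - gameliftProcMb - bufferMb;
    const int serversThatFit = availableMb > 0 ? static_cast<int>(availableMb / perServerMb) : 0;

    std::cout << "Memory available for game servers: " << availableMb << " MB\n"
              << "Game server processes that fit:    " << serversThatFit << "\n";
    return 0;
}
```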