Learning Machine
Machine Learning-based Simulators

A while ago, I was browsing through arXiv’s recent paper submissions in Machine Learning when I came across an interesting title.
I decided to dive deeper and found that the authors combine several machine learning components into a framework called "Graph Network-based Simulators" (GNS).
As you can see in the image above, the predicted water particle movement behaves very similarly to the ground truth. The model also produced comparable results for different starting conditions and for other materials such as goop and sand.
Unlike existing simulators, which require re-running the whole computation for any change in starting conditions, this model only needs to be trained once and can then predict how the particles will behave under different conditions.
Here’s my attempt at a really simplified explanation of how they do it.

The GNS framework is composed of three major blocks: an encoder, a processor, and a decoder.
Encoder
The encoder constructs an initial graph based on the current position of each particle. Particles act as the nodes of the graph, while edges are built between neighbouring nodes within a certain connectivity radius.
At each subsequent timestep, the graph's edges are reconstructed using a nearest-neighbour search.
The input vector for each node is composed of the following (a minimal sketch of the graph construction follows the list):
- the particle's position
- the particle's previous velocities (the 5 most recent timesteps)
- material properties (water, sand, goop, etc.)
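To make this concrete, here is a minimal sketch of how such a graph could be built, assuming 2D positions, a 5-step velocity history, and one-hot material vectors. `build_graph` and its arguments are illustrative names, not the authors' code:

```python
import numpy as np
from scipy.spatial import cKDTree

def build_graph(positions, velocities, materials, radius):
    """Sketch of the encoder's graph construction.

    positions:  (N, 2) current particle positions
    velocities: (N, 5, 2) the 5 most recent velocities per particle
    materials:  (N, K) one-hot material type per particle
    radius:     connectivity radius for the neighbour search
    """
    # Node features: position, flattened velocity history, material type.
    nodes = np.concatenate(
        [positions, velocities.reshape(len(positions), -1), materials],
        axis=1,
    )

    # Edges: every pair of particles within the connectivity radius,
    # found with a k-d tree (a standard nearest-neighbour structure).
    tree = cKDTree(positions)
    senders, receivers = [], []
    for i, j in tree.query_pairs(radius):
        senders += [i, j]      # add both directions so messages
        receivers += [j, i]    # can flow both ways along an edge
    return nodes, np.array(senders), np.array(receivers)
```

Since the particles move at every timestep, this graph (in particular its edges) has to be rebuilt before each prediction.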
Processor
The processor passes "messages" between nodes through the edges. This is the part the model learns: a stack of M graph networks (GNs) is used to learn (and eventually predict) the interactions between particles.
In physics, these interactions would be the exchange of energy and momentum between particles. In the paper, the task of modelling these interactions is given to the stack of M GNs.
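A single message-passing step of one such GN block might look like the sketch below; `edge_mlp` and `node_mlp` stand in for learned networks, and the sum aggregation is one common choice (the details here are my assumptions, not the paper's code):

```python
import numpy as np

def gn_block(nodes, edges, senders, receivers, edge_mlp, node_mlp):
    """One message-passing step of a graph network (GN) block."""
    # 1. Update each edge ("message") from its current features
    #    plus the features of the two nodes it connects.
    edge_inputs = np.concatenate(
        [edges, nodes[senders], nodes[receivers]], axis=1)
    new_edges = edge_mlp(edge_inputs)

    # 2. Sum the incoming messages at each receiving node.
    messages = np.zeros((len(nodes), new_edges.shape[1]))
    np.add.at(messages, receivers, new_edges)

    # 3. Update each node from its own features plus its messages.
    new_nodes = node_mlp(np.concatenate([nodes, messages], axis=1))
    return new_nodes, new_edges
```

Stacking M of these blocks lets information propagate up to M hops across the graph, which is how particles that are not direct neighbours can still influence each other.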
Decoder
The decoder's task is to extract dynamics information from each node of the final graph. However, the decoder does not output the coordinates of each particle.
Instead, the output is the average acceleration of each particle over that timestep.
Once you have the average acceleration per particle, you can integrate it to recover the velocity and, ultimately, the position of each particle.
Looking back at the encoder's input, the current position together with the 5 previous velocities is exactly what the model needs to predict the next timestep's average acceleration.
The whole process is then rolled out: the predicted positions are fed back into the encoder, and the pipeline is repeated for T timesteps, as the sketch below shows.
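As a rough sketch, the decoded accelerations can be integrated with a simple Euler step and the results fed back in for the next prediction. `model`, `update`, and `rollout` are illustrative names, and the velocity-history bookkeeping is omitted for brevity:

```python
def update(position, velocity, acceleration, dt=1.0):
    # Integrate acceleration into velocity, then velocity into
    # position (a semi-implicit Euler step).
    new_velocity = velocity + acceleration * dt
    new_position = position + new_velocity * dt
    return new_position, new_velocity

def rollout(model, position, velocity, num_steps):
    """Repeatedly feed the model's own predictions back in.
    `model` stands in for the whole encoder-processor-decoder stack."""
    trajectory = [position]
    for _ in range(num_steps):
        acceleration = model(position, velocity)  # decoder output
        position, velocity = update(position, velocity, acceleration)
        trajectory.append(position)
    return trajectory
```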
For the sake of visualising the simulation result, the predicted position of each particle is used to draw the particles at each timestep.
Apparently, for all three modules (encoder, processor, and decoder), the authors simply used multi-layer perceptrons (MLPs) with 2 hidden layers (ReLU activations) and 1 output layer (no activation function), with 128 units each.
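In PyTorch (used here purely for illustration; the authors' implementation differs), that MLP shape would look like:

```python
import torch.nn as nn

def make_mlp(in_size, hidden_size=128, out_size=128):
    # Two ReLU hidden layers plus a linear output layer,
    # 128 units each, as described in the paper.
    return nn.Sequential(
        nn.Linear(in_size, hidden_size), nn.ReLU(),
        nn.Linear(hidden_size, hidden_size), nn.ReLU(),
        nn.Linear(hidden_size, out_size),
    )
```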
What amazes me the most is how such a seemingly simple architecture, 2-hidden-layer MLPs with ReLU coupled with a nearest-neighbour search, can capture such complex interactions.

The above image depicts different starting conditions (a-h) and the results of both the prediction and the ground truth after several timesteps.
- (a) shows interaction of goop particles
- (b) shows interaction of water particles
- (c) shows interaction of sand particles
- (d) shows the particles' interaction with a rigid obstacle
- (e) shows interaction between different kinds of particles
- (f) shows how the model handles many more particles than seen in training
- (g) shows interaction with unseen objects
- (h) shows generalisation on larger domains
"Our main findings are that our GNS model can learn accurate, high-resolution, long-term simulations of different fluids, deformables, and rigid solids, and it can generalize well beyond training to much longer, larger, and challenging settings." (source)
The authors also state that the GNS model handles generalisation quite well. In figure (f), the model handles interactions for up to 28k particles, more than 10x the 2.5k particles used in training.
Figure (g) shows that the model can also handle interactions between various materials while a continuous flow of water enters the scene at every timestep.
Last but not least, figure (h) shows the state of the simulation after 5,000 steps, 8x more than the number used in training, with 85k particles in an area 32x larger than the one the model was trained on.
You can see more examples generated by the model here.
They also included some failure cases: solids could become deformed over longer simulation rollouts, and goop particles would sometimes stick to the walls instead of falling down.
In its current state, the model is probably not accurate enough to completely replace conventional particle simulation. However, it already gets most of the physics right, and a model that achieves near-perfect simulation might be just around the corner.
The physics simulation domain may not be as popular as machine learning domains such as Natural Language Processing or Computer Vision. However, imagine the impact this model could have if it replaced existing particle simulation.
Conventional particle simulations take a long time to create because each particle's state has to be computed explicitly at every timestep. If you move just one particle out of place, everything needs to be recalculated.
A machine learning model, on the other hand, only demands heavy computation during the training phase. Once the model is trained, a simulation can be created simply by passing the inputs through it, which needs much less computing power than the explicit calculation.
This could save a lot of the energy needed for future particle simulations, as the model only needs to be trained once and can then be reused at a fraction of the power required by a non-machine-learning physics simulation.
References
[1] Sanchez-Gonzalez et al., Learning to Simulate Complex Physics with Graph Networks (2020), arXiv:2002.09405 [cs.LG]
Learning Machine is a series of stories about things happening in the world of Machine Learning that I found interesting enough to share. Oh, and sometimes it will be about the fundamentals of Machine Learning too. Follow me to get regular updates on new stories.