A Functional, If Limited, Life Simulation.
I’ve actually had a functional system for a couple of days now, but I keep tinkering with it. There’s always one more thing I want to do before I post my progress or put together a YouTube video for everybody to see. That said, I do intend to post a video this weekend. Rather than cluttering that video with a bunch of details about how this Life Sim differs from the works that inspired it, I decided to run through a lot of that here. For the purposes of this post, I’m going to focus on the major differences between what I have (and what I’m building) and the system laid out by David Randall Miller in this YouTube video.
The most obvious difference is that my system is very much a work in progress, while David’s appears to be a fully implemented Life Sim. Given his silence on YouTube even as his video racked up a huge number of views over the past couple of years, he appears to have simply moved on, which suggests his system won’t advance beyond where it was when he presented it.
From a technical perspective, though, there are a couple of differences in implementation that will drastically affect how any simulations ultimately play out. Though it may seem trivial at first blush, the composition of the “world” has a significant impact. David’s world was a simple 128 x 128 grid where each life form takes up exactly one grid location. In contrast, my world separates (slightly) the visual representation of the world from the actual location of each life form. Visually, the world is 1980 x 1080 pixels. In the underlying implementation, however, the location of each life form is represented by a pair of floating point numbers. Thus, its true location might be (143.8702, 9.3451) rather than (143, 9). Among other things, this complicates collision detection between different life forms and is definitely a performance hit.
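To make that concrete, here’s a minimal sketch of what collision detection looks like once positions are floats. The names (`LifeForm`, `collides`) are my own illustration rather than my actual code, but the idea is the same: on a grid you can compare integer cells for equality, whereas with float positions every candidate pair needs a real distance test.

```python
import math
from dataclasses import dataclass

DIAMETER = 4.0  # each life form is a circle 4 pixels across

@dataclass
class LifeForm:
    x: float  # true position, e.g. 143.8702 rather than 143
    y: float

def collides(a: LifeForm, b: LifeForm) -> bool:
    """Two equal-sized circles overlap when the distance between their
    centers is less than the sum of their radii (here, one diameter)."""
    return math.hypot(a.x - b.x, a.y - b.y) < DIAMETER

# On a grid this would be a cheap cell-equality check; with floats,
# it's a square root per pair, which adds up fast across a population.
print(collides(LifeForm(143.8702, 9.3451), LifeForm(146.1, 10.0)))  # True
```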
Although I like the extra freedom of movement this provides, the design decision leads to some performance challenges in my implementation. With a 128 x 128 grid where each life form takes up exactly one grid location, around 300 steps per generation is more than adequate. In a 1980 x 1080 world where each life form is a circle 4 pixels in diameter and movement can be as fast as 2 pixels per cycle, a life form moving as quickly as it can from one edge of the world to the other needs ( 1980 - 4 ) / 2, or 988 cycles, forcing me to have at least 988 steps per generation, though I’ve rounded that up to 1000… because reasons. As mentioned above, this freedom of movement also complicates collision detection. Taken together, these factors have a significant performance impact.
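That back-of-the-envelope calculation in code form, using the numbers from this post:

```python
# Minimum steps per generation so a life form can cross the world edge to edge.
WORLD_WIDTH = 1980   # pixels
DIAMETER = 4         # life form circle diameter, pixels
MAX_SPEED = 2        # pixels per cycle

min_cycles = (WORLD_WIDTH - DIAMETER) / MAX_SPEED
print(min_cycles)            # 988.0

STEPS_PER_GENERATION = 1000  # rounded up... because reasons
```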
In David’s simulation, the life forms stack up in neat columns on the right side of the screen, since their movement is limited to locations that fall exactly on grid spaces. The extra freedom I’ve given my life forms leads to interesting tree- or coral-like structures. It also produces interesting pathways, for example “to the right,” that life forms with the appropriate behavior can wiggle through.
Next, David’s simulation allows for recurrent neural networks, where neurons in the hidden layer can feed their output back into their own input. This style of neural network effectively gives a life form a form of memory it can draw on when making decisions in the next activation cycle. While this is an interesting concept, I’ve decided against it for the time being. It may be an interesting experiment later to implement it and see whether any interesting emergent behavior results.
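For anyone unfamiliar with the idea, here’s a minimal sketch of that self-feedback loop. This is a generic illustration of a recurrent neuron, not David’s actual implementation; the weights and the tanh activation are just placeholder choices:

```python
import math

class RecurrentNeuron:
    """A hidden neuron whose previous output is fed back as an extra
    input, giving it a one-cycle memory of its own state."""

    def __init__(self, weights, self_weight):
        self.weights = weights            # weights on external inputs
        self.self_weight = self_weight    # weight on the feedback loop
        self.prev_output = 0.0            # no memory before the first cycle

    def activate(self, inputs):
        total = sum(w * x for w, x in zip(self.weights, inputs))
        total += self.self_weight * self.prev_output  # the recurrent part
        self.prev_output = math.tanh(total)
        return self.prev_output

# The same sensory input on consecutive cycles can produce different
# outputs, because the neuron also "remembers" what it said last time.
n = RecurrentNeuron(weights=[0.5, -0.25], self_weight=0.8)
print(n.activate([1.0, 0.2]))  # first cycle
print(n.activate([1.0, 0.2]))  # second cycle differs due to feedback
```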
There are many other implementation details that differ: how mutations are computed, how reproduction is handled, and the number and kinds of input and output neurons, just to name a few.
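Purely to illustrate the kind of detail I mean, here is one common way genome-based sims like these compute mutations: flip a random bit in a packed integer gene. This is a generic sketch, not David’s scheme or mine, and the gene values are made up:

```python
import random

def mutate(genome: list[int], rate: float = 0.001) -> list[int]:
    """Generic bit-flip mutation: each 32-bit gene has an independent
    chance of having one randomly chosen bit toggled."""
    mutated = []
    for gene in genome:
        if random.random() < rate:
            gene ^= 1 << random.randrange(32)  # flip one random bit
        mutated.append(gene)
    return mutated

child = mutate([0x2A1F8C03, 0x91B477E0], rate=0.5)  # high rate for demo
```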
All in all, I’m happy with the progress I’ve made in just a few days, even if there are many more things I want to implement and play with in this simulation.