Chrome’s dinosaur game seems simple. A cactus approaches, you press space, the dino jumps. But timing matters. Jump too early and you land on the obstacle. Jump too late and you collide. As the game speeds up, human reflexes struggle to keep pace.
What if instead of programming exact rules for when to jump, a neural network could figure it out? Not by being told the answer, but by playing thousands of games and evolving better strategies over generations. This experiment combines two classic AI techniques: perceptrons and genetic algorithms.
The result is Dino AI. Open it in your browser and watch 300 dinosaurs attempt to survive. Most fail immediately. But a few get lucky. Their “brains” get passed to the next generation. Within minutes, the population learns to play the game flawlessly.
What It Does
Dino AI simulates Chrome’s offline dinosaur game with a twist. Instead of one player, 300 AI-controlled dinosaurs run simultaneously. Each has its own neural network brain that decides when to jump, duck, or do nothing.
When an obstacle approaches, each dinosaur’s brain receives two inputs: the horizontal distance to the nearest obstacle and the vertical distance. The brain processes these inputs through weighted connections and outputs one of three actions: jump, duck, or keep running.
Every dinosaur that hits an obstacle dies. The survivors pass their brains to the next generation. Over time, the population evolves better and better strategies. You can watch this happen in real time.
The Perceptron Brain
Each dinosaur has a perceptron, the simplest possible neural network. A perceptron takes inputs, multiplies each by a weight, sums the results, and applies an activation function.
For this game, the perceptron works like this:
- Two inputs - distance to obstacle on X axis and Y axis
- Two weights - learned values that determine input importance
- One bias - an offset term that shifts the decision boundary
- One output - determines the action (jump, duck, or nothing)
The weights start random. Most random weights produce terrible behavior. But genetic algorithms fix that.
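The brain described above can be sketched in a few lines. This is an illustrative version, not the project's exact code: the class and property names, the [-1, 1] initialization range, and the tanh activation are all assumptions for the sketch.

```javascript
// Minimal perceptron sketch: two inputs, two weights, one bias.
// Names and the tanh activation are illustrative assumptions.
class Perceptron {
  constructor() {
    // Weights and bias start random in [-1, 1]
    this.weights = [Math.random() * 2 - 1, Math.random() * 2 - 1];
    this.bias = Math.random() * 2 - 1;
  }

  activate(inputs) {
    // Weighted sum of inputs plus bias
    const sum = inputs[0] * this.weights[0]
              + inputs[1] * this.weights[1]
              + this.bias;
    // Activation function squashes the sum into (-1, 1)
    return Math.tanh(sum);
  }
}

const brain = new Perceptron();
const output = brain.activate([120, 0]); // [dx, dy] to nearest obstacle
```

With random weights, `output` is essentially noise, which is why the first generation behaves so badly.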
Genetic Algorithm
The genetic algorithm mimics natural selection. Here is how it works:
- Population - 300 dinosaurs spawn with random brain weights
- Fitness - Each dinosaur’s score is how many frames it survived
- Selection - The top 10% of performers survive
- Reproduction - Survivors pair up and produce offspring
- Crossover - Each offspring inherits weights randomly from both parents
- Mutation - With 50% probability, weights change by a small amount
- Repeat - A new generation begins
This cycle runs continuously. Bad strategies die out. Good strategies spread. The 50% mutation rate keeps the population exploring new possibilities.
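The steps above can be sketched as a single generation-step function. The flat `genes` array (weights plus bias) and `fitness` field are assumptions for the sketch, but the numbers match the article: keep the top 10%, mutate with 50% probability, nudge by a small step.

```javascript
// Sketch of one generation step: keep the top 10% as parents,
// refill the population via crossover and mutation.
const MUTATION_RATE = 0.5;
const STEP_SIZE = 0.01;

function nextGeneration(population) {
  // Selection: sort by fitness (frames survived), best first
  const ranked = [...population].sort((a, b) => b.fitness - a.fitness);
  const parents = ranked.slice(0, Math.ceil(population.length * 0.1));

  const offspring = [];
  while (parents.length + offspring.length < population.length) {
    const a = parents[Math.floor(Math.random() * parents.length)];
    const b = parents[Math.floor(Math.random() * parents.length)];
    // Crossover: each gene comes from a randomly chosen parent
    const genes = a.genes.map((g, i) => (Math.random() < 0.5 ? g : b.genes[i]));
    // Mutation: with 50% probability, nudge a gene by a small amount
    const mutated = genes.map(g =>
      Math.random() < MUTATION_RATE ? g + (Math.random() * 2 - 1) * STEP_SIZE : g);
    offspring.push({ genes: mutated, fitness: 0 });
  }
  // Parents carry over with fitness reset; offspring fill the rest
  return parents.map(p => ({ genes: [...p.genes], fitness: 0 })).concat(offspring);
}
```

Note that survivors are copied into the next generation unchanged, so a good solution is never lost to a bad mutation.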
The Decision Process
When a dinosaur needs to decide what to do, its brain computes a weighted sum of the inputs. The result passes through an activation function that squashes it into a usable range.
A transfer function then converts the continuous output into one of three discrete actions:
- Output below a threshold means duck
- Output in the middle range means do nothing
- Output above a threshold means jump
The weights determine which obstacles trigger which responses. A dinosaur might learn that nearby ground obstacles require jumping, while flying obstacles require ducking.
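The transfer function itself is tiny. Assuming an activation output in (-1, 1), a sketch with an illustrative threshold of 0.3 (the actual threshold value is an assumption) looks like this:

```javascript
// Map the perceptron's continuous output to a discrete action.
// The 0.3 threshold is an illustrative assumption.
function toAction(output) {
  if (output < -0.3) return 'duck'; // below threshold
  if (output > 0.3) return 'jump';  // above threshold
  return 'run';                     // middle range: do nothing
}
```

Evolution never touches this function; it only tunes the weights that decide which side of the thresholds the output lands on.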
Technical Implementation
The game runs entirely in the browser using HTML5 Canvas. No server required. No dependencies.
The architecture breaks into clear components:
- Game loop - Runs 60 times per second, updates all game state
- Player entities - 300 dinosaur instances with independent brains
- Obstacle spawner - Creates cacti at random intervals
- Collision detection - Checks each dinosaur against obstacles
- Generation manager - Handles selection and reproduction
Each frame, every living dinosaur calls its brain with the current obstacle position. The brain returns an action. The dinosaur executes it. Physics updates positions, and collision checks determine which dinosaurs die.
When all dinosaurs die, the generation ends. The top 30 survivors (10% of 300) become parents. They produce 270 offspring to refill the population. Weights get copied, crossed, and mutated.
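The per-frame flow described above could look roughly like this. The method names (`decide`, `act`, `applyPhysics`, `collidesWith`) are assumptions standing in for the project's actual entity API:

```javascript
// Per-frame update sketch: each living dino queries its brain with
// the nearest obstacle's offsets, acts, then physics and collision
// checks run. All method names are illustrative assumptions.
function update(dinos, obstacle) {
  for (const dino of dinos) {
    if (!dino.alive) continue;
    const dx = obstacle.x - dino.x; // horizontal distance input
    const dy = obstacle.y - dino.y; // vertical distance input
    const action = dino.brain.decide([dx, dy]); // 'jump' | 'duck' | 'run'
    dino.act(action);
    dino.applyPhysics();
    if (dino.collidesWith(obstacle)) {
      dino.alive = false;
    } else {
      dino.fitness += 1; // fitness = frames survived
    }
  }
}
```

Incrementing fitness once per surviving frame is what makes "frames survived" the selection signal for the genetic algorithm.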
Parameters That Matter
The algorithm’s behavior depends on a few key numbers:
- Population: 300 - Large enough for diversity, small enough to run smoothly
- Survival rate: 10% - Keeps only the best performers
- Mutation rate: 50% - High rate encourages exploration
- Step size: 0.01 - Small weight changes prevent wild swings
These values came from experimentation. Lower mutation rates led to stagnation. Higher step sizes caused instability. The current settings balance exploration and stability.
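Collected in one place, the tuned values above might read as a config object like this (the property names are assumptions; the numbers are the article's):

```javascript
// Tuned hyperparameters from the article, as a config sketch.
const CONFIG = {
  populationSize: 300, // large enough for diversity, small enough to run smoothly
  survivalRate: 0.10,  // top 10% become parents
  mutationRate: 0.50,  // chance each weight mutates
  stepSize: 0.01       // magnitude of each mutation nudge
};
```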
Watching Evolution Happen
First-generation dinosaurs are hopeless. They jump randomly, or not at all. Most die within seconds.
By generation 5, some dinosaurs have figured out jumping. They clear early obstacles but fail as speed increases.
By generation 20, the population handles normal gameplay. Deaths become rare during the first minute.
By generation 50, the dinosaurs play almost perfectly. They jump with precise timing, duck under flying obstacles, and adapt to increasing speed.
The learning curve is visible. You can literally watch intelligence emerge from random noise.
What I Learned
This project taught me how simple components combine into complex behavior. A perceptron is just multiplication and addition. A genetic algorithm is just selection and randomness. Together, they produce agents that learn to play a game.
The key insight is that evolution does not need to understand the problem. It only needs a way to measure fitness. Good solutions survive. Bad solutions die. Given enough generations, remarkably sophisticated behavior emerges.
Building everything from scratch, without machine learning libraries, forced me to understand each piece. Why does crossover help? Because it combines successful traits. Why does mutation matter? Because it introduces new possibilities. Why does selection work? Because it concentrates the population around good solutions.


