Contagion spread

All models are wrong, but some are useful

Agent-based models have the potential to inform policy decisions, for example in response to an outbreak of an unknown contagion. Rather than experimenting with different policies in the real world, policy-makers can predict the effects of different scenarios and interventions through agent-based simulation. Of course, the usefulness of such predictions relies on the accuracy of the model and the estimates of its parameters. Agent-based models therefore need to carefully select a level of detail. Too little detail, and the model is unable to reliably predict the effect of policies. Too much detail, and accurately estimating all necessary parameters in reasonable time becomes prohibitively difficult.

On this page, we take a look at the SIR model, one of the simplest models of the spread of contagion, and consider different ways of increasing the level of detail (and thereby the complexity) of this model. The result is a simplified model of the spread of COVID-19, shown in the script below (open script in a new tab), which extends the SIR model with small-world networks and contagion modelling.
The scripts on this page make use of HTML5.

SIR model

The SIR model (Kermack & McKendrick, 1927) provides a convenient starting point for constructing an agent-based model of the spread of contagion. This model separates the population into three classes: susceptible individuals who can be infected, infected individuals who spread the contagion to susceptible individuals, and recovered individuals who have become immune to the contagion.

The model is controlled by up to three parameters. The advantage of having very few parameters is that the model is easier to evaluate and understand, and that it is relatively easy to find suitable values for these parameters. The downside is that the conclusions the model supports for the purpose of formulating policy decisions are rather limited. To slow down or even stop the spread of the contagion, the SIR model suggests that policies should aim to reduce the infection rate or increase the recovery rate. However, how policies can achieve these effects is beyond the scope of the SIR model.
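As a rough sketch of these dynamics (not the script's actual implementation), the SIR model can be written as a discrete-time simulation in a few lines of Python. The parameter values here are illustrative assumptions, not estimates for any real contagion:

```python
def simulate_sir(n=1000, infected=10, beta=0.3, gamma=0.1, days=200):
    """Discrete-time SIR model: beta is the daily infection rate,
    gamma the daily recovery rate. Returns the (S, I, R) history."""
    s, i, r = n - infected, infected, 0
    history = [(s, i, r)]
    for _ in range(days):
        # Well-mixed assumption: every susceptible-infected contact equally likely
        new_infections = beta * s * i / n
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((s, i, r))
    return history

history = simulate_sir()
```

With these illustrative values the infection spreads through most of the population before dying out, which is exactly the kind of qualitative conclusion the SIR model supports: reducing beta or increasing gamma flattens the curve.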

Small world network

To draw more fine-grained conclusions, the SIR model needs to be extended. For example, one of the more restrictive assumptions in the SIR model is that the population is well-mixed. This means that every interaction between agents is equally likely. While this may be a reasonable assumption for a small community, it is difficult to defend that people living in the same street are as likely to interact with each other as people living at opposite ends of the country.

In real life, social networks are highly clustered. For example, many people have friends in common with their friends. In addition, social networks tend to have the small-world property: it takes relatively few steps to move from one individual to another in the network (cf. six degrees of separation). While it is not entirely certain that physical interactions can be modelled as such a small-world network, this seems more reasonable than assuming the population is well-mixed.

In the script above, interactions are modelled through a small-world network, where agents are arranged on a circle. When the clustering coefficient is maximal, agents are only connected to neighbours on this circle. The lower the clustering coefficient, the more random the connections between agents become.
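A network of this kind can be sketched along the lines of a Watts-Strogatz construction: agents on a circle, connected to their nearest neighbours, with some links rewired at random. The parameter names and the exact rewiring scheme below are assumptions for illustration, not the script's implementation:

```python
import random

def small_world(n=100, k=4, rewire_p=0.1, seed=0):
    """Build a small-world network: n agents on a circle, each linked to its
    k nearest neighbours; each link is replaced by a random shortcut with
    probability rewire_p. Higher rewire_p means lower clustering."""
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            b = (i + j) % n
            if rng.random() < rewire_p:
                b = rng.randrange(n)  # random shortcut across the circle
            # Re-draw if the shortcut is a self-loop or a duplicate edge
            while b == i or (min(i, b), max(i, b)) in edges:
                b = rng.randrange(n)
            edges.add((min(i, b), max(i, b)))
    return edges

edges = small_world()
```

With `rewire_p = 0`, agents are only connected to neighbours on the circle (maximal clustering); as `rewire_p` grows, the connections become increasingly random, mirroring the clustering coefficient slider in the script.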

In addition to the small world network, the daily infection rate has been separated into the infection probability per interaction and the average number of interactions per agent per day. By making these two changes, the conclusions drawn from the model can be more fine-grained than for the SIR model. This extended model suggests that to slow down or even stop the spread of the contagion, policies should aim to limit the connectivity of people, reduce the number of interactions per person per day, reduce the infection rate per interaction, or increase the recovery rate.

Contagion modelling

Until now, the contagion has been rather abstract. The progression from susceptible to infected to recovered assumes that individuals are contagious directly after exposure to the contagion. Also, the contagion model ignores any symptoms altogether. In particular, individuals may behave differently depending on symptoms, and policy may be based on these observable symptoms.

We extend the model by representing the contagion as a Markov process. That is, the contagion is divided into discrete states. Once an individual has been exposed, the path of progression through the states of the infection is determined by a transition table. This transition table should then reflect the contagion being modelled. On this page, the default characteristics and transition rates have been set to COVID-19 characteristics (see Section 2.2 of Kerr et al., 2020).
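A transition table of this kind can be sketched as follows. The state names and daily transition probabilities below are made-up illustrations; the actual rates in the script follow Kerr et al. (2020):

```python
import random

# Illustrative disease states with made-up daily transition probabilities;
# each row lists (next_state, probability) pairs that sum to 1.
TRANSITIONS = {
    "exposed":        [("presymptomatic", 0.25), ("exposed", 0.75)],
    "presymptomatic": [("symptomatic", 0.50), ("presymptomatic", 0.50)],
    "symptomatic":    [("recovered", 0.10), ("severe", 0.05), ("symptomatic", 0.85)],
    "severe":         [("recovered", 0.07), ("dead", 0.03), ("severe", 0.90)],
    "recovered":      [("recovered", 1.0)],   # absorbing state
    "dead":           [("dead", 1.0)],        # absorbing state
}

def step(state, rng):
    """Advance the infection by one day according to the transition table."""
    r = rng.random()
    for next_state, p in TRANSITIONS[state]:
        r -= p
        if r < 0:
            return next_state
    return state  # guard against floating-point rounding
```

Because progression only depends on the current state, the process is Markovian; modelling a different contagion only requires swapping in a different transition table.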

Beyond the scope of this page

Of course, the model can be extended in many different ways. For example, CovaSim (Kerr et al., 2020) takes into account that COVID-19 has a different effect on different age groups. In addition, SynthPops takes a multi-layer approach to interactions, where interactions of households, schools, and workplaces are modelled separately and combined to create a more accurate model of interactions across different age groups. While these extensions have the potential for making more accurate predictions of the spread of the contagion and its effects on society, they are beyond the scope of the scripts on this page.

Boids: Flocking made simple

Birds of a feather flock together

In 1986, Craig Reynolds developed a model of flocking, in which agents group together through the interaction of a few simple rules (see also Reynolds, 1987). In this model, agents form groups even though they do not have a group identity, or even a concept of what a group is. The behaviour of a group as a whole is determined entirely by the interaction of individual agent choices based on the local situation. The script on this page (open script in separate tab) allows you to experiment with different interaction rules for boids. The controls for this script are described at the bottom of the page.
The JavaScript on this page makes use of HTML5.

Boids: Bird-like objects

Boids, or bird-like objects, are simple agents that react to their local environment based on a few simple rules. In its simplest form, boids’ flocking behaviour is the result of the three rules avoid, align, and approach. In addition to these basic flocking rules, the script on this page implements two more rules, which cause boids to flee from predators and return to the center of the screen.

Figure 1: Boids exhibit flocking behaviour through a combination of three rules. Boids avoid collision with others, align their direction with nearby flockmates, and approach distant boids.

The avoid rule is meant to prevent boids from colliding with their flockmates. Every flockmate within the avoidance range forces the boid to move away from that flockmate. Figure 1 shows a situation with a boid that has one flockmate in its avoidance range, indicated by a red circle. In general, the avoid rule is the rule with the shortest range and the highest impact on the boid’s behaviour. In the script on this page, the avoidance force is also the only force that varies with the distance to the flockmate. The closer the boid is to a flockmate, the stronger the avoidance force felt by the boid. In addition, a boid that feels an avoidance force will ignore align or approach forces.

The align rule causes boids that are part of the same flock to have the same general direction. For every flockmate within the alignment range, a boid will feel a force to match its heading to that of the flockmate. If there are multiple flockmates in the alignment range, the boid tries to move towards the average direction of those flockmates. The situation in Figure 1 shows four flockmates in the alignment range (indicated by a blue circle) that are not in the avoidance range. The boid in the center wants to align itself with each of these flockmates.

The approach rule makes boids move towards the center of the group of flockmates that they can see. Each boid feels a gravitational force towards the center of all flockmates in its approach range. This rule makes sure that boids will not drift out of the group. Among the flocking rules, the approach rule is typically the one with the highest range, which means that any boid outside the approach range is usually ignored entirely. However, a group may be much larger than the approach range, and the actions of a boid may have effects that ripple through the group far beyond the reach of its own rules.

By changing the strength and range of the flocking rules, it is possible to vary flocking behaviour. For example, boids will group together even without any alignment forces. Such groups have much less coordinated behaviour than groups that do experience alignment forces, however.
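Putting the three rules together, the steering force on a single boid can be sketched as below. The ranges, weights, and exact force laws are assumptions for illustration; the script's sliders control the real values:

```python
import math

# Hypothetical rule ranges: avoid is shortest, approach is longest
AVOID_R, ALIGN_R, APPROACH_R = 1.0, 3.0, 6.0

def steering(boid, flockmates):
    """Combined flocking force for one boid; boid and flockmates are
    (x, y, heading) tuples. Avoidance overrides the other rules."""
    x, y, _ = boid
    avoid = [0.0, 0.0]
    headings = []
    center = [0.0, 0.0]
    n_seen = 0
    for fx, fy, fh in flockmates:
        d = math.hypot(fx - x, fy - y)
        if d < AVOID_R:
            # Avoid: push away, stronger the closer the flockmate is
            avoid[0] += (x - fx) / (d * d + 1e-9)
            avoid[1] += (y - fy) / (d * d + 1e-9)
        elif d < ALIGN_R:
            headings.append(fh)          # align: match nearby headings
        if d < APPROACH_R:
            center[0] += fx              # approach: pull toward local center
            center[1] += fy
            n_seen += 1
    if avoid != [0.0, 0.0]:
        return avoid                     # avoidance overrides align/approach
    force = [0.0, 0.0]
    if n_seen:
        force[0] += center[0] / n_seen - x
        force[1] += center[1] / n_seen - y
    if headings:
        mean_h = sum(headings) / len(headings)
        force[0] += math.cos(mean_h)
        force[1] += math.sin(mean_h)
    return force
```

A flockmate inside the avoidance range produces a force pointing directly away from it, while a lone distant flockmate produces a pull toward it, matching the behaviour described above.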

Predators

Figure 2: Boids flee from predators that they detect in their fleeing range.

In addition to flocking, the script on this page implements two more behavioural rules for boids. The first of these is the flee rule, which causes boids to flee from predators. In the script, predators are larger boids that only experience one type of force: the predation force. Predators always move towards the nearest boid, unhindered by flocking behaviour. In addition, boids also consider the mouse cursor to be a predator.

The flee rule is similar to the avoid rule, except that the fleeing force is not dependent on the distance to the predator. As soon as a boid detects a predator in its fleeing range (depicted as a magenta coloured circle in the script), it will try to move in the opposite direction. The fleeing force is applied after the flocking forces, which also means that fleeing behaviour overrides flocking.

Although boids only flee from predators that appear in their fleeing range, boids can experience contagious fleeing. When a boid changes its heading due to fleeing, flockmates in its alignment range experience a force to head into the same direction. Because of this behaviour, it can seem as if boids flee from predators that they never saw.

Home range

Figure 3: Boids experience a force to return to the center of the screen.

Finally, boids can also experience an attractive force from their home range, which causes them to return to the center of the screen. In the script, the home range of the boids is indicated by a cyan area. Unlike other behavioural rules, which have a maximum range, the return rule has a minimum range. Boids only experience a force to return to the center of the screen when they leave the home range.

Returning to the center of the screen is applied before flocking and fleeing. This means that although boids prefer to stay within the home range, they may leave the home range because of the presence of flockmates or predators.

Controls

  • Zoom slider: Determines the magnification of the script area.
  • Boids sliders: Determines the number and movement speed of boids in the simulation.
  • Predators sliders: Determines the number and movement speed of predators in the simulation.
  • Behaviour sliders: For each behavioural rule avoid, align, approach, flee, and return, these sliders control the range of the rule as well as the strength of the rule.
  • Behaviour checkboxes: In this script, one boid is selected at random to visualize the range of behavioural rules. When the checkboxes of the behavioural rules avoid, align, approach, or flee are checked, the corresponding range is shown around this boid as a coloured ring. For the return rule, this checkbox determines whether the home range is indicated as a solid coloured circle.

Braitenberg vehicles

Simple rules can appear to be smart

Braitenberg vehicles are simple vehicles that show how the interaction between simple rules can lead to complex behaviour. Although these vehicles have no ability to make decisions or even to remember the past, a Braitenberg vehicle may show flexible behaviour and appear to be goal-directed. The script on this page (open script in separate tab) allows you to experiment with Braitenberg vehicles and see how these vehicles act and interact in a simple environment. The controls for this script are described at the bottom of the page.
The JavaScript on this page makes use of HTML5.

Braitenberg vehicles

Braitenberg vehicles are simple autonomous agents that move around based on sensor input. On this page, we consider vehicles that have two wheels, each controlled by its own motor. Sensors that measure the light intensity can affect the output of these motors, either positively or negatively. Depending on where the sensors are located and how these sensors are connected to the motors, the behaviour displayed by the vehicle can differ greatly.

Fear Aggression Love Exploration
Figure 1: Four simple Braitenberg vehicles.

Figure 1 shows an example of four simple Braitenberg vehicles. These four vehicles are also preprogrammed in the script. Although these vehicles are quite similar and have the same light-sensitive sensors at the same locations, the way these sensors are attached to the motors causes strong differences in their behaviour.

The first vehicle, called Fear, connects the leftmost sensor to the left wheel motor and the rightmost sensor to the right wheel motor. In both cases, the forward speed of the motor is increased when the sensor detects more light. This means that when this vehicle detects light to the left, the left motor increases speed and the vehicle veers to the right. That is, Fear tries to move away from the light.

The second vehicle, Aggression, is similar to Fear, except that each sensor is wired to the motor on the opposite side of the vehicle. That is, the leftmost sensor is connected to the right wheel motor while the rightmost sensor connects to the left wheel motor. Like Fear, the sensor connections of Aggression are positive. This means that if the vehicle detects light to the left, the right motor increases speed and the vehicle veers to the left. That is, Aggression moves towards the light. The closer the vehicle gets to the light, the stronger the increase in speed of the motors, until the vehicle speeds through the light.

Love is a vehicle that is similar to Fear, except that the sensors decrease the forward speed of the motor to which they are connected. This means that when the vehicle detects light to the left, the left motor decreases speed or even reverses, and the vehicle moves to the left. Like Aggression, Love moves towards the light. Unlike Aggression, however, the closer Love gets to the light, the slower it moves. As a result, Love moves towards the light until it reaches the perfect distance.

The final example is Exploration. Exploration has the same crossed wiring of Aggression, except that the sensors decrease the speed of the motors to which they are attached. This means that if the vehicle detects light to the left, the right motor decreases speed or even reverses, and the vehicle veers to the right. Like Fear, Exploration avoids the light. However, Exploration slows down in light areas, almost as if it is cautiously exploring the light.
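Under the hood, the four basic vehicles differ in only two choices: whether the wiring is crossed, and whether the connections excite or inhibit the motors. A minimal sketch (the names and the simple linear sensor-motor response are assumptions, not the script's actual code):

```python
# The four basic vehicles: "crossed" swaps which side each sensor drives,
# "sign" is +1 for excitatory and -1 for inhibitory connections.
VEHICLES = {
    "fear":        {"crossed": False, "sign": +1},
    "aggression":  {"crossed": True,  "sign": +1},
    "love":        {"crossed": False, "sign": -1},
    "exploration": {"crossed": True,  "sign": -1},
}

def motor_speeds(vehicle, left_light, right_light, base=1.0):
    """Wheel speeds given the light intensity at the left and right sensors."""
    wiring = VEHICLES[vehicle]
    if wiring["crossed"]:
        left_light, right_light = right_light, left_light  # cross the wiring
    return (base + wiring["sign"] * left_light,   # left motor
            base + wiring["sign"] * right_light)  # right motor
```

For example, with light on the left, Fear speeds up its left motor and veers away, while Aggression speeds up its right motor and turns toward the light, exactly as described above.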

In the script above, you can test out the different basic Braitenberg vehicles by loading them into the selected vehicle. By varying the number of lightbulbs and dragging and dropping them to different locations, you can experiment with the way they react to different environments.

More complex vehicles

In addition to the basic vehicles, the script above allows you to construct your own vehicle design. Each vehicle has a base speed for each motor, which means that vehicles can move forward, backward, or in circles when left in the dark. In addition to the lightbulbs in the environment, each Braitenberg vehicle is also equipped with a lightbulb of its own. This addition of light onto vehicles themselves can result in surprising new behaviour.

The lower part of the script shows a larger Braitenberg vehicle in a black box, which allows you to customize a vehicle. Each vehicle has up to eight sensors, which are depicted as coloured dots. By dragging and dropping these sensors, you can place them anywhere you want along the exterior of the vehicle. Sensors are colour coded to indicate that they are connected to the left wheel (white sensors), the right wheel (black sensors), or the lightbulb (yellow sensors). Note that a vehicle’s sensors are never affected by its own lightbulb.

Even though Braitenberg vehicles are purely mechanical, you may notice that it is easier to describe these vehicles as if they had intentions and goals. This shows that in some cases, it is easier to understand these vehicles through theory of mind, by pretending that they have unobservable mental content.

Controls

  • Arena: In the arena, vehicles and lightbulbs can be moved by dragging and dropping them in their desired location.
  • Lightbulb slider: This determines the number of lightbulbs in the arena. For performance reasons, the number of lightbulbs has been limited to four.
  • Vehicle slider: This determines the number of Braitenberg vehicles in the arena. For performance reasons, the number of vehicles has been limited to four.
  • Load a basic vehicle select box and button: These controls allow a user to load one of the four basic Braitenberg vehicles shown in Figure 1 into the vehicle selected from the arena. In addition, it also allows a user to load a custom Braitenberg vehicle. These controls can also be used to save a Braitenberg vehicle design for later use.
  • Default speed of left/right motor slider: This slider determines the speed of the left and right wheels for a vehicle in the dark.
  • Default illumination slider: This slider determines the brightness of a lightbulb of the vehicle when left in the dark.
  • Braitenberg vehicle bay: This control shows the selected Braitenberg vehicle, and allows the user to move around sensors through dragging and dropping.
  • Number of sensors on vehicle slider: This slider determines the number of sensors on the vehicle.
  • Selected sensor slider and select boxes: This determines what the selected sensor is connected to (left motor, right motor, or lightbulb), the input of the sensor (light or distance) and the strength of this connection.
  • Pause/Continue simulation button: Allows a user to pause and continue execution of the simulation.
  • Save simulation setup button: Downloads the current Braitenberg arena state as a text file. This can be loaded into the current arena state with the Load simulation setup button.
  • Load simulation setup button: Allows a user to change the code of the Braitenberg arena. This can be used to load a previously saved arena state.

Repeated simple spatial games

It’s all about who you know

Why do people cooperate with each other? Why do animals work together? How can cooperation survive when there are opportunities to cheat? An important observation in these questions is that it matters who you interact with. If you are likely to interact multiple times with the same people, it may be better to cooperate rather than to cheat (see also Axelrod and Hamilton, 1981). The script on this page (open script in separate tab) shows an implementation of a small number of repeated simple spatial games, both cooperative and competitive, and allows you to experiment with some of the spatial features. The controls for this script are explained at the bottom of this page.
The JavaScript on this page makes use of HTML5.

Repeated simple spatial games

The script on this page simulates agents that repeatedly play simple spatial games. These spatial games are called simple because they are:

  • two-player, since every interaction is between two individuals;
  • symmetric, since the outcome of each interaction depends only on the actions of the players, and not on who these players are; and
  • simultaneous, since both players select their action at the same time so that no player can react to the action selected by the other.

Individuals are organized on a grid, which may or may not include empty spots. Each individual has a colour that identifies what strategy it is playing. In the Prisoner’s Dilemma, for example, red individuals defect while blue individuals cooperate. At every time step, every individual plays the simple game with each neighbour within its interaction neighbourhood. Both the range and the shape of the interaction neighbourhood can be set within the script. The outcome of each game depends only on the strategies used by the two players, which can be seen in the payoff matrix in the bottom right of the script. In the Prisoner’s Dilemma, for example, when a cooperator plays against a neighbouring defector, the cooperator gets a score of -1, while the defector receives a score of 4.

All individuals play the game with each neighbour in their interaction neighbourhood. Once all games have been played, individuals look around to determine the neighbour in their vision neighbourhood that has the highest average score. If that score is higher than the individual's own score, the individual switches its strategy to match that of the neighbour. If more than one neighbour shares the highest average score, the individual randomly picks one of those neighbours' strategies. However, if one of the strategies that yields the highest observed score is the agent's own strategy, the agent will always keep its own strategy. This means that a cooperator will choose to stay a cooperator as long as the highest average score that he can see is obtained by another cooperator, even if the individual himself gets a very low score.
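This play-then-imitate loop can be sketched as follows. Only the cooperator-versus-defector payoffs (-1 and 4) are given above; the mutual-cooperation and mutual-defection payoffs (3 and 0) are illustrative assumptions, and for brevity the interaction and vision neighbourhoods are taken to be the same:

```python
import random

# Prisoner's Dilemma payoffs for (my_action, their_action);
# C = cooperate, D = defect. CC and DD values are illustrative.
PAYOFF = {("C", "C"): 3, ("C", "D"): -1, ("D", "C"): 4, ("D", "D"): 0}

def play_round(strategies, neighbours):
    """Each agent plays against every neighbour; returns average scores."""
    scores = {}
    for agent, strat in strategies.items():
        ns = neighbours[agent]
        scores[agent] = (sum(PAYOFF[(strat, strategies[o])] for o in ns) / len(ns)
                         if ns else 0.0)
    return scores

def update(strategies, scores, neighbours, rng):
    """Imitation step: adopt the strategy behind the highest visible score,
    with ties always resolved in favour of keeping one's own strategy."""
    new = {}
    for agent, strat in strategies.items():
        candidates = list(neighbours[agent]) + [agent]
        best = max(scores[c] for c in candidates)
        best_strats = {strategies[c] for c in candidates if scores[c] == best}
        new[agent] = strat if strat in best_strats else rng.choice(sorted(best_strats))
    return new
```

In a two-agent example where a cooperator faces a defector, the defector's higher score makes the cooperator switch to defection on the next round.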

The script allows you to simulate a number of different games. The Prisoner’s dilemma, Hawks and doves, and Hawks, doves, and retaliators games are cooperative dilemmas. In these games, all individuals would benefit from the cooperation of other agents, but experience an individual incentive to defect on cooperation. In these cooperative dilemmas, you can see how “clusters” of cooperation can grow to overcome defectors. What allows these clusters to survive is that in the center of such a cluster, cooperators only interact with other cooperators. That is, individuals inside a cluster cannot be cheated by faraway defectors.

In contrast, Rock-paper-scissors, Rock-paper-scissors-Spock-lizard, and Elemental RPS are purely competitive settings. In each of these settings, each action is the best response to some other action, so that there is no single optimal action to play. As long as all strategies continue to exist in the population, agents keep changing their strategy to keep up with other agents changing theirs. In the simulation, this shows itself as waves moving through the population. This kind of behaviour is also the basis of the reason why theory of mind can be helpful in settings such as Rock-paper-scissors (see also theory of mind in Rock-paper-scissors).

Finally, the Coordination game and the Stag hunt game are purely cooperative settings. In both of these settings, agents want to cooperate, but either do not know how to coordinate (Coordination game), or are afraid that the other may not cooperate (Stag hunt game). In these games, the simulation tends to quickly settle into an equilibrium, where no agent wants to change its strategy. Whether or not multiple actions survive in this stable solution depends on the number of empty spots, the vision neighbourhood, and the interaction neighbourhood.

Controls

  • Individuals on the grid: You can manually switch the strategy of an agent by clicking on it.
  • Strategy frequency graph: After every change in the situation in the grid, the graph updates to show the new frequencies of each strategy among agents.
  • Game selector: Selects the game that will be played by the agents on the grid. Changing the game will also reset the field.
  • Agent count slider: This determines the number of agents on the grid, between 1500 and 2500 individuals. Since the grid is 50 by 50 agents in size, 2500 agents fills up the grid entirely. When you change the number of agents, the strategies of remaining agents are not changed.
  • Interaction slider: This slider determines the size of the interaction neighbourhood of every agent.
  • Interaction neighbourhood shape toggle: By clicking on the interaction neighbourhood shape icon, you can switch between a cross-shaped (Von Neumann) and a square-shaped (Moore) interaction neighbourhood.
  • Vision slider: This slider determines the vision range of every agent.
  • Vision neighbourhood shape toggle: By clicking on the vision neighbourhood shape icon, you can switch between a cross-shaped (Von Neumann) and a square-shaped (Moore) vision neighbourhood.
  • Simulation speed slider: This shows how quickly changes occur in the grid, varying from one change per 2 seconds to up to 50 changes per second.
  • Play one round: Plays a single round of simulation.
  • Start/stop simulation: The simulation will automatically stop when no agent has changed its strategy after a round. Starting the simulation does not reset the board. This means you can pause the simulation to make some manual changes if you want, and continue from the situation you have created.
  • Reset field: Randomly re-assigns empty spots across the board, and strategies across agents.
  • Change payoff matrix: By clicking on the star (*) near the payoff matrix, you can change the payoff matrix as you see fit. Note: the new matrix is not checked for validity. Make sure you enter a valid payoff matrix.

Phantom jams

Heading for a world-wide traffic jam

Some traffic jams have clear causes, such as an accident, road works, or an upcoming on-ramp. However, some traffic jams seem to occur for no reason at all. Such phantom jams can even occur on completely visible roads at low speeds (see also the paper by Yuki Sugiyama et al., 2008 and their video). The script on this page (open script in separate tab) simulates traffic flow on a six-lane circular road. Cars are exactly the same in the way they accelerate and decelerate, but differ in the speed at which they want to travel. The script allows you to experiment to determine what causes phantom jams and how they can be dissolved. The controls for this script are explained at the bottom of the page.
The JavaScript on this page makes use of HTML5.

Going around in circles

The script on this page simulates cars that drive along a circular road while keeping a safe distance from other cars. Each simulated car has a preferred top speed that it will try to achieve. However, since some cars have a higher preferred speed than others, faster cars will occasionally get too close to the car ahead, and will have to hit the brakes. In the simulation, cars always brake with the same intensity. This means that cars will sometimes brake more intensely than necessary, and drop their speed below the speed of the car ahead. Together with the small variation in the preferred top speed of cars, these effects can cause a ripple in the traffic flow that grows into a phantom jam.

In addition to variations in speed and braking too intensely, the simulation has a number of other features that can make phantom jams worse. Firstly, cars that have stopped moving will not pull up immediately when the car ahead of them starts moving. Instead, cars wait a little while until the distance is large enough. Additionally, when cars want to accelerate but cannot because of a car ahead of them, they may decide to change lanes. Although a car can help to clear up a phantom jam in a lane by changing to another lane, changing lanes can cause another jam in the lane they are entering.
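The core car-following dynamics can be sketched for a single lane as below. The constants, the fixed braking intensity, and the safe-gap rule are illustrative assumptions; the script itself uses six lanes and its own parameters:

```python
def step_cars(positions, speeds, preferred, road_length=100.0,
              accel=0.5, brake=2.0, safe_gap=2.0, dt=0.1):
    """One time step on a single-lane circular road. Cars accelerate toward
    their preferred top speed, and brake at a fixed intensity when too close
    to the car ahead. Cars are indexed in order of position along the road."""
    n = len(positions)
    new_speeds = []
    for i in range(n):
        # Distance to the car ahead, wrapping around the circular road
        gap = (positions[(i + 1) % n] - positions[i]) % road_length
        if gap < safe_gap + speeds[i] * dt:
            v = max(0.0, speeds[i] - brake * dt)   # fixed braking intensity
        else:
            v = min(preferred[i], speeds[i] + accel * dt)
        new_speeds.append(v)
    new_positions = [(p + v * dt) % road_length
                     for p, v in zip(positions, new_speeds)]
    return new_positions, new_speeds
```

Because braking is all-or-nothing while acceleration is gradual, a single hard brake can propagate backwards through the line of cars, which is the seed of a phantom jam.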

The script above has a graph that allows you to keep track of phantom jams by plotting the percentage of jammed cars, which are cars that have come to a complete halt, and the average speed of cars. The average speed of cars is plotted as a percentage of the speed at which these cars prefer to drive. By running the script, you will find that phantom jams can occur for a lot of different settings of the sliders. But are there ways to prevent phantom jams, or to dissolve them after they have already happened? For example, does it help to lower the speed limit, or to force cars to stay in their lanes? And what can drivers do themselves to avoid and solve phantom jams? Are the solutions that prevent phantom jams the same as those that dissolve them?

Controls

  • Summary graph: After every change in the situation on the road, the graph updates to show the new percentage of jammed cars, which are currently standing still, and the average speed of cars, as a percentage of their top speed.
  • Car count slider: This determines the number of cars on the road, with a range of 75 to 175 cars.
  • Average top speed slider: This slider determines the preferred driving speed or top speed of cars. The value represented by this slider is the average top speed across all cars. That is, individual cars differ in their actual top speed.
  • Variation top speed slider: This slider determines the range of individual variation in the top speed of cars. More variation means that faster cars more often have to brake due to slower cars ahead.
  • Acceleration slider: This slider determines how quickly cars accelerate to their top speed.
  • Preferred deceleration slider: This slider determines how strongly cars prefer to brake. A lower value means that cars that encounter a slower car ahead start braking earlier and bring their own speed down more gradually. A higher value means that cars tend to brake later and more abruptly.
  • Lane changing slider: This slider determines the likelihood that a car will change lanes. A car will only consider changing lanes when it is prevented from accelerating because of a car ahead.
  • Zoom slider: This slider controls the magnification of the road.
  • Simulation speed slider: This shows how quickly time passes on the road.
  • Reset field: Resets the cars by spreading them out approximately equally across the tarmac.
  • Road: By double-clicking the road, you can switch representations of the road. In the more schematic representation with white cars, you can additionally select a car to follow its behaviour more easily.

The sprites for this script have been adapted from the sprites of the 1999 computer game Grand Theft Auto 2 by Rockstar Games.

Schelling’s model of segregation

A little bit of a preference goes a long way

As the saying goes: “birds of a feather flock together”. People with similar backgrounds tend to end up in the same social group. This effect can be so extreme that it eventually leads to the kind of segregation in which people only interact with others of the same background. But how much of such a “preference for similarity” is needed to get a segregated society? In 1971, Thomas Schelling tried to answer this question using multi-agent simulations (Schelling, 1971). The script on this page (open script in separate tab) shows an implementation of Schelling’s model of segregation. The controls for this script are outlined at the bottom of the page.
The JavaScript on this page makes use of HTML5.

AgentVille supporters

The script on this page simulates how segregation occurs on the bleachers of AgentVille. The fictional town of AgentVille is known for its annual sports event, which draws many supporters of both Team Orange and Team Blue. When a supporter arrives at the stadium, he randomly decides where to sit and watch the event. But that does not mean that AgentVille supporters have no preferences on where they want to sit. Each supporter wants to sit close to someone that supports the same team. In fact, each supporter wants some minimum percentage of the people in neighboring seats to support the same team as he does. The way this works is demonstrated in Figure 1. As you can see by moving the agents around these nine spaces, each supporter has up to eight neighboring seats. An agent only shows a wide grin of happiness when at least 50% of the people in these neighboring seats support his team.

Figure 1: Agents want to have a minimum percentage (50% in this case) of people in neighboring seats to support the same team as they do. In this interactive figure, you can drag agents to a new position to see how this works. Only happy supporters show a wide grin.

The pursuit of happiness

If a supporter is unhappy with his seat, he will eventually get up and move to one of the empty seats. It is not hard to see that if every agent demands that 100% of the people in neighboring seats supports the same team as he does, this process of switching seats will only stop when the supporters for both teams are completely separated. But does a lower similarity preference also cause segregation on the bleachers? Is the number of empty seats important? If each supporter wants 50% of his neighbours to support their team, will the average similarity be 50% when all supporters are happy, or would it be much higher?

The script on this page allows you to play around with different situations on the bleachers and see what will happen to the way supporters seat themselves. The graph keeps track of the number of happy supporters (green line) and the average percentage of people in neighboring seats that support the same team (red line). One thing you may want to keep an eye on, is the difference between the preferences of individual supporters (as shown by the desired similarity slider) and what the average similarity of neighboring supporters ends up being.

Controls

  • Support agents on the bleachers: You can manually drag agents to empty seats. An agent that is happy with his current situation will show a broad grin. Agents without such a grin will eventually want to move to another seat.
  • Happiness / similarity graph: After every change in the situation on the bleachers, the graph shows the current percentage of agents that is happy (green) and the average percentage of agents in neighboring seats that support the same team (red).
  • Number of agents slider: This determines the number of agents trying to find a spot on the bleachers. There are 200 seats available, and the number of agents can vary between 1 and 200. If the number of agents is even, there will be as many Blue Team supporters as Orange Team supporters. If the number of agents is odd, there will be one more supporter for Orange Team.
  • Desired similarity slider: This slider determines for each agent what percentage of neighboring agents that support the same team will make him happy. All agents have exactly the same desired similarity.
  • Simulation speed slider: This shows how quickly changes occur on the bleachers, varying from one change per second to up to 2500 changes per second.
  • Start/stop simulation: The simulation will automatically stop when 100% of agents is happy. Starting the simulation does not reset the board. This means you can pause the simulation to make some manual changes if you want, and continue from the situation you have created.
  • Reset field: Removes all agents from the board and lets each of them choose a new seat.