In this post I am going to briefly walk through the method chosen for implementation in E.P.I.C.S., and present my first demo. Some videos of the demo are recorded and embedded along the way.
The method is based on what is called "the Principle of Least Effort" (PLE), an important characteristic of human motion. The principle was described by G. Keith Still in his PhD thesis. It basically states that people prefer to spend the smallest amount of energy while moving. Two points can be summarized as follows:
- A person tends to reach his or her goal via a short path.
- A person tends to move at his or her preferred velocity, and tends not to change that velocity frequently.
Optimizing the PLE Function
Although these points seem intuitive, they account for the smooth trajectories we take in daily life, because smooth trajectories require less energy and fewer changes of direction and velocity. PLE also leads to further interesting consequences, such as lane formation, in which people moving in the same direction tend to fall in line closely behind one another, and congestion, in which people prefer to slow down rather than force their way through.
One could derive ideas for crowd simulation from PLE. If the effort of an agent can be quantified and represented as a function of the agent's state, then by optimizing that function we could figure out the way of moving that matches PLE. In other words, the problem of crowd simulation can be transformed into the problem of optimizing the PLE function. This is exactly the essence of our method.
We adopt the "PLEdestrians" method proposed by Stephen J. Guy et al., in which they present a novel effort function and an efficient optimization process. The effort function looks like this:

$$E = m \int_0^T \left( e_s + e_w \, |\mathbf{v}(t)|^2 \right) \mathrm{d}t$$

Here, $E$ denotes energy, or "effort", $m$ denotes mass, and $\mathbf{v}$ denotes velocity, which is a function of time $t$. In addition, $e_s$ and $e_w$ are two constants related to human biomechanical features. The PLE function is an integral over time from the beginning to the end of travel. It can be shown that traveling along the shortest path at a constant velocity minimizes the PLE function, which is consistent with the principle itself: move through a short path and move at a preferred speed. On the other hand, if we fix the path length to $L$, the minimal energy spent to traverse it is simply

$$E_{\min} = 2 m L \sqrt{e_s e_w}.$$
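To see where this minimum comes from, consider traversing a straight path of length $L$ at a constant speed $v$, so the travel time is $T = L / v$ and the integral collapses:

$$E(v) = m \frac{L}{v}\left(e_s + e_w v^2\right) = m L \left(\frac{e_s}{v} + e_w v\right), \qquad \frac{\mathrm{d}E}{\mathrm{d}v} = m L \left(e_w - \frac{e_s}{v^2}\right) = 0 \;\Longrightarrow\; v^* = \sqrt{\frac{e_s}{e_w}}.$$

Substituting $v^*$ back gives $E(v^*) = 2 m L \sqrt{e_s e_w}$; in other words, the preferred walking speed is fully determined by the two biomechanical constants.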
That being said, how do we find the minima of the PLE function? Stephen J. Guy et al. presented an efficient greedy strategy to simplify the function. The greedy strategy decomposes the whole travel of an agent into two parts: the part during the first $\Delta t$ seconds and the remaining part. $\Delta t$ is so small that the speed of the agent can be considered nearly constant within it. We therefore compute the effort spent in each part separately. The former contribution is easy to obtain since we assume the velocity is constant. For the latter contribution, we simply take the smallest possible value, which is why the strategy is called "greedy": we choose the straight-line distance between the position after $\Delta t$ seconds and the final goal as the path length, and the optimal speed $\sqrt{e_s / e_w}$ as the moving speed. The PLE function can now be represented as

$$E(\mathbf{v}) = m \Delta t \left(e_s + e_w |\mathbf{v}|^2\right) + 2 m \sqrt{e_s e_w}\, \bigl| \mathbf{g} - (\mathbf{p} + \mathbf{v} \Delta t) \bigr|,$$

where $\mathbf{v}$ is now the velocity of the agent during the first $\Delta t$ seconds, $\mathbf{g}$ is the final goal of travel, and $\mathbf{p}$ is the initial position of the agent. A standard approach can now be applied to minimize the PLE function: set the gradient of $E$ with respect to $\mathbf{v}$ to zero and solve the resulting equations.
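As an illustration, here is a minimal Python sketch of this per-step optimization. E.P.I.C.S. may well solve the gradient equations analytically; here I just hand the effort function to a generic numerical optimizer. The constants `E_S` and `E_W` are the per-unit-mass values I believe the paper reports, and `DT` is an arbitrary choice, so treat all three as placeholders:

```python
import numpy as np
from scipy.optimize import minimize

# Biomechanical constants per unit of mass (placeholder values,
# believed to match the PLEdestrians paper).
E_S, E_W = 2.23, 1.26   # J/(kg*s) and J*s/(kg*m^2)
DT = 0.1                # length of the greedy time step, in seconds

def ple_effort(v, p, g):
    """Greedy PLE effort (per unit mass): move at velocity v for DT
    seconds, then head straight to goal g at the optimal speed."""
    v = np.asarray(v)
    walk = DT * (E_S + E_W * np.dot(v, v))                        # first part
    rest = 2.0 * np.sqrt(E_S * E_W) * np.linalg.norm(g - (p + v * DT))
    return walk + rest

# Example: an agent at the origin heading toward a goal at (10, 0).
p = np.array([0.0, 0.0])
g = np.array([10.0, 0.0])
res = minimize(ple_effort, x0=np.zeros(2), args=(p, g))
print(res.x)  # ~ [1.33, 0.0]: straight at the goal at sqrt(E_S/E_W) m/s
```

Note that far from the goal the minimizer recovers exactly the constant preferred speed $\sqrt{e_s/e_w} \approx 1.33$ m/s derived above, while near the goal it naturally slows the agent down to avoid overshooting.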
Setting the PLE function aside for a moment, let me introduce the really simple agent model (a code sketch follows the list). An agent in E.P.I.C.S. is similar to a particle, with these basic parameters:
- position: the current position of the agent.
- velocity: the current velocity of the agent.
- max velocity: the agent's speed cannot exceed this upper limit.
- radius: the agent is modeled as a sphere (or circle in 2-d cases), and this is the radius.
- goal: the position the agent wants to reach.
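In code, the agent model could look something like the following; the field names and default values are my own illustrative guesses, not the actual E.P.I.C.S. definitions:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Agent:
    """A particle-like agent; names and defaults are illustrative."""
    position: np.ndarray                  # current position
    goal: np.ndarray                      # position the agent wants to reach
    velocity: np.ndarray = field(default_factory=lambda: np.zeros(2))
    max_speed: float = 2.0                # speed upper limit, m/s (assumed)
    radius: float = 0.25                  # circle radius in 2-d, m (assumed)

    def at_goal(self, tol: float = 0.05) -> bool:
        return np.linalg.norm(self.goal - self.position) < tol
```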
Combining the agent model with the optimization process of the PLE function, the simulation is performed in discrete time steps of $\Delta t$ seconds each. Initially, all agents are placed at their starting positions with their goals set. In each step, every agent updates its velocity, which remains valid for the next $\Delta t$ seconds. The simulation runs until all agents have reached their goals.
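Put together, the main loop might look like this sketch, reusing the `ple_effort` and `Agent` pieces above. It deliberately omits collision avoidance (one of the deferred details discussed next), and the termination check is simplified:

```python
def step(agents):
    """Advance the simulation by one time step of DT seconds."""
    new_velocities = []
    for a in agents:
        # Each agent picks the velocity minimizing its greedy PLE effort.
        res = minimize(ple_effort, x0=a.velocity, args=(a.position, a.goal))
        v = res.x
        speed = np.linalg.norm(v)
        if speed > a.max_speed:                  # respect the speed limit
            v = v * (a.max_speed / speed)
        new_velocities.append(v)
    for a, v in zip(agents, new_velocities):     # apply updates simultaneously
        a.velocity = v
        a.position = a.position + v * DT

def simulate(agents):
    while not all(a.at_goal() for a in agents):
        step(agents)
```

Velocities are chosen for all agents first and only then applied, so that within one step every agent reacts to the same snapshot of the world.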
These are the basic ideas of how the method works. Note that there are still lots of details not covered. For instance, how do agents safely pass each other and avoid obstacles? For another, we take the straight-line distance as the path length in the second term of the greedy simplification, which would be naive if the simulation were performed in a large, complex environment. We will address those problems in future posts, but for now, let's watch the first demo.
Two scenes are set up to illustrate the method. The first is simply two agents swapping their positions. In the second scene, 20 agents initially form a circle, and each intends to reach the diametrically opposite position on the circle. Both scenes are free of obstacles since, so far, we have not considered dealing with them.