Update 4: Integrate PLE into Unity

Now that the algorithm is implemented, I decided to use it to animate some virtual characters in a scenario. Since I am familiar with Unity and there are existing resources I can make use of, I wrapped my code into a native library and plugged it into the engine. Though it would be cool to build a web player, Unity unfortunately forbids that for security reasons (native plugins are not allowed in the web player). So there is only a video for demonstration.
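
For reference, the native side of such a plugin only needs a small C API that Unity's C# scripts can bind with [DllImport]. The sketch below is illustrative; the function names are hypothetical, not my plugin's actual exported symbols:

```cpp
// Hypothetical C interface for the simulation library; Unity binds
// functions like these from C# with [DllImport("ple_native")].
// (Platform-specific export attributes are omitted for brevity.)
extern "C" {
    // Create a simulation instance and return an opaque handle.
    void* ple_create();
    // Add an agent with a start position, goal, and radius; returns its id.
    int ple_add_agent(void* sim, float x, float y,
                      float goal_x, float goal_y, float radius);
    // Advance the simulation by dt seconds.
    void ple_step(void* sim, float dt);
    // Read back an agent's current position.
    void ple_get_position(void* sim, int id, float* x, float* y);
    // Destroy the simulation instance.
    void ple_destroy(void* sim);
}
```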

The virtual scene, character, and animation are from a previous student project: https://tribestar.wordpress.com/category/perceptually-varied-crowd/. I spawn a number of characters at different positions in the scene and move them based on PLE. Note that the frame rate seems low, but that is a problem with my screen recorder; the counter in the top left corner shows that the real frame rate stays at 60 fps.

Update 3: Evaluation with SteerSuite

I have started to evaluate the algorithm with SteerSuite, a suite of test cases, tools, and libraries for steering behaviors. It took me several hours to resolve all the compiling and linking issues. SteerSuite provides APIs to load its test cases, record agents’ locomotion every frame, and generate record files, which are then used for benchmarking. Here is a video of the algorithm running on the “3-Squeeze” test case:

This test case turns out to be quite challenging for the PLE method. In the real world, the two people on the left side would merge into one line so that they take up only half of the corridor, and the third person coming from the right could pass easily. The PLE method, however, lacks a communication mechanism between agents, so the agents on the left side cannot cooperate in this way.

For convenience, I have also added the ability to display agent trajectories (fading over time).

Here is another test case, examining crowd behavior in a dense environment. Agents are spawned on three concentric circles and each tries to reach the diametrically opposite position. Collisions at the center of the circles are more or less inevitable, as there is not enough space for every agent to move collision-free. Trajectory display is distracting in this case and is therefore disabled.

Collision Avoidance Against Other Agents & Obstacles

After another week of work, I have integrated a local collision avoidance module and a global planner into E.P.I.C.S. An agent is now able to avoid both other agents and static obstacles while still moving like a real pedestrian. Here is a video of the newest update:

The scene consists of 20 agents initially spawned on a disc as before, with one rectangular obstacle placed at the center. Since each goal lies at the diametrically opposite point, every agent encounters the obstacle and tries to walk around it. Although the result is successful, with no agent failing to reach its goal, some artifacts are still visible in the demo video. Agents seem to have difficulty passing the corners of the obstacle, slowing down noticeably. Also, as the scene becomes more complicated, we can observe local deadlocks between several agents.

Local Collision Avoidance

The local avoidance module is an essential part of the project. In the real world, a person definitely does not want to collide with others or with obstacles, as doing so costs considerable extra effort. What we do instead is walk around obstacles, and so should virtual agents.

The local collision avoidance algorithm adopted in E.P.I.C.S. is a so-called reciprocal method (“Reciprocal n-body collision avoidance” by Jur van den Berg et al.). The method can be seen as incremental because it only considers possible collisions within the following t seconds. For each agent, its neighboring agents add linear constraints on its feasible future velocity in velocity space. Together, the linear constraints form a convex region.
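
To make the constraint idea concrete, here is a minimal sketch of how such linear constraints can be represented and tested; the types and names are my own, not the paper's:

```cpp
#include <vector>

struct Vec2 { float x, y; };

static float dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// A linear constraint in velocity space: velocities v with
// dot(v - point, normal) >= 0 are allowed.
struct HalfPlane {
    Vec2 point;   // a point on the boundary line
    Vec2 normal;  // unit normal pointing into the feasible side
};

// The feasible region is the intersection of all half-planes.
bool isFeasible(Vec2 v, const std::vector<HalfPlane>& constraints) {
    for (const HalfPlane& h : constraints)
        if (dot({v.x - h.point.x, v.y - h.point.y}, h.normal) < 0.0f)
            return false;
    return true;
}
```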

To combine the collision avoidance module with PLE, we need to find the velocity that minimizes the PLE function inside the feasible region. Thanks to the convexity of both the PLE function (see the last post) and the feasible region, the problem becomes much simpler. There are two cases: either the optimal velocity lies inside the feasible region, or it lies outside. In the former case, we simply choose the optimal velocity. In the latter case, the solution is bound to lie on the border of the feasible region, and we perform linear programming to find it.
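
Here is a sketch of the two-case selection, reusing Vec2, dot, HalfPlane, and isFeasible from the sketch above and the simplified PLE function from the earlier post. Note that the actual implementation solves the boundary case with linear programming; projecting the optimum onto each constraint and keeping the cheapest feasible candidate, as below, is a simplification for illustration:

```cpp
#include <cmath>

// PLE cost of moving at velocity v for t seconds, then walking
// straight to the goal g at the preferred speed (see the PLE post).
float pleCost(Vec2 v, Vec2 p, Vec2 g, float t,
              float m, float es, float ew) {
    float dx = g.x - p.x - t * v.x;
    float dy = g.y - p.y - t * v.y;
    float remaining = std::sqrt(dx * dx + dy * dy);
    return m * (es + ew * dot(v, v)) * t
         + 2.0f * m * remaining * std::sqrt(es * ew);
}

Vec2 chooseVelocity(Vec2 vOpt, Vec2 p, Vec2 g, float t,
                    float m, float es, float ew,
                    const std::vector<HalfPlane>& constraints) {
    // Case 1: the unconstrained optimum is already feasible.
    if (isFeasible(vOpt, constraints)) return vOpt;

    // Case 2: the solution lies on the boundary. Project vOpt onto
    // each constraint line and keep the cheapest feasible candidate.
    Vec2 best = {0.0f, 0.0f};  // fall back to standing still
    float bestCost = pleCost(best, p, g, t, m, es, ew);
    for (const HalfPlane& h : constraints) {
        float d = dot({vOpt.x - h.point.x, vOpt.y - h.point.y}, h.normal);
        Vec2 cand = {vOpt.x - d * h.normal.x, vOpt.y - d * h.normal.y};
        float c = pleCost(cand, p, g, t, m, es, ew);
        if (isFeasible(cand, constraints) && c < bestCost) {
            best = cand;
            bestCost = c;
        }
    }
    return best;
}
```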

Global Planner

A virtual agent cannot navigate to its goal in a complicated environment with only a local collision avoidance module; a typical counterexample is a U-shaped obstacle. The global planner provides a high-level path to the agent, ensuring that the agent can reach its goal by following it.

The technique used here is fairly standard. The configuration space is first computed from the obstacles in the scene. A visibility graph is then generated, serving as a map for the agent. Once we have a graph, various graph search algorithms can be applied to find a path; here we simply choose A*. Governed by PLE and the local collision avoidance module, an agent keeps moving toward the next path node until it finally reaches its goal.

It is important to realize that the path does not have to be the shortest; rather, it should cost the least effort to traverse. Therefore, the edge weight in our graph is not simply the Euclidean distance, but the effort needed to traverse the edge.
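
Concretely, the minimal-effort formula from the earlier PLE post gives the cost of an edge of length L as 2mL\sqrt{e_s e_w}. The same expression measured straight to the goal never overestimates the true cost, so it also serves as an admissible A* heuristic. A small sketch (names are my own):

```cpp
#include <cmath>

// Effort needed to traverse an edge of the given length at the
// preferred speed: min(E) = 2 m L sqrt(e_s e_w).
float edgeEffort(float length, float m, float es, float ew) {
    return 2.0f * m * length * std::sqrt(es * ew);
}

// Admissible A* heuristic: the effort of walking straight to the
// goal is a lower bound on the true minimal effort.
float heuristic(float distToGoal, float m, float es, float ew) {
    return edgeEffort(distToGoal, m, es, ew);
}
```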

What’s Next…

So far, I have finished building the basic framework of E.P.I.C.S. and run some fundamental tests on it. The next stage of the project is evaluation. To judge whether the crowd behavior is realistic, we need appropriate metrics. There are several alternatives, some based on real-world data and some not, but I have not yet decided which of them to use.

In addition, the performance of the algorithm has not been profiled yet. It might be good to do a stress test by running the algorithm on a complex urban scene with thousands of agents; that will likely become the final demo.

Background & First Demo

In this post I am going to briefly present the method chosen for E.P.I.C.S., along with my first demo. Videos of the demo are attached along the way.

PLE

The method is based on what is called the Principle of Least Effort (PLE), an important characteristic of human motion. The principle was proposed by G. Keith Still in his PhD thesis. It states that people prefer to spend the smallest possible amount of energy while moving. Two points can be summarized as follows:

  • A person tends to reach his or her goal via a short path.
  • A person tends to move at his or her preferred velocity, and tends not to change that velocity frequently.

Optimizing the PLE Function

Although these points seem intuitive, they account for the smooth trajectories we take in daily life, because smooth trajectories require less energy and fewer changes of direction and speed. PLE also leads to some further interesting consequences, such as lane formation, in which people walking in the same direction tend to follow closely behind one another, and congestion, in which people prefer to reduce their speed.

This suggests an approach to crowd simulation. If the effort of an agent can be quantified and represented as a function of the agent's state, then by optimizing that function we can figure out the way of moving that matches PLE. In other words, the problem of crowd simulation can be transformed into the problem of optimizing the PLE function. This is exactly the essence of our method.

We adopt the “PLEdestrians” method proposed by Stephen J. Guy et al., in which they present a novel effort function and an efficient optimization process. The effort function looks like this:

E = m\int \left(e_s + e_w|\textbf{v}(t)|^2\right) dt

Here, E denotes energy, or “effort”, m denotes mass, and \textbf{v}(t) denotes velocity as a function of time t. In addition, e_s and e_w are two constants related to human biomechanics. The PLE function is an integral over time from the beginning to the end of the travel. It can be shown that traveling along the shortest path at the constant speed \sqrt{\frac{e_s}{e_w}} minimizes the PLE function, which is consistent with the principle itself: take a short path and move at the preferred speed. Moreover, if we fix the path length at L, the minimal energy needed to traverse it is simply

min(E) = m(e_s + e_w\frac{e_s}{e_w})\frac{L}{\sqrt{\frac{e_s}{e_w}}} = 2mL\sqrt{e_s e_w}
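
As a quick check of both claims: traversing a fixed length L at constant speed v takes time \frac{L}{v}, so

E(v) = m\left(e_s + e_w v^2\right)\frac{L}{v} = mL\left(\frac{e_s}{v} + e_w v\right), \qquad \frac{dE}{dv} = mL\left(e_w - \frac{e_s}{v^2}\right) = 0 \Rightarrow v = \sqrt{\frac{e_s}{e_w}}

Substituting this speed back in yields the expression above.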

That said, how do we find the minimum of the PLE function? Stephen J. Guy et al. present an efficient greedy strategy to simplify the function. The strategy decomposes the whole travel of an agent into two parts: the first t seconds and the remainder. Here t is small enough that the speed of the agent can be considered nearly constant, and we compute the effort spent in each part separately. The first contribution is easy to obtain since the velocity is assumed constant. For the second contribution, we simply pick the smallest possible value, which is why the strategy is called “greedy”: we take the distance between the position after t seconds and the final goal as the path length, and \sqrt{\frac{e_s}{e_w}} as the moving speed. The PLE function can now be written as

E = m(e_s + e_w|\textbf{v}|^2)t + 2m|\textbf{g} - \textbf{p} - t\textbf{v}|\sqrt{e_s e_w}

where \textbf{v} is now the velocity of the agent during the first t seconds, \textbf{g} is the final goal of the travel, and \textbf{p} is the current position of the agent. A standard approach can now be applied: set the gradient with respect to \textbf{v} to zero and solve the resulting equations to minimize the PLE function.
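
Working that gradient out shows that, with no other agents around, the optimal \textbf{v} points straight at the goal with magnitude \sqrt{\frac{e_s}{e_w}}. Below is a minimal C++ sketch of this unconstrained step; the clamping near the goal is my own addition, not part of the paper:

```cpp
#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

// Unconstrained minimizer of the simplified PLE function: head
// straight for the goal g at the preferred speed sqrt(es/ew).
Vec2 optimalVelocity(Vec2 p, Vec2 g, float t, float es, float ew) {
    float dx = g.x - p.x, dy = g.y - p.y;
    float dist = std::sqrt(dx * dx + dy * dy);
    if (dist < 1e-6f) return {0.0f, 0.0f};   // already at the goal
    float speed = std::sqrt(es / ew);
    speed = std::min(speed, dist / t);       // do not overshoot the goal
    return {speed * dx / dist, speed * dy / dist};
}
```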

Agent

Setting the PLE function aside for a moment, let me introduce the (really simple) agent model. An agent in E.P.I.C.S. is similar to a particle, with the basic parameters listed below; a minimal sketch in code follows the list.

  • position: the current position of the agent.
  • velocity: the current velocity of the agent.
  • max velocity: the agent’s speed cannot exceed this upper limit.
  • radius: the agent is modeled as a sphere (or circle in 2-d cases), and this is the radius.
  • goal: the position the agent wants to reach.
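
In C++, this maps to a tiny struct (a sketch; the actual member names in my code may differ):

```cpp
struct Vec2 { float x, y; };

// The particle-like agent model, directly mirroring the list above.
struct Agent {
    Vec2  position;   // current position
    Vec2  velocity;   // current velocity
    float maxSpeed;   // speed may never exceed this limit
    float radius;     // agents are circles in the 2-d case
    Vec2  goal;       // where the agent wants to go
};
```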

Combining the agent model with the PLE optimization, the simulation proceeds in discrete time steps of t seconds. Initially, all agents are placed at their starting positions with their goals set. In each step, every agent computes a new velocity that will be valid for the next t seconds. The simulation runs until all agents have reached their goals.
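
A minimal sketch of this loop, reusing the Agent struct above and with a hypothetical computeNewVelocity routine standing in for the PLE optimization. Velocities are decided for all agents from the same snapshot before any position moves, so no agent reacts to a neighbor's not-yet-taken step:

```cpp
#include <cmath>
#include <vector>

// Stand-in for the PLE optimization: returns the velocity the agent
// should use for the next t seconds (hypothetical signature).
Vec2 computeNewVelocity(const Agent& a, const std::vector<Agent>& all, float t);

// One discrete simulation step of length t.
void step(std::vector<Agent>& agents, float t) {
    std::vector<Vec2> newVel(agents.size());
    for (size_t i = 0; i < agents.size(); ++i)
        newVel[i] = computeNewVelocity(agents[i], agents, t);
    for (size_t i = 0; i < agents.size(); ++i) {
        agents[i].velocity = newVel[i];
        agents[i].position.x += newVel[i].x * t;
        agents[i].position.y += newVel[i].y * t;
    }
}

// The simulation runs until every agent is within eps of its goal.
bool allArrived(const std::vector<Agent>& agents, float eps) {
    for (const Agent& a : agents) {
        float dx = a.goal.x - a.position.x, dy = a.goal.y - a.position.y;
        if (dx * dx + dy * dy > eps * eps) return false;
    }
    return true;
}
```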

These are the basic ideas of how the method works. Note that plenty of details are still uncovered. For instance, how do agents safely pass each other and obstacles? As another example, the greedy simplification uses the straight-line distance to the goal as the remaining path length, which would be naive in a large, complex environment. We will address these problems in future posts, but for now, let’s watch the first demo.

First Demo

Two scenes are set up to illustrate the method. The first is simply two agents swapping their positions. In the second scene, 20 agents initially form a circle, and each intends to reach the opposite position on the circle. Both scenes are free from obstacles since, so far, obstacle handling has not been implemented.

Swap scene:

Circle scene:

Welcome!

This blog is set up for my degree project (course DD143X) at Kungliga Tekniska Högskolan, running from February to May 2015. I will post the latest progress and related materials here.

E.P.I.C.S., the cool nickname I give to my degree project, is an abbreviation of “Emulation of Pedestrian Intelligence & Crowd Simulation”, which indicates that the subject is crowd simulation. Crowd simulation is the process of simulating large numbers of agents, and the main purpose of E.P.I.C.S. is to explore how virtual agents should move around, reach their goals, and avoid collisions with other agents and obstacles in an environment.

That being said, agents in this project are not meant to be complicated; they are simplified particles, each with its own position, velocity, and radius. E.P.I.C.S. does not aim at complicated character modeling, rendering, or animation. Instead, we are interested in the realism of the crowd behavior. The whole simulation proceeds in a discrete manner: at each time step, an agent uses its current position, goal position, and information about its neighbors to compute a new velocity, and positions are then updated based on the new velocities. A number of challenging issues must be handled in this computation, such as collision avoidance, path planning, and producing human-like behavior.

One important aspect of E.P.I.C.S. is evaluation, because we need to know how realistic the simulation’s results are and where they differ from real-world experience. Evaluation is based on comparing real pedestrian behavior with simulated behavior, so it is important to find appropriate datasets as well as metrics that measure collective behavior. Ideally, a simulation’s results would be consistent with datasets collected from field observations or video footage of real crowds. In practice, however, there will inevitably be differences, which should be analyzed for further improvement.

Visualization will be used to demonstrate the simulation results. A real-time demonstration will be made in which agents update their trajectories and velocities in pre-defined scenarios. Since I have decided to program the whole project in C++, the demonstration is planned to be OpenGL based. Don’t expect too much from the graphics, though, as they are not part of the project.

In the next post I am going to collect background materials and related work, and maybe there will be a simple first demo scene.