reaction-diffusion

Reaction-diffusion is a model used to simulate the interaction of two chemical substances 'A' and 'B': 'A' transforms into 'B' when it comes into contact with 'B'. Additionally, a little 'A' is continuously injected, and a fraction of 'B' slowly destroys itself.

This is a GPU implementation of the Gray-Scott model. It exhibits natural-looking patterns, reminiscent of corals or some animal coats. Use the left mouse button to interact with the simulation.

See it live here.

Donate

Preview

Black and white mode: cat

Color mode: joconde

Illustration 1

Illustration 2

Illustration 3

Reaction-diffusion

Model

Reaction-diffusion can be seen as a description of a simple chemical reaction between two substances 'A' and 'B'. I like to imagine they are liquid.

The rules are quite simple:

- both substances slowly diffuse (spread out) in space;
- 'A' turns into 'B' when in contact with 'B';
- a little 'A' is continuously injected everywhere (the feed rate);
- a fraction of 'B' is continuously removed (the kill rate).

This system can easily be discretized by turning it into a cellular automaton: the space becomes a grid of cells, each cell stores its local concentrations of 'A' and 'B', and at every step diffusion is approximated by blurring the grid while the reaction, feed and kill rules are applied in each cell, as sketched below.
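Below is a minimal sketch of what one update step of such a cellular automaton could look like, written in TypeScript for readability. The actual project runs this on the GPU in a fragment shader, and the diffusion rates, time step and kernel weights shown here are only illustrative, not the project's exact values.

```typescript
// Minimal CPU sketch of one Gray-Scott update step (illustrative only;
// the real simulation runs in a fragment shader, not on the CPU).
type Grid = { a: Float32Array; b: Float32Array; width: number; height: number };

function step(grid: Grid, next: Grid, feedRate: number, killRate: number): void {
    const { a, b, width, height } = grid;
    const diffusionA = 1.0; // illustrative diffusion rates
    const diffusionB = 0.5;
    const dt = 1.0;

    const index = (x: number, y: number) => y * width + x;

    // 3x3 kernel approximating diffusion (a Laplacian): corners 0.05, edges 0.2, center -1.
    const laplacian = (values: Float32Array, x: number, y: number): number => {
        const xm = (x - 1 + width) % width, xp = (x + 1) % width;
        const ym = (y - 1 + height) % height, yp = (y + 1) % height;
        return 0.05 * (values[index(xm, ym)] + values[index(xp, ym)] + values[index(xm, yp)] + values[index(xp, yp)])
             + 0.20 * (values[index(x, ym)] + values[index(x, yp)] + values[index(xm, y)] + values[index(xp, y)])
             - values[index(x, y)];
    };

    for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
            const i = index(x, y);
            const reaction = a[i] * b[i] * b[i]; // A + 2B -> 3B
            next.a[i] = a[i] + dt * (diffusionA * laplacian(a, x, y) - reaction + feedRate * (1 - a[i]));
            next.b[i] = b[i] + dt * (diffusionB * laplacian(b, x, y) + reaction - (killRate + feedRate) * b[i]);
        }
    }
}
```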

Implementation

The values for A and B are stored in a texture. Unfortunately, the default precision of 8 bits per channel is not enough for this simulation. An RG16F texture format would be ideal; however, it is only available in WebGL 2. This is why I have to store each 16-bit value across two 8-bit channels of a RGBA texture: the value for A is stored in red and green, and the value for B in blue and alpha.
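As a rough illustration of the idea (not necessarily the exact encoding used by the shaders), a value in [0, 1] can be split into two 8-bit channels and recombined like this:

```typescript
// Illustrative packing of a [0, 1] value into two 8-bit channels (high byte + low byte).
// The project stores A in the red/green channels and B in the blue/alpha channels this way;
// the exact encoding used by the shaders may differ from this sketch.
function pack(value: number): [number, number] {
    const fixed = Math.round(Math.min(Math.max(value, 0), 1) * 65535); // 16-bit fixed point
    return [Math.floor(fixed / 256), fixed % 256]; // [high byte, low byte], each in [0, 255]
}

function unpack(high: number, low: number): number {
    return (high * 256 + low) / 65535;
}
```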

This also prevents me from using a little trick to make the blurring part cheaper. The blurring is currently performed by applying a 3x3 kernel, which means 9 texture fetches. A common technique to make this faster is to take advantage of the linear interpolation performed by the GPU, in order to go down to only 5 fetches. However, because each value is stored across 2 channels, interpolating them introduces numerical imprecision, which is fine for displaying but unsuited for the computing part.
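For reference, the trick works because a single bilinear fetch placed between two texels returns their weighted average. Here is a sketch of the offset and weight computation for merging two adjacent texels into one fetch; this is the general technique, not code from this project:

```typescript
// Merging two adjacent texels (kernel weights w0 and w1) into a single bilinear fetch:
// sampling at an offset of w1 / (w0 + w1) texels from the first one, then scaling the
// result by (w0 + w1), yields exactly w0 * t0 + w1 * t1 thanks to linear filtering.
function mergedTap(w0: number, w1: number): { offset: number; weight: number } {
    const weight = w0 + w1;
    return { offset: w1 / weight, weight };
}
```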

Another optimisation for the tricolor mode would be to use the WEBGL_draw_buffers extension to allow the fragment shader to write to all 3 textures (red, green, blue) at once. This would divide by 3 the number of uniform bindings, draw calls, and texture fetches in the shader.
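A hypothetical setup could look like the sketch below; the variable names (framebuffer, redTexture, etc.) are placeholders, not the project's actual code.

```typescript
// Hypothetical sketch: binding three render targets with WEBGL_draw_buffers so that the
// red, green and blue simulations could be updated in a single pass.
declare const gl: WebGLRenderingContext;     // assumed existing WebGL 1 context
declare const framebuffer: WebGLFramebuffer; // assumed existing framebuffer
declare const redTexture: WebGLTexture;      // one texture per color channel
declare const greenTexture: WebGLTexture;
declare const blueTexture: WebGLTexture;

const ext = gl.getExtension("WEBGL_draw_buffers");
if (ext !== null) {
    gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT0_WEBGL, gl.TEXTURE_2D, redTexture, 0);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT1_WEBGL, gl.TEXTURE_2D, greenTexture, 0);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, ext.COLOR_ATTACHMENT2_WEBGL, gl.TEXTURE_2D, blueTexture, 0);

    // Declare that the fragment shader writes to all three attachments at once;
    // in the shader this corresponds to gl_FragData[0..2] (GL_EXT_draw_buffers).
    ext.drawBuffersWEBGL([
        ext.COLOR_ATTACHMENT0_WEBGL,
        ext.COLOR_ATTACHMENT1_WEBGL,
        ext.COLOR_ATTACHMENT2_WEBGL,
    ]);
}
```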

Image mode

In image mode, the feed and kill rates are not uniform: they vary locally based on the source image, interpolated between 2 presets, one for white and one for black.

Illustration of interpolation values
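As an illustration of the idea, the per-pixel rates could be computed as below; the preset values are placeholders, not the ones actually used by the project.

```typescript
// Illustrative interpolation of the feed and kill rates from the local brightness of the
// source image. The "black" and "white" presets below are placeholders, not the project's values.
interface RatePreset { feed: number; kill: number; }

const blackPreset: RatePreset = { feed: 0.02, kill: 0.05 };  // hypothetical values
const whitePreset: RatePreset = { feed: 0.06, kill: 0.065 }; // hypothetical values

// brightness is expected in [0, 1] (0 = black, 1 = white)
function localRates(brightness: number): RatePreset {
    return {
        feed: blackPreset.feed + brightness * (whitePreset.feed - blackPreset.feed),
        kill: blackPreset.kill + brightness * (whitePreset.kill - blackPreset.kill),
    };
}
```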

Results

Using reaction-diffusion to approximate images is computationally expensive, but it gives results that are visually interesting. The raw output is quite trippy, but after blurring it a bit, you can see that the approximation is pretty good. In this example, the hue is a bit off, but notice how many fine details are preserved.

Bird in color mode

Left: original image. Middle: approximation with reaction-diffusion. Right: blurred version of the middle image.

Black and white

For the black and white mode, the local brightness of the source image is sampled. I used the perceived brightness, computed as: (0.21 × red) + (0.72 × green) + (0.07 × blue).
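In code, this is a simple weighted sum (a sketch, with each channel assumed to be in [0, 1]):

```typescript
// Perceived brightness of an RGB color, with each channel in [0, 1].
function perceivedBrightness(red: number, green: number, blue: number): number {
    return 0.21 * red + 0.72 * green + 0.07 * blue;
}
```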

Color

For the color mode, 3 simulations run in parallel, one for each channel. They are combined at drawing time using additive compositing.
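One way to achieve additive compositing in WebGL is additive blending, drawing the three channels one after the other into the same target. A minimal sketch, assuming an existing WebGL context:

```typescript
// Additive blending: each drawn channel is added to what is already in the framebuffer.
declare const gl: WebGLRenderingContext; // assumed existing context

gl.enable(gl.BLEND);
gl.blendFunc(gl.ONE, gl.ONE); // output = source + destination
// ...then draw the red, green and blue simulations one after the other.
```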

Bird in color mode

Here is a bird in color mode.

Bird decomposition

Here is the decomposition of the bird image.