The three methods that Celestia uses to draw stars now--points, fuzzy points, and scaled discs--all have some problems:

1. In 'fuzzy points' mode, stars appear too blurry; 'points' mode gives crisp-looking stars, but with no anti-aliasing

2. None of the modes except scaled discs draws stars over a wide range of brightnesses in such a way that stars of different brightness are obviously distinguishable; the familiar asterisms tend to get 'washed out', lost among a mass of mid-brightness stars

3. Qualitatively, the stars just don't 'pop' the way they do in a good astrophoto or when you look at the sky on a dark night; this is probably the result of 1 & 2

4. There's no documentation on how Celestia translates apparent stellar brightness into pixels

I've developed a new technique for drawing stars that addresses all of the problems above. Like almost everything in interactive graphics these days, it involves shaders.

The point spread function for stars is approximated as a Gaussian. This is integrated over the area of a pixel. Rather than treating the pixel as a square, the pixel is given a Gaussian 'response'. This could be thought of as leakage between elements of the detector, either the pixels of a CCD or the photoreceptor cells in the eye. The choice to not use square pixels is also driven by ease of implementation: the convolution of two Gaussians is another Gaussian.
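The variance-addition property can be checked numerically. This is a standalone Python sketch (not Celestia code); the sigma values are arbitrary and chosen just for illustration:

```python
import numpy as np

def gaussian(x, sigma):
    """Normalized 1D Gaussian with standard deviation sigma."""
    return np.exp(-x**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

# Sample the PSF and the Gaussian pixel response on a fine grid
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psf = gaussian(x, 1.0)      # PSF with variance 1.0
pixel = gaussian(x, 0.5)    # pixel response with variance 0.25

# Discrete convolution; the result should be a Gaussian whose
# variance is the sum of the two variances: 1.0 + 0.25 = 1.25
conv = np.convolve(psf, pixel, mode='same') * dx
variance = np.sum(x**2 * conv) * dx / (np.sum(conv) * dx)
print(variance)  # close to 1.25
```

This is why a Gaussian pixel response is so convenient: the PSF-convolved-with-pixel function stays a single Gaussian, so the shader only ever evaluates one exponential per term.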

The PSF/pixel function is summed with a second Gaussian introduced to simulate the effect of light scattering in the optical system. The aim of this is not to replicate all the flaws in a real eye or camera, but to give the appearance of brilliance for brighter stars. This effect is responsible for the halos that we see in astrophotos such as this one:

http://images.astronet.ru/pubd/2003/02/07/0001186521/orion_spinelli_c1.jpg

The expression giving the brightness of a pixel is:

b * (1/(2*pi*s^2) * exp(-r^2 / (2 * s^2)) + G * 1/(2*pi*Gs^2) * exp(-r^2 / (2 * Gs^2)))

Where:

r is the distance in pixels from the pixel center to the star center

s^2 is the variance of the convolution of the PSF and pixel function

Gs^2 is the variance of the glare Gaussian

G is the brightness of the glare Gaussian

b is the brightness of the star

Gs is set so that the glare function is much broader than the PSF; the G factor is necessary to keep the glare dim relative to the star. The brightness of the star is calculated from the magnitude of the star, the limiting magnitude, and the 'saturation magnitude'. The saturation magnitude is the point at which a star is bright enough that it is clipped to the maximum displayable brightness at the center.
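As a sanity check, the per-pixel expression above can be evaluated outside the shader. This is an illustrative Python sketch; the values for s2, Gs2, and G are made up for the example, not the ones Celestia uses:

```python
import math

def pixel_brightness(b, r, s2, Gs2, G):
    """Evaluate b * (PSF term + G * glare term) at distance r pixels from the star."""
    psf = math.exp(-r**2 / (2.0 * s2)) / (2.0 * math.pi * s2)
    glare = math.exp(-r**2 / (2.0 * Gs2)) / (2.0 * math.pi * Gs2)
    return b * (psf + G * glare)

# Near the star center, the narrow PSF term dominates...
center = pixel_brightness(1.0, 0.0, 1.0, 25.0, 0.3)
# ...while a few pixels out, only the broad, dim glare Gaussian remains
halo = pixel_brightness(1.0, 6.0, 1.0, 25.0, 0.3)
```

With these example values, at r = 6 the PSF term has fallen to around 1e-9 while the glare term is still around 1e-3: that residual glare is the halo effect described above.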

Here's the implementation in GLSL. It's a cleaned-up version of the actual code, so no guarantees that it will run as-is. First, the vertex shader:

Code: Select all

```glsl
attribute float magnitude;

uniform vec2 viewportSize;
uniform float limitingMagnitude;
uniform float saturationMagnitude;
uniform float sigma2;
uniform float glareSigma2;
uniform float glareBrightness;
uniform float exposure;

varying vec2 pointCenter;
varying vec4 color;
varying float brightness;

const float thresholdBrightness = 1.0 / 512.0;

void main()
{
    vec4 projectedPosition = gl_ModelViewProjectionMatrix * gl_Vertex;

    // Perspective projection to get the star position in normalized device coordinates
    vec2 ndcPosition = projectedPosition.xy / projectedPosition.w;

    // Convert to viewport coordinates (the same coordinate system as the gl_FragCoord register)
    pointCenter = (ndcPosition * 0.5 + vec2(0.5, 0.5)) * viewportSize;

    // Get the star color from the gl_Color attribute
    color = gl_Color;

    brightness = pow(2.512, (limitingMagnitude - saturationMagnitude) * (saturationMagnitude - magnitude));

    // Calculate the minimum size of a point large enough to contain both the glare function and PSF;
    // these functions are both infinite in extent, but we can clip them when they fall off to an
    // indiscernible level
    float r2 = -log(thresholdBrightness / brightness) * 2.0 * sigma2;
    float rG2 = -log(thresholdBrightness / (glareBrightness * brightness)) * 2.0 * glareSigma2;
    gl_PointSize = 2.0 * sqrt(max(r2, rG2));

    gl_Position = projectedPosition;
}
```
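The point-size logic in the vertex shader just solves each Gaussian for the radius at which it falls below the threshold brightness. Here it is sketched on the host side in Python, with made-up parameter values:

```python
import math

def point_size(brightness, sigma2, glare_sigma2, glare_brightness,
               threshold=1.0 / 512.0):
    """Diameter of a point sprite big enough to hold both Gaussians.

    Solves threshold = b * exp(-r^2 / (2 * sigma^2)) for r^2, once for the
    PSF and once for the glare Gaussian, and takes the larger radius.
    """
    r2 = -math.log(threshold / brightness) * 2.0 * sigma2
    rG2 = -math.log(threshold / (glare_brightness * brightness)) * 2.0 * glare_sigma2
    # If a star is so dim that a Gaussian never exceeds the threshold,
    # the corresponding squared radius comes out negative; clamp to zero.
    return 2.0 * math.sqrt(max(r2, rG2, 0.0))
```

Since the glare Gaussian is much broader, it almost always determines the sprite size, and brighter stars get larger sprites, which is what produces the distinguishable range of star sizes.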

And the pixel shader:

Code: Select all

```glsl
varying vec2 pointCenter;
varying vec4 color;
varying float brightness;

uniform float sigma2;
uniform float glareSigma2;
uniform float glareBrightness;

vec3 linearToSRGB(vec3 color); // defined below

void main()
{
    // gl_FragCoord contains the position of the pixel center in viewport coordinates.
    // Compute the distance to the center of the star.
    vec2 offset = gl_FragCoord.xy - pointCenter;
    float r2 = dot(offset, offset);

    float b = exp(-r2 / (2.0 * sigma2));
    b += glareBrightness * exp(-r2 / (2.0 * glareSigma2));

    gl_FragColor = vec4(linearToSRGB(b * color.rgb * brightness), 1.0);
}
```

There's a very important step in the pixel shader that hasn't been discussed yet: the linear to sRGB mapping. Most monitors use the sRGB color space, which has a nonlinear mapping from pixel value to the amount of light emitted at the pixel. If we want stars to look as natural as possible, we can't neglect this (we shouldn't ignore it when drawing planets, etc. either, but that's another discussion...). The function that maps from linear to sRGB colors has two parts: a linear part on the low end, and a nonlinear part on the high end. The following GLSL function handles the mapping:

Code: Select all

```glsl
vec3 linearToSRGB(vec3 color)
{
    vec3 linearPart = 12.92 * color;
    vec3 nonlinearPart = (1.0 + 0.055) * pow(color, vec3(1.0 / 2.4)) - vec3(0.055);
    // Use the linear part below the threshold, the nonlinear part above it
    return mix(nonlinearPart, linearPart, vec3(lessThan(color, vec3(0.0031308))));
}
```
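For reference, here is the same piecewise mapping as a scalar Python function, which makes it easy to check the constants outside the shader (a standalone check, not shader code):

```python
def linear_to_srgb(c):
    """Map one linear color channel in [0, 1] to its sRGB-encoded value."""
    if c < 0.0031308:
        return 12.92 * c                       # linear part, low end
    return 1.055 * c ** (1.0 / 2.4) - 0.055    # nonlinear part, high end
```

The two pieces meet almost exactly at the 0.0031308 threshold, so the encoded curve is continuous, and midtones come out brighter than their linear values, which is exactly what gets lost if the shader output is treated as if it were already sRGB.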

Newer OpenGL drivers (with appropriate hardware) offer the extension EXT_framebuffer_sRGB, which automatically converts linear pixel shader output values to sRGB. It also fixes alpha blending: sRGB pixels are read back from the frame buffer, linearized, blended, then converted back to sRGB and written back. This should further improve the appearance of overlapping stars--the Alpha Centauri system looks noticeably unrealistic when the A and B stars are blended in nonlinear space.

--Chris