Here are a few images from my real-time raytracer (taken on a 900 MHz Athlon):
It supports temporal supersampling: only a fraction of the pixels are traced in any given frame, so the image renders at interactive rates with degraded quality while the camera is moving, but it converges to the full-quality image if the camera is left alone for a second or so (this was not enabled for these pictures).
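As a rough illustration of the idea (a minimal sketch, not the renderer's actual code), the loop below traces only one interleaved subset of the pixels each frame, so a static camera converges to a fully traced image after a few frames; `trace_pixel()` and the framebuffer layout here are placeholders.

```cpp
// Sketch of temporal supersampling: trace 1/PHASES of the pixels per frame
// in an interleaved pattern; untouched pixels keep their previous value.
#include <vector>

struct Color { float r, g, b; };

constexpr int WIDTH = 320, HEIGHT = 240;
constexpr int PHASES = 4;                    // fraction of pixels traced per frame

// Placeholder for the real per-pixel raytrace: just returns a gradient.
Color trace_pixel(int x, int y)
{
    return { float(x) / WIDTH, float(y) / HEIGHT, 0.0f };
}

std::vector<Color> framebuffer(WIDTH * HEIGHT);

void render_frame(int frame)
{
    int phase = frame % PHASES;              // which interleaved subset to trace
    for (int y = 0; y < HEIGHT; ++y)
        for (int x = 0; x < WIDTH; ++x)
            if ((x + y) % PHASES == phase)   // simple interleaving pattern
                framebuffer[y * WIDTH + x] = trace_pixel(x, y);
    // Pixels not traced this frame keep their previous value, which is stale
    // while the camera moves but correct once it has been still for PHASES frames.
}

int main()
{
    for (int frame = 0; frame < PHASES; ++frame)
        render_frame(frame);                 // one full convergence cycle
}
```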
I intend to add adaptive sub-sampling to increase speed without much loss in quality, and, as an extension, to increase the level of subdivision when the camera is still. This should give the speed advantage of sub-sampling without its usual problem in static images (missing small objects), although animation will still show aliasing when an object projects smaller than the initial grid resolution and falls entirely inside a grid cell.
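The sketch below shows one common way adaptive sub-sampling of this kind is done (again placeholder code, not this project's implementation): only the corners of each coarse grid cell are traced, cells whose corners agree are filled by interpolation, and cells whose corners differ are subdivided recursively. The small-object problem is visible in the structure itself: an object that lies entirely inside a cell is never hit by a corner ray.

```cpp
// Sketch of adaptive sub-sampling: trace cell corners, subdivide only where
// they disagree by more than a threshold, otherwise interpolate the cell.
#include <cmath>
#include <vector>

struct Color { float r, g, b; };

constexpr int WIDTH = 320, HEIGHT = 240;
std::vector<Color> framebuffer(WIDTH * HEIGHT);

// Placeholder raytrace: a simple gradient standing in for the real tracer.
Color trace_pixel(int x, int y)
{
    return { float(x) / WIDTH, float(y) / HEIGHT, 0.0f };
}

float color_diff(const Color& a, const Color& b)
{
    return std::fabs(a.r - b.r) + std::fabs(a.g - b.g) + std::fabs(a.b - b.b);
}

// Fill a cell with the average of its corner colours (a crude interpolation).
void fill_rect(int x0, int y0, int x1, int y1, const Color c[4])
{
    Color avg = { (c[0].r + c[1].r + c[2].r + c[3].r) / 4,
                  (c[0].g + c[1].g + c[2].g + c[3].g) / 4,
                  (c[0].b + c[1].b + c[2].b + c[3].b) / 4 };
    for (int y = y0; y <= y1; ++y)
        for (int x = x0; x <= x1; ++x)
            framebuffer[y * WIDTH + x] = avg;
}

// Render the cell [x0,x1]x[y0,y1], subdividing only where its corners differ.
void render_cell(int x0, int y0, int x1, int y1, float threshold)
{
    Color c[4] = { trace_pixel(x0, y0), trace_pixel(x1, y0),
                   trace_pixel(x0, y1), trace_pixel(x1, y1) };

    bool flat = color_diff(c[0], c[1]) < threshold &&
                color_diff(c[0], c[2]) < threshold &&
                color_diff(c[0], c[3]) < threshold;

    if (flat || (x1 - x0 <= 1 && y1 - y0 <= 1)) {
        fill_rect(x0, y0, x1, y1, c);        // an object hiding inside the cell is missed
    } else {
        int mx = (x0 + x1) / 2, my = (y0 + y1) / 2;
        render_cell(x0, y0, mx, my, threshold);
        render_cell(mx, y0, x1, my, threshold);
        render_cell(x0, my, mx, y1, threshold);
        render_cell(mx, my, x1, y1, threshold);
    }
}

int main()
{
    constexpr int CELL = 8;                  // initial grid resolution
    for (int y = 0; y + CELL < HEIGHT; y += CELL)
        for (int x = 0; x + CELL < WIDTH; x += CELL)
            render_cell(x, y, x + CELL, y + CELL, 0.05f);
}
```

Increasing the subdivision level when the camera is still would just mean lowering the threshold (or raising the recursion limit) over successive static frames.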
It currently supports only spheres and planes, which is another area for expansion.
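For reference, the routine below is the standard textbook ray/sphere intersection test that supporting a primitive like this boils down to (a generic version, not this project's code): solve |o + t·d − c|² = r² for the nearest non-negative t.

```cpp
// Generic ray/sphere intersection: returns the nearest hit distance t >= 0.
#include <cmath>
#include <cstdio>

struct Vec3 {
    float x, y, z;
    Vec3  operator-(const Vec3& v) const { return { x - v.x, y - v.y, z - v.z }; }
    float dot(const Vec3& v) const { return x * v.x + y * v.y + z * v.z; }
};

bool intersect_sphere(const Vec3& origin, const Vec3& dir,   // dir is normalized
                      const Vec3& center, float radius, float& t)
{
    Vec3 oc = origin - center;
    float b = oc.dot(dir);                       // half the usual quadratic b
    float c = oc.dot(oc) - radius * radius;
    float disc = b * b - c;
    if (disc < 0.0f) return false;               // ray misses the sphere
    float s = std::sqrt(disc);
    t = -b - s;                                  // nearer root first
    if (t < 0.0f) t = -b + s;                    // origin inside the sphere
    return t >= 0.0f;
}

int main()
{
    float t;
    if (intersect_sphere({0, 0, 0}, {0, 0, 1}, {0, 0, 5}, 1.0f, t))
        std::printf("hit at t = %f\n", t);       // expected: t = 4
}
```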
Note: The scene files are from a computer graphics course I saw online a long time ago, but I don’t remember exactly where they came from. If anyone has contact information, please let me know and I’ll add it here.