3D Medical Consultation

An introduction to the project is available here.

I am developing a person-portable telepresence device based on a PDA. Max Smolens wrote the original PDA client/server code, designed for the HiBall trackers. Since Max graduated, I’ve taken over the code base and converted it to track a fiducial marker with a greyscale camera, eliminating the need for the HiBall infrastructure. The two-handed interaction with the patient surrogate and PDA has proven quite useful. I also spent a few months working with inertial measurement units and learning about tracking and filtering.

Future research directions include studying the effectiveness of the user interface, determining how many tracked degrees of freedom are really necessary in this application, and making the surrogate take an active role in tracking, perhaps via five sequenced IR LEDs or magnetics.

A WMV-encoded movie of the PDA system in action is available here.

Update:

Some of the interaction research went on to be published at IEEE VR 2007.

NDS Homebrew SDK

The Nintendo DS is the newest handheld console from Nintendo, released in the U.S. in November 2004. In general, console manufacturers do not support independent development (homebrew), and the DS is no exception: everything must be reverse engineered and documented, and free headers, libraries, and tools created.

I have been involved in the community since before launch and have made significant contributions to making homebrew on the DS possible.  I co-authored the first homebrew development library, ndslib, with Jason Rogers (dovoto); it provides startup files, link scripts, and a library of functions for driving the DS hardware.  The library was renamed to libnds when it merged repositories with devkitPro, and I continue to maintain and contribute to it there.  I also maintain the NDSTech Wiki (posting as Joat), a central repository for all homebrew knowledge about the Nintendo DS.
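
For a flavor of what homebrew built against the library looks like, here is a minimal sketch that echoes the touch-screen position to the console. The calls shown (consoleDemoInit, scanKeys, touchRead) are from present-day libnds; some were spelled differently in the original ndslib.

    #include <nds.h>
    #include <stdio.h>

    int main(void) {
        consoleDemoInit();              /* text console on one screen */

        while (1) {
            swiWaitForVBlank();         /* sync to the vertical blank */
            scanKeys();                 /* latch this frame's key state */

            if (keysHeld() & KEY_TOUCH) {
                touchPosition touch;
                touchRead(&touch);      /* calibrated screen coordinates */
                iprintf("\x1b[0;0Htouch: %3d,%3d", touch.px, touch.py);
            }
        }
        return 0;
    }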

Below are a few pictures of my early development environment.


Writing out the touch screen
First homebrew use of the touch screen


My passthrough unit, which enables me to bypass the normal boot sequence and execute code in DS mode from a GBA cartridge.


First visual test. Everything before this was non-visual code dumping registers and probing memory. The count shows the number of command transactions observed on the DS bus (in hex).


Capstone project (Arcade Game System)

All CS students at the University of Missouri – Columbia are required to do a capstone project in their senior year.  My group decided to make a game console for an arcade cabinet and a game to demonstrate it.

The capstone project is broken into two semesters, with the first semester spent on the design process and on various state educational requirements that couldn’t fit into any other class, such as IP law and ethics.  The class doesn’t meet during the second semester; that time is instead allocated to group meetings.

Our team (EONGames / Arkanerdz) consisted of:

  • Adam Crume
  • Dustin Culbertson
  • Michael Noland
  • Remy Nsengiyumva
  • Tim Vette

The console was built around a BlueStreak ARM SoC running at 77 MHz with a 512×384 framebuffer (the visible display region is smaller), and it plugs into any standard arcade cabinet with a JAMMA connector.

The game we made for our system was ‘Super Magical Happy Fun Kill Time II’, a leprechaun-themed 2D shmup written in C.

I have placed the project presentation from semester 1 online, but the prototype hardware ate itself after the semester 2 final presentation when someone plugged the JAMMA connector in backwards.  The polarity key in the JAMMA connector of my old cabinet had fallen out at some point and no one noticed; thankfully, it happened after we had presented for the class!  When I get around to it, I may write an emulator to show off the game, or just buy a new SoC board.

Gallery of Art and Technology

The Gallery of Art and Technology is a permanent installation at the University of Missouri – Columbia Museum of Art and Archaeology, created by the Computer Human Interaction Laboratory under the direction of Prof. Ali Hussam.

Gallery Exhibits

It features five kiosks running various exhibits, including:

  • Lewis and Clark Adventure (Nathan Bleigh and Zach Fischer)
  • Harriet Tubman Educational Game (Justin Satterley)
  • Virtual Exhibits (Michael Noland)
  • 1804 St. Louis (Sunny Chauhan, Eli Kerry, Michael Noland, Remy Nsengiyumva, Tyler Robertson)
  • Women in History: Josephine Baker Game (?)
  • Interactive Painting: Architectural Capriccio (Zach Fischer)
  • Interactive Painting: Dido in Resolve (Nathan Bleigh)

There was also a news story about the exhibits published in the Columbia Missourian.

1804 St. Louis

We constructed a virtual environment modeled on downtown St. Louis as it was in 1804, reconstructed from materials provided by the museum. It includes a number of informative signposts and audio cues throughout the town, as well as special cue points. At each special point, the user can press a button to view modern St. Louis through a 3D panorama. I did troubleshooting and debugging on the project, as well as taking pictures on-site in St. Louis.

Virtual Exhibits

Many museums have far more artifacts than space to display them in; in this case, more are in storage than on display. To solve this problem, Adel Al-Fayez built a scanning device to digitize the artifacts and I built the virtual display case to show them. There are six screens to display individual artifacts on, and a touch screen to control the exhibit. An object can be selected and rotated or zoomed on the touch screen, or via an attached spin-wheel on the side of the pedestal. Whenever a new object is selected, a description of the object (as would be displayed on a placard in a real display case) appears on the screen. A different ‘gallery’ of six objects can be selected at any time from the list on the touch screen.

Adding new artifacts to the kiosk does require some technical expertise due to the digitizing steps (image capture is fully automatic, but removing outliers is a manual process), but once the object has been produced, it can be added to the display either via a small tool or by copy-pasting a template in the XML database and filling in the name, description, and object URL.
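
For illustration, an entry in the XML database looks roughly like the snippet below; the element and attribute names here are mine, not the kiosk’s actual schema.

    <!-- hypothetical artifact entry; the real field names may differ -->
    <artifact>
      <name>Oil lamp</name>
      <description>Placard text shown when the object is selected.</description>
      <object url="artifacts/oil_lamp.mesh"/>
    </artifact>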

This was my project from concept to finish: I did all of the specification, programming, and testing.

Virtual display case
Touch screen interface

In addition to work on the individual projects, I was responsible for the specification, ordering, and wiring of all of the kiosk controls. I also created the wrapper program used to display the interactive paintings at the gallery opening, and a VR-goggle-based panorama viewer (cut from the exhibit after our magnetic tracker died).

Viking Ship Building

I took a very interesting upper-level course on Viking History, and for my term project I researched Viking shipbuilding technology and archaeological ship finds.

The project examined the archaeological evidence of ships in Norway and Denmark and its impact on our understanding of Viking nautical technology.

Speedy Gonzales the robot

For my Building Intelligent Robots class at the University of Missouri – Columbia, we worked in teams of two to create (and recreate) a robot for a number of different tasks.

The robot’s brain is an MIT Handy Board, and we used LEGO bricks and motors to actually build the thing.  We designed him for high torque, and as a consequence he was one of the slowest robots in the class, earning him the moniker Speedy.  The sombrero came during a late-night build session after taping out some boundaries on the floor…

The final project report is available, and it includes more pictures, prose, and the source code.

Graphics research framework

This is pretty much just a laundry-list of features with some pretty pictures.


Useful stuff

  • Flexible scene-graph with multiple render targets
  • Virtual File System for seamless loading from regular directories, Quake PAK archives, or ZIP files (see the sketch after this list)
  • Limited GUI support (transparent text windows which can be dragged around or typed into, great for debugging)
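
As a rough sketch of the VFS design (the names below are assumptions, not the framework’s real API): every mount point (a plain directory, a Quake PAK, or a ZIP) exposes the same small set of callbacks, so asset loaders never know where the bytes come from.

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct VfsFile { void *state; } VfsFile;

    typedef struct VfsBackend {
        VfsFile *(*open)(const char *path);
        size_t   (*read)(VfsFile *f, void *dst, size_t bytes);
        void     (*close)(VfsFile *f);
    } VfsBackend;

    /* Plain-directory backend: a thin wrapper over stdio. A PAK or ZIP
       backend implements the same three callbacks against an archive's
       directory table instead. */
    static VfsFile *dir_open(const char *path) {
        FILE *fp = fopen(path, "rb");
        if (!fp) return NULL;
        VfsFile *f = malloc(sizeof *f);
        f->state = fp;
        return f;
    }

    static size_t dir_read(VfsFile *f, void *dst, size_t bytes) {
        return fread(dst, 1, bytes, (FILE *)f->state);
    }

    static void dir_close(VfsFile *f) {
        fclose((FILE *)f->state);
        free(f);
    }

    const VfsBackend vfs_directory_backend = { dir_open, dir_read, dir_close };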

Level formats supported

  • Quake level loading (.BSP, version 0x1D)
  • Quake II level loading (.BSP, version 0x26)
  • Quake III level loading (.BSP, version 0x2E)
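
Telling the three generations apart only takes the first eight bytes: a Quake 1 file begins directly with a little-endian version word, while Quake II and III files begin with the magic "IBSP" followed by their version. A sketch of the check (the function name is mine):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    enum BspKind { BSP_UNKNOWN, BSP_QUAKE1, BSP_QUAKE2, BSP_QUAKE3 };

    enum BspKind identify_bsp(FILE *fp) {
        uint8_t h[8];
        if (fread(h, 1, 8, fp) != 8) return BSP_UNKNOWN;

        /* Quake 1: no magic, just a little-endian version word (0x1D = 29) */
        uint32_t first = h[0] | h[1] << 8 | h[2] << 16 | (uint32_t)h[3] << 24;
        if (first == 0x1D) return BSP_QUAKE1;

        /* Quake II/III: "IBSP" magic, then the version word */
        if (memcmp(h, "IBSP", 4) == 0) {
            uint32_t ver = h[4] | h[5] << 8 | h[6] << 16 | (uint32_t)h[7] << 24;
            if (ver == 0x26) return BSP_QUAKE2;   /* 38 */
            if (ver == 0x2E) return BSP_QUAKE3;   /* 46 */
        }
        return BSP_UNKNOWN;
    }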

Model formats supported

  • Quake II model loading (.MD2)
  • Quake III model loading (.MD3)
  • Molecule loader (.M3D)
  • 3DS loading (.3DS, incomplete)

Textures/materials supported

  • JPG
  • PNG
  • TGA
  • PCX
  • BMP
  • Q2 .WAL
  • Q3 .shader

Demo effects:

  • Tunnel
  • Infinite 3D grid
  • Tie-dye (composite effect)
  • Sinus Scanlines
  • Copperbars
  • Iterated function systems with multiple morph modes and pre-defined matrices for the morphers: Binary, Coral, Crystal, Dragon, Fern, Floor, Spiral, Swirl, Tree, Triangle, and Zig-zag (a minimal chaos-game sketch follows this list)
  • Particle systems: Snow, rain, grid-bugs, explosion debris
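
As promised above, here is a minimal chaos-game sketch of one of the pre-defined morphers, the classic Barnsley fern. Each named morpher is just a different table of affine coefficients, and morphing between two of them amounts to interpolating the coefficients per frame.

    #include <stdio.h>
    #include <stdlib.h>

    /* x' = a*x + b*y + e;  y' = c*x + d*y + f;  chosen with probability p */
    typedef struct { float a, b, c, d, e, f, p; } Affine;

    /* standard Barnsley fern coefficients */
    static const Affine fern[4] = {
        {  0.00f,  0.00f,  0.00f, 0.16f, 0.0f, 0.00f, 0.01f },
        {  0.85f,  0.04f, -0.04f, 0.85f, 0.0f, 1.60f, 0.85f },
        {  0.20f, -0.26f,  0.23f, 0.22f, 0.0f, 1.60f, 0.07f },
        { -0.15f,  0.28f,  0.26f, 0.24f, 0.0f, 0.44f, 0.07f },
    };

    int main(void) {
        float x = 0.0f, y = 0.0f;
        for (int i = 0; i < 100000; ++i) {
            /* pick a map in proportion to its weight p */
            float r = rand() / (float)RAND_MAX, acc = 0.0f;
            const Affine *m = &fern[3];
            for (int j = 0; j < 4; ++j) {
                acc += fern[j].p;
                if (r <= acc) { m = &fern[j]; break; }
            }
            float nx = m->a * x + m->b * y + m->e;
            float ny = m->c * x + m->d * y + m->f;
            x = nx; y = ny;
            printf("%f %f\n", x, y);   /* plot these points to see the fern */
        }
        return 0;
    }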

Full-screen processing:

  • Radial blur
  • Roto blur
  • Motion blur
  • Glow blur

Procedural surfaces:

  • Sphere
  • Ellipsoid
  • Cylinder
  • Rectangular prism
  • Torus
  • Superellipsoid
  • Supertoroid
  • Elliptic Torus
  • PQ torus knots
  • Springs
  • Bezier curves
  • Supershapes
  • Spherical harmonics

Misc. features:

  • Texture-mapped fonts
  • For a neat effect, text strings can be bound to any of the path objects, such as the PQ torus knot.
  • Skydome (including real sun position and CIE clear/cloudy sky luminance)
  • fBm-generated heightmaps (sketched after this list)
  • Heightmap from image
  • Skybox
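
The fBm heightmaps work by summing octaves of a noise basis at increasing frequency and decreasing amplitude. A sketch, assuming a simple hash-based value noise as the basis (the framework’s actual basis function isn’t documented here):

    #include <math.h>

    /* Hash a lattice point to a repeatable pseudo-random value in [-1,1]. */
    static float hash2(int x, int y) {
        unsigned int h = (unsigned int)x * 374761393u + (unsigned int)y * 668265263u;
        h = (h ^ (h >> 13)) * 1274126177u;
        return ((h ^ (h >> 16)) & 0xFFFFu) / 32767.5f - 1.0f;
    }

    /* Bilinearly interpolated value noise with smoothstep weights. */
    static float value_noise(float x, float y) {
        int xi = (int)floorf(x), yi = (int)floorf(y);
        float tx = x - xi, ty = y - yi;
        tx = tx * tx * (3.0f - 2.0f * tx);
        ty = ty * ty * (3.0f - 2.0f * ty);
        float a = hash2(xi, yi),     b = hash2(xi + 1, yi);
        float c = hash2(xi, yi + 1), d = hash2(xi + 1, yi + 1);
        float top = a + (b - a) * tx, bot = c + (d - c) * tx;
        return top + (bot - top) * ty;
    }

    /* fBm: sum octaves, scaling frequency by the lacunarity and amplitude
       by the gain each step; the result drives the heightmap. */
    float fbm(float x, float y, int octaves, float lacunarity, float gain) {
        float sum = 0.0f, amp = 0.5f;
        for (int i = 0; i < octaves; ++i) {
            sum += amp * value_noise(x, y);
            x *= lacunarity; y *= lacunarity;
            amp *= gain;
        }
        return sum;
    }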

Generated surfaces (no parameters):

  • Pisot Triaxial
  • Triaxial Tritorus
  • Pillow Shape
  • Whitney Umbrella

These are all generated using a general-purpose parameterized-surface generator with different parameter matrices, as sketched below.
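
A sketch of the generator idea, using a torus as the sample parameterization (the names are illustrative, and the real generator reads its coefficients from the parameter matrices):

    #include <math.h>
    #include <stdio.h>

    typedef struct { float x, y, z; } Vec3;
    typedef Vec3 (*SurfaceFn)(float u, float v);

    /* One sample parameterization: a torus with major radius R, minor r. */
    static Vec3 torus(float u, float v) {
        const float R = 2.0f, r = 0.5f;
        Vec3 p;
        p.x = (R + r * cosf(v)) * cosf(u);
        p.y = (R + r * cosf(v)) * sinf(u);
        p.z = r * sinf(v);
        return p;
    }

    /* Walk the (u,v) domain on an n-by-m grid. A real mesh builder would
       also emit normals and triangle indices between adjacent rows. */
    static void tessellate(SurfaceFn f, int n, int m) {
        const float TAU = 6.2831853f;
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < m; ++j) {
                Vec3 p = f(TAU * i / n, TAU * j / m);
                printf("v %f %f %f\n", p.x, p.y, p.z);
            }
    }

    int main(void) { tessellate(torus, 64, 32); return 0; }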

Real-time ray tracer

Here are a few images from my real-time raytracer (taken on a 900 MHz Athlon):

It supports temporal supersampling, where only a fraction of the pixels are rendered in any given frame: the image renders at interactive rates with degraded quality while the camera is moving, but converges to the full-quality result if the camera is left alone for a second or so (not enabled in these pictures).
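
In sketch form (the names are mine, not the tracer’s actual code): with N interleave phases, each frame traces only every Nth pixel and keeps the rest from earlier frames, so a static camera converges to the full image after N frames.

    /* stand-in for the real primary-ray cast */
    static float trace_pixel(int x, int y) { return (float)((x ^ y) & 0xFF); }

    enum { PHASES = 4 };

    static void render_frame(int frame, int w, int h, float *image) {
        int phase = frame % PHASES;
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
                if ((x + y * w) % PHASES == phase)
                    image[y * w + x] = trace_pixel(x, y);
        /* on camera motion, the stale pixels would be reset or reprojected */
    }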

I intend to add adaptive sub-sampling to increase speed without much loss in quality; as an extension, the level of subdivision can be increased while the camera is still. This should give the speed advantage of sub-sampling without the usual problem in static images (missing small objects), although animation will still alias if an object projects to something smaller than the initial grid resolution and falls entirely inside a grid cell.
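
A sketch of that planned scheme (again with illustrative names): trace the four corners of each cell, subdivide while they disagree beyond a threshold, and fill the interior by interpolation otherwise. The interpolated cells are exactly where a small object lying wholly inside one cell would be missed.

    /* stand-in for the real primary-ray cast */
    static float trace_pixel(int x, int y) { return (float)((x * 31 + y * 17) & 0xFF); }

    static float min4(float a, float b, float c, float d) {
        float m = a; if (b < m) m = b; if (c < m) m = c; if (d < m) m = d; return m;
    }
    static float max4(float a, float b, float c, float d) {
        float m = a; if (b > m) m = b; if (c > m) m = c; if (d > m) m = d; return m;
    }

    /* Shade one size-by-size cell (size a power of two). A real version
       would cache corner samples shared between neighboring cells. */
    static void shade_cell(float *img, int w, int x0, int y0, int size) {
        float c00 = trace_pixel(x0, y0);
        float c10 = trace_pixel(x0 + size, y0);
        float c01 = trace_pixel(x0, y0 + size);
        float c11 = trace_pixel(x0 + size, y0 + size);
        float spread = max4(c00, c10, c01, c11) - min4(c00, c10, c01, c11);

        if (size > 1 && spread > 0.05f) {        /* corners disagree: subdivide */
            int half = size / 2;
            shade_cell(img, w, x0,        y0,        half);
            shade_cell(img, w, x0 + half, y0,        half);
            shade_cell(img, w, x0,        y0 + half, half);
            shade_cell(img, w, x0 + half, y0 + half, half);
            return;
        }
        for (int y = 0; y <= size; ++y)          /* corners agree: interpolate */
            for (int x = 0; x <= size; ++x) {
                float u = (float)x / size, v = (float)y / size;
                float top = c00 + (c10 - c00) * u;
                float bot = c01 + (c11 - c01) * u;
                img[(y0 + y) * w + (x0 + x)] = top + (bot - top) * v;
            }
    }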

It currently supports only spheres and planes, which is another area for expansion.

Note: The scene files are from a computer graphics course I saw online a long time ago, but I don’t remember exactly where they came from. If anyone has contact information, please let me know and I’ll add it here.