by Adam Finkelstein and Lee Markosian, Princeton Dept. of Computer Science

Summary

Modeling for 3D computer graphics is a painstaking task that is typically left to trained experts. In contrast, drawing is easy for many people. One reason 3D modeling is difficult is that existing tools sacrifice ease of use for very precise control, a precision that many tasks do not require. A second reason stems from the emphasis in computer graphics on photorealism. When the subject matter is a natural scene, realism demands a vast amount of detail. In practice, artists and illustrators nearly always choose some degree of stylization (non-photorealism) to evoke the complexity of the scene indirectly. The result can be much simpler than a literal representation, yet also more expressive. Currently no such option is available to computer graphics designers: no tools exist to model both a geometric shape and the stylized look to be applied to it.

We are developing tools for stylized content creation, via interfaces for (1) easily sketching general free-form shapes, and then (2) directly annotating those shapes with hand-drawn strokes resembling pencil, pen, pastel, or other media. The resulting 3D scene will look much like a drawing, even as it is animated or viewed interactively. Applications of this technology include technical illustration, architecture, education, virtual reality, animation, advertising, and games – any context in which dynamic imagery is used to tell stories, communicate, explain, or inform.
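To make the idea of stroke annotation concrete, the sketch below (in Python, purely illustrative and not part of any system described here) shows one simple way a hand-drawn stroke might be stored as 3D samples on a model so that it re-projects correctly as the viewpoint changes; all names, and the pinhole-camera setup, are our own assumptions.

```python
import math
from dataclasses import dataclass


@dataclass
class Stroke:
    """A hand-drawn mark stored as 3D sample points on the model surface."""
    points: list  # list of (x, y, z) tuples
    media: str    # e.g. "pencil", "pen", "pastel"


def rotate_y(p, angle):
    """Rotate a 3D point about the y-axis (a stand-in for camera motion)."""
    x, y, z = p
    c, s = math.cos(angle), math.sin(angle)
    return (c * x + s * z, y, -s * x + c * z)


def project(p, focal=2.0, cam_z=5.0):
    """Simple pinhole projection: camera at z = cam_z looking down -z."""
    x, y, z = p
    d = cam_z - z
    return (focal * x / d, focal * y / d)


def render(stroke, view_angle):
    """Re-project the 3D stroke for the current viewpoint, so the
    hand-drawn mark "sticks" to the model as the camera moves."""
    return [project(rotate_y(p, view_angle)) for p in stroke.points]


stroke = Stroke([(0.0, 0.0, 1.0), (0.5, 0.0, 1.0)], "pencil")
print(render(stroke, 0.0))   # 2D positions for the front view
print(render(stroke, 0.3))   # same stroke, rotated view
```

Because the stroke lives on the 3D surface rather than on the image plane, rotating the view moves its projected 2D path along with the model, which is what allows the scene to keep its drawn look while being animated or viewed interactively.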

Research Objectives

In this work we address the problem of content creation for 3D computer graphics applications. Despite rapid advances in computer graphics technology that now make possible the depiction of scenes of great realism and detail, the problem of creating rich new scenes from scratch remains challenging, especially for non-professionals. This is true despite much research into problems of modeling, animation and rendering, and despite many software products specializing in creating 3D content.

The problem of generating new 3D content is especially difficult when the resulting imagery must be photorealistic. Our key observation is that with techniques of illustration, much less input may be required, even to depict complex, organic scenes. The illustrator can evoke complexity with relatively few carefully stylized strokes. While artists have applied this principle for centuries, it has been largely overlooked in 3D computer graphics. Current modeling tools do not offer many options for creating stylized 3D scenes with the visual qualities of hand-drawn animations.


In addition to simplifying the input required of a designer, creating stylized scenes offers a number of other advantages. For example, illustrators use stylized imagery to focus the viewer’s attention on important features while downplaying extraneous details. Architects often prefer drawings in order to imbue a scene with a specific mood or to convey a sense that the design is unfinished. Because stylized renderings are often easier to understand, they are used in contexts ranging from medical illustrations to portraits in the Wall Street Journal. Finally, stylized imagery can be more engaging, as evidenced by its prevalence in books, games, and movies targeted at children.

A recent branch of research generally described as non-photorealistic rendering (NPR) brings principles of illustration to bear in computer graphics. For a survey of NPR methods, see the books by Gooch and Gooch and by Strothotte and Schlechtweg, or the web page maintained by Craig Reynolds. The bulk of the effort in this area has focused on rendering algorithms; however, few tools have been developed to make these algorithms available to designers.

Our hypothesis is that giving designers direct access to NPR algorithms through a drawing interface will empower them to make complex, organic scenes more easily and quickly than can be done with conventional modeling tools. We intend to verify this hypothesis by constructing a system based on these principles and demonstrating its utility in constructing natural scenes. Our goal is to test our prototypes in the hands of a variety of users: both novices and experts, both artists and non-artists. Please watch this page for links to downloadable software.

Progress

Prior to undertaking this project at Princeton, we tackled many related research problems in NPR (see our research pages here and here). Indeed, it was these previous efforts that led to the inception of this project. Since beginning this project in Autumn 2001, we have written three papers specifically tied to the objectives described here:

Software downloads


    • Our prototype software, called JOT, implements much of the functionality described in these papers and is available here.


Project funded by Intel Research Labs (AIM program)


Source: Princeton