lies in the interplay between the rhythms in the two
domains. The key to establishing this interplay is the
ability to synchronize musical and visual rhythms.
Sonnet furnishes an ideal platform upon which to build
and combine the necessary components. In particular,
interface components can be constructed (e.g. for MIDI,
data glove, dance suits) to allow data flows from external
sources to modulate visual parameters. Likewise, because
data packets can trigger component execution, these same
interface components act to trigger Imager visuals or to
synchronize activity between Imager and external sources
(e.g. of MIDI music content).
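The interface-component idea above can be illustrated with a small sketch. This is not the actual Sonnet+Imager API; the class name, packet shape, and `trigger_visual` callback are hypothetical, standing in for a component whose data packets trigger downstream Imager activity.

```python
# Hypothetical sketch of a Sonnet-style interface component that maps
# incoming MIDI note-on packets to triggers for an Imager visual.
# All names here are illustrative, not the real Sonnet+Imager API.

class MidiTriggerComponent:
    """Forwards MIDI note-on events as visual-trigger events."""

    def __init__(self, trigger_visual):
        self.trigger_visual = trigger_visual  # downstream callback

    def on_packet(self, packet):
        status, note, velocity = packet
        # Status bytes 0x90-0x9F are note-on; velocity 0 is
        # conventionally treated as note-off, so it is ignored here.
        if 0x90 <= status <= 0x9F and velocity > 0:
            # Modulate a visual parameter (e.g. brightness) by velocity.
            self.trigger_visual(note, velocity / 127.0)

# Usage: collect triggered events for inspection.
events = []
comp = MidiTriggerComponent(lambda note, level: events.append((note, level)))
comp.on_packet((0x90, 60, 127))   # note-on: triggers the visual
comp.on_packet((0x80, 60, 0))     # note-off: ignored
```

The same pattern generalizes to other external sources (data glove, dance suit): the component translates device packets into parameter modulations or trigger events on the Imager side.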
4.4. Orchestration
The orchestration of visual (and musical) segments and
modalities is central to organizing a coherent, structured
performance from basic visual forms and rhythms.
Satisfying these needs generally requires real-time
facilities. In particular, where visual rhythms are triggered
sympathetically by musical events, real-time support is
necessary for timely response. Orchestrating segments of
the visual performance, on the other hand, requires both
high-level and fine-grained sequencing support (e.g.,
“show this visual five seconds into the second section of
the piece”). Sequencing at both of these levels is
accomplished using Sonnet’s event flow and real-time
support to create and propagate events that trigger activity
at the appropriate times.
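The two-level sequencing described above (section boundaries at the high level, offsets within a section at the fine-grained level) can be sketched as a small event scheduler. The `Sequencer` class and its methods are hypothetical, standing in for Sonnet's event-flow machinery.

```python
import heapq

# Illustrative sketch of two-level sequencing: high-level section
# boundaries plus fine-grained offsets within a section. The class
# and method names are hypothetical, not Sonnet's actual API.

class Sequencer:
    def __init__(self, section_starts):
        self.section_starts = section_starts  # absolute start time of each section
        self.queue = []                       # min-heap of (time, id, action)
        self._n = 0                           # tie-breaker for equal times

    def schedule(self, section, offset, action):
        """e.g. schedule(1, 5.0, show) = 'five seconds into the second section'."""
        t = self.section_starts[section] + offset
        heapq.heappush(self.queue, (t, self._n, action))
        self._n += 1

    def advance_to(self, now):
        """Propagate every event whose trigger time has arrived."""
        fired = []
        while self.queue and self.queue[0][0] <= now:
            _, _, action = heapq.heappop(self.queue)
            fired.append(action())
        return fired

# Usage: sections start at 0s, 30s, and 75s into the piece.
seq = Sequencer(section_starts=[0.0, 30.0, 75.0])
seq.schedule(1, 5.0, lambda: "show visual A")  # 5s into section 2 = t=35s
```

In a real-time setting, `advance_to` would be driven by the performance clock, so events fire at the appropriate moments as the piece plays.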
5. Tools for Complex Compositions
Sonnet+Imager offers three additional tools for dealing
with complex compositions. The first two of these aid the
navigation of long compositions; the last allows artists to
construct performances containing arbitrarily complex
visuals, regardless of their computational complexity.
5.1. Navigating long compositions
Sonnet+Imager offers two techniques for navigating
longer compositions: variable-speed fast-forward, and
random access (seek). Both are accomplished by triggering
Imager's display refresh from a virtual clock whose rate
can be controlled using a Sonnet+Imager component.
Under ordinary circumstances (e.g. during performances),
the virtual clock runs in real-time. During editing, the
clock can be sped up (and the normal retrace interlocking
bypassed) to effect fast-forward. If the rendering is
already running at full speed, then the clock can be made
to advance virtual time in larger increments.
The second feature, seek, is trivially implemented by
setting the Imager virtual clock to the desired time
(relative to the beginning of the performance).
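The virtual-clock mechanism behind both features can be sketched as follows. The `VirtualClock` class is an illustrative assumption, not the actual implementation; it is written tick-based (advancing by explicit real-time deltas) to keep the sketch self-contained.

```python
# Hypothetical sketch of a virtual clock supporting variable-speed
# fast-forward (rate scaling) and seek (setting virtual time directly).
# Class name and API are illustrative, not Sonnet+Imager's own.

class VirtualClock:
    def __init__(self):
        self.rate = 1.0      # 1.0 = real time; >1.0 = fast-forward
        self.virtual = 0.0   # current virtual time, in seconds

    def tick(self, real_dt):
        """Advance by real_dt seconds of wall-clock time."""
        self.virtual += real_dt * self.rate
        return self.virtual

    def seek(self, t):
        """Jump to time t, relative to the start of the performance."""
        self.virtual = t

# Usage: one real second at normal speed, then fast-forward at 4x.
clock = VirtualClock()
clock.tick(1.0)      # virtual time is now 1.0
clock.rate = 4.0
clock.tick(0.5)      # 0.5 real seconds advance virtual time by 2.0
clock.seek(120.0)    # jump two minutes into the performance
```

Driving Imager's display refresh from such a clock means rendering needs no knowledge of whether it is running live, fast-forwarding, or resuming after a seek.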
Our current design for the seek feature poses a minor
problem, however. Specifically, we have found that a
significant class of visuals derives value from “artifacts”
that are produced as moving visual objects interact in
various ways, as shown in Figure 8. These artifacts are
critically dependent on the history of pixel-level drawing
operations used to render the visual objects. Moving
directly to a different temporal location therefore elides the
intermediate drawing operations, losing the artifacts.
As a result, continuing the performance from that point
will produce somewhat (or perhaps radically) different
results from those achieved when starting from the
beginning of the composition. At present, we have no
solution to this problem.