2012-2013 Interns

SPIN intern Robert Cheung

Robert Cheung - computer science and finance

Project scope: Optical Music Recognition: Applications on Mobile Devices
Mentors: Colleen Bushell and Michael Welge

SPIN intern Nikoli Dryden

Nikoli Dryden - computer science

Project scope: A Parallelized GDB-based Debugger
Mentor: Daniel LaPine
Proposal abstract: This will be a continuation of a project started in the High-Energy-Density Physics summer scholar program of Lawrence Livermore National Laboratory's Weapons and Complex Integration directorate. Therein, I developed the initial version of an open-source parallel debugger based upon GDB designed for debugging very large C++ applications running on large-scale clusters. The source is presently available at Github. As of now, the debugger has successfully been tested with the Kull ICF multi-physics package with hundreds of processes.

PGDB aims to provide a lightweight, free, open-source alternative to heavyweight debuggers such as DDT and TotalView, and in particular to be more scalable (ideally to millions of processes) while supporting full C++ debugging (which TotalView in particular does not), through a simple interface modeled on GDB's. In essence, the goal is to have "GDB on every node" in an easy-to-manage way.

The project with NCSA would aim to improve the scalability and performance of the debugger, specifically with regard to aggregating debug output into equivalence classes through tree-based reduction networks and loading debug symbols efficiently without impacting parallel filesystems. An additional goal is porting PGDB to, and testing it on, different platforms; it has so far been tested on LLNL Linux clusters running TOSS 2 and CHAOS 3. Depending upon feasibility, additional target platforms include other Linux clusters, IBM BlueGene systems (in particular BlueGene P and Q systems like Dawn and Sequoia), Cray systems such as Blue Waters, and potentially other systems.
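
To make the aggregation idea concrete, here is a minimal Python sketch (illustrative only, not PGDB's actual code or interfaces) of how per-rank GDB output might be merged into equivalence classes as it flows up a reduction tree:

```python
# Minimal sketch of aggregating debugger output into equivalence classes
# with a tree-based reduction. Illustrative only; not PGDB's implementation.
from collections import defaultdict

def merge(a, b):
    """Merge two partial aggregations mapping output text -> set of ranks."""
    merged = defaultdict(set)
    for agg in (a, b):
        for text, ranks in agg.items():
            merged[text] |= ranks
    return merged

def tree_reduce(per_rank_output):
    """Combine per-rank output pairwise, as interior nodes of a reduction
    tree would, until a single aggregation remains."""
    level = [{text: {rank}} for rank, text in per_rank_output.items()]
    while len(level) > 1:
        level = [merge(level[i], level[i + 1]) if i + 1 < len(level) else level[i]
                 for i in range(0, len(level), 2)]
    return level[0]

if __name__ == "__main__":
    outputs = {0: "SIGSEGV in foo()", 1: "SIGSEGV in foo()", 2: "stopped at bar()"}
    for text, ranks in tree_reduce(outputs).items():
        print(sorted(ranks), text)
```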

SPIN intern Kiersten Jabusch

Kiersten Jabusch - independent program of study for anatomy and computer science

Project scope: Visualizing Moving Stories
Mentor: Guy Garnett
Proposal Abstract: There is a lot of room for improvement in technology's ability to accurately read human movement. Devices like the Xbox Kinect are a useful advance, but they are inaccurate: prone to misreading body locations unless the user is precisely positioned, and easily distracted by background interference. This project will create detailed mappings of body movements using many different types of sensors, from accelerometers and visual tracking to biometric sensors, in order to read and even predict human movement accurately and in detail.

Using Laban Movement Analysis (LMA) in conjunction with machine learning algorithms such as Factored Conditional Restricted Boltzmann Machines (FCRBMs), not only can numerical data such as acceleration and directionality be analyzed, but also details that are traditionally harder to quantify, such as the style of movement and, eventually, perhaps some of the feeling behind it. A system called EffortDetect analyzes Laban Efforts using heuristic algorithms to measure the parameters of space, time, weight, and flow in each movement. It then provides confidence values for each style in each frame of the recorded movement, and an overall weighting of the confidence values yields the most likely Laban Effort for the overall movement.
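
The weighting step described above can be sketched in a few lines of Python. This is only an illustration of the idea, with hypothetical data and function names, not EffortDetect's actual code:

```python
# Illustrative sketch: given per-frame confidence values for each Laban
# Effort, weight and sum them to pick the most likely Effort overall.
EFFORTS = ["press", "punch", "slash", "wring", "dab", "flick", "float", "glide"]

def most_likely_effort(frames, weights=None):
    """frames: list of dicts mapping effort name -> confidence in [0, 1].
    weights: optional per-frame weights (defaults to uniform)."""
    if weights is None:
        weights = [1.0] * len(frames)
    totals = {effort: 0.0 for effort in EFFORTS}
    for frame, w in zip(frames, weights):
        for effort, confidence in frame.items():
            totals[effort] += w * confidence
    return max(totals, key=totals.get)

# Example: two frames that mostly look like a "punch".
frames = [
    {"punch": 0.7, "dab": 0.2, "press": 0.1},
    {"punch": 0.5, "slash": 0.3, "dab": 0.2},
]
print(most_likely_effort(frames))  # -> "punch"
```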

This in turn will provide data for the hidden layers of the FCRBMs, which will be able to use the style of movement together with the data from the sensors to analyze and eventually predict human movement far more accurately. The methodology behind EffortDetect is outlined in the paper, "Recognizing Movement Quality: An Expertise-Centered Approach for Evaluating Laban Effort," and the FCRBMs' function is outlined in "Factored Conditional Restricted Boltzmann Machines for Modeling Motion Style."

SPIN intern Jonathan Kirby

Jonathan Kirby - computer science

Project scope: Logging and Synchronization in Virtual Director
Mentor: Donna Cox
Proposal Abstract: Virtual Director is a tool used by researchers at Illinois and the Adler Planetarium to generate impressive animations for the planetarium's dome. People at both institutions create and share flight paths and shading settings for the animations in real time. VDSCAE will add functionality to compile the changes made in real time (so that if a unit disconnects from the network, it can be reconnected and brought back up to speed) and to export the changes made (so that they can be saved and do not have to be resubmitted manually).
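
The logging-and-replay mechanism described above might look roughly like the following Python sketch. The class and method names are hypothetical; this is not Virtual Director or VDSCAE code:

```python
# Sketch of a change log: record parameter changes as they happen, let a
# reconnecting client catch up from the last entry it saw, and export the
# log so changes persist. Hypothetical names and structure.
import json, time

class ChangeLog:
    def __init__(self):
        self.entries = []  # each entry: (sequence number, timestamp, key, value)

    def record(self, key, value):
        self.entries.append((len(self.entries), time.time(), key, value))

    def since(self, last_seen):
        """Changes a reconnecting client missed after sequence `last_seen`."""
        return self.entries[last_seen + 1:]

    def export(self, path):
        with open(path, "w") as f:
            json.dump(self.entries, f)

log = ChangeLog()
log.record("camera/flight_path", "orbit_01")
log.record("shading/exposure", 1.4)
print(log.since(-1))  # a freshly connected client replays everything
```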

SPIN intern Austin Lin

Austin Lin - theater

Project scope: Making Art Happen
Mentor: Donna Cox
Proposal Abstract: Today's technological world is probably best summed up by the now-famous phrase "there's an app for that". We now have so many applications (desktop, web, mobile, etc.) that end users are forced to use application after application to accomplish anything. Consider photography: we have applications for importing photos from a camera, for sharing photos, for editing photos, for printing photos, for creating movies from photos, and so on. This situation is just as true in "advanced" areas of computing like HPC and visualization. The next step in computing is not creating more systems but rather linking the systems we already have to create more powerful, more accessible, and more intelligent systems. There is no magic bullet for this problem, and solving it is certainly beyond the scope of a one-semester project or any one organization, but we can begin to create standards that move us toward more connected systems.

To this end, I propose to research and create recommendations for a control standard for interactive systems. The control standard would define how control endpoints are advertised, how messages are passed, and how control devices or applications associate with the interactive system. In my research I would focus on building upon existing standards when possible and would work closely with my mentors at NCSA as well as others in the campus community who work with interactive systems. One example of an existing standard I would utilize is Open Sound Control (OSC), a message-passing standard widely used in the performing arts world. OSC has an easy-to-understand address syntax, a large base of existing applications, and a flexible specification, making it an ideal starting place. I have also chosen to begin with OSC because of my familiarity with it and the existing interest within NCSA's Advanced Visualization Lab (AVL).
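
For readers unfamiliar with OSC, the sketch below shows what sending OSC-style control messages looks like in Python using the third-party python-osc package. The addresses and values are made-up examples, not an actual NCSA or vMaya control interface:

```python
# Send a couple of OSC control messages (pip install python-osc).
# The "/avl/vmaya/..." addresses are hypothetical example endpoints.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)           # host and port of the listening app
client.send_message("/avl/vmaya/camera/zoom", 1.5)    # one float argument
client.send_message("/avl/vmaya/galaxy/collide", [0.2, 0.8])  # multiple arguments
```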

The potential uses of such a control standard are far-reaching, but a few that are specific to NCSA include: controlling AVL's vMaya from an iPad; feeding data from a performance venue to a simulation running on HPC resources at NCSA, which in turn creates the graphics used in the performance; and synchronized showings of an interactive simulation in which control signals are fed in from multiple geographically distant locations. These are somewhat dry technical examples, but the impact the technology offers is profound. It means that the next Stephen Hawking or Carl Sagan might be inspired by making galaxies collide using an iPad at the Adler Planetarium. It means that, using the same simulation, a dance choreographer might make galaxies collide using dancers' bodies and a Kinect, inspiring an entire audience. Suddenly a single simulation or application can do so much more and affect so many more people. This is what is possible when a control standard exists for interactive applications; it allows a community of people to build systems that can connect and, in doing so, become more than the sum of their parts. It democratizes the technology, making it available to those who don't have the resources of NCSA.

SPIN intern Sanny Lin

Sanny Lin - graphic design

Project scope: Visualizing Moving Stories
Mentor: Guy Garnett
Proposal Abstract: Most consumer-facing human-computer interaction today occurs through the interface of a monitor, cold and non-intuitive. In order to live life with more meaningful, visceral experiences, interaction design must move beyond flat, two-dimensional screen displays and into rich, three-dimensional environments that respond to sound and gestures. We need experiences that capitalize on the capabilities of the human body, especially in movement, rather than simply tapping and swiping on glass surfaces.

While turning every desk, wall, and refrigerator into a touchable interface may seem like the next logical progression in the future of interfaces, I would like to explore another pathway into the future with Guy Garnett's Moving Stories project: specifically, how movement can affect aesthetic experience. EffortDetect's current method of data visualization results in web-like abstractions that are difficult to decipher without prior understanding of the nuances of Laban Movement Analysis (LMA).

The challenge lies in capturing the dancer's qualitative movement as quantitative data, translating that information objectively through visualization, and having the result experienced subjectively by the audience. I will focus on the latter half of this process: creating a visual experience that is informed by movement.

In order to make these visualizations accessible and relevant to the audience, I would first seek out discussions with dancers and LMA experts to better understand the motions and emotions behind dance movement. Using LMA as a guide for qualitative movement analysis, I will map new methods of finding meaning in gestural interaction that are applicable to an audience of non-dancers. These understandings will drive an iterative process of revisualizing the data already collected by EffortDetect to create more distinctions in visual aesthetic quality based on movement quality, focusing on the following basic movements: press, punch, slash, wring, dab, flick, float, and glide. Visualizations will be revised by studying how non-experts (in movement) perceive the emotive quality of dance through the visualizations and how their experiences match those of the dancer.

SPIN intern David Zmick

David Zmick - computer science

Project scope: A Story about Twitter and Innovation
Mentor: Colleen Bushell and Michael Welge
Proposal Abstract: Using Twitter and other social media to make predictions and business decisions could be valuable, but transforming a complex social network into useful data is not an easy task. To leverage the data, we must find a way to extract some signal from all the noise present in the social network. The exact definition of signal is project dependent, but, regardless of the project, it may be possible to find signal by uncovering how different people are connected and how information moves through these connections. With an understanding of this structure, analysts can focus specifically on areas of the network through which important information moves.

Effective visualization of a social network can help create insight into this structure. To create effective visualizations, it is useful to break the network down into "layers."

The first layer of interest is the information layer. Here, we examine how a single piece of information moves through the network. On Twitter, the unit of information is a Tweet, and the information moves when a Tweet is retweeted. This visualization creates some insight into the nature of the system and helps pinpoint Tweets that may be interesting to examine. For example, the number of retweets per hour for a Tweet is generally high at first but quickly drops off, indicating that information on Twitter moves the most when it is introduced into the system and then quickly becomes uninteresting.
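
The retweets-per-hour observation above can be illustrated with a small Python sketch that buckets retweet timestamps by the hour since the original Tweet was posted. The data and field layout are hypothetical:

```python
# Count retweets in each whole hour after the original Tweet (times in seconds).
from collections import Counter

def retweets_per_hour(tweet_time, retweet_times):
    hours = [int((t - tweet_time) // 3600) for t in retweet_times]
    return Counter(hours)

# Example: most retweets land in the first hour, then activity drops off.
tweet_time = 0
retweet_times = [120, 300, 900, 2400, 3500, 4000, 7300, 11000]
print(sorted(retweets_per_hour(tweet_time, retweet_times).items()))
# -> [(0, 5), (1, 1), (2, 1), (3, 1)]
```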

The next layer is a logical grouping of pieces of information; on Twitter, this is an account. A visualization of this layer examines which Tweets posted by someone are actually signal and which are ultimately unimportant. This exposes something about the nature of the accounts (do they post many meaningless Tweets and a few important ones, or only a small number of important Tweets?).

The final layer is the network as a whole, or some subset of it. On Twitter, this means finding a compact way to visualize many accounts simultaneously. This visualization makes it possible to select accounts of interest from a list and to find accounts with similar behavior.

With these visualizations, it should be possible to explore the Twitter network and gain insight into the structure of social networks, so that predictions can be made from more signal and less noise.
