Motion capture is commonly used in visual effects production for films and video games, and VR is often thought of as a solitary experience. But Sam and Andy Rolfes are taking these tools, developed primarily for behind-the-scenes work, and repurposing them for live, interactive, crowd-driven shows. As Team Rolfes, the brothers run a design studio that leans heavily into abstraction and symbolism to create performative art that everyone in the room can enjoy.
At a Team Rolfes show, you’ll see at least one model or dancer decked out in cutting-edge fashion laced with motion capture tech. Their movements drive digital actors on-screen as they perform in real time with live music. The scene is a bizarre, hyper-stimulating one, captured through VR-controlled cameras. Everything is live and reactive. At a recent MoMA performance, they let audience members upload photos directly from their phones, which would retexture models in an apocalyptic wasteland in real time.
Aside from their personal work creating live shows that integrate dance, fashion, and music, they also work with brands, creating visuals for Nike, Adult Swim, and Super Deluxe. The duo currently has a residency at the Superchief Gallery in Brooklyn where we caught up with them to find out more about the theatrics of bringing hardware and software together.
This interview has been lightly edited for clarity and brevity.
What’s your background, and how did you get into this?
Sam Rolfes: Andy and I both come from kind of a painting background for one. We started out screen printing and doing mixed media painting, and then that developed into semi-digital things. Our mother ran a 3D studio when we were kids, so we got introduced briefly to Blender and things like that. But we found the wireframe meshes and stuff to be a little bit inexpressive and just kind of sterile and boring. I don’t think our interest lasted more than a couple of years.
Andy Rolfes: No, because you had to move vertices around to make swords. They’re like, “Well, 3D is math.” What is this thing, and why is it so hard to make something cool?
SR: Yeah, so we kind of lost interest after that. But then, after graduating art school and being part of different music scenes and doing album art and flyers with certain amounts of digital elements — but always coming from a painting background — I found a program for 3D sculpting that was basically like modeling digital clay. That made it a lot more expressive than just the engineering feel we had been messing with.
AR: Name check. It’s ZBrush.
SR: ZBrush. So then we were able to kind of bring it back into 3D and start making things. And once you’re able to start creating assets, characters, and objects, and iterating on them in there, it’s a bit of work, but not that much, to then rig it up and start animating it. I was misusing it, which is kind of a theme that comes back continually in our work. I was using it for live shows pretty immediately.
There are a couple of things I wanted to dive into there. Andy, you said you do a lot of the modeling. What’s the breakdown of the process between you two?
SR: It trades off a bit because Andy and I have different modeling styles. His is more classical and fashion-inspired. It’s more realistic.
AR: Yeah, like one style is more romanticized, humanist, surreal stuff, and then another one is I’m making 3D brush strokes to create the form and mold. So it’s a lot more fashion-inspired.
SR: And mine’s a little bit more destroyed and abstract or kind of gnarled generally, which I’m trying to actually get better about, not fully saturating it.
Generally, with all of our stuff, the character personalities arise from the technical limitations. This suit can’t move around the stage very well, so we generally have been locked in the space. And because of the sensors, the tracking atrophies over time, so the characters get more and more gnarled. So mine are less human, generally.
Does that lead into you playing the more gnarled, less human characters?
SR: Yeah. We have this other suit that’s more accurate and can move around the stage. It doesn’t get gnarled nearly as easily. So that one is more human characters, more representative characters.
I’ll also perform the puppeted characters, which are generally the least human. We build a lot of different kinds of characters based on the input controls.
Can you play a character, record the animation, have it be live, and then go back and interact with that character yourself? Or is this kind of like one performer per character at a time?
SR: You can see there are multiple characters moving based off my motion. We can set which suit controls which characters. I’m moving around, but we’ve got it tethered so that they’re just locked at the hip. That’s partially because the sensors in the suit are so sensitive. When I first started doing it, I didn’t realize that. It would send the characters just flying into the stratosphere. So I’d walk into the next room, and there would be nobody there. And it’d be like, “What am I supposed to do?”
So in the past, all these characters were controlled by me at the same time, but we’re moving from space to space and that’s how I control the progression. I start simpler and then go bigger, then go smaller.
Now, we’ve got two suits, and they can move around a bit more. So now we can have a dialogue and have multiple pairs of characters interacting. But in terms of playing against myself, that’s what I did for Adult Swim, where I played the main characters to be recorded here, and then I put on the VR headset and, wearing the suit, I could see myself. And I was, like, playing against myself.
That’s not exactly in real time because we recorded it, and we play it back. It’s created live in a capacity, but they’re not both happening live at the same time. Theoretically, you could loop animation and play it back, which is an interesting idea. I would shy away from that personally onstage because then there becomes a question of what is live and what is recorded.
Can you talk about the hardware stuff you use and how you set that up for a show?
SR: So live, we’re now using two suits. It depends on the format, but we’ve got this Shadow mocap suit. They sponsored us, and this is the baseline one we started with. They kind of helped us get going. We just got this Xsens mocap suit, which is the one that’s able to move around the stage more. It’s a little more shielded from electromagnetic interference and stuff. And that’s for all the body control.
We also use the Vive, which is what we use for all the other spatial stuff. Increasingly, we’ll use these Vive trackers for different props onstage. But the key elements are primarily just these motion controls. I don’t use the headset at all. I don’t like it.
My issue is with the emerging experimental tech art world and its relation to increasingly vertically integrated mega-firms as the benefactor. That relationship ends up being a nearly uncritical one when it doesn’t really take into account the, I don’t know, the conflict of interest, maybe. But I haven’t seen anybody doing anti-capitalist work for Google.
Having our practice arise entirely from picking these tools specifically because they come from the body and expression, and not just because they’re the newest thing, is, I think, a big tenet of our studio.
I want to ask about the software. I see you use Unreal Engine. Was that an aesthetic choice against Unity?
SR: With that first computer scanner video, I tried both. And I realized pretty quickly that, for one, the visual scripting in Unreal is really intuitive to me because I learned Max/MSP and other visual scripting tools in art school.
AR: Well, to put it more concisely, it’s a lot faster to make something beautiful in Unreal than it is in Unity. I worked for, I think, a couple of years in Unity, and it was nice. It’s very stable, and I can do a lot with it. God knows there are a lot of indie people who use it. But just getting it to a level that’s at least good to look at, you have to buy so many plug-ins. Just to get Unity up to that level, it’s like, “Okay, I made it pretty with all this post-processing, and now I can actually do visual scripting,” which I don’t think it should take that much. I think they just got visual scripting for the materials, maybe, but you still need extra plug-ins for a lot of the stuff that’s built into Unreal. So it is that ease of use, a lot of it.
SR: Now granted, getting it to run efficiently from that point is way harder in Unreal than Unity.
You’ve done work for Adult Swim and Super Deluxe. When a project like that comes along, do they come to you and say, “Make us something”?
SR: Yeah, they come to us in different ways. For Off the Air, they’ll be like, “We have a format. Can you make a quick animation for that?”
Super Deluxe, we began a relationship with them and tested a lot of this live stuff out two years ago when we barely had a handle on how to do any of it, and they were super open to just experimenting.
Music is maybe a more typical example, where it’s like, “We’ve got a song. We’ve got a motif of some sort. Do you want to develop something for that?” But the way it’s been, my preference has often been having the musician bring us in because they feel something evocative in our work that matches theirs, rather than being just part of the Rolodex of the video commissioner, where they kind of hit up everyone when they feel like, “We need a weird video. Let’s get one of the weird guys.”
So this is all visually scripted?
SR: Yeah, all of it.
Okay. So it’s not like C# or anything like that? It’s all in the node editor?
SR: Entirely visually scripted, yeah. If I had the money and time, I’d hire my more frequent developer, Eric, who works at Meow Wolf. They’re a big bunker out in Santa Fe. They basically hired half my team. Our developer, our networks guy, our producer, all of them moved out to Santa Fe and started working with them. And we’ve done a little bit of work with them, but they’re all out there because they can have a steady income.
What are you doing while you’re at Superchief? What are you doing next?
SR: We recorded all of our motion capture for the Adult Swim video here. We rehearsed with Justin for the MoMA thing here. Superchief has been an incredibly kind benefactor by allowing us to use all the space.
We just got back from touring a couple of dates in Australia. We played the Dark Mofo festival with Marshstepper, which is this big, crazy choreographed thing with 10 people onstage and guest musicians. There was a ton of stuff going on: two stages, multiple projectors. It’s this wild thing.
Next, we’re going to Berlin for Berlin Atonal with Marshstepper again. We’re going to bring in Raymond Pinta, who we worked with previously here. It’s this gigantic vertical screen in Kraftwerk — it’s like three stories, just a gigantic vertical screen — and we’ll be on the stage below doing a live stream.
The post How Team Rolfes uses motion capture suits to create wild interactive experiences appeared first on The Verge.