Blog Archive


inside IMD
Tuesday, 08 February 2011 19:09

A sneak peek inside IMD's now-closed studio space.

 

Modern Interior Performance Capture Studio

A sneak peek inside what was the IMD studio.

It was located inside a pair of old aircraft hangars which had been transformed into one of the most beautiful Performance Capture Studios in the world.

 

Its former location: the Bay Area, California.

 

The IMD studio had a 120,000 sq ft studio area.

Designed for an intensely collaborative and creative film production studio, this massive interior project involved converting two airplane hangars on a former Coast Guard base into a full production film studio.

 

IMD's unique operational flow guided the architectural design of the studio, which was conceived around the premise of a ‘strange loop’, a term often used in film to describe movement through interconnected levels in which one finds oneself back where one started.

Through an adaptable display wall system that encompassed the various departments within a continuous loop, the artistic and production departments were able to engage and collaborate with each other in a non-linear, back-and-forth manner.

The studio spaces were built and positioned according to their specific levels of required light and sound control.

By aggregating the significant amount of required ‘dark program’ (screening rooms, color grading stations, theaters, etc.) along a centrally located core, other areas were able to receive natural light from the exterior while maintaining a flexible open floor plan.

Carve-outs through the core of the building defined shortcuts, maximized circulation efficiency and provided ‘color’ opportunities otherwise not available due to the sensitivities of computer monitors.

Additionally, strategically located curvilinear vertical nodes promoted further collaboration between levels and enhanced the experience of the considerable height of the hangar from ground level.

The flag wall system, composed primarily of 11” x 17” magnetized steel panels, dynamically communicated the unique process of performance capture while accommodating the large demand for adaptable working pin-up space. In addition, an 18% gray tri-fold curtain provided flexible horizontal light control throughout the day.

Performance Capture Studio by Lorcan O’Herlihy Architects
It certainly was a beautiful studio.
 
facial tracking
Saturday, 05 February 2011 01:49

Real-time facial tracking for mobile phones

A mobile phone using the new facial tracking software

Facial detection technology is now pretty common in digital cameras, but has also found its way into things like taps, door locks, televisions and even ice cream machines. Recently, researchers from the University of Manchester developed software that allows mobile phones to detect faces too. Unlike some devices that simply identify faces, however, phones equipped with this software will be able to continuously track faces in real time.

Camera-equipped phones using the software identify and track 22 facial features, and do so at a high speed. Not only can the technology follow subjects that are moving, but it can also do so while the phone itself is moving, even if it’s spun around laterally 360 degrees.

“Existing mobile face trackers give only an approximate position and scale of the face,” said lead researcher Dr. Phil Tresadern. “Our model runs in real-time and accurately tracks a number of landmarks on and around the face such as the eyes, nose, mouth and jaw line... this can make face recognition more accurate, and has great potential for novel ways of interacting with your phone.”
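
The Manchester tracker itself isn't something you can download, but the general idea (detect the face, then fit a set of landmarks to it on every frame) can be sketched with off-the-shelf tools. Below is a minimal illustration using OpenCV for video capture and dlib's pre-trained 68-point shape predictor; both are my own substitutions, not the MoBio software, and the model file path is just a placeholder.

```python
# Illustrative only: per-frame facial landmark tracking with OpenCV + dlib,
# not the University of Manchester / MoBio tracker described above.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# Pre-trained 68-point landmark model (placeholder path; downloaded separately).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture(0)  # webcam or phone camera stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)        # fit landmarks to this face
        for i in range(shape.num_parts):     # eyes, nose, mouth, jaw line...
            p = shape.part(i)
            cv2.circle(frame, (p.x, p.y), 2, (0, 255, 0), -1)
    cv2.imshow("landmarks", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```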

The team is now looking into uses for the technology, although they already see it replacing passwords and PINs for logging onto websites by phone. On a more fun note, they also envision it being used in apps that could apply objects to live video images of people's faces – ever wonder what you’d look like with a mustache?

The software is the result of over 20 years of research at Manchester, and is part of the EU-funded Mobile Biometrics (MoBio) project.

 
motion control for all
Thursday, 03 February 2011 21:19

PrimeSense Teams Up with ASUS to Bring Intuitive PC Entertainment to the Living Room with WAVI Xtion

 

PrimeSense and ASUS plan to bring controller-free interaction to the PC in Q2 2011

TEL AVIV, Israel & TAIPEI, Taiwan--(BUSINESS WIRE)--PrimeSense, the leader in sensing and recognition technologies, and ASUS, a leading enterprise in the new digital era, announced today that PrimeSense Immersive Natural Interaction™ solutions will be embedded in WAVI Xtion, a next generation user interface device developed by ASUS to extend PC usage to the living room. WAVI Xtion is scheduled to be commercially available during Q2 2011 and released worldwide in phases.

“Natural Interaction’s appeal to consumers means more monetization opportunities based on personalization, various branding and advertising programs inside applications.”

The WAVI Xtion media center for the PC leverages an ultra-wideband wireless link and the PrimeSense 3D sensing solution to provide controller-free interaction experiences in the living room. Users can browse multimedia content, access the Internet and social networks, and enjoy full-body interaction in a more user-friendly and natural living room experience.

In addition to WAVI Xtion, ASUS also adopts PrimeSense solutions to introduce the world’s first PC-exclusive 3D sensing professional development solution, Xtion PRO, for software developers to easily create their own gesture-based applications and software. Xtion PRO is scheduled to be commercially available in February 2011. Developers will also have the chance to sell their applications on the upcoming Xtion online Store.

PrimeSense and ASUS will introduce WAVI Xtion and Xtion PRO at the 2011 International Consumer Electronics Show (CES), January 6-9 in Las Vegas. It can be viewed in the PrimeSense booth (South Hall 4, upper level, Booth #36255) and at the ASUS suite (Venetian Ballroom, Level 3, San Polo 3501A and 3501B).

“Our agreement with ASUS for developing WAVI Xtion demonstrates that Natural Interaction technology is already mainstream,” said Inon Beracha, CEO, PrimeSense. “This user interface is a new paradigm that represents how all CE products will eventually be naturally controlled and operated.”

“ASUS combines its wireless cross-room solution with PrimeSense’s simple, intuitive, gesture-based control technology to allow users to enjoy and share PC content on TV with gestures. WAVI Xtion is the unprecedented living room experience that will revolutionize users' recreational lives,” said Kent Chien, General Manager, ASUS. “Natural Interaction’s appeal to consumers means more monetization opportunities based on personalization, various branding and advertising programs inside applications.”

PrimeSense and ASUS are also working together to promote and support the OpenNI developer community with developer kits. PrimeSense’s open, smart platform and hardware/software API lets publishers and developers easily apply 3D-sensing technology to a variety of applications and create new Natural Interaction content.
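
For a sense of what "applying 3D-sensing technology" looks like at the code level, here is a minimal sketch that reads a single depth frame from a PrimeSense-style sensor. It uses the later OpenNI 2 Python bindings (the primesense package) purely for brevity; it is my own illustration, not ASUS or PrimeSense sample code, and it is not specific to WAVI Xtion or Xtion PRO.

```python
# Illustrative sketch: reading one depth frame via the OpenNI 2 Python bindings
# ('primesense' package). Assumes the OpenNI runtime and a compatible sensor.
import numpy as np
from primesense import openni2

openni2.initialize()                  # load the OpenNI runtime
dev = openni2.Device.open_any()       # first attached PrimeSense-based sensor
depth = dev.create_depth_stream()
depth.start()

frame = depth.read_frame()
buf = frame.get_buffer_as_uint16()    # raw depth values in millimetres
depth_map = np.frombuffer(buf, dtype=np.uint16).reshape(frame.height, frame.width)
print("centre-pixel depth: %d mm" % depth_map[frame.height // 2, frame.width // 2])

depth.stop()
openni2.unload()
```

Gesture and skeleton tracking would sit on top of depth streams like this one, which is the layer the OpenNI middleware and developer kits mentioned above are meant to provide.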

PrimeSense and ASUS are at the forefront of the Natural Interaction movement for controlling digital entertainment devices in the living room - such as the TV, set-top box and PC. This next generation of user interface is bringing together the entire ecosystem of the human sensory experience and closing the gap between humans and machines.

 
How an animator learned to love motion capture
Thursday, 27 January 2011 20:59

This week marks my fifth year as a game developer. And like any anniversary, I can’t help but reminisce about times gone by. Add to that the fact that I only started animating with the computer a little while before that, and I am the definition of sentimental at this moment. But with this being my first post, I figured you might indulge my trip down memory lane as a perfect introduction to my background and way of thinking. So pull up a chair… er, pull your chair closer to your digital firebox and settle in for my technological soap opera.

My fear and distrust of technology started immediately. I went to school for hand drawn, “traditional” animation, forsaking the computer as an impure method of crafting such a magical art. 2d animation versus 3d animation was (is?) all the rage while I was in school, as many believed the influx of 3d KILLED 2d, and that to even THINK about trying computer animation was to sacrifice your warm, organic heart for a cold, lifeless toaster.


Though once you graduate, the reality of just wanting to animate instead of working at Radio Shack quickly tempers such hostility towards method. So by the time I finally realized learning how to animate in a 3d medium was necessary to find work, I just kept telling myself I had to make peace with it. But then a funny thing happened. I fell in LOVE with computer animation. I was no longer held back by my drawing ability or the time it took to process each drawing to finally see my character come to life. I could quickly add subtlety to facial expressions and held moments that breathed a sense of life into my creation more than I ever felt I could before. And ultimately, it was the same thing I was doing when I was slaving over the drawing board. I was creating movement and life by my own hands, just with different tools. And by the time I was putting a reel together to find work, adding any of my hand drawn animation didn’t even enter my mind.

So fast forward through the job hunt, the job interviews, my first studio, learning how game animation varied from film, shipping my first game, then leaving my first studio for my second and you will find me excitedly animating The Incredible Hulk. Every day I got to go to work creating over the top animations of a giant green monster smashing other brightly colored creatures. And after two years of hand keying animations in games, I was getting pretty comfortable with my skills. Sure, I still had a lot to learn, but I was as adept with a mouse and keyboard as I ever was with a pencil and paper. But of course, comfort and technology are always at odds. It was getting towards the end of the project, and there was still the little issue of 30 minutes of cinemas that needed to be done. And as much as every animator at the studio would have LOVED to hand key every one of those 30 minutes with the attention and love they deserved, the deadline meant that wasn’t going to happen. So it was decided by those leading the project that we were going to use mocap.

And there it was. The first shot in the next animation war I was about to be drafted into.

I had heard of other animator’s war stories about their experience with motion capture. Most complained about how constrictive it was, having a key on every frame and no easy way to adjust timing without losing the subtlety of the motion. But the biggest issue was that the animation wasn’t yours. It was the director’s intent with an actor’s movement. By the time the animator got it, they were just meant to be a faceless drone that smoothed it all out, added some finger movement and copied the facial animation from the video of the actor. And that is the opposite of what an animator wants to do. We got into the field to create performances and feelings of emotion that are real to anyone that sees them. We are the actors, we are the directors. We are not the janitors of other people’s movements and performances.

So, with all those thoughts running in my mind, I am given my first cinematic with mocap. And instantly, all those horror stories came true. Without a mocap studio of our own, it had to be done offsite, many states away. That meant that only a couple people could go to interact with and direct the actors. Not having the tools to process or the experience to clean up much of the data, everything was first outsourced to another studio. But as with any outsourcing, what you get back isn’t going to be able to go right into the game, so even when the cinematics were delivered, we still had to touch them up and get them into the engine. Our custom rigs and tools were not made with mocap in mind, so our workflow took a real hit. It was a nightmare and every day I swore that I would never work with mocap. Ever. Again. The war of key framed vs mocap was on, and I felt stronger about it than I had about 2d vs 3d. Probably because this directly affected how the rest of my career was going to play out. Mocap was a dirty skinjob!

But again, life has a way of finding ways to challenge those convictions. I was on the search for a new studio to hang my hat, and the one I was most interested in was heavily invested in mocap. To the point they had their own motion capture space. And while I was initially being brought on to do key framed animation, I knew I would have to deal with mocap. But be it hubris or just general excitement for the studio, I accepted. And when it came time to work with mocap data, I braced for the worst. Then a surprising thing happened. What I found was that it wasn’t as bad. Mind you, it still sucked. I was having to clean up someone just talking in front of a table, and neither the performance, movement nor direction was one I felt any form of connection to. But at least this time there were tools in place and I knew if something wasn’t working, it could be recaptured quickly or I could talk about the intent of the performance with the director or actor. It also fit the characters and the world better than in a superhero game. So both visually and workflow-wise it made sense. So I dealt with it, as the majority of my time was still key framing creatures and just generally awesome stuff. I was comfortable and content, so of course mocap had to find a new method of attack.

As that project was wrapping up, I was moved to another game with another team and lead animator. And quickly my day-to-day work was ONLY dealing with mocap data. And what was worse, it was just taking files from an excel list, with no interaction with the director or actor whatsoever. Sure, the tools were better than what I had on Hulk, but this felt just as soul crushing. My connection to the characters was non-existent and I am sure that was felt in every asset I checked in. Mocap had fired back, and it was a critical shot. So, when I again switched teams and leads, I felt like I was being medevaced to safety. And now I was part of a resistance to fight back! When my new lead said he didn’t care if we used mocap or key frame, just as long as we got the work done to the proper standard, I gave mocap the middle finger. I was going to do nothing but key frame, and my time with mocap was all but a distant memory.

But as is always the case, deadlines started to loom. And the need for mocap to get everything done became apparent. But this time, it came with a caveat. If we were going to use it, each animator was able to direct the actors. So even if we couldn’t animate it, we could still get the performance we wanted. While I figured this was mostly just a consolation prize after having my precious key frame taken away, I was on board to try my hand at directing, if for nothing other than the experience. Plus, directing actually sounded like it might be kinda fun. There was a glimmer of hope.

The first time I directed mocap, it was awkward. I wasn’t sure where I should stand, who I should talk to, how physically involved I should get with the actors, how MUCH direction I should give them, and when I was asking too much of them. I actually left that first session with a new level of dread for all things mocap. I had all but written off any sense of fun with the mocap process. But then I was delivered the data to be cleaned up. And at that moment I realized what I had always hated about mocap: the lack of ownership. Sure, the motions weren’t 100% dictated by me, but the intent was. The performance was what I would have done had I hand keyed it, and now I just got to go in and push the timing, and the poses. And that was enough to keep me going back to each session, willing to invest myself more and more into the mocap experience. By the end of the project, I was looking forward to directing the actors more than anything else I was involved with.

As that project wrapped, and we started to do R&D on the next, it was decided the art style was going to be less realistic and more stylized. You know, the type of thing animators dream about. We had visions of key frame animation and cupcakes dancing in our heads. But I had a nagging feeling that we would be missing out on an opportunity if we didn’t use mocap. The reality is, as games require more animation, the ability to keyframe all of it becomes harder and harder. And in all honesty, keying idles, turns and the sort isn’t the most fun. So I decided to work with the mocap department to try and come up with a method to integrate it with a more keyframe sensibility. The rest of the animators on the team weren’t too excited about the prospect, but since it was just R&D, and there hadn’t been much success before, I don’t think they worried much that anything would come of it.

Before we captured anything, I sat down to identify what the biggest issues the team had with mocap. At this point, I was done fighting the war, and wanted to come up with a treaty.

  1. The team did not like MotionBuilder, which was the de facto mocap program. They were comfortable with 3ds Max Character Studio and wanted that as the only program we had to use.
  2. Quick delivery of the solved data was needed so that the animators could jump right into the editing process.
  3. Proof of concept was needed that something stylized could be created, in essence not making it look like mocap, faster than just hand keying it.
  4. That complete ownership could be felt with the final product. We needed to be able to say, this is my animation from beginning to end.

The first two points were largely out of my hands. We had a mocap department that handled all the tech, and up until this point solving quickly to Character Studio biped had been an issue. But thankfully when they looked into it again, they found that with newer versions a lot of the kinks had been worked out and they could deliver as quickly as they had in previous pipelines. Problems 1 and 2 were quickly taken care of. That left 3 and 4 squarely on my shoulders. But this was the fun part, and was ultimately what I had wanted all along.

The first step was to get to know the character, like we would with anything we animated. But the important part was getting the actor in on the research soon after I had an idea of what I was looking for. As soon as I had some reference pictures, video, key frame tests, concept art and the model, I would sit down with them as I was scheduling the shoot. This gave them time to understand the character and start thinking about what was going to be needed. This also got us both talking about and explaining the purpose of each animation before anything was even recorded. By the time we did get into the studio space to record, we were both aware of what I wanted out of the character, and how we were going to do it. This allowed us to get past the functional part of the animation and drill down to the personality, timing and emotion of the action. You know, the stuff us animators go gaga over.

This also freed both myself as the director and the actor to experiment with different methods of expression in the performance. If the character is blind and meant to stumble around, we would have the actor keep their eyes closed, and put some cushions and mats throughout the space for them to bump into unknowingly. If they are meant to be a violent, loud character, getting them to scream violently would come more easily if they weren’t worried about just trying to remember the basics. And having spent all that time getting into character meant we were both comfortable physically interacting with one another when it came time to posing.

And getting that emotion out of the actor comes through in the data. The way they hold themselves, and whether they are giving it their all or holding back, can be seen when you put it onto your character in the game. And getting that performance out of the actor is incredibly rewarding as a director, because you are crafting a performance. And THAT is the true sense of ownership we had yearned for.

After that, editing the mocap to get something stylized was just a matter of playing around. I first key framed the action I wanted to get a time estimate that I could compare the mocap against. With that in place, I dove into the editing process. It was at this point I realized that essentially what I was doing was capturing video reference of what I wanted to animate, but now had the ability to manipulate that reference freely and quickly. And while I am in no way a proponent of Character Studio or biped (maya custom rigs FTW!), I found my familiarity with the tools made for quick and easy mocap editing that was FUN! It became my sloppy joe moment. See, I don’t like ketchup or mustard on their own, but when you mix them together with some magic, it becomes a delicious sloppy joe. And in the case of max and mocap being ketchup and mustard, ownership was that magic ingredient.

And here is the moment I fell in love with mocap.

 


Sure, nothing revolutionary in the final product or the work method. But it was enough that at the end of the process, I was excited about the animation and it proved that adding mocap into the workflow could personally pay off. I wasn’t saying we should use mocap for everything, just that we could benefit from using it where it excels: idles, walks, runs, transitions. It would give us a base that we could quickly push to fit our needs, and save us time, allowing us to really throw ourselves into the more elaborate, over the top animations we all love. The moment that I truly realized it was successful was when, a couple days later, a couple of the animators most against its use were in the mocap studio, by their own choice! The treaty had worked! Humans and Cylons were working together in harmony!

It took me five years to really believe what I had told myself to make peace with my conscience when I first started down this career path. 2d, 3d, keyframe or mocap doesn’t really matter. They are just tools and you don’t always get to choose which one you use. What you do get to choose is your involvement in each, because that is where you will find your happy place.

 
military motion capture tech
Wednesday, 19 January 2011 19:15

All-Seeing Blimp does real-time surveillance of the battlefield from 20,000 ft

All-Seeing Blimp Could Be Afghanistan’s Biggest Brain

This fall, there'll be a new and extremely powerful supercomputer in Afghanistan. It'll be floating 20,000 feet above the warzone, aboard a giant spy blimp that watches and listens to everything for miles around.

That is, if an ambitious, $211 million crash program called "Blue Devil" works out as planned. As of now, the airship's "freakishly large" hull - seven times the size of the Goodyear Blimp's - has yet to be put together. The Air Force hasn't settled yet on exactly which cameras and radars and listening devices will fly on board. And it's still an open question whether the military can handle all the information that the airship will be collecting from above.

U.S. planes already shoot surveillance video from on high, and listen in on Afghanistan's cell phones and walkie-talkies. But those tasks are ordinarily handled by different aircraft.  Coordinating their activities - telling the cameramen where to shoot, or the eavesdroppers where to listen - takes time. And that extra time sometimes allows adversaries to get away.

The idea behind the Blue Devil is to have up to a dozen different sensors, all flying on the same airship and talking to each other constantly. The supercomputer will crunch the data, and automatically slew the sensors in the right direction: pointing a camera at, say, the guy yapping about an upcoming ambush. The goal is to get that coordinated information down to ground troops in less than 15 seconds.

"It could change the nature of overhead surveillance," says retired Lt. Gen. David Deptula, until recently the head of the Air Force's intelligence efforts. "There's huge potential there."

The first phase of the Blue Devil project is already underway. Late last year, four modified executive planes were shipped to Afghanistan, and equipped with an array of surveillance gear.

Phase two - the airship - will be considerably bigger and more complex. The lighter-than-air craft, built by TCOM LP, will be longer than a football field at 350 feet and seven times the size of the Goodyear Blimp at 1.4 million cubic feet.

"It's freakishly large," says a source close to the program. "One of the largest airships produced since World War II."

The Air Force hopes that the extra size should give it enough fuel and helium to stay aloft for as much as a week at a time at nearly four miles up. (Most blimps float at 3,000 feet or less.) Staying up so high for so long is all but unprecedented. But it's only a third of the proposed flight time for a competing Army airship project.

The Army's "Long Endurance Multi-intelligence Vehicle" relies on a more complicated, hybrid hull. Blue Devil's complexity is in the hardware and software it'll carry aboard.

Sensors will be swapped in and out using an on-board rail system that connects pallets of electronics. Defense start-up Mav6 LLC is doing the integration work. In addition to an array of on-board listening devices, day/night video cameras, communications relays, and receivers for ground sensors, the Blue Devil airship will also carry a wide-area airborne surveillance system, or WAAS. These sensors - like the Gorgon Stare package currently being installed on Reaper spy drones - use hives of a dozen different cameras to film areas up to two-and-a-half miles around.

The footage can easily overwhelm the people who have to watch it (not to mention the military's often-fragile battlefield networks). Already, 19 analysts watch a single Predator feed. Gen. James Cartwright, vice chairman of the Joint Chiefs of Staff, told a conference in November that he'd need 2,000 analysts to process the footage collected by a single drone fitted with WAAS sensors. And that's before the upgrade to the next-generation WAAS, which uses 96 cameras and generates 274 terabytes of information every hour; it'd take 1,870 of the hard drives I'm using right now to store that much data.
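
A quick back-of-the-envelope check of those numbers (my arithmetic, not a figure from the article): 274 terabytes spread across 1,870 drives works out to roughly 150 GB per drive.

```python
# Sanity check of the storage comparison quoted above (my arithmetic, not the article's).
tb_per_hour = 274          # next-generation WAAS output per hour, as quoted
drives = 1870              # "hard drives I'm using right now", as quoted
gb_per_drive = tb_per_hour * 1000 / drives
print("implied drive size: ~%.0f GB" % gb_per_drive)   # ~147 GB
```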

That's where the supercomputer comes in. With the equivalent of 2,000 single-core servers, it can process up to 300 terabytes per hour. So instead of just sending all the footage to the infantrymen, like most of today's sensors, the airship's processors will crunch the information, adding meta tags like location and time. Ground troops will query a server on the airship, which will only broadcast the stuff they're interested in.
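
The "tag on board, send only what's asked for" idea is simple enough to sketch in a few lines. Everything below is hypothetical (invented field names and a toy query format), just to illustrate filtering footage by metadata instead of broadcasting all of it; it is not Blue Devil's actual software.

```python
# Abstract sketch of 'tag on board, broadcast only what is queried'.
# All names and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: int
    lat: float
    lon: float
    hour: int          # hour of capture (0-23)

def matches(clip, query):
    """True if a tagged clip falls inside the queried area and time window."""
    return (abs(clip.lat - query["lat"]) <= query["radius_deg"]
            and abs(clip.lon - query["lon"]) <= query["radius_deg"]
            and query["start_hour"] <= clip.hour <= query["end_hour"])

# On-board index of processed, metadata-tagged footage.
index = [Clip(1, 34.52, 69.18, 6), Clip(2, 34.60, 69.30, 14), Clip(3, 31.61, 65.71, 14)]

# A ground query: "footage near this area, this afternoon".
query = {"lat": 34.55, "lon": 69.25, "radius_deg": 0.1, "start_hour": 12, "end_hour": 18}
to_send = [c.clip_id for c in index if matches(c, query)]
print("broadcasting clips:", to_send)   # only clip 2 matches
```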

"People ask: ‘With all these sensors, how're you gonna transmit all that data down to the ground?' Well, we don't necessarily need to send it all down," Deptula says. "A potential solution is to process part of the data on-board, and only send what is of interest. That reduces the bandwidth requirements."

Provided the Air Force can get the blimp in the air, and the gadgets on the blimp. The first flight is scheduled for October 15th.

 
Twilight: Breaking Dawn
Wednesday, 19 January 2011 14:25

More info about the performance capture techniques that are going to be used in Twilight: Breaking Dawn to rapidly age Mackenzie’s character, Renesmee. The CGI team is going to use a mixture of ‘Motion Capture’ and ‘Live Action’ photography by fitting Mackenzie with an individually made skull cap fitted with a tiny camera positioned in front of her face. As in Avatar, the information collected about her facial expressions and eyes is then transmitted to computers. Mackenzie’s face will then be digitally transferred onto the face and body of a younger child to create the illusion of her character’s rapid ageing. Besides the performance capture data, which will be transferred directly to the computers, numerous reference cameras will give the digital artists multiple angles of each performance. All live action and motion capture footage will then be composited on screen.

 