Blog Archive


Press
Alice In Wonderland: Johnny Depp Talks Alice
Wednesday, 13 January 2010 17:00


Posted by Empress Eve

Tim Burton's Alice In Wonderland

Disney’s Alice In Wonderland movie, directed by Tim Burton, is coming to theaters (in 3D and IMAX 3D even) this March 5 (a very important date). And I, for one, just can’t wait!

Before I get to the news tidbits — new images from the film; preview of the film’s score; and video interview of Johnny Depp — I wanted to clear one thing up about the movie. If you’ve seen the trailer, you’ll notice that Alice is not a little girl anymore. I’ve had a lot of people ask me about this who think “Alice is too old” because they assume this is a retelling of the original Lewis Carroll tale. It’s not. In Burton’s new movie, Alice is a teenager who’s returned to the magical world she visited as a child, where she reunites with her old friends (all those beloved characters) and is on a mission to end the reign of the Red Queen. You can read the full official synopsis below.

And now the news…

Most of you probably know (or at least assumed) that Burton’s longtime collaborator Danny Elfman is providing the score for the film. Now, if you visit the film’s official site, you can hear a small snippet of the score. And from what I’m hearing, I already know this is one score I’ll have to own.

 

Next up, check out the one-minute video below of Disney’s Movie Surfers’ interview with Johnny Depp, who plays The Mad Hatter, in which he talks about the Hatter’s relationship with Alice.

Lastly, there are a few new pics: a new promotional banner for Alice In Wonderland which features all the lovely characters from the movie, plus the first official photo of The Mad Hatter from the film (meaning from a scene in the film, not a promotional image).

Enjoy!

Synopsis

From Walt Disney Pictures and visionary director Tim Burton comes an epic 3D fantasy adventure ALICE IN WONDERLAND, a magical and imaginative twist on some of the most beloved stories of all time. JOHNNY DEPP stars as the Mad Hatter and MIA WASIKOWSKA as 19-year-old Alice, who returns to the whimsical world she first encountered as a young girl, reuniting with her childhood friends: the White Rabbit, Tweedledee and Tweedledum, the Dormouse, the Caterpillar, the Cheshire Cat, and of course, the Mad Hatter. Alice embarks on a fantastical journey to find her true destiny and end the Red Queen’s reign of terror. The all-star cast also includes ANNE HATHAWAY, HELENA BONHAM CARTER and CRISPIN GLOVER.

Capturing the wonder of Lewis Carroll’s beloved “Alice’s Adventures in Wonderland” (1865) and “Through the Looking-Glass” (1871) with stunning, avant-garde visuals and the most charismatic characters in literary history, ALICE IN WONDERLAND comes to the big screen in Disney Digital 3D™ on March 5, 2010.

 

Click below to see the interview with Johnny Depp

http://www.youtube.com/watch?v=hcIwxwmO38I&feature=player_embedded

 
Robert Zemeckis Casts The Beatles For Motion Capture ‘Yellow Submarine’
Wednesday, 13 January 2010 16:56


Posted by The Movie God

Not too long ago it was reported that the king of motion capture, Robert Zemeckis, would be remaking The Beatles’ animated film, Yellow Submarine, using the technology. Now this new project has its cast of four actors who will be offering their voices to the movie as the four legendary Beatles.

 http://geeksofdoom.com/GoD/img/2010/01/2010-01-12-the_beatles.jpg

The trades are reporting that Cary Elwes, Dean Lennox Kelly, Peter Serafinowicz, and Adam Campbell are all in negotiations for the roles. Elwes is the best known of the bunch, with roles in movies like The Princess Bride and Robin Hood: Men in Tights; he would be voicing George Harrison. Peter Serafinowicz isn’t a household name here in the States, but you may know him better than you think. He’s very popular overseas and has appeared as Simon Pegg and Nick Frost’s flatmate in Shaun of the Dead, as well as on the popular series Spaced. Serafinowicz would be handling the voice of Paul McCartney. Lennox Kelly, who would play John Lennon, has appeared in The Secret Life of Words as well as the popular shows Doctor Who and Robin Hood. Finally, Campbell would be voicing Ringo Starr; he appeared in the recent TV show Harper’s Island, and is looking to find more forgiveness for previous roles in the hated Epic Movie and Date Movie.

Click on the link below to watch a couple of videos of Peter Serafinowicz showing off why he landed a role.

 http://www.youtube.com/watch?v=3x5Mqges1Zg&feature=player_embedded

http://www.youtube.com/watch?v=nQr8WzY5zUk&feature=player_embedded

The 1968 Yellow Submarine followed The Beatles as they traveled via the bright-colored submarine to Pepperland, a beautiful world inhabited by friendly, music-loving people that has tragically been taken over by the music-hating Blue Meanies, who have frozen all who live there and destroyed all of the beauty and music. The Beatles are employed to help out and use their music to try and take Pepperland back and bring joy to the people once again.

Zemeckis has made a name for himself in this arena for his dedication to pushing motion capture technology into the mainstream. His movies The Polar Express, Beowulf, and A Christmas Carol all showcased the technology’s steady advancement.

As it turns out, on The Peter Serafinowicz Show, the actor does a bit called “Ringo Remembers,” where Ringo reminisces about the good times. Thanks to Cinematical for making me aware that these exist.

 
Performance Art
Wednesday, 13 January 2010 16:44

By Alex Wiltshire

 

 

If you’d asked us back in September whether we cared about the death of the cinematic narrative in a non-linear, interactive medium, we would have probably shrugged pensively before going back to stroking our beards in front of Slither Link. It took Uncharted 2 to remind us that we don’t actually mind games that try to be films – so long as those films are pretty good. Historically, however, games have swung wildly at the lowest hanging fruits of cinema, and yet still somehow failed to reach them. Naughty Dog, meanwhile, managed to combine a tight script with snappy direction, breathing life into its CG cast with credible human expressions, transforming them into complex, sympathetic characters with emotive voice-acting. Who would have guessed this might be a winning combination?

Of course, if it was that easy, we’d have engaging cinematic drama gushing from our consoles. More often than not we have horrifying digital mannequins, jerking about like malfunctioning wind-up toys while bored stage-actors drone through the dialogue, wondering if they’ll ever play Hamlet again. Clearly there are many hurdles – technical, practical and ideological – that lie in the way of getting as complete and evocative a performance as Naughty Dog has achieved. But there are now also solutions for developers, offered by third-party companies specialising in staging CG performances, combining directorial nous, cutting-edge capture technology and animation expertise, reshaped to fit varying budgets, scheduling and technical requirements.

“I think developers are now acutely aware that they need to have believable characters that can carry a story,” says Mick Morris, MD of Audiomotion, which provides capture services for TV, film and games. “But purely from a technical point of view, it’s only in the last few years that we’ve been able to wrestle a good solution for that out of the available technology.”


Imagination Studios' John Klepper finds that a limited number of markers tends to produce the best effect: "It's mostly about placement rather than number".

Morris points to the latest generation of consoles as the leap in capacity which enabled detailed, lifelike animation in realtime. And in no other area is this as true as the face, the subtleties of which can only be conveyed through a comparatively large expenditure of a game’s technology budget – or so it is often thought.

“Game companies have been avoiding the face,” says CEO Mike Starkenburg of facial mo-cap specialist Image Metrics. “We call it ‘the illegitimate helmet’. There are guys in gasmasks but never any gas in sight. Faces are so difficult to do right that it’s risky, and in terms of the engine’s tech budget, the face can easily be three times as expensive to move as the body.

“The geometric shape of the head – that’s the mesh. The process of moving that mesh in a believable way is rigging. You could move it one vertex at a time, but that would take forever. Instead, animators create a set of controls. One, for example, will open the eye and another will close the eye. I’m simplifying – most mouth rigs have 20 different controls. The body has probably 20 controls. The face can have 60, sometimes a couple of hundred. You can get a really good facial rig with relatively few controls – equal to that of the body – but it’s an optimisation problem, and most people simply haven’t done enough facial rigs to get good at solving it. We have. Inadvertently, we’ve become world experts in facial rigging. Many animators will spend their lives animating bodies and cars and so on, but how many faces do they really do? Even a game like Grand Theft Auto IV has, like, 80 characters. We’ve done literally thousands. So we always look at the facial rigs when we walk in and try and persuade developers to adopt our strategies.”
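To make the mesh/rig/control distinction concrete, here is a minimal blendshape-style sketch in Python (purely illustrative, not Image Metrics' actual rigging pipeline): each named control stores a precomputed set of vertex offsets, and posing the face is a weighted sum of those offsets rather than moving the mesh one vertex at a time.

```python
import numpy as np

class FacialRig:
    """A toy rig: a neutral mesh plus a dictionary of named controls."""

    def __init__(self, neutral_mesh, blendshapes):
        self.neutral = neutral_mesh      # (V, 3) neutral vertex positions
        self.blendshapes = blendshapes   # {control name: (V, 3) vertex offsets}

    def pose(self, weights):
        """Deform the mesh from control weights (typically in [0, 1])."""
        mesh = self.neutral.copy()
        for name, w in weights.items():
            mesh = mesh + w * self.blendshapes[name]
        return mesh

# Hypothetical usage: animators (or solved mo-cap data) drive the controls.
vertices = 1000
neutral = np.zeros((vertices, 3))
controls = {
    "eye_close_L": np.random.randn(vertices, 3) * 0.01,  # placeholder offsets
    "jaw_open": np.random.randn(vertices, 3) * 0.02,
}
rig = FacialRig(neutral, controls)
frame = rig.pose({"eye_close_L": 1.0, "jaw_open": 0.3})
```

The control counts quoted above (around 20 for a body, 60 to a few hundred for a face) are counts of entries like these, which is why an efficient facial rig is largely an optimisation problem over how few controls can still cover the needed range of expression.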

No other area is as crucial to producing an emotional performance as the face, says John Klepper, CEO of mo-cap firm Imagination Studios: “The face is everything – we as humans look at faces from the day we are born, and we have an amazing perception of its subtleties. In order to be able to transfer those extreme subtleties to animation takes an in-depth understanding of how to rig and skin and weight a character correctly. Eighty per cent of the rigs we get sent look like hell – there’s not a lot of competence, and the difference can be huge if you have something mere millimetres out of place.”

That in-depth understanding is an elusive thing. As Klepper, Morris and Starkenburg say in near unison, the major stumbling block for developers has been in building up the required level of in-house expertise – not simply on the technical side of animation, but in understanding the vicissitudes of motion-capture itself, a process which requires not only a keen knowledge of the technology but also other skillsets: directing, acting and cinematography.

Audiomotion’s frequent collaborator, Side, specialises in delivering casting and direction for mo-cap productions. “This is what we do day in, day out for a multitude of different companies,” says Side’s managing director, Andy Emery. “Developers have to come up with a pipeline for getting these performances in-game, but that’s a problem we might have already solved on another project. We get to learn from all the different processes that people try. A developer just doesn’t have that exposure. They might do one project in two years – we’ll do 25 similar projects in that same time.”

Equally, Klepper’s experience working for Starbreeze has left him an advocate of outsourcing animation: “The lifespan of the game can be 18 to 24 months, but the time that animation is required, if it’s well planned out, may only be six or nine months. So, if you have it in-house, you crunch like crazy during that period, but what are you doing for the rest of it? I came over from LA to get Starbreeze’s internal mo-cap studio up and running. I got a very in-depth view of the pros and cons of having a large internal team. The result of it was they had to close the mo-cap studio down because it was too expensive. They still have a small team of very competent internal animators but they outsource quite extensively.”

Not all studios are beholden to outsourcing, of course, as Morris points out, citing EA’s mo-cap studio in Canada, Sony’s San Diego studio and Activision’s capture setup in Santa Monica. But, even then, he suggests that outsourcing may bring interdisciplinary expertise that would otherwise be unavailable.

“There’s always going to be that pressure to use those internal resources,” says Morris. “But the breadth of work we do, bringing experience from Hollywood blockbusters, music promos, commercials – perhaps those internal teams don’t have this much exposure to those influences.”

One other reason that the industry has struggled to squeeze scintillating performances from its CG casts is simply timing. The production methods of film clash with the fluid, ever-changing nature of game development, says Side’s Emery: “The process of capturing performances for games has often been driven by horrible phrases like ‘vertical slice’ – it doesn’t work. You need to get creative people engaged, involved and on a contract for a period of time before they move on to other projects. The technology and capture method has too often defined how we get these performances rather than being driven by the fundamentals of the performance itself.”

“Having to lock down your script is a terrifying thing for developers,” says Morris. “But there are huge benefits. If they do draw a line in the sand and work back from there in terms of rehearsing actors and having the director spend as much time with them as possible, then ultimately they’re going to get much stronger performances.”



But whether developers prioritise the performance-capture schedule or not, there are always awkward practicalities that both Side and Audiomotion are adept at working around. Emery describes full performance-capture – capturing sound, body and facial animation all at once – as the holy grail, but it is sometimes impossible to implement.

“Often, you have to record the voice, and then do the mo-cap with completely different actors,” he explains. “Take Guitar Hero — these tracks had already been recorded. So we got someone who was bloody good at miming to belt out those tracks, and we got the facial animation from those sessions. Sometimes, because of the constraints of the vocal talent, the voice work is already recorded in LA, the developer brings audio files to the mo-cap session and you’ll have actors mime to the pre-recorded stuff. It’s not the best way to do it, but if that’s the only option, we’ll take that approach.”

Nor is it ever a given that your voice-actors will actually be capable of the physical performance required for motion-capture. And, if you aim for A-list celebrity talent, it may simply be more economical to use other actors as their bodies for the lengthier mo-cap shoots.

“If you want Vin Diesel’s voice for the main character,” says Klepper, “you’re going to have to spend an unbelievable fortune to get him in a studio for three or four weeks while you shoot all the body data. Alternatively, you spend one or two days in a voiceover booth and get all the audio, then get somebody else to play the physical performance. In that situation where you have pre-existing voice, you then have the tricky task of getting the performances to match.”

Emery is sceptical of the benefits of getting in big names for this very reason. “It’s not about getting triple-A actors,” he says. “I don’t think Uncharted 2 has triple-A actors – it has great actors. We find it frustrating when what we want to do is rehearse, try a few different things, maybe go through it a bit slowly, work on the script – and that’s a very difficult sell when you hire a Hollywood star for just a few hours.”

But actors are only one of three essential ingredients, continues Emery: “There’s a misconception that a good actor will make something out of a bad script. They will make it better than a bad actor doing it, but they won’t make it. Script, casting and direction are the base tools. You can put great actors together, but without good direction they’ll soon lose their focus. If it looks like you don’t know how to get a performance out of them, they can disengage quite quickly with a project.”

This is not wisdom that has percolated down to all game developers, however, many of whom, for budgetary or aspirational reasons, repurpose members of staff from elsewhere in the company to act as directors or writers, when they are perhaps not as well qualified as they might believe. While Emery says the majority of Side’s clients now use external directors, there are still some horror stories: “Sometimes people will say: ‘Oh, well, I used to work in this drama group’. That’s fine, but it is the equivalent of me saying: ‘Well, I did a bit of claymation at school – can you let me have a go at one of your character models?’ When our directors aren’t working for us, they’re directing theatre or directing TV. They are directors – that’s what they do. We’re vocal enough that developers know the implications of using a director without directing experience, but we can’t go too much further than that.”


Image Metrics' videos show that it can match a video of a face without markers; Starkenburg says, "We have a version of Emily (above) that'll run in a game engine - we just can't find one that can do it!"

“What we see ranges from the animator writing the dialogue to productions similar to a Hollywood movie with a director and a second,” says Image Metrics’ Starkenburg. “There is a pattern developing, and the people who are most organised tend to be the people working on franchises. Storytelling in games is coming. People are doing more of it, but they’re used to being limited by the technology. I don’t think they believe they can get the subtlety in the movement, so they don’t do all the other stuff that’s required to make that a good shot. You need great writing, great casting, great acting, great directing – if you have all that and a great engine there’s no reason we can’t put that performance in a game. It’s just that all this has rarely come together for games.”

Perceptions are changing, however, and developers are slowly waking up not only to the importance of getting drama right, but also the humbling fact that they might not always be the best ones to do it. Emery explains: “We used to see a lot of work which was just a case of putting a cast together and letting the developer direct and record it, however hit or miss that may be. And that was a progression from: ‘Have you got a studio we can record in?’ It’s an important step-change we’ve seen already.”


More from Image Metrics’ face-matching videos

“People are slowly starting to realise that mo-cap can be a good and easy thing,” says Klepper when we ask if Imagination Studios finds that there’s still a need to educate developers about mo-cap’s pitfalls and solutions. “Unfortunately there’s a whole history of studios that churn out mo-cap data with the intent of producing quantity over quality. It’s tricky because mo-cap data can be a real pain in the ass. How it’s solved to the rig, what kind of rig it is, how it gets put into the engine – all kinds of things can go wrong, especially when studios just hand over data. I’d say 60 per cent of the clients who come to us say that they hate motion-capture – and then they realise pretty soon that it can actually be a pleasure.”

“It’s about changing the way people work as much as it is about changing the amount of budget they allocate,” says Emery. “To get the most out of a full performance-capture scenario, it’s about getting people involved early. It’s about being organised and considering that you might want to do your voice at the same time as your motion-capture. It’s about having had a great scriptwriter involved from an early stage.”

There are, as Audiomotion’s Morris observes, many ways to skin a cat. The choices facing developers are a little overwhelming – if and how they choose to break down the capture into multiple sessions, separating motion and audio, being the primary decision. But there are also different technologies involved in capture, each with their own champions, benefits and drawbacks.

The method most will be familiar with involves markers being placed all over a body, and a large number of static cameras tracking the movement of these markers to build up a 3D picture of the body’s movements. It’s a technique that can easily be scaled up to include multiple actors interacting on a soundstage, or scaled down to capture the minor movements of the face. Voice can also be recorded at these same sessions, permitting the full performance to be captured in a single sitting.
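As a rough illustration of what those static cameras are computing, the sketch below shows textbook two-view linear triangulation of a single marker in Python with NumPy. This is the general geometric idea only, not any particular vendor's solver; production systems add many more cameras, outlier rejection and temporal filtering.

```python
import numpy as np

def triangulate_marker(P1, P2, uv1, uv2):
    """Recover a marker's 3D position from two calibrated camera views.

    P1, P2 : 3x4 projection matrices (intrinsics times extrinsics) for each camera.
    uv1, uv2 : (u, v) pixel coordinates of the same marker seen by each camera.
    Returns the estimated 3D point in world coordinates.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The best homogeneous solution is the right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenise to (x, y, z)
```

Repeating this for every marker in every frame, across dozens of cameras, is what yields the moving 3D point cloud that is then solved onto a character rig.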



“In an ideal world, I’d go for full performance-capture every time,” repeats Emery. “But whether you are capturing that physical or facial performance or not, we all give a different performance when we’re moving. We work very hard to try and incorporate that physicality into as many performances as possible. For a long time we’ve had a lot of disparate elements put together to create a single performance, and I think we’ve all seen how they can suffer from that. Anything you can do to tie those elements together is enormously beneficial.”

But there are plenty of instances when this isn’t possible – and the video-capture technology pioneered by Image Metrics offers a useful alternative to marker-based facial-capture methods.

“You can’t get the same degree of subtlety with markers,” claims Starkenburg. “The process of putting markers on is complicated, and you can easily occlude some of the dots when you scrunch your face up – or they can fall off. Body motion-capture works really well, but the limbs are fairly large and well-spaced. Faces are so small and to capture an emotion requires so many little movements. What a lot of people do right now is mo-cap the body and hand-animate the face. But if you’re doing it by hand, it’s time-consuming. [Video-capture] is affordable and efficient. You can divide that up between really high quality or really high volume, but either way it’s much more effective than doing it by hand.

“We capture from video – any video,” Starkenburg continues. “We’ve actually had people take stuff on their iPhone and been able to use it. Most of our customers capture video while they’re doing a voiceover, but we can use any picture of a face that is relatively straight on – we can go 20 degrees in either direction. That’s where the maths comes in. We then plot changes in the values of texture and light from frame to frame and use some statistical validation to say: ‘This is a face, so that must be an eye’. The human face has a statistical average: the eyes can only be a certain amount apart, the nose will be above the mouth, the tip of the nose will be in front of the cheek, and so on. We actually started off a lot more generalised, not just dealing with faces, and have since spun out a medical company to look at X-rays. They’ll look at an image and say statistically a spine looks like this, and if they then take a long series of X-rays – say, one a month for two years – they can look at the changes in the same way we look at frames in a video, and diagnose disease.”
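Starkenburg's "statistical average" can be pictured, very loosely, as a prior over where facial landmarks are allowed to sit relative to one another. The toy Python sketch below uses invented numbers and a simple isotropic Gaussian prior; real systems fit far richer shape-and-appearance models learned from thousands of annotated faces, which is what the frame-to-frame texture and lighting analysis is validated against.

```python
import numpy as np

# Invented mean positions for four landmarks, face-centred, in pixels.
MEAN_SHAPE = np.array([
    [-30.0,   0.0],   # left eye
    [ 30.0,   0.0],   # right eye
    [  0.0, -35.0],   # nose tip
    [  0.0, -65.0],   # mouth centre
])
SHAPE_STD = 8.0  # assumed per-coordinate spread

def shape_cost(landmarks):
    """Negative log-likelihood of a landmark layout under the Gaussian prior."""
    z = (landmarks - MEAN_SHAPE) / SHAPE_STD
    return 0.5 * np.sum(z ** 2)

def most_facelike(candidate_layouts):
    """Keep whichever candidate layout the prior finds most plausible."""
    return min(candidate_layouts, key=shape_cost)

# Hypothetical candidates from a per-frame tracker: one sensible, one with the
# eyes implausibly far apart. The prior is what lets the system say
# "this is a face, so that must be an eye".
plausible = MEAN_SHAPE + np.random.randn(4, 2) * 2.0
implausible = MEAN_SHAPE * np.array([3.0, 1.0])
best = most_facelike([plausible, implausible])
```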

The quality of the results is astounding – so much so that one of Image Metrics’ demonstration videos tricks you into thinking you are watching the capture footage when in fact it is the CG model. But though, as Starkenburg says, technology which allows you to pull 3D data from a 2D image is “really very cool”, it’s clear that it is not without its drawbacks.

“The problem I have with the video-capture method is that the actor has to look straight down a camera,” says Emery. “You can have head cameras, and with a single actor that’s OK, but feeding back the comments I’ve heard from actors and directors, if you’ve got a group of actors performing an action scene or an intimate scene and they’ve all got little cameras looking back at their faces, it can be quite difficult. Interestingly, if you have markers attached to your face, you soon forget about it. It’s not the same barrier to performance. The video-capture tech works well for things like RPGs where you’ve got a vast quantity of dialogue – I can see real merit in those scenarios. And if you’re using a marker-based technology then you still have to do the eyes. Video-capture can track the eyes.”

“The thing you notice with video-capture is how much unusual eye movement there is,” says Starkenburg. “If you walk down a street and you see a girl sitting on a bench and you’re checking her out, your eyes are all over the place – they’re not doing what you think they’re doing. An animator trying to estimate that doesn’t get it right. We recently did this sports game – when the guys are running around and jumping and dunking, their faces are really expressive and it’s not something a hand-animator would think about.”

There are circumstances in which hand animation does the trick, however – particularly if you want larger-than-life results. The expressive faces of Uncharted 2’s cast, for example, were animated by hand.

“A lot of that’s to do with the uncanny valley issue,” says Richard Scott, managing director of Axis Animation, another collaborator with Audiomotion and Side. “An expression might look fine on a real person; when it’s superimposed on a computer-generated character it loses its realness. You want to be able to exaggerate. A subtle smile might not read so well on a CG character, so you want to push that smile a little bit. So that’s why people choose to refine the motion-capture or keyframe the faces from scratch. We actually chose to keyframe all of the Killzone 2 intro animation. We couldn’t get Brian Cox in a full-performance setup, or get a camera to shoot Brian while he was doing the voice – but keyframing gave us that little bit of extra flexibility to push him into hyper-realism.”


Sean Pertwee playing Killzone 2's Commander Radec

Of course, there’s one remaining question for developers: what can you afford? The studios we speak to are quick to insist on the relative good value of this investment (“It’s a cost-effective way of engaging your player – getting that extra percentage score, getting those extra column inches,” says Emery) but hiring out soundstages, directors and so on is clearly something that comes with a substantial price tag. Imagination Studios and Audiomotion are positioning themselves as high-end services, looking to the needs of big-budget titles. As such, they are reluctant to jeopardise their reputation for quality by offering cheaper options.

“Even if a client asks us for raw mo-cap data, we won’t deliver it,” says Klepper. “We don’t insist on building the rigs ourselves but we do insist on solving the data to the rigs, so that we can at least ensure that when it leaves Imagination Studios it looks pretty good. If we can tweak the rig, or build a new one, we can get it much, much better. The benefit that our clients get is trust. We’re going to listen to everything they need, we’re going to send them tests, so they know that everything is perfect before any production work gets under way.”

“We are, and always have been, about quality – so we can’t launch a budget range,” echoes Morris. “But there are other solutions. You’ve got software where you load in the audio files from your voiceover session and it generates mouth shapes (see ‘Marker my words’). That, for lots of people, gives satisfactory results, but it misses all the nuance of real performance from real actors. There are subtle things that a director can bring to a session that you’re not going to get out of a piece of software.”

But you don’t always need that level of subtlety in a game, where a mass of background characters may be suitably furnished with rudimentary face animations. Image Metrics’ video-capture technology is thus the most scalable in terms of cost, since it doesn’t require a stage, markers or an elaborate camera setup, and can be used to rapidly produce the 3D facial data for reams and reams of dialogue.

“The top tier of games and the bottom tier of film are almost the same,” says Starkenburg. “That tier is what we call the ‘hero shot’. But you don’t need that when the character is 35 feet away and facing three-quarters in the opposite direction. And there’s lots of that in games, and what we offer is less than half the price of hand animation.”

Regardless of the level of animation, the one thing that all of the studios interviewed here stress is the need for a quality script. All the technology in the world will come to nought if the dialogue is leaden and absurd.

“The reality is we aren’t telling great stories yet,” explains Starkenburg. “It’s not really about the ability to get good motion-capture, it’s about writing and direction. But it would be frustrating for a great director to come to games if he was unable to project his vision. So if our technology becomes more available, they will invest more in the front-end of actually getting these great performances.”

Be it using markers or video-capture, recorded as a full performance or in disparate sessions, the technology is there to produce the data developers need to create convincing movement. The next step is working out how to move the player.
 
Avatar's Special Effects in Overdrive (Technology)
Tuesday, 12 January 2010 16:22


James Cameron's 3D computer graphics adventure Avatar is on track to become the highest-grossing movie of all time, largely because of the film's stunning special effects; Concept Overdrive technology was at the core of many of those effects.

 

"We were approached by the Avatar production in late 2005 because we had been developing real time motion management systems," said Concept Overdrive President Steve Rosenbluth. "The Avatar workflow was all about integrating streams of motion in the production environment, both real and virtual, so our technology was a perfect match."

Six Overdrive motion management systems were used during production, and the "Synthesis" render pipeline system was developed. Concept Overdrive technology was also integrated into Virtual Camera and Camera Wheels applications.

Avatar's biomechanical "Amp-suit" was on a hydraulic motion base controlled by an "Overdrive" motion control system. Motion paths from Maya and Motion Builder were imported, accelerations modified on the Overdrive timeline, and ethernet SDK triggers from other systems synchronized the base with camera and CG departments.

Hi-Def camera telemetry was gathered by Concept Overdrive's streaming SDK in microcontrollers on the camera bus. Overdrive read camera focus, iris, zoom, interocular and convergence data, acting as a streaming telemetry server. Datasets were recorded in the Overdrive motion editor by remote computers using the ethernet SDK - thus the real world camera metadata was both distributed live to other departments and stored for post production.
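Purely as an illustration of that "distribute live and store for post" pattern, and emphatically not the Concept Overdrive SDK (whose API is proprietary), a lens-telemetry relay can be sketched as follows: each per-frame sample of focus, iris, zoom, interocular and convergence is pushed to any subscribed machines and also appended to a log for post-production.

```python
import json
import socket

def stream_and_record(samples, subscribers, log_path="camera_telemetry.jsonl"):
    """Broadcast each telemetry sample over UDP and append it to a log file.

    samples     : iterable of dicts, e.g. {"timecode": ..., "focus": ..., "iris": ...,
                  "zoom": ..., "interocular": ..., "convergence": ...}
    subscribers : list of (host, port) tuples listening for live data
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    with open(log_path, "a") as log:
        for sample in samples:
            packet = json.dumps(sample).encode()
            for address in subscribers:
                sock.sendto(packet, address)        # live distribution to departments
            log.write(json.dumps(sample) + "\n")    # stored for post-production
```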

The "Samson" helicopter was filmed on a hydraulic gimbal controlled live by the Overdrive control system. Actors and camera operators alike performed inside the helicopter while outside operators simulated a flight path with joysticks. The helicopter motion was captured so the moves could be matched in the 3D computer graphic environments.

Simulcam was the process of combining real world actors and sets with computer graphics actors and sets, giving the director the experience of shooting the scene live. This new process allowed the shot to be framed accurately and facilitated artistic decision-making "in-camera". The camera motion in Simulcam, both lens data and gross positioning from an optical mocap system, streamed through an Overdrive system. The data was sent live into the 3D CGI world via Concept Overdrive's ethernet SDK, allowing the "compositing of worlds". Automated datasets were generated for post-production, and a video application called "Vcap" was developed by Concept Overdrive for the calibration process. Industry experts are saying that Simulcam was the groundbreaking technological advance of the film.

Overdrive systems were used to motion-control camera lenses in shots which utilized previously captured Simulcam data. The continuing development of Overdrive's hard real time math engine enabled the on-the-fly lens mapping.

Concept Overdrive developed the Synthesis system for Avatar, which was the editorial pipeline of the main camera stage. The harvesting of metadata from this mocap stage was largely automated by Synthesis, which assembled assets from multiple departments after each take. The system gathered the data, modified it and rendered it into computer-game resolution video files which were "digital dailies" for the editorial department. A flexible task-sequencing architecture was designed which utilized networked resources to automate the render process. Nearly every CG shot in the film passed through Synthesis; the renders were the final editorial cut of the film before the high-resolution rendering.
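The task-sequencing idea (gather a take's assets from each department, conform them, render a game-resolution daily, and fan the work out across networked machines) can be sketched generically as below. The step names and functions are hypothetical stand-ins, not the actual Synthesis pipeline.

```python
from concurrent.futures import ThreadPoolExecutor

def gather_take_assets(take_id):
    """Collect the per-take assets (mocap, camera, audio) from each department."""
    return {"take": take_id, "assets": ["mocap", "camera", "audio"]}

def conform_assets(bundle):
    """Align the gathered data onto a common timeline and fix up metadata."""
    bundle["conformed"] = True
    return bundle

def render_daily(bundle):
    """Render a low-resolution 'digital daily' for the editorial department."""
    return "daily_{}.mov".format(bundle["take"])

def process_takes(take_ids, workers=4):
    """Run gather -> conform -> render for each take, using a pool of workers."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        gathered = pool.map(gather_take_assets, take_ids)
        conformed = pool.map(conform_assets, gathered)
        return list(pool.map(render_daily, conformed))

print(process_takes(["sc042_t3", "sc043_t1"]))
```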

The Virtual Camera, which is prominent in Avatar publicity, contains a Concept Overdrive microcontroller which handles analog and digital inputs, sending them through an Overdrive system to be streamed into the 3D computer graphics world. A camera-wheels device was also developed with an internal embedded Overdrive computer; this was fed into the CG world and used for dolly shots with frame-accurate sync. Nearly every CG shot of the film flowed through Overdrive computers and protocols at some point.

"Being a deterministic hard real time system, the Overdrive boxes were the only computers on set that could gen-lock to camera shutter and time code." Rosenbluth says, "On some shots there were four of our systems running in parallel, it was awesome to see it all happening. On-set users had no idea of how much data was flying around the room, they just took for granted that they could get the shot - which is how it should be." Concept Overdrive systems ran unattended for months at a time.

The technologies deployed on Avatar saved man-years of labor and made the virtual production run like a live-action shoot. Industry experts are calling the Avatar technology ensemble one of the greatest technological achievements of recent cinema history.

Concept Overdrive released version 1.0 of Synthesis and version 2.3 of Overdrive in the fourth quarter of 2009.

 
Sci Tech Awards Announced
Monday, 11 January 2010 23:50

Academy Presents 15 Sci-Tech Awards

By Thomas J. McLean

The Academy of Motion Picture Arts and Sciences is recognizing developments in 3D, motion capture and ambient occlusion rendering with the announcement of 15 scientific and technical achievement awards.

The 46 individuals the awards recognize will be honored at the academy’s annual Scientific and Technical Awards Presentation, which will be held at the Beverly Wilshire Hotel on Feb. 20.

The winners of the academy’s Technical Achievement Award, for which they will receive an Academy Certificate, are:

• Mark Wolforth and Tony Sedivy for their contributions to the development of the Truelight real-time 3D look-up table hardware system. Through the use of color management software and hardware, this complete system enables accurate color presentation in the digital intermediate preview process.
• Klaus Anderle, Christian Baeker and Frank Billasch for their contributions to the LUTher 3D look-up table hardware device and color management software. The LUTher hardware was the first color look-up table processor to be widely adopted by the pioneering digital intermediate facilities in the industry.


• Steve Sullivan, Kevin Wooley, Brett Allen and Colin Davidson for the development of the Imocap on-set performance capture system. Developed at Industrial Light & Magic and consisting of custom hardware and software, Imocap is an innovative system that successfully addresses the need for on-set, low-impact performance capture.


• Hayden Landis, Ken McGaugh and Hilmar Koch for advancing the technique of ambient occlusion rendering. Ambient occlusion has enabled a new level of realism in synthesized imagery and has become a standard tool for computer graphics lighting in motion pictures.
• Bjorn Heden for the design and mechanical engineering of the silent, two-stage planetary friction drive Heden Lens Motors. Solving a series of problems with one integrated mechanism, this device had an immediate and significant impact on the motion picture industry.

The winners of the academy’s Scientific and Engineering Awards, for which they will receive an Academy Plaque, are:

• Per Christensen and Michael Bunnell for the development of point-based rendering for indirect illumination and ambient occlusion.
• Richard Kirk for the overall design and development of the Truelight real-time 3D look-up table hardware device and color management software.
• Volker Massmann, Markus Hasenzahl, Klaus Anderle and Andreas Loew for the development of the Spirit 4K/2K film scanning system as used in the digital intermediate process for motion pictures. The Spirit 4K/2K has distinguished itself by incorporating a continuous-motion transport mechanism enabling full-range, high-resolution scanning at much higher frame rates than non-continuous transport scanners.
• Michael Cieslinski, Reimar Lenz and Bernd Brauner for the development of the ARRISCAN film scanner, enabling high-resolution, high-dynamic range, pin-registered film scanning for use in the digital intermediate process.
• Wolfgang Lempp, Theo Brown, Tony Sedivy and John Quartel for the development of the Northlight film scanner, which enables high-resolution, pin-registered scanning in the motion picture digital intermediate process.
• Steve Chapman, Martin Tlaskal, Darrin Smart and James Logie for their contributions to the development of the Baselight color correction system, which enables real-time digital manipulation of motion picture imagery during the digital intermediate process.
• Mark Jaszberenyi, Gyula Priskin and Tamas Perlaki for their contributions to the development of the Lustre color correction system, which enables real-time digital manipulation of motion picture imagery during the digital intermediate process.
• Brad Walker, D. Scott Dewald, Bill Werner and Greg Pettitt for their contributions furthering the design and refinement of the Texas Instruments DLP Projector, achieving a level of performance that enabled color-accurate digital intermediate previews of motion pictures.
• FUJIFILM Corporation, Ryoji Nishimura, Masaaki Miki and Youichi Hosoya for the design and development of Fujicolor ETERNA-RDI digital intermediate film, which was designed exclusively to reproduce motion picture digital masters.


• Paul Debevec, Tim Hawkins, John Monos and Mark Sagar for the design and engineering of the Light Stage capture devices and the image-based facial rendering system developed for character relighting in motion pictures.

 

The society wishes to congratulate Paul, Tim, John, Mark, Brett, Steve, Colin and Kevin on their Oscars and for their contribution to advancing the art and science of motion capture.

 
Should actors star as young versions of themselves?
Tuesday, 05 January 2010 04:05
by WorstPreviews.com Staff
When James Cameron was promoting "Avatar," he spoke about his technology with Entertainment Weekly, stating that it is now very easy to convert an actor into a CG alien creature. He added that his motion-capture tools can also be used to create human characters.

"If we had put the same energy into creating a human as we put into creating the Na'vi, it would have been 100 percent indistinguishable from reality," said Cameron. "The question is, why the hell would you do that? Why not just photograph an actor? Well let's say Clint Eastwood really wanted to do one last Dirty Harry movie, looking the way he did in 1975. He could absolutely do it now. And that would be cool."

We got a little glimpse of this in "Terminator Salvation," in which audiences got to see a younger, bulkier version of the Arnold Schwarzenegger we saw in the original "Terminator" film. But it wasn't Arnold behind the movements, and it wasn't as perfect looking as with Cameron's technology.

Question: Would you want to see Clint Eastwood star as a younger CG version of himself by doing his performance in a motion capture suit? What about Arnold in a new "Terminator" film, or Mel Gibson in "Mad Max 4"?

How about a Clint Eastwood film, with another actor providing the movements for the young Eastwood?
 