Dr. Of Machinima

A blog by Dr. Nemesis following the progress of Binary Picture Show's work, as well as other Machinima.

Cinema Inspiration in Machinima Technique


There are rare moments when I’m at the cinema and I’m so inspired by what I see, I try to think of ways I can incorporate such ideas in my Machinima.

In Blade 2 we saw the introduction of the L Cam. CGI shots of digital stunt men were seamlessly merged with live action shots, providing more fluid action scenes.

It’s a live action shot: Blade gets punched, sending him hurtling into the air. The action slows down and he comes so close to the camera (he’s now the CGI Blade) that we can see the sunshades on his head wobble a little. He smacks into the wall, and the live action Blade lands on the ground.

Traditionally this is done by cutting the CGI and live action shots together, but the L Cam technique allowed it to be done in just one shot! Apparently the L stands for “liberated”, and as far as Machinima goes we’ve almost ALWAYS had a liberated camera. The problem for me is that my mind wasn’t quite this liberated, and for good reason. When I first tried my hand at Machinima I really went to town with the disembodied camera idea. Almost every shot in my first film was a dolly; the camera was weaving through people’s legs and pipes, hovering in the sky. I was out of control! I had to learn to rein that camera in, and in doing so, perhaps some of the freedoms afforded by a virtual camera were forgotten. Until I saw Blade 2. Bouncers, had I finished it, would have had some great action sequences thanks in part to this film (I might still finish it!!).

Despite what people may think from my early films, I’ve always been a bit of a facial animation enthusiast. Back in the Quake 2 days the technical process for facial animation made it so difficult to get a good performance that by the time I came up with the idea used to animate the faces in Beast (an idea which was, and still is, unique to my knowledge) I was just happy I could have lips moving at all. The facial animation in Beast made the characters in Bouncers look like stroke victims, but it still wasn’t as good as it could have been.
My first gripe is that the characters in Beast don’t blink once in the whole film. This wasn’t impossible in CrazyTalk 4.5; it was just difficult to implement while keeping other facial expressions going.
My second gripe is that their eyeballs didn’t move much. Other than on one occasion, they always faced forward. This is where the cinema inspiration slips in again.

When The Polar Express hit cinemas, one seemingly persistent criticism of the CGI was that the characters’ eyes seemed dead, giving them a very eerie feel. In Beowulf they combated this by using electrooculography to capture the movement of the eyes exactly as the actors moved them, and the result was a much improved virtual performance.
Now, I have no access to this technique, but it made me think about what I could do to improve on Beast’s method, and luckily CrazyTalk 5 accommodated. One thing that makes eyes seem more alive is jitter. The eyeballs never rest perfectly still, a fact that makes controlling a computer via eye movement a challenge for interface designers. Again, 4.5 could have done this, but not without difficulty. Thanks to the live puppeteering in CT5 I’ll be able to make the characters blink, roll their eyes around, AND attempt to simulate a small level of retinal jitter, all in one pass.
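For the curious, the jitter idea itself is simple enough to sketch outside of any particular tool. Here's a minimal, hypothetical illustration in Python (the function name and amplitude are my own inventions, nothing from CrazyTalk): each frame, nudge the resting gaze by a tiny random offset so the eyes never sit perfectly still.

```python
import random

def jittered_gaze(base_yaw, base_pitch, amplitude=0.3):
    """Return a gaze direction (in degrees) with a tiny random offset.

    Real eyes never rest perfectly still; a sub-degree wobble on
    every frame keeps a character's stare from looking dead.
    """
    return (
        base_yaw + random.uniform(-amplitude, amplitude),
        base_pitch + random.uniform(-amplitude, amplitude),
    )

# Drive each frame of an animation from the same resting gaze:
frames = [jittered_gaze(0.0, 0.0) for _ in range(5)]
```

The amplitude matters: too large and the character looks shifty rather than alive, so a fraction of a degree is the sensible starting point.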

With my animation muscles nicely flexed, the next thing that’s really given me a brain itch is sound. As old fans of Binary Picture Show will know, I struggled with sound quality for quite a while. Now that I understand it a bit better, things have improved and I can move on to spending every other waking moment thinking about the actual sound effects. This is even more important in Digital Memory because of the main character, who, my faithful blog readers might remember, is a robot. “Should a robot really make some kind of noise every time it moves, or would that just be annoying?”, I often ask myself.
Well, Pixar’s latest gem, WALL-E, tells me yes, robots do make noise with every movement. However, I get the troubling feeling that if this isn’t done very well it would indeed descend into an assault on the ears, annoying in the same way that someone persistently zipping and unzipping their trousers in your face would be annoying.
It’s not just the sound work that was inspiring, though. I found this film even more visually appealing than Finding Nemo. As the two main characters don’t exactly have English as their first and commonly spoken language, their actions (or animations) did the bulk of the talking, and it was done superbly, especially since they weren’t humanoid in design.
Just as facial animation helps a character appear more lifelike, the sound effects given to Wall-E’s every roll forward, lift of an arm, or twitch of his eyebrows added to his presence.

If I can get anywhere near a similar result in Digital Memory I’ll be a very happy man. It’s not impossible. Phil Rice and Ricky Grove have kindly offered to help (and we all know how good they are), but the amount of sound work seems so staggering that I doubt I could let them at it in good conscience. In Beast, most of the sound effects were already in place when it went to Phil. Ricky did some clean-up (there were some clipping problems in the dialogue files, which I now know occur during the video capture process in MotionBuilder) and Phil added a few sounds, reverb effects, and so on to give it a more engrossing atmosphere. Hopefully I can do something similar for Digital Memory so that it never becomes a chore while they’re helping. It’s a daunting thought, since the sound in this is going to be so much more complex than in Beast. As always, I cross my fingers for a good outcome.
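Incidentally, spotting clipping like the kind Ricky cleaned up doesn't need fancy tools. Here's a rough, hypothetical Python sketch (the function name and threshold are my own, not from any particular audio package) that flags samples pinned at full scale in a normalized recording:

```python
def find_clipping(samples, threshold=0.999):
    """Return indices where normalized audio samples hit full scale.

    Clipped dialogue shows up as samples pinned at or near +/-1.0;
    flagging them early saves a clean-up pass during the mix.
    """
    return [i for i, s in enumerate(samples) if abs(s) >= threshold]

# A clean take produces no hits; a clipped one points at the damage:
clean = find_clipping([0.1, -0.4, 0.7])      # []
clipped = find_clipping([0.2, 1.0, -1.0, 0.5])  # [1, 2]
```

Real dialogue files would be read with an audio library and normalized first, but the detection idea is just this comparison applied to every sample.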

Totally off topic: I saw a film today, Twaddlers, made in Antics. The viewer comments on YouTube reminded me why I don’t like YouTube, and partly why I left Machinima.com. Infantile comments aside, it was fun, but it really annoyed me because of its similarity to an idea I had at university and was really looking forward to producing some day. Twaddlers could have done with a little more polish here and there, but the random humor is very funny; I loved it. Give it a look if you can. From the comments, some people get it and some just don’t.

Facial Animation in Machinima


The last test video (the Meet the Heavy spoof) went well. I definitely intend to use this method on the new ‘Bouncers‘ series, but before I commit to it entirely we’ll be making a short film that relies heavily on the technique, to see just how far we can push it and whether it’s really feasible for a runtime above one minute.
So this test project is called ‘Beast’ and it’s heavy on the dialogue.
One problem Machinima has been plagued by since its inception is the lack of emotional expression available. Facial animation was always difficult to implement, so on the whole emotional Machinima has had to rely solely on audio. Great actors and a few choice tunes were really all you could use, and you don’t need to be a veteran to know that great acting is rare.

Thankfully, there are now Half-Life 2 and UT2K4. However, many of the popular engines still have no lip syncing tools. The Sims 2 is a great example: the film dialogue has to be laid over characters who are actually moving their lips to something else (i.e. lines from the game). Because of this I’ve always thought the technique relied too much on luck, or happy accidents. Facial expressions are doable using a few tricks, but it’s not really possible to get a range of emotions as fluid as in an engine with a dedicated tool.
Another great example is Second Life. It’s highly popular for Machinima, but unlike its counterpart, There.com, it doesn’t come with lip sync abilities. And this is where it gets interesting.
It’s becoming popular, not just in Second Life but also in other engines lacking lip sync, to use CrazyTalk. This way you could potentially add lip sync to any engine, although some video editing is often required, and it can be extensive.

In Machinima’s progress, not only are we seeing better graphics as the engines improve, but also a greater ability to connect with the audience. It’s from this ‘fight for emotion’ that ‘Beast’ will be born. With any luck the facial animation will do what the voice acting cannot, as we are one of the many groups who don’t have easy access to great actors. ‘Beast’ is designed in such a way that the facial animation is not a nice extra, but rather an absolute necessity. Simply having lips move is not enough anymore, and not having them move at all…. So hopefully in a week, we’ll have some interesting results. We’ve been working on it for almost 3 weeks now so it’s very close.

Heavy Weapons Guy Knuckles


It’s been ages since I posted any progress on Bouncers, so this one is massive. I finally got round to trying an idea I had for lip syncing in the new Bouncers series, and here’s a demonstration. Those of you who have seen the “Meet the Heavy” video for Team Fortress 2 should get an extra laugh from this.

I never imagined I’d have that kind of control. The facial animation is done in iClone’s CrazyTalk, then brought into MotionBuilder. I did a few animations to make sure he wasn’t standing still, and perfecto. This is a great improvement on the old method I used back in the Quake 2 days.
And to think I had the idea when waking up one morning.

For anyone wondering why the blog is so sparse: due to an unfortunate event all the old posts are gone, so this is technically a new blog. I was able to salvage the HTML file, so my old article “Machinima’s Missing Child” has been re-posted. I may bring back other semi-important ones later.