…. Not creatively but technically.
Machinima is a technique of economy. Among its greatest strengths has been the fact that someone unversed in the art of animation or 3D modelling (or any of the other disciplines involved in CG filmmaking) can make a film nonetheless.
Programs like SketchUp make it fairly easy for someone (like myself) who's crap at set building to quickly put a room together to suit a specific need. Machinima is all about the shortcuts. But there's never quite been an equivalent for character animation. There's been no simple solution to the fact that good animation, suited to your specific project, is difficult to come by.
By now I have little doubt that everyone's aware of what the Microsoft Kinect can do when used in conjunction with specific software, and although I'm not nearly as active in the Machinima community as I used to be, I'm really excited by this.
Now it's feasible to have exactly the kind of animation you want (within reason) rather than making do with pre-existing collections, and I'm not ashamed to say I was skeptical that this would happen so soon. Thanks to companies like iPi Soft, I did believe that markerless motion capture software (which analyses video) was the way forward, but before the Kinect it was still neither cheap nor easy enough for home use. And it's getting even easier: iClone 5 was recently announced, and from what I've seen it can capture a Kinect performance directly into your scene!
So, finally, the important questions.
Now that the Kinect can feed software like these and Brekel, is there really any technical hurdle that prevents the average Machinima artist from transcending the Machinima classification? Now that we have this much control and can more effectively convey emotion, can artists stop relying on viewers' lowered expectations?
Essentially, is this the final piece we needed to appeal to the wider world without over-complicating the technical process?
I believe the answer is yes. Where’s the limit? We don’t yet know. A few years from now will we also be using this technique to capture facial animation?
The disappointing part is that I'm not seeing many widespread benefits of this new ability within the community quite yet. Then again, I'm not very up to speed, and I suppose it's still early days. For all I know, several Machinima films have already made great use of this technology.
I know that if I ever release another film, I'll strive to make great use of this. But I'd rather see the community explore it to a much greater extent than it has so far.