Tuesday, 9 April 2013

Messing around.....

Untextured raw model frame

I mess around with all sorts of shit that never, ever sees the light of day for various reasons: sometimes I don't have time to finish them, sometimes I get bored and move onto something else, sometimes work crops up and mess-around time goes bye-byes. So all sorts of model and coding / 3D tech that I mess with to fill some time never gets seen.

But as a digital sculptor at heart, I always love to see my shit moving and animated. However, there has always been el problemo... I suck at animating. So as I had a spare night while waiting for my kids to go to sleep next door to my work room (until they do, nothing 'serious' can be done anyway), I thought I'd mess around with something I mothballed ages back due to lack of time.

So here's some very early dick-arounds with something I've had in the pipe for ages that does markerless facial mocap. However, as it's for 'me', I had certain requirements that aren't available off the shelf (or not without costing the earth... I find it's easier to write my own stuff than buy shit a lot of the time). My main requirements are that the data integrity must be flawless (I have a big thing about data integrity!), it must be usable on multiple meshes with different topology and vastly different shapes, and it must get me at least 'ball park' before I do a shed load of blend shape tweaking (there are none on this at the moment).

But most of all, and the biggest problem, is that it must work while I am wearing glasses (no, I can't wear contact lenses... very long and boring reason why). So that's what I personally needed it to do... this is just an early mess-around, so don't get too excited... chances are it'll never see the light of day anyway, like a lot of my stuff doesn't.

Now for the technical stuff, for those who asked...

It was knocked together using the OpenNI SDK and a modded Kinect, with my NEX-5's output plugged in as extra data so I could pull more from the RGB than you can get with a Kinect alone. Then both the mesh that's generated and used as control for the main mesh, and the resulting keyframes, were run through some code from ReDucto (remember the experimental denoise function?). At the moment I need to come up with a new way to handle the eyes, as on this the eyes aren't tracked; they crap out a little (not a tremendous amount, but enough to be annoying).
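The ReDucto denoise code itself is private, so just to give the flavour of what a denoise pass over captured keyframes does, here's a minimal stand-in: smooth each animation channel with a small moving-average window to knock the per-frame capture jitter out. The function name and window size are made up for illustration; the real code is obviously fancier than this.

```python
# Hypothetical stand-in for a keyframe denoise pass: a plain
# moving-average smooth over one animation channel. Not the actual
# ReDucto code, just the general shape of the idea.

def denoise_channel(values, window=3):
    """Moving-average smooth of one keyframe channel.

    values: list of floats, one sample per captured frame.
    window: odd number of samples to average; 1 means no smoothing.
    """
    half = window // 2
    smoothed = []
    for i in range(len(values)):
        # Clamp the window at the start/end of the clip so edge
        # frames average over whatever neighbours actually exist.
        lo = max(0, i - half)
        hi = min(len(values), i + half + 1)
        smoothed.append(sum(values[lo:hi]) / (hi - lo))
    return smoothed

# One jittery channel in, one smoothed channel out.
noisy = [0.0, 0.9, 0.1, 1.0, 0.0]
print(denoise_channel(noisy))
```

A real pass would run this (or something cleverer, like a median or bilateral filter) over every channel of every tracked point before the data ever touches the rig.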

One idea is a sort of subtractive function between the modded Kinect data and the NEX-5 output data... but nothing set in stone. Also, at some point I'm going to set things up for 'proper' multi-camera mocap, using SynthEyes to generate the point cloud needed, as it'll give far better results. But like many things, that goes on the back burner for a while.
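Since it's only an idea at this stage, here's a rough sketch of how that subtractive approach might look: compare where each landmark sits according to the Kinect versus the NEX-5 track, and bin any sample where the two sources disagree beyond a tolerance. All the names and the tolerance value here are assumptions for illustration, not anything from the actual pipeline.

```python
# Sketch of the 'subtractive' idea: keep a landmark only when the
# modded-Kinect measurement and the NEX-5 RGB measurement roughly
# agree, and reject it when they don't. Purely illustrative.

def subtract_sources(kinect_pts, nex5_pts, tolerance=0.05):
    """Return (kept, rejected) landmark indices.

    kinect_pts / nex5_pts: lists of (x, y) tuples in the same
    normalised space, same length, same landmark order.
    tolerance: max distance at which the two sources still 'agree'.
    """
    kept, rejected = [], []
    for i, ((kx, ky), (nx, ny)) in enumerate(zip(kinect_pts, nex5_pts)):
        # Squared distance between the two measurements of one landmark;
        # comparing against tolerance**2 avoids a square root.
        if (kx - nx) ** 2 + (ky - ny) ** 2 <= tolerance ** 2:
            kept.append(i)
        else:
            rejected.append(i)
    return kept, rejected

kinect = [(0.10, 0.20), (0.50, 0.50), (0.90, 0.10)]
nex5   = [(0.11, 0.21), (0.70, 0.50), (0.90, 0.11)]
print(subtract_sources(kinect, nex5))
```

In practice the rejected samples would probably get interpolated from their neighbours rather than just dropped, which is roughly why the eye tracking needs the extra source in the first place.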