MoCap

See also: [The pizoi games project] [Blender Modeling Pgm] [Doom 3] On this page: {Intro} {Sensor Stuff} {Rigging the Space} {Papers and Such} {More technology} {Links}

Intro

By attaching reflective markers (sensors) to a person (or other moving object), the motion can be digitised (captured) by a computer system. According to [wiki]: A high-speed 4-megapixel sensor costs around $1,000 USD and can run at 640,000,000 pixels per second divided by the applied resolution. By decreasing the resolution down to 640 x 480, these cameras can sample at 2,000 frames per second, but they then trade spatial resolution for temporal resolution, causing blurring or jitter which requires heavy filtering to correct. At full resolution they run at about 166 frames per second, but typically are run at 100 to 120 frames per second. A $100, low-speed 4-megapixel detector has a bandwidth of about 40,000,000 pixels per second and is unsuitable for motion capture, since the motion blur will cause errors that require filtering, giving unsatisfying results. With about 200 LED strobes synchronized to the CMOS sensor, the ease of combining a hundred dollars' worth of LEDs with a $1,000 sensor has made these systems very popular. Note that by the Nyquist-Shannon sampling theorem, the capture rate "should" be at least twice the fastest rate of any movement being captured. Also, there are "passive" and "active" markers, which either merely reflect light or actively send out a coded signal (ie, with an identifying signature), making the "data reduction" easier and less error prone.
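The arithmetic behind those frame-rate numbers is easy to check. A minimal sketch (Python), using only the bandwidth figures quoted above; the 2048 x 2048 full-frame size is an assumption, since the quote only says "4 megapixel":

BEGIN CODE (Python)
# Frame rate = sensor pixel bandwidth / pixels per frame.
def max_fps(pixel_bandwidth, width, height):
    return pixel_bandwidth / (width * height)

# Nyquist-Shannon: capture at least twice the fastest frequency in the motion.
def min_capture_rate(fastest_motion_hz):
    return 2.0 * fastest_motion_hz

hi_speed = 640_000_000   # pixels/s -- the $1,000 sensor quoted above
lo_speed = 40_000_000    # pixels/s -- the $100 detector quoted above

print(max_fps(hi_speed, 2048, 2048))  # full frame: ~153 fps, same ballpark as the ~166 quoted
print(max_fps(hi_speed, 640, 480))    # reduced frame: ~2083 fps, the "2,000 fps" trade-off
print(max_fps(lo_speed, 640, 480))    # cheap detector: ~130 fps, hence the motion blur
print(min_capture_rate(10.0))         # eg, a 10 Hz hand motion needs >= 20 samples/s
END CODE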

Sensor Stuff

There are several kinds of sensors (read "Inertial" first):

Inertial - The sensor has a motion detector built into it (technically speaking, it's an accelerometer, so it measures changes in speed and direction). These are (relatively) cheap to produce and you "just" stick them on your MoCap suit (think black leotards/body suit here). Each sensor is wired up to a data converter (transducer) which converts the analog (continuously varying) signals into digital data that a computer can easily understand. The actor dances around and the data the sensors are sensing is sent to a computer which performs the computational DATA REDUCTION. This data reduction converts position/velocity/acceleration (in 3 dimensions) into a vector graphic model. Think of the vector model as balls (joints) connected to each other via sticks (limbs). The 3D model is referred to as "6DOF" - "Six Degrees of Freedom"; ie, 3 space co-ordinates (X, Y, Z) and 3 rotation angles -- each ball moves and the limb moves with it. Data reduction involves being able to convert position/velocity into position/rotation, a path history (X/Y/Z, Vx/Vy/Vz, Ax/Ay/Az - position, velocity, acceleration), or any other "useful" MATHEMATICAL MODEL. The computer stores it on disk, and you now have a digitised model of the actions -- ie: MoCap (Motion Capture). The model can then be input to a graphics program; eg, Doom 3 (static models), Maya (about $400), Animation Master (about $300), or Blender (open source, $-free). Once imported, the stick-figure model of balls and sticks can be given "skin", "textures", "mass", and of course expressions, sounds, etc. Note that in reality all of these things are just "hung together" (connected) and when we watch the vid, we think we are watching a real thing; see the fab film "S1mone". (A minimal data-reduction sketch for the inertial case is below, after the sensor list.)

Passive optical - With the advent of high-speed computers and lasers (light sources that emit "coherent light" -- imagine each beam from each laser acting as the most accurate internal flight recorder in the history of the human race), the measuring device (detector) can precisely determine what happened as each marker moves. Thus, the sensors are "just" special reflectors, and each laser/detector pair can be used to track MOTION. Main disadvantage (for now) is lack of accuracy (mainly limited to limbs -- ie, can't pick up on (eg) facial expressions), and the need for a relatively dark room (MoCap lab) to get noise-free data.

Acoustic - Each sensor sends out a sonic signal. Accurate but expensive. Nice for LARGE objects in daylight. Again, active acoustic sensing involves attaching a sensor to the object in motion. Passive acoustic means that the object is tagged (or not), and the external system bounces sound waves off the object and calculates what/how it's moving. Can get expensive, but good for open-site locations -- sound waves can go anywhere, and as long as there's not much air movement the results can be accurate. The ultimate are UltraSound (extremely high-pitched) systems which can give internal images of living and other objects -- prices are coming down.

Magnetic - Same as acoustic, but uses magnetic sensors to detect motion. Fairly cheap, not very accurate, and not good for open-site locations. Probably the ultimate systems are MRI (Magnetic Resonance Imaging), which uses NMR (Nuclear Magnetic Resonance) to flip individual atoms using an intense magnetic field -- very expensive, but can of course render internal data with amazing accuracy. Offshoots include "PET" (Positron Emission Tomography), which is potentially even more accurate.

[forums.awn.com] [www.awn.com] (Animation World Network) [www.visgraf.br] links! etc!
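To make the inertial "data reduction" idea concrete, here is a minimal sketch (Python); the joint names, the 100 Hz sample rate, and the constant fake accelerometer readings are invented for illustration. It just integrates each sensor's acceleration twice -- acceleration to velocity to position -- which is the simplest possible path-history model; real systems also fuse gyroscope/magnetometer data and correct for the drift that plain double integration accumulates.

BEGIN CODE (Python)
# Dead-reckoning data reduction for inertial markers:
# integrate acceleration -> velocity -> position at a fixed sample rate.

DT = 1.0 / 100.0   # assumed 100 Hz sample clock

def integrate(samples, dt=DT):
    """samples: list of (ax, ay, az) accelerometer readings in m/s^2.
    Returns the path history: list of (position, velocity) tuples."""
    pos = [0.0, 0.0, 0.0]
    vel = [0.0, 0.0, 0.0]
    path = []
    for ax, ay, az in samples:
        for i, a in enumerate((ax, ay, az)):
            vel[i] += a * dt          # velocity is the integral of acceleration
            pos[i] += vel[i] * dt     # position is the integral of velocity
        path.append((tuple(pos), tuple(vel)))
    return path

# Fake readings for two "balls" (joints); a real suit has dozens of sensors.
skeleton = {
    "hip":  integrate([(0.0, 0.0, 0.1)] * 100),   # slow drift upward
    "knee": integrate([(0.5, 0.0, 0.0)] * 100),   # accelerating forward
}
print(skeleton["knee"][-1][0])   # final knee position after 1 second
END CODE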

Rigging the Space

Once you have a model there are several things that you can do with it:
1) Study the motion; eg, medical/sports analysis of motion.
2) Use the model to create animations; ie, move the model about, "clothe" it in skin and textures, etc.
3) Use the model as a starting point, and (manually or via programming) add features to it; usually adding motion options, refinements, etc.
4) Use the model as the basis of a character in a vid or game. On-line "game" worlds...
[caligari.com] (what "should be called" www.truespace.com) [trueplace] (free, on-line env) -- at $360 (for trueSpace), i would hope so! [www.truebones.com]

Papers and Such

(Specific to MoCap) [Jerry Isdale's FAB jump page w/ ton of vendor refs] [Edmison, Jones, Lockhart, and Martin's paper on "E-textiles"] [mirrored here] [Miller, Jenkins, Kallmann, and Mataric's IEEE paper: "Motion Capture from Inertial Sensing for Untethered Humanoid Teleoperation"]

More technology

[www.xsens.com]

Links

BioVision -- BVH (file format)
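A BVH file is just text: a HIERARCHY section describing the joint tree (the "balls and sticks"), followed by a MOTION section with one line of channel values per frame. A minimal sketch of writing one (Python); the two-joint hierarchy and the frame values are invented for illustration, not taken from any real capture:

BEGIN CODE (Python)
# Write a tiny two-joint BVH file: a root "Hips" with one child "Chest".
bvh = """HIERARCHY
ROOT Hips
{
    OFFSET 0.0 0.0 0.0
    CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
    JOINT Chest
    {
        OFFSET 0.0 10.0 0.0
        CHANNELS 3 Zrotation Xrotation Yrotation
        End Site
        {
            OFFSET 0.0 10.0 0.0
        }
    }
}
MOTION
Frames: 2
Frame Time: 0.0333333
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0
0.0 1.0 0.0 0.0 5.0 0.0 0.0 2.0 0.0
"""

with open("tiny.bvh", "w") as f:
    f.write(bvh)
END CODE

Each MOTION line carries one value per declared channel (6 for Hips + 3 for Chest = 9 here), in hierarchy order.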

D3/Blender and MoCap

[Intro] MoCap tutorial: http://www.zandoria.com/motioncapture.htm [www.kinetic-impulse] (Sutton's pages) Animation Master ($300) [www.hash.com] More tutorials: [tut jump page] az tutorials [map tut]

NX Shape Studio [NX Studio blurb] [CS formation design] Unigraphics NX - what it may mean to you (EDS consortium) [field guide]

BEGIN BLOCK QUOTE
Unigraphics NX can directly open industry-standard JT files, which can be viewed with the native Unigraphics geometry. This feature supports cross-platform visualization and collaboration, as long as the files are JT-compliant. And from I-DEAS, Unigraphics NX includes the Dynamic Navigator toolset for sketching wireframes. The second group of changes in Unigraphics NX involves a healthy dose of knowledge-based engineering (KBE). With "Knowledge Fusion," what EDS calls its knowledge engine, users can capture their company's design rules, standards, and proprietary process knowledge, then re-use those throughout the digital product development environment. EDS started adding KBE into Unigraphics two years ago. Now, it's embedded throughout Unigraphics NX. KBE shows up in at least two new areas. First, KBE is in new design validation tools, such as Quick Check and Check-Mate. Quick Check ensures that design criteria, such as volume, weight, critical dimensions, and other user-defined parameters, are met when creating new designs. Check-Mate ensures that designs comply with company standards in areas such as geometry, assembly, and drawing checking. Users can add custom checks through Check-Mate Author.
END BLOCK QUOTE

About Maya ($375): [www.quidoo.com] Math apps: [spiral poly's]
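Since this section is about getting MoCap into Blender: recent Blender builds ship with a BVH importer that can be driven from a script. A minimal sketch (Python, run inside Blender); "tiny.bvh" is just the placeholder file from the BVH example above, and the importer add-on is assumed to be enabled (it is by default in recent builds):

BEGIN CODE (Python)
# Run inside Blender's scripting workspace / with its bundled Python.
import bpy

# Import a BVH clip; Blender creates an armature carrying the captured animation.
bpy.ops.import_anim.bvh(filepath="tiny.bvh")

# The new armature becomes the active object; rename it for later use.
arm = bpy.context.active_object
arm.name = "mocap_rig"
print(arm.name, arm.type)   # expect: mocap_rig ARMATURE
END CODE

From there the armature can be skinned, retargeted, or exported to whatever the game/render pipeline wants.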