
3D Virtual MikuMiku Dancing


3D graphics, positional tracking, and camera-recorded video combine with Hatsune Miku in this striking video:

In case the exposition doesn’t interest you: after the dancing, there is an additional interesting video at 3:15.

More from the same animator:

This line of technology is very promising indeed…

  • Anyone else laugh their ass off at the end of the first vid? I thought Pirates of the Caribbean was taking over Hatsune Miku. Possibly the only thing scarier than a space pirate ninja is a space pirate ninja vocaloid.

  • This is amazingly boring.
    Boujou and SynthEyes have been around for so long that just doing basic tutorial stuff with them is just… boring.
    Heck, most of this could even be done with some of the 2D planar trackers.

    This has nothing to do with new technology. It’s already been used by low-budget movies for years.

  • Excellent motion tracking and ambient lighting.
    And pretty damned well done modelling too…

    He should have tried making the water move a bit backwards during the last scene since it’s almost falling down vertically.

    But in any case this is definitely very good material.

      • You better believe they will. I haven’t been spending the last 3 years architecting digital neural and synaptic ecosystems in C++ and writing a CUDA-based physics engine for nothing. Anime archetypes will be born into reality, if not by my tested hands, then by those who come after.

        • Thanks for listening/reading. I used to bounce these sorts of thoughts off my brother (a 4.0 family genius; I’m the other side of the proverbial weighing scale, the one who got sick of it all early on and dropped out to follow his dreams; a common sibling distinction?). Now that the project is nearing its real hatching period, it’s a nice feeling getting some of it out there again.

          I’m pretty nomadic in cyberspace, though it would be nice to find somewhere with a similar minded crowd and share ideas. If you happen to know of such a place…

          Artefact, I imagine, would appreciate my not polluting his article comments with questionable ramblings any more than I already have 😛

          I really should follow the open source Way, it’s been closed circuit since day one. Maybe when things are refined enough to slap a Beta tag onto it.

          On the other hand, given how the maturity level of those given any degree of power (such as the responsibility of raising an AI) always seeks a local minimum, I do hesitate at the thought of dumping this kind of code into the infinite possibilities of cyberspace.

          If I ever do get around to posting something on YouTube, my name there is ThaddeusChristopher. I literally just joined. I can’t say I respect the way they’ve been handling things of late, and the community drama looks asinine, but it’s still the best place to give your ideas exposure.

          Feel free to hit me up in the comments (do they do PM?) if ever you’re inclined.

          More mundanely, and anime-related, I’ll be uploading a few AMVs that I’ve been tinkering on to that account in a week or two, to test out the community. This avatar is a screenshot from one. I made it to learn video editing (who doesn’t in this community?). That shot and most of the rest of the video make heavy use of bump-mapping algorithms, lens filters, various FX chains, track overlay types, etc.; I like the painted look it ended up with. It’s set to Pink Floyd (Brain Damage & Eclipse) and the theme from A Clockwork Orange.

        • I am in awe. Yeah, I’ve already heard of CUDA, but I never looked for educational material. Thanks.

          Watching anime, reading sci-fi, and in particular exposing myself to Asimov’s works really got me interested in computer intelligence. I recall being disappointed by my AI class that I took two years ago – it was essentially state-space search and statistical machine learning, with no real cognitive implications whatsoever.

          Thank you for your extensive post – it was an interesting read. Is there any place online (web or IRC) that you hang out, where I can follow your work? =) When I have time, I hope to be able to follow along, myself – and perhaps, given time, to help.


          Modeling brain dynamics, physics, lighting, vertices, textures, etc. all at the same time is insanely computationally expensive, so a regular quad-core computer won’t do it on its own.

          Apart from making networked code to run on multiple computers (which would work too, but be more of a hassle IMO), programming on the new generation of “general purpose” graphics cards is the way to go.

          Over a hundred ~1.5 GHz processors on a modern el cheapo graphics card (several hundred, and SLI-scalable, in the really good ones), versus only FOUR ~3 GHz processors on an expensive quad-core.

          And if you write your code to be doing different computation sets on the CPU while running your GPU kernels, it stands to reason you can be crunching Physics on the GPU while computing the domino-like (waaaaay oversimplifying here) forward and backward flow of data between synapses on the CPU.

          Then switch them up depending on which segments of code are best handled massively parallel (send as a CUDA-scripted kernel to the GPU to handle) and which ones do fine in serial (crunch it, still threaded but less dependent on parallelism, in C# or whatever).
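          The overlap pattern described above can be sketched without assuming CUDA hardware or toolkit. Below is a minimal Python stand-in where a worker thread plays the role of the asynchronous kernel launch; all names (`gpu_kernel_stub`, `propagate_synapses`, the toy math inside them) are illustrative inventions, not any real API:

```python
import threading

def gpu_kernel_stub(data, out):
    """Stands in for an asynchronous CUDA kernel launch: the massively
    parallel job (e.g. physics) runs off the main thread."""
    out["physics"] = [x * 0.5 for x in data]  # placeholder parallel-friendly work

def propagate_synapses(weights, activations):
    """Stands in for the serial, dependency-heavy CPU work
    (the 'domino-like' flow of data between synapses)."""
    return [w * a for w, a in zip(weights, activations)]

def step(data, weights, activations):
    out = {}
    # 1. Launch the "kernel" asynchronously (a real CUDA launch also
    #    returns control to the host immediately).
    gpu = threading.Thread(target=gpu_kernel_stub, args=(data, out))
    gpu.start()
    # 2. Meanwhile, crunch the serial synapse pass on the CPU.
    out["synapses"] = propagate_synapses(weights, activations)
    # 3. Synchronize before the next frame (cf. cudaDeviceSynchronize).
    gpu.join()
    return out

result = step([1.0, 2.0], [0.5, 0.25], [1.0, 4.0])
```

          The point is only the shape of the loop: launch, do independent work, then synchronize once per frame.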

          ATI has their own language too, so if you don’t like nVidia feel free to learn the competitor’s stuff.

          But if you follow the above link, nVidia basically lets you download an entire semester’s worth of course material for free, so that’s the path I took. They obviously want people to use their technology so more customers are forced to buy their cards in the future. That’s fine IMO, it’s damn fine technology, worthy of surviving this recession (fingers crossed >_< ).


          If you want to model brain dynamics and physics, those are the two most parallelizable computational problems I can think of. Everything happens concurrently and, in doing so, affects everything else concurrently; iterate to the next frame, repeat.
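          That “everything updates concurrently, iterate to next frame” loop implies double buffering: every element reads only the previous frame’s state, so updates are order-independent and trivially parallelizable. A minimal sketch in Python, with a toy neighbor-averaging (diffusion-like) rule standing in for the real physics:

```python
def step_frame(state):
    """One synchronous update: every cell reads the *previous* frame only,
    so the per-cell updates could run in any order (or all at once)."""
    n = len(state)
    # Each new value averages its neighbors' old values (toy diffusion rule).
    return [
        (state[(i - 1) % n] + state[i] + state[(i + 1) % n]) / 3.0
        for i in range(n)
    ]

state = [0.0, 3.0, 0.0, 0.0]
for _ in range(2):  # iterate to next frame, repeat
    state = step_frame(state)
```

          Building the whole next frame from the whole previous frame is exactly what makes this kind of simulation a natural fit for a GPU.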

          AIs need an interactive environment on par with what we humans are forced to battle against every instant of our lives. Invariances occur in the physics of our environment; we learn patterns of them and receive/contemplate feedback along our merry way. Again, oversimplifying here. I’ve got systems for dreaming, goals, rationalizing, you name it. And NOT in silly logic trees like video game “AIs”. I call it a neural/synaptic ecosystem for a reason.

          I’d love to go on and on about how far I’ve already managed to get with my own Homebrewed code (no Newtonian physics here, I model down to chemical bonds, albeit Vertex-sized ones…), but that’s impossible inside this tiny, tiny box.

          For the best intro you can expect to get to AI that’s not stupid logic trees and meaningless heuristics, I suggest this book:

          Do not fall into the pitfall of wasting time on anything the Academic community published on AI as a field. Studying up on Probability, however, is a good idea. Neurons and Synapses essentially form a structure built on evolving invariances, probabilities and mutations. Mostly in their connecting plasticity, but also in their protein wall gate behaviors and 3D nearness to other clusters they can just “spray” into. All perfectly abstractable into computer code. Reading up on Neuroscience is pretty much required. You don’t have to get too deep into it though, there’s plenty of simple to grasp flash animations of cell signaling dynamics all over the ‘net.
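          The commenter’s own architecture isn’t shown anywhere, but the “connecting plasticity” idea above maps onto the textbook Hebbian rule: a synapse strengthens when its pre- and post-synaptic neurons fire together. A minimal sketch (the learning rate and decay constants are arbitrary illustration values, not anyone’s tuned parameters):

```python
def hebbian_update(w, pre, post, lr=0.1, decay=0.01):
    """Textbook Hebbian plasticity: strengthen the weight in proportion to
    correlated pre/post activity, with mild decay so weights stay bounded."""
    return w + lr * pre * post - decay * w

w = 0.5
for _ in range(10):  # repeated correlated firing strengthens the synapse
    w = hebbian_update(w, pre=1.0, post=1.0)
```

          With the decay term the weight approaches a fixed point instead of growing without bound, which is the simplest way such a rule stays stable.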

          Yes the old Academic Neural Nets and probability Matrix algorithms can do simple Voice Recognition and stupid Robot motions, find the “beat” in a song and bob along mindlessly to it, and so forth.

          But frankly no one’s even bothered putting serious effort into Sentience, or as they’re calling it these days “general purpose AI”.

          Notice how it’s “general purpose” GPU (the GPGPU), and “general purpose” AI? Narrow application vs. Generalization. One gives you a fat paycheck if you majored in it, the other gives you endless possibilities if you’re brave enough to sacrifice the time/life needed to harness the beast. Obviously, I live a meek, poor life. But I have never, EVER had the opportunity to get bored with it 😀

          The AI community mostly just cares about publishing white papers on simple, narrow areas they can apply kids’-toy neural networks to, ones that have been around since the ’70s, so they can collect their paychecks. Nothing wrong with that, but where’s the progress?

          This mindset, this fear that they won’t be taken seriously if they work towards general purpose AI, has done NOTHING but hold the AI community back in its own stone age.

          AI-wise, I also read a little Ben Goertzel (some good ideas on emotion, moods and so forth), but wasted too much time on AI history. It was so empty of anything remotely clever enough to become sentient.

          The only practical successes I got came out of sitting in my chair at work, pretending to be working, but really thinking of my own adaptations of highly evolvable “ecosystems” of neurons and synapses, writing notes to remember and later evolve on top of old ideas. Working through ideas on how neurons and synapses interact to encode/decode data between specialized clusters, the ability to call up memories of percept configurations and imagine sensory forms in the “mind’s eye”, the way data flows from the Cortical layers to the Hippocampus and back again, what it’s doing along the way, why certain optical illusions become perceived the way they do, what dynamics work to cause Ego formation, what mechanically is occurring to maintain the subconscious, how Heterarchies work versus how Hierarchies work, etc etc etc.

          Even if you drop any big ambitions you may have along the way, it’s an excellent learning experience to better yourself with. Philosophically as well as intellectually.

          But I’ve got it all covered. Don’t waste your life on something like this unless you’re sociologically insane, as I am. I don’t mind giving up everything construable as regular healthy social living as long as I’ve still got my dreams to chase ahead of me.

          I’ve lived a good enough life even before I first heard the word “anime” (was 19 at the time), so putting everything I can muster towards this work, and occasionally taking breaks for anime and so forth, is all I really need any more.

          That’s not enough for an awful lot of people to be satisfied with.

          You fellow otakus just enjoy your life paths, 2D and 3D alike, and be patient. 2D is going to evolve, you have my word on that. But word or not human curiosity will still herd it in that direction.

          If someone (not necessarily me) does make a human-level AI, it’s got to grow up developmentally before anything especially productive can come of the technology.

          Granted, when it’s older he/she can be cloned indefinitely, but there has to be at least one raising period, preferably while feeding them tons and tons of anime ^_^ (and a good education too, and some moral fiber, obviously).

          Bah sorry for the tl;dr.

          Trust in the passion of the otaku spirit. We’re not exactly sane enough to end up NOT inventing something that could change the world, for better or for worse.

          For what it’s worth, I’ve taken especial pains in contemplating how one might design neural clustering architectures that are incapable of falling into illogical “bigoted” ways of thinking as their thoughts abstract up the Cortical pathways, and the ability to have greater executive control over emotional cycles.

          So if I’m not a complete idiot, they shouldn’t end up gullible fish, and should be equipped to pick up on dangerous fanaticism and general irrationally-based varieties of discrimination well before these settle into the subconscious. In other words, they’ll hopefully be better off than we are in how broad a perspective they get to see the world with.

          Sorry to withhold a greater depth of specifics; it would just take too many days of writing at this point. Better it stays in code, where it has immediate practical application.

          Alright I’m cutting myself off before this gets too much longer.

          Thanks for showing an interest though. Fruit will be borne as soon as I am able.

          Probably starting with a YouTube intro to how the GUI works while you’re arranging the neurons and synapses, plus a little discussion of engineering/design concepts: what you can’t neglect versus what you could. For instance, you couldn’t possibly get away with making an intelligent AI if it didn’t have touch/pain sensors and a challenging physical environment to overcome; that’s too fundamental to positive/negative feedback foundations and basic linguistic analogies. Without them it wouldn’t have the capacity to learn to communicate significantly, among other deficits and pitfalls, such as not being able to empathize with the pain of others and becoming psychopathic.

          Maybe in 2 or 3 more years at this point, who could say. It’s more work than it sounds.