i’ve found it really handy to go into Inkscape and make a quick SVG by hand, then load it up in a text editor to see the right markup to use for different things. for example, with layers: Inkscape has its own namespace and extra options, so you can put really spammy data in topmost hidden layers and only reveal them when you’re zoomed in on some area of interest.
here’s a link to the full W3C SVG standard, which is a great reference for all the possibilities, and here are some new examples of this madness in action.
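if you want to generate that kind of layered markup programmatically, here’s a tiny sketch (not actual Inkscape output, just a hand-rolled approximation) that emits an SVG with a normal layer plus a hidden debug layer using the inkscape: namespace attributes:

```python
# sketch: emit a minimal SVG with an Inkscape-style hidden layer,
# mimicking the markup Inkscape itself writes. the inkscape:
# attributes (groupmode, label) are what mark a <g> as a layer.

def make_svg(width=100, height=100):
    return f"""<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
     width="{width}" height="{height}">
  <g inkscape:groupmode="layer" inkscape:label="base">
    <rect x="10" y="10" width="80" height="80" fill="steelblue"/>
  </g>
  <!-- hidden layer: flip display:none off in Inkscape to reveal -->
  <g inkscape:groupmode="layer" inkscape:label="debug" style="display:none">
    <text x="12" y="50" font-size="6">spammy debug data goes here</text>
  </g>
</svg>"""

print(make_svg())
```

loading the result in Inkscape shows “debug” as a regular layer you can toggle visible from the layers dialog.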
with @party this weekend came the first releases from a new demo group, vrtx (pronounced “vertex”). we did two entries for the freestyle graphics compo and got 2nd and 6th place. kirill did this sweet spider web scene, and guybrush, kirill, and i did the mazelized text. watch out for more cool graphics and demos coming from vrtx in the future!
i couldn’t make it out to blockparty again this year, but i did participate online through the ustream stream and chat, which was a blast! i really hope more demo parties do this in the future. watching live party streams is fun, but being able to chat at the same time with people at the party and others watching the stream during the compos is just so much better. the archived video is available on ustream, but unfortunately the hilarious chat doesn’t replay with it haha. and of course lots of the entries are available for download on pouet.
though we didn’t make it out in person, xplsv took 1st in the hi-rez graphics compo with kirill’s rad render! it’s actually using assets from a demo we started after finishing our invite for blockparty 2009. we wanted to release the demo at blockparty 2009 but didn’t have time to finish, and then again for 2010 we wanted to do it but again didn’t find the time. don’t worry though, we’ll release this guy eventually.
congrats to everyone who did manage to make a release or make it to the party! and how great to hear the fifth edition of blockparty will be on the west coast in california! see you there?
if you’ve encountered me at all in the last week, either in person or online, then you know i got into the StarCraft 2 beta!! with StarCraft still being my favorite game of all time, i can’t help but be exploding with excitement over this! now i’m not going to write a big review or anything like that here because everyone already knows it’s fantastic and will be buying at least one copy when it’s released. but well okay… it’s amazing, fantastic, wonderful, true to the original, super fun, crazy addictive, and all these things, which is making it really hard for me to do anything productive!!! gg blizzard.
but anyway sole has been diligently blogging daily on her breakpoint demo progress so i should at least be able to manage weekly updates right? thanks for being a good influence sole!
so recently i’ve added what i think are some really neat animation and sync controls to the demo system but haven’t capped a video of that yet so you’ll just have to wait for those details. also got in material support, more controls over copied instances, metaballs, and mesh displacement.
here are some new pictures of a simple scene generated in demo studio and rendered offline:
and once again here’s a peek at the construction:
okay now i need to get in a match of SC2… or maybe two…
this year i’ve been working on the next version of demo studio. so in brief: version 1 (tokyo, mudballs, ccc) was your typical drop-effects-into-a-timeline-and-edit-the-parameters system, version 2 (hofn, sokuseki) had more powerful sync and scripted effects, and version 3 (n-0505, blockparty invite) was entirely code driven with pop-up ui only for tweaking. with version 4 i hope to get the best of both worlds: artist/designer support with ui, but without sacrificing the handcrafted codery goodness that comes from not having ui.
with lots of the core of the new system done, now i can get things animating and syncing to music. here’s one of the quick tests from this weekend and a peek at the ui showing how it was done. the music is from Zardonic’s remix of Nine Inch Nails’ Ghosts track 35.
so 16 years ago i started keeping a journal, 14 years ago i started saving all my source code, and 10 years ago i started saving regular screenshots of my projects. since then i’ve been accelerating the rate at which i store off snapshots, from around one every month originally to, by the end of 2009, one image a day and one video a week. this is quite a lot of data, but the rate of technological advance in storage has far exceeded the increasing rate of data i store. right now you can buy 1.5TB of storage for $100. this is insane and it makes my data set look pretty tiny and pathetic!
so to get started on the next decade i’ve upped the ante and created some software to help. inspired by gordon bell’s research and latest book Total Recall: How the E-Memory Revolution Will Change Everything, my app grabs a screenshot of my multi-monitor desktop every 15 seconds and stores it off with lots of metadata. this was actually a bit too much data for my tastes, since each compressed snapshot comes out to many megabytes. so i added an additional layer of inter-frame compression, similar to what is done with video, and this gets me down to well under 100 kilobytes for most snapshots; it’s currently trending around 100 megabytes per day. this still sounds like a lot, but that 1.5TB drive is actually large enough to store data at that rate for the next 40 years! ridiculous, right?
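the capture app itself isn’t shown here, but the inter-frame trick can be sketched in a few lines: XOR each snapshot against the previous one and compress the result, so unchanged regions collapse into long runs of zeros (the fake frames and sizes below are made up for illustration):

```python
import zlib

# toy version of inter-frame compression: store each snapshot as the
# zlib-compressed XOR against the previous frame. pixels that didn't
# change XOR to zero bytes, which compress to almost nothing.

def delta_encode(prev: bytes, cur: bytes) -> bytes:
    diff = bytes(a ^ b for a, b in zip(prev, cur))
    return zlib.compress(diff)

def delta_decode(prev: bytes, blob: bytes) -> bytes:
    diff = zlib.decompress(blob)
    return bytes(a ^ b for a, b in zip(prev, diff))

frame1 = bytes([40] * 100_000)                          # fake "screenshot"
frame2 = frame1[:500] + bytes([41] * 10) + frame1[510:]  # tiny change

blob = delta_encode(frame1, frame2)
assert delta_decode(frame1, blob) == frame2
print(len(blob), "bytes stored vs", len(frame2), "raw")
```

a mostly-static desktop compresses down to a few hundred bytes per snapshot this way, which is where the big savings over storing full screenshots comes from.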
of course you’re probably wondering what the point is… well, there are lots! for one, just like the journal, it is fun and rewarding to go back in time and see what my life was like in the past and recall the things i was thinking about and doing. it helps to gain perspective on things. with this automated capture, now so many more possibilities unfold. my simple playback app at the moment can already give me fast replays of the past as well as statistics on my activity (how much time spent coding, web browsing, chatting, etc). in the future i’ll be able to run OCR on the data to recognize any text on screen, and then be able to search effectively and quickly extract text from my computer at any moment in the past. further in the future i’ll be able to feed all this into generic AI software to train it to respond, work, and think like me, so i can have great digital assistants. and even further in the future it’ll provide a much more accurate history and memory to my simulated consciousness, after all of our brains have been scanned and moved to processors in space!
anyway this work is the reason i didn’t make as much progress as i wanted on other projects this holiday.
i’m still bummed i didn’t make it to the actual party. oh well in 2010 i should make it. i’m also planning on attending the new @party in massachusetts. oh and breakpoint in germany in 2011. fellow usa demosceners please go to all those parties too!
what’s nice is that you can then load the file up in firefox or inkscape and zoom in. you can also easily draw in other data like triangles and circles, or put nodes you’re interested in in different colors and whatnot.
here’s a 3d kd-tree, seen from above, subdividing space evenly:
here’s a 3d kd-tree, seen from above, built using the surface area heuristic:
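the svg-dumping trick is easy to reproduce; here’s a toy sketch (2d, median splits only, and the function names are my own) that builds a shallow kd-tree and writes each splitting plane as an svg <line> you can zoom in on:

```python
import random

# toy version of the trick: recursively split 2d points with a kd-tree
# and dump each splitting plane as an svg <line>, so the structure can
# be inspected by zooming in firefox or inkscape.

def kd_lines(points, x0, y0, x1, y1, axis=0, depth=3):
    if depth == 0 or len(points) < 2:
        return []
    pts = sorted(points, key=lambda p: p[axis])
    lo, hi = pts[:len(pts) // 2], pts[len(pts) // 2:]
    s = pts[len(pts) // 2][axis]  # median split position
    if axis == 0:  # vertical splitting line
        line = f'<line x1="{s}" y1="{y0}" x2="{s}" y2="{y1}" stroke="red" stroke-width="0.5"/>'
        return ([line] + kd_lines(lo, x0, y0, s, y1, 1, depth - 1)
                       + kd_lines(hi, s, y0, x1, y1, 1, depth - 1))
    line = f'<line x1="{x0}" y1="{s}" x2="{x1}" y2="{s}" stroke="red" stroke-width="0.5"/>'
    return ([line] + kd_lines(lo, x0, y0, x1, s, 0, depth - 1)
                   + kd_lines(hi, x0, s, x1, y1, 0, depth - 1))

random.seed(1)
pts = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(64)]
body = "\n".join(kd_lines(pts, 0, 0, 100, 100))
svg = f'<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">{body}</svg>'
print(svg)
```

a SAH build would emit the same kind of lines, just with split positions chosen by the surface area heuristic instead of the median.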
last night i had a great time with kirill, guybrush, and matt at microsoft testing out and playing with live music visualization stuff. guybrush and i will be doing the visuals for the portland, oregon and seattle, washington stops of the data beez tour! it’s been a really fun experience for me so far, as i’ve been introduced to some great musicians (whose tunes are drilled quite deep into my brain now, as they’ve been looping for so many days XD), have picked up the programming environment Processing, and have given lots of thought to whole new aspects of sync and synesthesia outside of what i’ve previously toyed with in doing demoscene productions.
departing from what i think is the normal vj style and inspired by guybrush, paris and other visual artists on the tour, i’ve focused on creating a sort of visual instrument controlled by a laptop keyboard and a korg midi controller. it has five independently controlled layers: feedback, background, midground, foreground, and post process. each layer maps to a row on the laptop keyboard for switching between effects and a block of controls on the korg for controlling speed/amount, beat/kick, and either audio reactivity or transparency. it makes for an incredibly fun toy but i have to avoid some combinations that produce immediate eye cancer!
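to make the layout concrete, here’s a hypothetical sketch of the row-to-layer mapping (the key rows and effect slots are invented for illustration; the real rig also wires up the korg controls for speed, beat, and reactivity):

```python
# hypothetical sketch of the layer/control mapping described above:
# each laptop keyboard row selects the active effect slot in exactly
# one of the five layers, which composite in a fixed order per frame.

LAYER_ROWS = {
    "feedback":    "12345",
    "background":  "qwert",
    "midground":   "asdfg",
    "foreground":  "zxcvb",
    "postprocess": "yuiop",
}

class Rig:
    def __init__(self):
        # every layer starts on its first effect slot
        self.active = {layer: 0 for layer in LAYER_ROWS}

    def key(self, ch):
        # a keypress switches the effect slot in the matching layer
        for layer, row in LAYER_ROWS.items():
            if ch in row:
                self.active[layer] = row.index(ch)
                return layer
        return None

rig = Rig()
rig.key("w")  # background -> effect slot 1
rig.key("g")  # midground  -> effect slot 4
print(rig.active)
```

the nice property of this scheme is that any key unambiguously addresses one layer, so effect switches in different layers never fight each other.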
this weekend i’m hoping to finish the bulk of the code, effects and visual content and also get in practice time ‘playing’ it as a complement to music. hopefully things turn out well and the musicians and audiences next week enjoy the results.
so i gave video strokerization a try and it turned out pretty cool. here you can see the Nine Inch Nails March of the Pigs music video strokerized:
for this i told strokerizer it could only paint 64 brush strokes each frame, so it didn’t get much chance to fill in fine details with the camera always moving around like crazy. it wound up making for a nice loose look.
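the stroke budget idea is easy to show in miniature. this toy sketch (1d “images”, flat strokes, all names mine, nothing like the real CUDA version) greedily spends each allowed stroke where the canvas differs most from the target frame:

```python
# toy greedy version of the stroke-budget idea: with only a handful of
# strokes allowed per frame, spend each one where the error between
# the canvas and the target image is currently largest.

def strokerize(target, budget, radius=2):
    canvas = [0.0] * len(target)
    for _ in range(budget):
        # find the pixel with the worst remaining error
        worst = max(range(len(target)), key=lambda i: abs(target[i] - canvas[i]))
        # paint a flat "stroke" of the target's values around it
        for i in range(max(0, worst - radius), min(len(target), worst + radius + 1)):
            canvas[i] = target[i]
    return canvas

target = [0, 0, 9, 9, 9, 0, 0, 5, 5, 0]
print(strokerize(target, budget=2))
```

with a tight budget the big features get covered first and fine detail is simply dropped, which is exactly the loose look a 64-stroke limit produces on fast-moving video.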
on the code side i managed some more optimizations and am finding my way around nvidia’s CUDA visual profiler pretty well now. my laptop’s 9600M GPU gets completely pegged now, which is great. the GTX280 on my desktop, though, is so fast that it’s still finishing work faster than the CPU can feed it. i think i’ll have to move more parts of the algorithm over to the GPU to fix that.
so previously i blogged about my glypherizer experiment to approximate images with a small number of font glyphs. unfortunately it didn’t meet with the fantastic success i had hoped. the ideas were still a bit itchy though, so i wound up working some more on it. this time around i used images instead of font glyphs, so they could be brush strokes or other crazy stuff in addition to text. also, i rewrote the code in CUDA so it runs on a 240-thread GPU instead of a measly 8-thread CPU. here you can see some of the strokerization tests:
you can see more of the full set on flickr if you like. thanks very much to kirill for the library of cool brush strokes!
this was my first real CUDA app, and i was surprised just how quickly i was able to port everything over to GPU kernels. getting it running on the GPU was pretty easy; what turned out to be harder was making it fast. i had to optimize my kernels to reduce the register counts they needed quite a bit and use nvidia’s occupancy calculator spreadsheet thing to work out good block sizes and thread counts. that, along with using the async stream API, helped get the speed about where i expected, but i still think there are some things holding it back. i can’t wait for more performance tools for tracking this stuff down.
some time back i was experimenting with some procedural animation and simulation as part of researching how i could animate a creature for a future demo. i didn’t really achieve what i was wanting but i had a lot of fun playing around with it and giving some life to a bowl full of worms. here’s a video capture of them in action:
anyway, when kirill saw the worms he got excited and suggested that he could cut out the top of the fire skull so we could use it as the bowl for the worms. after that we decided we also needed to cut out the eye sockets so the worms could wiggle and drip out of the eyes! this was, as you may be able to imagine, entirely too much fun! i hooked up a hot key so we could drop worms into the skull and another to rotate the skull back and forth and we had ourselves a great little toy. here you can see it in action:
finally for fun we did some offline renders of it:
i’m not sure what the future holds for the skull worms. maybe a halloweentro is in order.