August 27, 2008

SIGGRAPH 2008: A Student Perspective

CGW has provided news from SIGGRAPH generated by our editors and, in the case of new products, from our vendors. Now we are providing you with a unique look at the conference and exhibition from the perspective of four SIGGRAPH Student Volunteers who attended the show.
A biography of each is attached at the top of their individual blog, which they initially produced for FireUser.com, a community Web site for users of AMD ATI FirePro and FireGL accelerators in the fields of CAD, 3D visualization, medical imaging, video VFX, and digital content creation. The site is supported by FirePro engineering and marketing team members and features a blog, support forums, and the Idea Exchange—a Digg-like way for users to submit, discuss, and vote on ideas for feature or service improvements in the FirePro line. As part of the group’s community outreach at SIGGRAPH 2008, FireUser worked with the SIGGRAPH Student Volunteer program to give students a venue to blog about their impressions of the technology and presentations at the conference.

FireUser.com and the students have kindly offered their blogs for publication here on the CGW Web site, as well. We think you will find their views enlightening!


Josh Fincher - Art Institute in Pittsburgh


I am in my senior year of the Game Art and Design Bachelor's program at the Art Institute in Pittsburgh, Pennsylvania.  Being an artist all my life, as well as an avid lover of video games, I found my place at AiP in 2004 after beginning college in pursuit of a degree in Computer Science. In my time at school I have been a part of many great projects including a contest through The Sci-Fi Channel, and one currently for the Carnegie Museum in Pittsburgh. http://www.jivefincher.com

Stereoscopic 3D is here to stay
 
Just about everyone loves to watch 3D movies, since it’s such an incredibly different experience from a normal viewing. From the distinct visual differences to the feeling that a character is about to reach out and touch you, 3D films make for a truly unique viewing experience. Now imagine playing your favorite computer game, watching live television, or experiencing a theme park ride, all in 3D. Monday afternoon's session, 3D for Gaming and Alternative Media: How 3D is Altering Our Concept of Entertainment, presented exactly that. Neil Schneider, President and CEO of Meant to Be Seen, and Mark Rein, Vice President and Co-Founder of Epic Games, presented The Power of 3: An Insider’s Look at Stereoscopic 3D Gaming; Mark Mine, Director of Technical Concept Design at Walt Disney Imagineering, presented Designing Theme Parks in the Virtual World; and Steve Schklair, Founder and CEO of 3ality Digital Systems, presented Production of Live 3D Content for Broadcast. Stereoscopic 3D is here to stay, so why isn’t there a bigger buzz?

What exactly is Stereoscopic 3D (S-3D) and how does it work? S-3D “is the ability to display visible depth through two dimensional media” (http://www.mtbs3d.com/). More specifically, S-3D achieves the illusion of visual depth on-screen by displaying two nearly identical images at the same time - one for the left eye and one for the right - captured from slightly offset positions. Each eye sees a slightly different perspective, and when the brain combines the two views, the viewer witnesses an amazing 3D experience. There are, like all things, a few hurdles that need to be overcome in order to have an S-3D experience: a hardware solution capable of filtering a unique image to each eye, and a software driver that will take the game’s visual information and translate it into both left- and right-eye views.
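To make the left/right trick concrete, here's a minimal sketch (my own illustration, not from the session) of the classic red-cyan anaglyph - one of the oldest ways of filtering a unique image to each eye, using nothing but color channels and tinted glasses. The tiny synthetic images and the simple channel mix are illustrative assumptions:

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine left/right eye views (H x W x 3 RGB arrays, 0-255)
    into a single red-cyan anaglyph image.

    The red channel carries the left eye's view; green and blue
    carry the right eye's, so the tinted glasses route one image
    to each eye and the brain fuses them into apparent depth."""
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]      # red   <- left eye
    anaglyph[..., 1] = right[..., 1]     # green <- right eye
    anaglyph[..., 2] = right[..., 2]     # blue  <- right eye
    return anaglyph

# Two tiny synthetic "views": the right view is the left view
# shifted one pixel horizontally, mimicking eye separation.
left = np.zeros((4, 4, 3), dtype=np.uint8)
left[:, 1, :] = 255                      # white bar at column 1
right = np.roll(left, 1, axis=1)         # same bar at column 2

combined = make_anaglyph(left, right)
```

The bar ends up red in column 1 (left eye only) and cyan in column 2 (right eye only), which is exactly the double-image effect you see when you take the glasses off during a 3D movie.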

Right now, NVIDIA has the distinction of having the recommended stereoscopic driver of choice. The driver is supported by most newer NVIDIA graphics cards and works with nearly all of the hardware solutions on the market. The driver “works by intercepting DirectX and OpenGL programming calls, and translates the virtual 3D information into a practical stereoscopic result.” While NVIDIA has the most widely supported driver, Meant to Be Seen says there are driver solutions which support other NVIDIA and AMD/ATI products. Thankfully, most current DirectX and OpenGL games are S-3D compatible to some extent with a bit of tweaking.

As for hardware, there are many solutions on the market now which support S-3D, and the good news is that they are reasonably priced (at least for the type of gamer who is already investing in gaming hardware). The only drawback, for now, is that S-3D games are PC-based only, though Neil mentioned a rumor that the technology is on the way for console games. Most of the speakers seemed to agree, however, that the technology most likely won’t make it to console gaming until the next generation of systems, due to the processing power it takes to render two gaming streams simultaneously (current systems just don’t seem to have that kind of power onboard). More on the impact S-3D could have on the entertainment industry later.



Pat Howk - Indiana State University


I am a student pursuing an M.F.A. at Indiana State University. I graduated with a B.S. in New Media from IUPUI. I fell in love with animation towards the middle of my time at IUPUI and decided that I wanted to be an animator. I'm currently working on multiple animation projects in both CG and stop-motion. http://www.linkedin.com/in/phowk
I could see Modo and XSI being a good pipeline…
 
Thursday - Modo FTW! - So today I went and watched a demo of modo 302 at the Intel booth. I want it! The modeling and unwrapping tools alone would make me switch from Maya to modo for modeling tasks.

The animation tools aren't yet up to character animation, but you can do simple turnarounds or movements. What struck me was just the ease of use. To select a loop of edges you just double-click an edge. You can click an edge, face, or vert and grow the selection by hitting a key, so it expands to the neighboring components. Another thing is that the modeling tools are similar to sculpting brushes. You can grab a face and "pull" it out and adjust it any way you want. From the demos I saw, you can literally create as fast as you can imagine. That's it. I mean, the people doing the demos are pros, but come on! They were making faces and weird creatures in as little as 5 minutes while I asked questions and they explained it to me. The renderer for modo is top notch now, too. I would still keep your RenderMan or mental ray around, but the modo renderer is no slouch. I seriously want this piece of software! After seeing it in use I can't see why I've been using Maya for so long for things other than animation.

Another thing I saw, on the same day, is XSI's ICE (Interactive Creative Environment). This is XSI's node-based programming. This is hard to explain - you have to see it to believe it. So... I found this vid. It's visual programming. Like I said, I can't explain it other than saying it's like the Hypergraph in Maya connecting shaders, but here you're effectively writing scripts. The SIGGRAPH demo showed that someone made Pong and Space Invaders using only ICE. They also showed some crazy rig and skin trick where you can drag the last action you just did into the ICE viewport and connect other nodes to it, so that action is used by the other nodes... Like I said, hard to explain. Watch the vid! I wouldn't mind trying it out.
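For anyone who hasn't seen a node-based system, here's a toy sketch of the underlying idea: a graph of small operations wired together, evaluated by pulling values through the connections. The node names and operations here are made up for illustration - this is not ICE's actual API:

```python
# A toy dataflow graph: each node is a function plus the names of
# the nodes that feed it. Evaluating the output node pulls values
# through the graph, the way a node-based editor wires operations
# together visually.

def evaluate(graph, node, cache=None):
    """Recursively evaluate `node` in `graph`, caching results so
    shared upstream nodes are computed only once."""
    if cache is None:
        cache = {}
    if node not in cache:
        func, inputs = graph[node]
        cache[node] = func(*(evaluate(graph, i, cache) for i in inputs))
    return cache[node]

# "Add a turbulence offset to a point's height" as three nodes:
graph = {
    "point_y":    (lambda: 1.0, ()),                  # input value
    "turbulence": (lambda: 0.25, ()),                 # noise amount
    "offset_y":   (lambda y, t: y + t, ("point_y", "turbulence")),
}

print(evaluate(graph, "offset_y"))   # -> 1.25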

I could see Modo and XSI being a good pipeline... Maybe someone could write Autodesk's Stereo camera rig in ICE!

And I've got to mention the Star Wars: The Clone Wars movie. Not impressive. The animation was a bit stiff, the faces showed no emotion, and all around it tried too hard to be funny when it just wasn't. That being said, if this movie had been left to be viewed on TV as originally planned, it would have been different. As a TV show it's good; I just don't think it's good enough for theaters. Other than that, the battle scenes in the movie are really good - they were huge, and the action was great. I don't really want to say much more about the movie because I don't want to ruin it for those that want to see it.

So that's it. My time at SIGGRAPH 08 is finished. Friday I went to my shift to take down the slow art displays, then back to the hotel. Not much to see on the last day. For those that haven't been, this conference is huge, and exhausting - almost overwhelming. I'll be glad when I can finally sleep.

CUDA desktop rigs and whisper quiet workstations

Wednesday I spent what time I had on the exhibition floor and saw a few things I liked. 

First I went by the NVIDIA booth. The coolest thing they have at the booth is called the Quadro Plex 2200 D2 ($10.8k starting price).  It's an external system that is packed with 2-4 Quadro video cards.  When it’s plugged into your system, your system recognizes it as one card.  It also automatically adjusts the resolution and scaling if you power more than one monitor.  The model I mentioned is the highest end and has 8GB memory, 120GBps memory bandwidth, 1 DisplayPort, and 4 dual-link DVI outputs.  I thought this thing was amazing.

The other cool thing I saw worth mentioning was AMAX.  They make high end workstations and render farms.  The workstation I saw had five high power fans that were whisper quiet!  I literally had to put my ear up to the machine to hear it.  More on that later tonight when I get more time to post. Attending the ILM talk in the SV booth now.

AMD Booth Tour - real-time lighting, dynamic tessellation, stereo 3D output
 
Tuesday was the AMD/ATI booth tour!  As soon as we got there we met up with Bill Shane, whose official title is “business development executive”.  Bill took us around to the different displays within the AMD/ATI booth and explained to Tim and me what was going on in each display.

The very first thing he showed us was a workstation running one of the top-end FireGL cards, demonstrating a car demo put together by Works Zebra of Tokyo that allowed you to customize a car any way you want.  The interesting thing about this demo was that the software was using the GPU to compute real-time lighting for the car. So no reflection maps on surfaces or lights.  There’s no need.  The FireGL was able to compute the lighting on the fly!  Another big thing was that they had announced the FirePro line of graphics cards today. Bill confirmed that the low-end card would in fact be $99!  I couldn’t get a price for the midrange FirePro, the FirePro V5700.  But I do know that the low-end FirePro V3700 has 256MB of graphics memory, 2 dual-link enabled DVI outputs, and a “next generation GPU with 40 unified shader processors.”  The midrange FirePro V5700 has a next-generation GPU with 320 unified shader processors, 512MB of memory, 2 DisplayPorts and one dual-link DVI, and HDR rendering with 8-bit, 10-bit, and 16-bit per RGB color component!  Those two cards are said to be coming in the fall.

Starting with the FireGL V7600, the cards all have HD component video out, at least 512MB of memory, and stereoscopic support!  Along with that, the top-of-the-line FireGL V8650 comes with 2GB of memory!

Going back through the booth, Bill showed us a station where they were demonstrating their GPGPUs, which are GPUs without outputs.  That means you basically have the added bonus of two graphics cards, with the second card doing nothing besides computations.

And the last thing I’m going to talk about here is a demo we watched running on the top-of-the-line consumer Radeon card.  It was an AI demo with “Froblins,” a mix between a goblin and a frog.  The demo showed the little guys mining gold and bringing it back to the center of town, it showed collision detection so that the Froblins won’t collide with one another, and it showed dynamic tessellation.  Yes.  They showed us an example: the farther you zoomed out from the landscape, the fewer triangles were in the scene, hence less detail.  But the further you zoomed in, the more triangles appeared in the scene and on your characters, raising the level of detail the closer you got.  That, my friends, was an amazing demonstration.

Truthfully it was a bit overwhelming at the booth.  There is so much that AMD/ATI is doing now that it's hard to keep up with.  To me those were the most interesting demos, and the ones I understood the most.  I want to thank Bill Shane and AMD for doing this for us and being very nice and professional the whole time, even though he knew he was dealing with students and newbie interviewers.  It was very informative, and he even left himself available, by phone or email, if we had more questions.
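Going back to that tessellation demo for a second: the zoom-dependent triangle counts can be sketched with a simple distance-based level-of-detail rule. The thresholds and the quadrupling-per-level assumption are my own illustration, not AMD's actual algorithm:

```python
import math

def tessellation_level(distance, near=10.0, far=500.0,
                       max_level=6, min_level=0):
    """Pick a subdivision level from camera distance: full detail
    up close, progressively fewer triangles as you zoom out.
    (Thresholds and level counts are illustrative.)"""
    if distance <= near:
        return max_level
    if distance >= far:
        return min_level
    # Interpolate on a log scale so detail drops roughly one
    # level each time the distance grows by a fixed factor.
    t = (math.log(distance) - math.log(near)) / (math.log(far) - math.log(near))
    return round(max_level - t * (max_level - min_level))

def triangle_count(base_triangles, level):
    """Each subdivision level quadruples the triangle count."""
    return base_triangles * 4 ** level

print(tessellation_level(10))    # -> 6 (close: full detail)
print(tessellation_level(500))   # -> 0 (far: base mesh)
```

So a 100-triangle base mesh would be rendered with 100 triangles at the horizon but over 400,000 right in front of the camera, which is why the GPU doing this dynamically is such a big deal.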

Maya 2009 (and all Autodesk products) are adding 3D Stereoscopic tools
 
Monday, I caught the end of the Autodesk event and got to see a little bit of the new Maya 2009! The first thing I saw was the new particle system. It was really easy to recreate fluid effects, smoke, and explosions. Everything done in the demo was created without writing expressions. To me this was one of the best parts. They also demonstrated some real-time collision detection.

But the best parts were the new stereoscopic tools that are coming for all of Autodesk's products. I'll focus on Maya since I'm a Maya user. The best thing is that Maya 2009 has a built-in stereo camera rig. One really cool option for this camera rig is real-time 3D, so that you can animate and model and do everything you want to do while wearing your 3D glasses! That way you don't actually have to render your scene just to see if your stereo is working properly. The next cool thing is that the camera can actually project a red plane and a blue plane right onto the screen. This gives you a reference for the 3D: everything in front of the red plane is going to look like it's coming out at you, and everything behind the blue plane is going to be in the background. This will also speed up your workflow by letting you get a good idea of what your scene will look like before you even render your first frame. As it is now, you have to make your own rig and continually render the scene just to see how far the depth goes and whether you need to tweak more. The whole Autodesk pipeline got a stereo upgrade to help make stereoscopy an easier thing to do.
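Those reference planes map directly onto the standard off-axis stereo math: a point exactly on the convergence plane has zero on-screen parallax, points in front have negative parallax (they come out at you), and points behind have positive parallax (they recede). A quick sketch - the default numbers are illustrative, not Maya's:

```python
def screen_parallax(depth, interaxial=6.5, convergence=100.0):
    """On-screen horizontal parallax for a point at `depth`
    (same units as `interaxial` and `convergence`), using the
    standard off-axis stereo relation.

    Negative parallax: the point appears in front of the screen.
    Zero: it sits exactly on the screen plane.
    Positive: it appears behind the screen."""
    return interaxial * (1.0 - convergence / depth)

print(screen_parallax(50))    # negative: pops out of the screen
print(screen_parallax(100))   # 0.0: on the zero-parallax plane
print(screen_parallax(400))   # positive: behind the screen
```

This is the arithmetic you'd otherwise be doing by eye with repeated test renders, which is why having the rig visualize it live is such a workflow win.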

The industry right now looks like it is getting behind these 3D/stereo technologies 100%. DreamWorks reps claim that they're moving to it and ALL of their upcoming 3D movies are going to be in stereo. These tools just make that transition a whole lot easier for pros and students alike. If you're a 3D student today then you can't afford not to be working on stereo projects in school. It doesn't matter if you like it or not; from what I've seen, the industry is moving full steam ahead with stereoscopy.



Ted Isla - Full Sail University


Currently, I am attending Full Sail University in Winter Park, FL obtaining a Bachelor of Science in Computer Animation. My focus is in Compositing and Motion Graphics. Outside of my academic activities I am the University Apple Campus representative, TOMS Shoes campus intern, and serve as my university's SIGGRAPH chapter President. http://3dcompositor.com

XSI ICE is a standout and as user-friendly as Shake
 
Summary - SIGGRAPH has been one of the best experiences of my life. For one week the entire computer graphics industry congregates to share one common interest, the evolution of computer technology. The convention was more than just a tech demo; it was a social gathering of great minds. I believe SIGGRAPH was the second largest international gathering this past week, after the Olympics.

It seems that the tech lingo of our CG community spreads widely across the globe. I found myself in conversation with some Japanese developers who were researching how to render a kitchen scene by ray tracing it with 10 bounces in just a matter of minutes with 6 lights. A fellow student volunteer from New Zealand explained how he coded his own expressions to generate particle effects. At home, I have trouble explaining the work I do to my family during Christmas. This was an enlightening experience to speak to people who could tell me how to get the right results from my HDRI Map!

Out of the major packages, one that stood out to me was XSI ICE. Its multi-threaded technology delivers high-end, real-time interactive results. The best feature is its node-based command workflow. Coming from an Apple Shake background, I found the system very user-friendly. With this in hand, you can shorten production time on a project without having to produce intricate lines of code.

The most underrated section of the convention was the New Tech demo located in the South Lobby. An exhibit that stuck out was the Copycat Arm system. The contributors were Kiyoshi Hoshino, Motomasa Tomida, and Emi Tamaki from the University of Tsukuba. Users were able to film their arm in front of a high-speed camera while an algorithmic program translated pre-calculated data to a robotic arm. In other words, the mechanical arm imitated the user's movements without any pre-calibration.

I’m very fortunate to have participated in this year’s Student Volunteer Program. Not only were we able to network with peers, they fed us every day and gave away free stuff at the end of the week donated by our sponsors! And in compensation for the number of shifts we worked, the Committee organized office luncheons and hall lectures throughout the week with industry representatives from Computer Graphics World, DreamWorks, Sony Imageworks, Disney, Howey Digital, Curious Pictures, and ILM. We even received cool hats from Reality Check Studios.

It is difficult to cover all the amazing things that happened these past five days. It's been hard finding the right vocabulary to describe the entire experience. If there is one thing from SIGGRAPH that I genuinely earned, it is the new friendships I made with my fellow Student Volunteers, whom I'll someday be working with.

Confucius Computer: Transforming the Future through Ancient Philosophy

Wednesday - Confucius Computer was featured as one of the New Technology demonstrations at SIGGRAPH. The software is an innovative form of “illogical” media computation that explores Confucian philosophy. It enables the user to learn his historic teachings by incorporating his philosophies into casual, everyday activities such as eating and listening to music.

The first station is chat based. The user can engage in conversation with a virtual representation of Confucius and ask him questions or make statements. In return, he responds with encouraging wisdom, accompanied by vocabulary words relevant to his philosophical advice.

Station two demonstrates algorithms that filter any piece of music in order to make it “balanced.” The application filters the rhythm and scale of the song and outputs it harmonically in a “positive” Chinese pentatonic style. During the analysis, it also generates a painting corresponding to the cosmological theory of the five elements: metal, wood, water, fire, and earth. The user can then manipulate the elements in the painting to generate a different musical output.
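As a rough idea of what "filtering" a melody into a pentatonic scale could look like, here's a minimal sketch that snaps MIDI notes to the nearest pitch in a major pentatonic set. The snapping rule is my own illustration, not the demo's actual algorithm:

```python
# Major pentatonic pitch classes (C D E G A), the five-note scale
# family common in Chinese music.
PENTATONIC = {0, 2, 4, 7, 9}

def snap_to_pentatonic(note):
    """Return the nearest MIDI note whose pitch class belongs to
    the pentatonic scale (ties resolve to the lower note)."""
    candidates = [n for n in range(note - 6, note + 7)
                  if n % 12 in PENTATONIC]
    return min(candidates, key=lambda n: (abs(n - note), n))

melody = [60, 61, 66, 71]      # C, C#, F#, B
print([snap_to_pentatonic(n) for n in melody])   # -> [60, 60, 67, 72]
```

Out-of-scale notes like C# and F# get pulled onto in-scale neighbors, so whatever the song was, the output stays inside the "balanced" five-note palette.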

Station three is about food! Here you can measure the balance of your yin and yang intake with every meal. The user is able to input a recipe, and Confucius Computer will inform you whether each ingredient is hot, cold, or neutral according to traditional Chinese medicine.



Timothy Chrismer - Savannah College of Art and Design


I started as a Computer Science major at Texas A&M University, but soon realized I was interested in more than just the technical end of CG. I found the need to tackle both my technical and artistic sides together and transferred to the Savannah College of Art and Design. Now I'm focusing in lighting and surfacing, using my programming background to augment my artistic skills. I'll finish my studies in March of 2009 with a BFA in Game Development, and a minor in Technical Direction. http://www.timchrismer.com

Computer Animation Festival, Pixar, ILM and James Cameron

Friday - I got to see a re-showing of the entries for the Computer Animation Festival today. It's great to see such jaw-dropping visuals on a huge screen like the Nokia Theatre's. There were many shorts that I felt were noteworthy for their overall visual style and storytelling ability. Our Wonderful Nature: The Water Shrew was hilarious to watch. As was Chump and Clump. I definitely agree with the winners of the Student Prize category. The style and visual appeal of 893 was breathtaking. It was hard to look at that short and remember that it was made by students.

The Nokia Theater also hosted three Studio Nights this week. Tuesday, Pixar showed Frederic Back's The Man Who Planted Trees and had a short talk session with Back and John Lasseter at the end. Afterward, Leslie Iwerks's The Pixar Story was screened. It was really exciting to get to see the history of Pixar and John Lasseter in-person in the same night. Wednesday, Sony Pictures Imageworks hosted A Tribute to Stan Winston. Most notably, James Cameron was there to share stories of his collaboration with the iconic effects artist. He ended the night by screening the Blu-ray enhanced Terminator 2: Judgment Day. Thursday, Lucasfilm hosted the pre-premiere screening of Star Wars: The Clone Wars. Before the screening, we got a little insight into the background of the making of the film and series from John Knoll, a VFX supervisor at ILM, and Dave Filoni, director of the Clone Wars series.

This week, I got to see in person a lot of the people and technology that are the backbone of this industry and make it great. The knowledge and memories gained in these few days will stick with me forever, and I can't wait to repeat it. See you all next time.

It’s better to trust the people you collaborate with
 
Thursday - As part of an attendee-rewards program, I was one of five Student Volunteers and attendees chosen to sit down and talk with graphics research pioneer and author, Andrew Glassner. It really was an eye-opening experience. After we introduced ourselves and explained our history, we were fortunate enough to hear a little insight from him on the realm of research.

Oftentimes, we like to think of ourselves as the sole owners of our own ideas. We think that we would be better off keeping our ideas to ourselves, until they're fully-realized, and then unleashing them upon the world to be met with great respect and awe.

"That," says Glassner, "just doesn't happen."

What usually happens, instead, is that the idea festers and we can never get everything quite finished enough to be fully-realized. When that happens, what we really need, despite our huge egos and Type A personalities, is another person in the loop.

Glassner went on to say that we have a natural tendency to fear collaboration. We fear that someone whom we trust will betray us and take our ideas to advertise as their own. "There's nothing wrong with being cautious like that," he noted. "The thing to keep in mind is that even if they do steal your idea, likely they won't be able to take it as far as you could, and you can always come up with something else."

The take-away that he gave at the end of our talk was that no matter what happens, it's better to trust the people you collaborate with until they prove otherwise. Ninety-nine out of one hundred times, they'll be loyal to you and you'll be better off because of it.

It was a huge honor to get to meet Dr. Glassner and it totally made my day.

30-bits really IS visually impressive
 
Wednesday - In my previous overview of Pat's and my tour of the AMD/ATI booth, I mentioned that the new DreamColor monitor was being specially displayed as being compatible with the new FirePro line. After visiting the HP booth and reading an article on it in the August '08 issue of Computer Graphics World (CGW), I wanted to explain a bit more about the DreamColor display.

The DreamColor was created through collaboration between HP and DreamWorks Animation after DreamWorks saw the need for an affordable alternative to expensive LCD displays for their productions. They already were in a technology partnership with HP, and it eventually became the HP DreamColor Technology initiative. Their new display, the LP2480zx, adheres to industry standards for color spaces and "customers can [even] control color nuances such as gamut, gamma, white-point, black levels and luminance."

HP is advertising the LP2480zx to be marketed worldwide starting at $3,499. In my opinion, it's a great price, considering the value and capabilities. I had the chance to see DreamColor in action, and I must say it's visually impressive. I'd definitely be looking into purchasing one, if student loans weren't an issue!
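The "30-bit" in the heading means 10 bits per RGB channel - 1,024 shades per channel instead of 256 - and that's where the visible difference comes from. A quick sketch of how many distinct steps survive quantizing a subtle dark gradient at each bit depth:

```python
def quantize(value, bits):
    """Quantize a 0.0-1.0 intensity to `bits` bits per channel."""
    levels = (1 << bits) - 1
    return round(value * levels) / levels

# A subtle dark-gray gradient: with 8 bits per channel many of
# these steps collapse into the same output level (visible
# banding); with 10 bits (30-bit color) far more steps survive.
gradient = [i / 4095 for i in range(128)]
steps_8  = len({quantize(v, 8)  for v in gradient})
steps_10 = len({quantize(v, 10) for v in gradient})
print(steps_8, steps_10)   # -> 9 33
```

Nine gray levels versus thirty-three across the same dark ramp is exactly the kind of smooth-gradient difference that jumps out when you see the display in person.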

On another note, I got the chance to sit in on a session called OpenGL: What's Coming Down the Graphics Pipeline. The class was hosted by Dave Shreiner (ARM), Ed Angel (University of New Mexico ARTS Lab), Bill Licea-Kane (AMD), and Evan Hart (NVIDIA). For the most part, it covered the basics and history of the OpenGL pipeline. Even though I've studied the basics in texts before, I find there's something special to be gained from having it repeated in person.

They started us off with flowcharts and a full overview of the pipeline, covering vertex and fragment shaders, and how they fit into the big picture. We then got to hear about the underlying mathematics and theory behind working in OpenGL. Bill Licea-Kane covered the specific shader coding principles, with many examples of functions in present and previous versions of GLSL. These principles were reinforced through a few sample shaders and examples. Finally, the entire session wrapped with a look ahead to what's coming for OpenGL. On Monday, they had announced OpenGL 3.0, and they went on to cite some of its new features including sRGB framebuffer mode, API support of texture lookup for OpenGL Shading Language 1.30, conditional rendering, and floating-point color and depth formats for textures and renderbuffers.
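Of those features, sRGB framebuffer mode is the easiest to illustrate: the framebuffer applies the sRGB transfer curve automatically when a shader writes linear color, instead of the shader doing the conversion by hand. The standard curves look like this (a sketch of the math, not OpenGL API code):

```python
def linear_to_srgb(c):
    """Encode a linear-light value (0.0-1.0) with the standard
    sRGB transfer curve - the conversion an sRGB framebuffer
    performs automatically when a shader writes linear color."""
    if c <= 0.0031308:
        return 12.92 * c
    return 1.055 * c ** (1 / 2.4) - 0.055

def srgb_to_linear(c):
    """Inverse transfer curve, applied when sampling an sRGB
    texture back into linear light."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

# Mid-gray in linear light lands near 0.735 in sRGB encoding,
# and the round trip recovers the original value.
encoded = linear_to_srgb(0.5)
print(round(encoded, 3), round(srgb_to_linear(encoded), 3))   # -> 0.735 0.5
```

Doing lighting math in linear space and letting the framebuffer handle the encoding is the whole point of the feature - it keeps blending and filtering mathematically correct.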

All-in-all, this sounds very exciting! I'm very anxious to see how well this runs in conjunction with the FirePro line this fall!

Froblins and dynamically-generated LOD
 
Tuesday, the full Expo opened, and what a sight it was!  Pat and I went over to the AMD/ATI booth and met with Bill Shane, AMD’s Senior Business Development Rep.  We were given the full rundown of ATI’s new FirePro line, being demonstrated at SIGGRAPH this year.

The line, set for release this fall, offers many new additions including DisplayPort technology, FireStream (GPGPU) solutions, multi-display support, and extremely stable Vista and Linux drivers.  One of the demonstration pieces, “Froblins”, using the Radeon 4000 series, boasted dynamically generated level-of-detail, over 30,000 agents in a crowd simulation, and real-time ambient occlusion.  Finally, we were also given a peek at the result of AMD’s partnership with HP and DreamWorks: the new HP DreamColor display.  All in all, it was very informative and exciting.  I’ll give y’all some more information after I talk with HP more about the DreamColors tomorrow.

Rapid Prototypers
 
Monday - Well, it's finally started! This being my first SIGGRAPH, I wasn't prepared for exactly how huge it really is! There wasn't very much to see today, since the Expo doesn't fully open until tomorrow. I started working as a "roamer" for FJORG! (pronounced "forge"), which is a student animation competition. It's Viking-themed, so there was a lot of dressing up in fur vests and helmets and yelling, of course. It really was pretty fun, but as soon as that was over with, it was back to business. I worked my second shift down in the Studio, where people can come by appointment to print or otherwise produce their work. The rapid prototypers were really cool, producing structures that could only be expressed with the aid of digital 3D printing technology. Anyway, I ended up being extra help with a hands-on Photoshop workshop by professional Wacom artist and author Steven Burns. It was really fun to help out, but I'm excited to see more technical presentations tomorrow, when the Expo opens. I'll definitely have more information for y'all after the AMD booth tour tomorrow.