Pixar Universal Scene Description (github.com/pixaranimationstudios)
181 points by aethertap on Sept 22, 2016 | 35 comments


Currently reading The Pixar Touch by David A. Price; it talks a lot about the types of software and rendering standards they had to create over the years. Incredibly interesting to look back at what they were able to achieve. A talented crew, for sure.

For those interested: https://www.goodreads.com/book/show/2632830-the-pixar-touch


Thanks for the recommendation. I've always been interested in the rapid rise of Pixar and the consistency in quality they've maintained for these past two decades.


30 years! Although they started out in hardware... talk about a pivot.


Every step of the way they were scheming about how to get to feature films -- everything in between sounds like a means to acquire resources (money, hardware, people, and so forth).


I just finished Creativity, Inc. and it was a pretty good look at how Pixar evolved. Also highly recommended.


There was a demo of some USD-related software at SIGGRAPH, for context: https://www.youtube.com/watch?v=JmH4KYcmHOo


Hey! That's our company's flagship laptop in the wild! https://system76.com/laptops/bonobo


A bit OT and I hate to be a pest... but can you guys please start offering laptops with more than 1080p displays? It's almost Kafka-esque that my phone has 77% more pixels than a $2,500 graphics workstation laptop.


Your company? Do you work for Clevo in China? Because this is a Clevo P870DM3-G, a.k.a. the Sager NP9873.


It looks exactly the same but with a different logo. However, the speaker in the video does seem to have the System76 (Bonobo) logo on his laptop.

Clevo: http://www.gentechpc.com/Sager-NP9873-S-Clevo-P870DM3-G-nVid...

Bonobo: https://system76.com/laptops/bonobo


I never knew anyone was building laptops with dual desktop GTX1080s, but you pay for it by having to lug around two 330W power supplies.

Since someone from System76 is on the thread, can you comment on why the System76 version uses a 1080p@120Hz screen instead of 1440p@120Hz like the Clevo?


Just why. If you want to work on the go, clean up your damn inbox!


Because it's cheaper; they're selling the lowest possible Clevo configuration in order to reach the lowest starting price.


It's interesting that they didn't just collaborate on alembic.io, despite being sister companies.


From USD docs: ( http://graphics.pixar.com/usd/docs/api/_usd__overview_and_pu... )

USD: What's the Point, and Why Isn't it Alembic?

The outward-facing goal of Universal Scene Description is to take the next step in DCC application data interchange beyond what is encodable in the ground-breaking Alembic interchange package. The chief component of that step is the ability to encode, interchange, and edit entire scenes with the ability to share variation and repeated/instanced scene data across shots and sequences, by providing robust asset-level (but not restricted to asset-level) file-referencing with sparse override capabilities. Additionally, USD provides a handful of other "composition operators" and concepts that target:

- Encoding asset variation and preserving the ability to switch variants late in the pipeline

- Scale to scenes of unlimited complexity by deferring reading of heavy property data until it is needed (as in Alembic), and also by providing composition semantics that allow deferred (and reversible) loading and composition of arbitrary amounts of scene description without sacrificing the ability to perform robust dependency analysis between prims in a scene. (See the discussion of payloads for more information).

The USD project also developed under a high priority inward-facing mandate to simplify and unify Pixar's binary cache-based geometry package (TidScene) with its ASCII, animation and rigging scene description and composition system (Presto core). This mandate required that the advanced rigging concepts and datatypes of Presto be layerable on top of a reduced-featureset, shared (with USD) core. Given the time and resource constraints, and necessity to not massively disrupt our in-use codebase, we chose to largely retain our existing data model, while looking to Alembic as a guide for many schema decisions along the way.

While it is unlikely Pixar will attempt to expand the Alembic API and core to encompass USD composition, asset resolution, and necessary plugin mechanisms, we are committed to providing a USD "file format" plugin that allows the USD core and toolset to consume and author Alembic files as if they were native USD files (obviously writing cannot always produce an equally meaningful Alembic file because composition operators cannot be represented in Alembic).
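
The "file-referencing with sparse overrides" idea reads roughly like this in USD's Python API; a minimal sketch, with the file names and prim paths invented for illustration:

    # A shot-level layer references a published asset instead of copying it;
    # only the opinions authored here (one translate) live in the shot file.
    from pxr import Usd, UsdGeom, Gf

    stage = Usd.Stage.CreateNew("shot_010.usda")                 # invented name
    teapot = stage.DefinePrim("/World/Teapot", "Xform")
    teapot.GetReferences().AddReference("assets/teapot.usda")    # invented asset

    # Sparse override: everything not authored here still comes from the asset.
    UsdGeom.XformCommonAPI(teapot).SetTranslate(Gf.Vec3d(0, 5, 0))

    stage.GetRootLayer().Save()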


Alembic is a baking format. Slightly different use cases.

USD has support for Alembic baked nodes: https://github.com/PixarAnimationStudios/USD/tree/2eb01f5cd4...


Could you explain what "baking" means in computer graphics?

I spent a few minutes googling, and all I've learned so far is that:

- there isn't "baking", there's "texture baking", "light baking", and many other "<insert noun here> baking"

- it seems to mean any kind of precomputing stuff

- apparently the term was coined by Pixar in some paper


That's exactly right. It's just a general term nowadays for embedding pre-computed information into an asset for the purposes of optimizing the rendering of that asset. Of course, you need a renderer capable of understanding how to make use of this information to properly reconstruct the scene, hence standardization like the Alembic format is useful.


> precomputing stuff

Correct. In this context it means turning the geometry into frame-by-frame mesh data so that your animation isn't dependent on anything like bones, cloth simulation, muscle/skin simulation, soft or rigid body dynamics, inverse kinematics in the rig, expressions on the bones of the rig, etc.

Not only does it break dependencies, but it also makes it easier to get that geometry into another part of the pipeline, like effects, lighting, and even compositing.
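
A toy sketch of that idea (the "rig" here is just a stand-in function, and the file names are made up): evaluate the expensive thing once per frame and write out nothing but points.

    import json, math

    def evaluate_rig(frame):
        # Stand-in for an expensive rig/sim solve; returns raw vertex positions.
        t = frame / 24.0
        return [(i, math.sin(t + i), 0.0) for i in range(4)]

    for frame in range(1, 25):
        points = evaluate_rig(frame)                    # heavy step, done once
        with open("char.%04d.json" % frame, "w") as f:  # one cache file per frame
            json.dump(points, f)

Downstream tools read char.*.json and never need to know about bones, cloth, or expressions.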


So does a renderer such as RenderMan use a baked format like Alembic or a format like USD?


Neither, and both. Let's say you have a scene with a cup of coffee spilling onto a desk...

The overall description of the scene, where the desk is, the lights and materials etc, could be stored in a USD scene.

Some tool would read this and generate an intermediate file in RIB format. This would be what a RenderMan renderer actually reads.

For the fluid sim itself, the generated RIB file might contain a reference to a plugin which points to the baked Alembic data, which would be a directory with one Alembic file per frame of mesh data representing the fluid surface.
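
As a sketch of what that intermediate step might emit (the plugin name and paths are invented; Procedural "DynamicLoad" itself is standard RI), a tiny generator could write something like:

    # Emit a per-frame RIB fragment that defers the fluid geometry to a
    # procedural plugin reading the baked Alembic cache.
    def write_fluid_rib(frame, bounds, out_path):
        xmin, xmax, ymin, ymax, zmin, zmax = bounds
        with open(out_path, "w") as rib:
            rib.write('AttributeBegin\n')
            rib.write('  Attribute "identifier" "string name" ["coffee_fluid"]\n')
            rib.write('  Procedural "DynamicLoad" '
                      '["readAlembic.so" "fluid_cache/fluid.%04d.abc"] '
                      '[%g %g %g %g %g %g]\n'
                      % (frame, xmin, xmax, ymin, ymax, zmin, zmax))
            rib.write('AttributeEnd\n')

    write_fluid_rib(42, (-1, 1, 0, 2, -1, 1), "fluid.0042.rib")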


The basic answer is that it uses neither file format, but what it does use has similar attributes to both.

RenderMan uses RIB files for its scene description and RIB files contain geometry. This means that one RIB file would typically contain a frame of data and source the shaders and textures from separate file paths.


Baking, in this context, produces a stream of vertices where each frame carries geometry information and nothing else.

If the geometry weren't baked, you would instead have a separate description of the geometry (and everything else) plus keyframes, which are left to the client to interpolate.

Think of it as HTML versus a screenshot. Both carry the same information, but for the screenshot you don't need anything apart from an image viewer; for the HTML you need a parser, layout, rendering, whatever... a poor analogy, but hey.
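
A toy illustration of that difference (numbers invented): with keyframes the client interpolates on demand, with a bake every frame's value is already explicit.

    # Keyframed: a sparse description that the client must interpolate itself.
    keys = {1: 0.0, 24: 10.0}                      # frame -> value

    def sample_keyframed(frame, keys):
        (f0, v0), (f1, v1) = sorted(keys.items())  # assume exactly two keys
        t = (frame - f0) / float(f1 - f0)
        return v0 + t * (v1 - v0)                  # linear interpolation

    # Baked: one explicit value per frame; playback is just a lookup.
    baked = [sample_keyframed(f, keys) for f in range(1, 25)]

    print(sample_keyframed(12, keys))              # computed on demand
    print(baked[11])                               # same value, precomputed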

USD is a next-generation RIB, like OSL (done at Sony) is a next-generation RSL. Both RIB and RSL stem from the RISpec https://en.wikipedia.org/wiki/RenderMan_Interface_Specificat... which is a good intro to see what's being talked about here.


> Could you explain what "baking" means in computer graphics?

It means that the code/configuration used to produce a particular result (e.g. an animation sequence) isn't accessible anymore, but the result is.

For example, Houdini (http://www.sidefx.com) lets you procedurally animate 3D geometry, using a collection of nodes, scripts, etc. This is very useful for authoring, but is extremely heavy when all you want to do is render the resulting animation.

So, you "bake" the animation to a file (Alembic is a popular option today, which Houdini supports) and then you use the baked file during your rendering. None of the code and computation that produced the animation need to be available to the renderer.

Put another way: "baking" is the CG term for "memoizing" the result of a computational process.
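
In that spirit, a minimal disk-memoization sketch (the helper and file names are invented, not any particular package's API):

    import json, os

    def baked(path, compute):
        # Run the expensive computation once, then always read the cached file.
        if not os.path.exists(path):
            with open(path, "w") as f:
                json.dump(compute(), f)
        with open(path) as f:
            return json.load(f)

    # e.g. points = baked("sim_frame_0042.json", lambda: run_fluid_sim(42))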


An animated scene is broken down into frames. In each frame there is usually geometry that is moving. Each piece of geometry is a collection of polygonal data that has (xyz) points as its basic primitive.

It is often convenient, and sometimes necessary, to store point data in an offline file for each frame of the animation. That point data is called a bake, or a point cache.

For instance, if you do a simulation that takes 4 hours to solve, you would absolutely want to bake out each frame of the point geometry. Then you can load that data into RAM and play back the simulation inside a modeling/animation program in real time.

There are many other uses for geometry baking.


> - it seems to mean any kind of precomputing stuff

Yes, it's a fancy word for that.

But when you hear that word (rather than 'precompute' or 'compile') it implies a few things.

Usually it means precomputing to a file rather than memory.

Mostly, but not always, baking implies generating a large amount of data from a smaller amount.

Typically the source description is for a whole sequence or animation, but the baked data will be a distinct set of data for each frame.


Specifically, USD is a:

- scene descriptor

- geometry caching format

- GL scene graph viewer

though each of those can exist independently of each other, and it does support Alembic as a geometry backend.

I believe they didn't support Alembic because the geometry heritage of USD predates Alembic being a standard, and they added features like overrides, etc.

Though interestingly, Blizzard just added support for Alembic layers that would describe overrides as well, so it's definitely an odd situation between the two.

I believe USD with an Alembic backend will be the eventual outcome.
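
For what it's worth, the Alembic file-format plugin mentioned in the USD docs means an .abc can already be opened as a stage, assuming USD was built with that plugin; a small sketch (file name invented):

    from pxr import Usd

    # With the Alembic plugin enabled, an .abc cache is traversed like native USD.
    stage = Usd.Stage.Open("fluid_cache/fluid.0042.abc")
    for prim in stage.Traverse():
        print(prim.GetPath(), prim.GetTypeName())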


I noticed that their coding style consistently uses the spelled-out Boolean operator keywords "not", "and", "or". Can't think of any other C++ codebases that use this -- can anyone else?


> Universal Scene Description (USD) is an efficient, scalable system for authoring, reading, and streaming time-sampled scene description for interchange between graphics applications.

What does this mean for the layman? I got a 404 for the 'getting help with USD' link.


I am going to hazard a guess, as nobody else has responded. They have a lot of different tools for different aspects (animation, mocap, shaders, texture painting, modelling, rigging, final render, etc.), and USD allows them to quickly go back and forth between the different stages on as-yet incomplete scenes to preview final renders and make fine adjustments further up and down the pipeline to accommodate changes made elsewhere.


How does this compare with DAE, which seems to be the existing standard for this?


I'm not sure anyone really uses DAE at all if they can help it. Collada is an atrocious format, and everyone switched to Alembic or FBX long ago.


Well, DAE is about the most poorly specified file format on the planet... so I'm going to guess favourably ;)

An interesting format that is not very widely used (hopefully it will be one day) is OpenGEX. Here's a comparison of Collada vs. OpenGEX:

http://opengex.org/comparison.html


Is this a scene graph library? Does it support animation, or is each frame just a still 3D scene?


It is a scene graph and scene descriptor, and an (optional) geometry baking format, along with an AZDO GL viewer.



