Currently reading The Pixar Touch by David A. Price, and it talks a lot about the types of software and rendering standards they had to create over the years. Incredibly interesting to look back at what they were able to achieve. A talented crew for sure.
Thanks for the recommendation. I've always been interested in the rapid rise of Pixar and the consistency in quality they've maintained for these past two decades.
Every step of the way they were scheming how to get to feature films -- everything in between sounds like a means of gathering resources (money, hardware, people, and so on).
A bit OT and I hate to be a pest... but can you guys please start offering laptops with more than 1080p displays? It's almost Kafka-esque that my phone has 77% more pixels than a $2,500 graphics workstation laptop.
I never knew anyone was building laptops with dual desktop GTX1080s, but you pay for it by having to lug around two 330W power supplies.
Since someone from System76 is on the thread, can you comment on why the System76 version uses a 1080p@120Hz screen instead of 1440p@120Hz like the Clevo?
The outward-facing goal of Universal Scene Description is to take the next step in DCC application data interchange beyond what is encodable in the ground-breaking Alembic interchange package. The chief component of that step is the ability to encode, interchange, and edit entire scenes with the ability to share variation and repeated/instanced scene data across shots and sequences, by providing robust asset-level (but not restricted to asset-level) file-referencing with sparse override capabilities. Additionally, USD provides a handful of other "composition operators" and concepts that target:
Encoding asset variation and preserving the ability to switch variants late in the pipeline
Scaling to scenes of unlimited complexity by deferring reading of heavy property data until it is needed (as in Alembic), and also by providing composition semantics that allow deferred (and reversible) loading and composition of arbitrary amounts of scene description without sacrificing the ability to perform robust dependency analysis between prims in a scene. (See the discussion of payloads for more information; a rough sketch of these operators follows below.)
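To make those operators a bit more concrete, here is a rough sketch using USD's Python bindings (the pxr module that ships with the open-source release). The file names and prim paths are invented for illustration, and the exact calls (e.g. GetPayloads vs. the older SetPayload) vary a bit across USD versions:

    from pxr import Usd, Sdf

    stage = Usd.Stage.CreateNew("shot_0010.usda")

    # Asset-level file referencing: pull a shared asset into this shot.
    mug = stage.DefinePrim("/World/Mug")
    mug.GetReferences().AddReference("assets/mug/mug.usda")

    # Sparse override: only this opinion is written into the shot layer;
    # everything else still comes from the referenced asset file.
    radius = mug.CreateAttribute("radius", Sdf.ValueTypeNames.Double)
    radius.Set(1.5)

    # Variants: switch between named variations late in the pipeline
    # without touching the asset file itself.
    look = mug.GetVariantSets().AddVariantSet("look")
    look.AddVariant("clean")
    look.AddVariant("chipped")
    look.SetVariantSelection("chipped")

    # Payloads: heavy scene description whose loading can be deferred
    # (and reversed) -- the mechanism behind "unlimited complexity".
    crowd = stage.DefinePrim("/World/Crowd")
    crowd.GetPayloads().AddPayload("assets/crowd/crowd.usda")

    stage.GetRootLayer().Save()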
The USD project also developed under a high priority inward-facing mandate to simplify and unify Pixar's binary cache-based geometry package (TidScene) with its ASCII, animation and rigging scene description and composition system (Presto core). This mandate required that the advanced rigging concepts and datatypes of Presto be layerable on top of a reduced-featureset, shared (with USD) core. Given the time and resource constraints, and necessity to not massively disrupt our in-use codebase, we chose to largely retain our existing data model, while looking to Alembic as a guide for many schema decisions along the way. While it is unlikely Pixar will attempt to expand the Alembic API and core to encompass USD composition, asset resolution, and necessary plugin mechanisms, we are committed to providing a USD "file format" plugin that allows the USD core and toolset to consume and author Alembic files as if they were native USD files (obviously writing cannot always produce an equally meaningful Alembic file because composition operators cannot be represented in Alembic).
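Assuming a USD build with that Alembic file-format plugin enabled, consuming an .abc file should look no different from opening a native USD file (file name invented):

    from pxr import Usd

    # The usdAbc plugin lets an Alembic archive be opened and traversed
    # exactly like native USD scene description.
    stage = Usd.Stage.Open("caches/fluid.abc")
    for prim in stage.Traverse():
        print(prim.GetPath(), prim.GetTypeName())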
That's exactly right. It's just a general term nowadays for embedding pre-computed information in an asset to optimize rendering of that asset. Of course, you need a renderer that understands how to use this information to properly reconstruct the scene, hence standardized formats like Alembic are useful.
Correct. In this context it means turning the geometry into frame-by-frame mesh data so that your animation isn't dependent on anything like bones, cloth simulation, muscle/skin simulation, soft- or rigid-body dynamics, inverse kinematics in the rig, expressions on the bones of the rig, etc.
Not only does it break dependencies, it also makes it easier to get that geometry into other parts of the pipeline, like effects, lighting, and even compositing.
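As a toy Python sketch of what that buys you (evaluate_rig here is a stand-in for a real rig evaluation, and the drifting triangle is fake animation data):

    import numpy as np

    def evaluate_rig(frame):
        """Stand-in for a real rig evaluation (skinning, sims, IK...)."""
        base = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
        return base + [0.0, 0.1 * frame, 0.0]  # fake animation: drift upward

    # Bake: evaluate once per frame and keep only the resulting points.
    # Downstream departments read these files; the rig, its simulations,
    # and its expressions are no longer dependencies.
    for frame in range(1, 25):
        np.save(f"mesh.{frame:04d}.npy", evaluate_rig(frame))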
Neither, and both. Let's say you have a scene with a cup of coffee spilling onto a desk...
The overall description of the scene (where the desk is, the lights and materials, etc.) could be stored in a USD scene.
Some tool would read this and generate an intermediate file in RIB format. This would be what a RenderMan renderer actually reads.
For the fluid sim itself, the generated RIB file might contain a reference to a plugin which points to the baked Alembic data, which would be a directory with one Alembic file per frame of mesh data representing the fluid surface.
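A hypothetical sketch of that hand-off in Python; the per-frame file layout and the procedural's DSO name and arguments are invented, though "DynamicLoad" itself is a standard RIB procedural:

    import os

    os.makedirs("rib", exist_ok=True)
    # For each frame, emit a RIB snippet whose procedural points at that
    # frame's baked Alembic mesh (paths and plugin name are made up).
    for frame in range(1, 241):
        abc = f"sim/fluid/fluid.{frame:04d}.abc"
        rib = (f'FrameBegin {frame}\n'
               f'  Procedural "DynamicLoad" ["abcProcedural.so" "{abc}"] '
               f'[-10 10 -10 10 -10 10]\n'
               f'FrameEnd\n')
        with open(f"rib/fluid.{frame:04d}.rib", "w") as out:
            out.write(rib)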
The basic answer is that it uses neither file format, but what it does use has attributes similar to both.
RenderMan uses RIB files for its scene description and RIB files contain geometry. This means that one RIB file would typically contain a frame of data and source the shaders and textures from separate file paths.
Baking, in this context, means producing a stream of vertices where each frame carries geometry information and nothing else.
If the geometry weren't baked, you would instead have a separate description of the geometry (and everything else) along with keyframes, which are left to the client to interpolate.
Think of it as HTML versus a screenshot. Both carry the same information, but for the screenshot you don't need anything apart from an image viewer. For HTML you need a parser, layout engine, renderer, and so on... a poor analogy, but hey.
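In toy Python terms (one animated point, invented numbers), the difference looks like this:

    import numpy as np

    # Baked: every frame's positions are stored explicitly; the client
    # just looks them up, no evaluation required.
    baked = [np.array([0.0, 0.1 * f, 0.0]) for f in range(24)]
    p = baked[12]                                # frame 12: a lookup

    # Keyframed: only sparse keys are stored; the client interpolates.
    key_frames = np.array([0.0, 23.0])
    key_values = np.array([0.0, 2.3])
    p_y = np.interp(12, key_frames, key_values)  # frame 12: computed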
> Could you explain what "baking" means in computer graphics?
It means that the code/configuration used to produce a particular result (e.g. an animation sequence) isn't accessible anymore, but the result is.
For example, Houdini (http://www.sidefx.com) lets you procedurally animate 3D geometry, using a collection of nodes, scripts, etc. This is very useful for authoring, but is extremely heavy when all you want to do is render the resulting animation.
So, you "bake" the animation to a file (Alembic is a popular option today, which Houdini supports) and then you use the baked file during your rendering. None of the code and computation that produced the animation need to be available to the renderer.
Put another way: "baking" is the CG term for "memoizing" the result of a computational process.
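A minimal sketch of that idea in plain Python, where simulate is a stand-in for the expensive node network:

    import os, pickle

    def simulate(frame):
        """Stand-in for an expensive procedural computation."""
        return sum(i * frame for i in range(10**6))

    def baked_simulate(frame, cache_dir="bake"):
        """Memoize simulate() to disk: compute once, reread thereafter.
        The renderer only ever needs the baked files."""
        os.makedirs(cache_dir, exist_ok=True)
        path = os.path.join(cache_dir, f"{frame:04d}.pkl")
        if not os.path.exists(path):
            with open(path, "wb") as f:
                pickle.dump(simulate(frame), f)
        with open(path, "rb") as f:
            return pickle.load(f)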
An animated scene is broken down into frames. In each frame there is usually geometry that is moving. Each piece of geometry is a collection of polygonal data that has (xyz) points as its basic primitive.
It is often convenient, and sometimes necessary, to store point data in an offline file for each frame of the animation. That point data is called a bake, or a point cache.
For instance, if you run a simulation that takes 4 hours to solve, you would absolutely want to bake out each frame of the point geometry. Then you can load that data into RAM and play back the simulation inside a modeling/animation program in real time.
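Sketched in Python with invented file names (and random points standing in for the 4-hour solve):

    import os
    import numpy as np

    # Stand-in for the expensive solve: bake one point cloud per frame.
    os.makedirs("sim", exist_ok=True)
    for f in range(1, 101):
        np.save(f"sim/points.{f:04d}.npy", np.random.rand(1000, 3))

    # Playback: load everything into RAM once; scrubbing frames is then
    # just a list lookup, with no solver in the loop.
    frames = [np.load(f"sim/points.{f:04d}.npy") for f in range(1, 101)]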
Specifically, USD is a:
* scene descriptor
* geometry caching format
* GL Scene Graph viewer
though each of those can exist independently of the others, and it does support Alembic as a geometry backend.
I believe they didn't build on Alembic because the geometry heritage of USD predates Alembic becoming a standard, and they added features like overrides, etc.
Though interestingly, Blizzard just added support for Alembic layers that can describe overrides as well, so it's definitely an odd situation between the two.
I believe USD with an Alembic backend will be the eventual outcome.
I noticed that their coding style consistently uses the spelled-out Boolean operator keywords "not", "and", "or". Can't think of any other C++ codebases that use this -- can anyone else?
>> Universal Scene Description (USD) is an efficient, scalable system for authoring, reading, and streaming time-sampled scene description for interchange between graphics applications.
What does this mean for the layman? I got a 404 for the 'getting help with USD' link.
I am going to hazard a guess, as nobody else responded. They have a lot of different tools for the different aspects (animation, mocap, shaders, texture painting, modelling, rigging, final render, etc.), and USD allows them to quickly go back and forth between the different stages on as-yet-incomplete scenes: previewing final renders, and making fine adjustments further up and down the pipeline to accommodate changes made elsewhere.
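As a rough sketch of how that "adjust without touching upstream work" can look in USD's Python API (the paths and the attribute name are invented): opinions go into a separate layer, here the session layer, so the files authored by other departments are never modified.

    from pxr import Usd

    # Open a shot assembled by upstream departments.
    stage = Usd.Stage.Open("shots/sq100/s0010.usda")

    # Route edits to the session layer: the override lives there alone,
    # and nothing upstream is rewritten.
    stage.SetEditTarget(stage.GetSessionLayer())

    key_light = stage.GetPrimAtPath("/World/Lights/KeyLight")
    key_light.GetAttribute("intensity").Set(2.0)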
I'm not sure anyone really uses DAE at all if they can help it. Collada is an atrocious format, and everyone switched to Alembic or FBX long ago.
For those interested: https://www.goodreads.com/book/show/2632830-the-pixar-touch