Understanding UsdSkel and Character Interoperability at Pixar
Overview
In this session, Paul Kanyuk from Pixar discusses UsdSkel (USD's skeletal schemas) and character interoperability in animation production. He provides insights into Pixar's transition from its traditional animation systems to USD, the challenges faced along the way, and the future of character interoperability across software environments.
Key Points
- Introduction to UsdSkel: Paul introduces UsdSkel, explaining its role in character interoperability and how it integrates into Pixar's production pipeline.
- Background: Paul shares his journey at Pixar, highlighting his experience in crowd simulation and research related to UsdSkel.
- Transition from Marionette to Presto: The shift from the proprietary animation software Marionette to Presto orphaned Pixar's crowd pipeline, a gap that geometry caching, and eventually UsdSkel, helped address.
- Character Interoperability: The need for interoperability arises from various production requirements, including crowd simulation, virtual production, and collaboration with external vendors.
- Open-Source Example: Paul showcases an open-source character asset from Toy Story 4, demonstrating how UsdSkel behaves across different software environments.
- Challenges with Blend Shapes: The session discusses the inconsistent blend shape support across software packages and the importance of standardization.
- Future of UsdSkel: Paul touches on potential advancements, including the use of machine learning for more complex rigs and the ongoing development of OpenExec.
FAQs

What is UsdSkel?
UsdSkel is a standardized way to represent character rigs in USD, focusing on linear blend skinning and blend shapes to ensure interoperability across different software.

How does Pixar use UsdSkel in its production pipeline?
Pixar uses UsdSkel to facilitate character interoperability, allowing characters to be brought into various software environments and enhancing crowd simulation capabilities.

What challenges does Pixar face with character interoperability?
Challenges include inconsistent support for blend shapes across software, the complexity of Pixar's proprietary rigs, and the need for effective communication between different animation tools.

Can UsdSkel be used in real-time rendering for games?
While primarily used in animation production, UsdSkel is being explored for real-time rendering in virtual production environments, though its integration into game engines is still developing.

What is the significance of the open-source example shared by Paul?
The open-source example serves as a practical demonstration of UsdSkel's capabilities, allowing users to test and understand its behavior in various software environments.

What future developments are expected for UsdSkel?
Future developments may include enhanced support for advanced rigging techniques, machine learning applications, and improved interoperability standards across the animation industry.

How can I access the open-source UsdSkel example?
The open-source example can be downloaded from Pixar's OpenUSD website, under the downloads and assets section.
All right, so today we have Paul from Pixar. There's definitely a lot of excitement around Pixar's contribution to USD and all the animation work they've done in the past, so this is a session a lot of members were looking forward to. Paul, feel free to take over, make your introduction, and then get into your session content.

Hi everyone, my name is Paul Kanyuk, and I'm going to be talking about UsdSkel
and character interoperability, and how that applies at Pixar. A little bit of an overview first: I'll go over the backstory of why I'm talking to you about this, an overview of what UsdSkel is, a tour of the open-source example that Pixar provides, as well as how UsdSkel plays a part in our production pipeline, how it works for character interop outside of our typical pipeline, and work for the future.
So, a little on my background: I got started at Pixar in 2004 as an intern, in shading and render optimization, but around 2006, on Ratatouille, I started getting involved in crowd simulation, and that became my specialty. Starting on Brave in 2012 I began leading crowd teams, and I think I just finished my sixth or seventh show leading a crowd team, on Elemental. Between doing crowds I also do research and development, including work with UsdSkel — helping studios like Disney Animation get started with it — as well as a lot of engagement with Disney Research. So that's a bit of my background. So where does UsdSkel
come into that whole list of things I've worked on? Well, the story goes back to Pixar's crowd pipeline circa 2006–2010. Like many studios, we were doing agent-based crowd simulation in Massive, and we had a pipeline connecting Massive to our proprietary animation software, Marionette. Marionette did everything from rigging all the way through lighting, and then we rendered in RenderMan. Our crowd pipeline would basically ingest rigs from Marionette and output animation back into Marionette, and that was working great on a number of films: Ratatouille, WALL-E, Up, and Cars 2. But then we had a pretty big pipeline upheaval, where
we switched over from Marionette to a new system we called Presto, and that's actually what we're using today — Pixar's modern, multi-threaded, high-performance animation environment. And that broke things up a bit: you'll notice that Marionette was still around, connected by this geocache line, but the crowd pipeline was orphaned — Massive no longer connected to Presto — and as a result we didn't really have anything to work with for crowds on the film Brave. That said, note how we still had this way to get from Presto to Marionette via geometry caching, and that
proved critical, both for crowds and for the wider CG ecosystem, because that geometry cache format — which we called TidScene, the "tid" for time-indexed dictionary — was actually the predecessor to Universal Scene Description. Essentially, the combination of that fast geocache format with the composition features we had in Presto ultimately became USD. That's actually what Pixar's crowd pipeline was really built on for a number of years: just a fancy way to sequence geometry caches together, without even caring about the character's rig, because we'd simply lost our crowd pipeline, so we were working with what we had. Now, to some of you that might sound a little
surprising: why were we working with geometry caches? Why not skinned skeletons like everybody else in the industry? Well, one of the reasons is that in Presto our rigs are very complicated, and to an extent they're actually black boxes, not terribly skeletal at all. You can think of it as animation controls going in — there are well over a thousand, even if maybe only 500 are used often — and you can consider the rig itself a black box. If you peek inside, Presto expresses its rigs as a node graph of deformers and weight objects, with things called execution markers that allow there to be cycles in the node graph, which is really mind-bending. Suffice to say there's something skeletal in there, but it's not really the way we rig, and at the end of the day it's posing geometry — so we kind of have to learn a skeleton from that black box to even work this way. Now,
there are a lot of reasons to actually turn our rigs into a skeleton that's understood by other pieces of software, because we'd like to get our characters out of Presto. I'm in crowd simulation, and we want to use crowd simulators like Houdini, Massive, or Golaem. There's also a lot of interest in virtual production — getting our characters onto a mocap stage, or into an environment for previs — or in working with outside vendors to get Pixar characters into games, VR, or promotional content. So there are a whole lot of reasons for character interoperability. We'd also love to get outside animation into Presto: again with crowd simulators, you want to get data out and then back into Presto; there's also interest in using mocap for crowd motion, as well as more advanced techniques like motion controllers and machine learning to potentially direct characters. For all of that we need some concept of interoperability with our highly complex rigs. So that's why I'm talking about character interop — that's how we got to this topic. Now let's talk
about UsdSkel. We ultimately did make the transition to skeletons, and we added skeletal features to USD. Initially we did our own implementation, and then, through a collaboration with Apple and other folks, we ended up creating the public-facing UsdSkel API, which is what we use now. This is the link — here's a blurb on what it is — but suffice to say UsdSkel is kind of a lowest-common-denominator way to represent rigs. Maybe not the absolute lowest, but pretty low: essentially it's a rig consisting of linear blend skinning and blend shapes, standard things like bind transforms, and an API to access them. It doesn't sound all that exciting if I put it that way, but even getting a lowest common denominator to be truly interoperable has been far more challenging and interesting than I expected. When I introduce the open-source example, I'll show you some of the ways things that seem easy can break down or fail in various pieces of software. But that's the idea: it's sort of a lingua franca. It's not trying to be the most advanced rigging system; it's something that hopefully every piece of software can understand as a starting point. So here's just a little example —
you can look at this in more detail on the web page — but the idea is that USD can be either binary or ASCII, and if you look at the ASCII, it makes sense. You can define a skeleton: joints are just a list of tokens, and bind transforms and rest transforms are lists of matrices. An animation is described as a list of joints the animation applies to, along with translations, rotations (stored as quaternions), and scales. It doesn't look like much; that's pretty much the standard stuff you'd expect for skinned characters. The mesh itself needs to know which skeleton it belongs to, so there's a relationship called skel:skeleton that points to the skeleton, and then you have primvars for the joint indices and joint weights, as well as a geom bind transform that represents where the mesh was when it was bound to the skeleton. So, pretty standard stuff. Blend shapes get a little more complicated, but not much: at the end of the day it's pretty much what Maya has — both regular blend shapes and in-between blend shapes with weights between them. We pretty much adopted Maya's blend shape format, and it's expressed in USD.
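The pieces just described — skeleton, animation, and mesh binding — can be sketched in a tiny hand-written `.usda` layer. This is an illustrative fragment following the UsdSkel schema, not the open-source asset itself; all prim names and values here are made up:

```usda
#usda 1.0

def SkelRoot "Model"
{
    def Skeleton "Skel" (
        prepend apiSchemas = ["SkelBindingAPI"]
    )
    {
        uniform token[] joints = ["Hips", "Hips/Spine"]
        uniform matrix4d[] bindTransforms = [
            ((1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)),
            ((1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,5,1))
        ]
        uniform matrix4d[] restTransforms = [
            ((1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)),
            ((1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,5,1))
        ]
        rel skel:animationSource = </Model/Skel/Anim>

        def SkelAnimation "Anim"
        {
            uniform token[] joints = ["Hips", "Hips/Spine"]
            float3[] translations.timeSamples = { 1: [(0,0,0), (0,0,5)] }
            quatf[] rotations.timeSamples = { 1: [(1,0,0,0), (1,0,0,0)] }
            half3[] scales.timeSamples = { 1: [(1,1,1), (1,1,1)] }
        }
    }

    def Mesh "Body" (
        prepend apiSchemas = ["SkelBindingAPI"]
    )
    {
        rel skel:skeleton = </Model/Skel>
        int[] primvars:skel:jointIndices = [0, 0, 1, 1] (interpolation = "vertex")
        float[] primvars:skel:jointWeights = [1, 1, 1, 1] (interpolation = "vertex")
        matrix4d primvars:skel:geomBindTransform = ((1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1))
        point3f[] points = [(0,0,0), (1,0,0), (0,0,5), (1,0,5)]
        int[] faceVertexCounts = [4]
        int[] faceVertexIndices = [0, 1, 3, 2]
    }
}
```

The structure mirrors what's described above: the joints token list, matrix arrays for bind and rest poses, a SkelAnimation with translation/rotation/scale samples, and per-vertex skel primvars binding the mesh to the skeleton.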
Now, if you want to give it a try, the first thing I recommend is going to Pixar's web page: click the OpenUSD link, then the downloads and videos link, and under assets you'll see the UsdSkel examples. For anyone who wants to test UsdSkel, this is what I recommend starting with. Believe it or not, this is actually my contribution to UsdSkel — I didn't work much on the API, beyond helping specify rather than write it — but I was asked to put together an example, so I took a Toy Story 4 asset. This is a character named Camila, a background character from Toy Story 4 who was deemed suitably generic to be open-sourced and made available to
everyone. So let's take a peek at what's inside this UsdSkel asset. USD comes with a viewer called usdview, which is what I've used to open the asset; I'm making the guides visible, unhiding her geometry, and showing the skeleton that's under the hood. Pretty standard — it's got fingers and eyes. If you zoom into the face, you'll see there's some decent facial articulation — eyes closing and opening, mouth opening and closing — done with blend shapes. But there are also variant sets, and here I'm switching the rig variant from "full" to "face bones". This is a version where the face is actually done with joints, so this open-source example exercises both joint-based facial animation and blend-shape-based facial animation on the same rig. There's
also an orthogonal variant set showing essentially no facial animation — so there are three rig variants — plus an orthogonal geometric LOD variant set with essentially decimated versions of the character, which should work with every rig variant. So in addition to being a single character, this asset uses USD's variant features to pack essentially multiple versions of the character into one. Bonus points for any software that can import and maintain those variant sets; clearly it's up to each software vendor what they want to support in USD — we just wanted to make sure this feature was exercised in the open-source example. With that, let's take a tour of how this open-source example shows up in various software
environments. Oh — but before we do that, sorry, I just want to explain a little why this example is useful. Even though it seems simple, there are actually a bunch of edge cases hiding in it that can tease apart the different ways software systems import characters. One is blend shapes: I've been very surprised how inconsistent blend shape support is across various DCCs, and I think making sure all software packages can use these blend shapes would help get us all standardized. For what it's worth, there are about 700 or 800 blend shapes on this character — created programmatically, which I'll talk about later — so it definitely also exercises blend shapes at scale. Another is joints with unusual transforms: some systems don't support negative scaling in joints, and this asset very much has it, because Pixar's rigs use a negative X scale for symmetry — basically reflecting one side of the body to the other — and that's baked into all the transforms. We've noticed some packages just don't like that, and we hope support for it becomes standard, because a lot of people rig with negative scales.
Another thing is non-identity bind transforms; I've noticed not all software supports those. One of the first things to look at when you import this character: check her shoes. If the shoes are not on her feet, you're not supporting bind transforms correctly — so that's another fun test case. And then, as I mentioned before, variants: because there are both rigging and geometry variants, this is a test case for packages that want to support that kind of variation. So even though it's a simple example — and arguably a very simple standard — it still packs quite a bit of complexity and can take a lot of work to import. It's definitely a good starting point, and I recommend folks start with this asset. So let's see how various people have done.
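To make the shoe test concrete, here is the linear blend skinning math an importer has to implement, sketched in plain Python. These are my own toy helpers, not the UsdSkel API; matrices here act on column vectors, and the geom bind transform moves the mesh into the space it occupied when it was bound:

```python
# Minimal linear blend skinning (LBS) sketch in the spirit of UsdSkel:
#   p' = sum_i w_i * (jointSkelTransform_i @ inverse(bind_i)) @ geomBind @ p
# 4x4 matrices are row-major nested lists acting on column vectors; points
# are (x, y, z) with an implicit w = 1. Names are illustrative only.

def mat_mul(a, b):
    """4x4 matrix product a @ b."""
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def xform_point(m, p):
    """Apply a 4x4 affine matrix to a 3D point."""
    x, y, z = p
    return tuple(m[r][0]*x + m[r][1]*y + m[r][2]*z + m[r][3] for r in range(3))

def translate(tx, ty, tz):
    return [[1,0,0,tx], [0,1,0,ty], [0,0,1,tz], [0,0,0,1]]

IDENTITY = translate(0, 0, 0)

def skin_point(p, weights, indices, skel_xforms, inv_bind_xforms,
               geom_bind=IDENTITY):
    """Linear blend skinning of one point.

    skel_xforms[i]    : joint i's current skeleton-space transform
    inv_bind_xforms[i]: inverse of joint i's bind transform
    geom_bind         : where the mesh lived when bound (the 'shoes on
                        the feet' transform importers often drop)
    """
    p_bind = xform_point(geom_bind, p)   # move mesh into bind-pose space
    out = (0.0, 0.0, 0.0)
    for w, j in zip(weights, indices):
        m = mat_mul(skel_xforms[j], inv_bind_xforms[j])
        q = xform_point(m, p_bind)
        out = tuple(o + w * c for o, c in zip(out, q))
    return out

# Two joints; joint 1 was bound 5 units up in z and has since moved 2 in x.
inv_binds = [IDENTITY, translate(0, 0, -5)]   # inverse of bind at z = +5
skel_now  = [IDENTITY, translate(2, 0, 5)]    # joint 1 translated +2 in x
p = skin_point((0, 0, 5), weights=[0.5, 0.5], indices=[0, 1],
               skel_xforms=skel_now, inv_bind_xforms=inv_binds)
print(p)  # (1.0, 0.0, 5.0): the point follows joint 1 halfway
```

Dropping `geom_bind` or the inverse bind matrices shifts every skinned point, which is exactly the "shoes floating off the feet" symptom described above.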
On macOS, if you right-click and open this in Preview, you will actually see — hooray — the example playing back, including the blend shapes. That's pretty impressive. The reason it works as well as it does is that in newer versions of macOS, I believe, this is actually rendered with Hydra and the Metal Storm renderer, so it's displayed via the USD API directly, without going through any other package. That's not true in every macOS application, but in the Preview app that's how it works. Let's take a look at Autodesk
Maya: I believe every version since Maya 2021 or 2022 supports USD import, so here I am browsing for the file and hitting import. One thing you'll notice: Pixar is a Z-up studio, so you're going to have to do a negative-90-degree rotation around the X axis for this example. Once you do that, you can see the animation plays back — but her eyes are closed. What that means is that Autodesk has not yet implemented USD blend shape import. I've been told it's on the list of things to do, but it's still to be done. I still think this open-source example is a good thing for folks to test, because it will help get more support for these features — but Maya has still been pretty useful so far for passing our assets along. Here's an example in Houdini.
Whoops — sorry, now it's playing back. OK, so in Houdini, since I think version 18, there's been this LOP context for layer operators — essentially a context for operating on USD. In the LOP context, if you create a File node, point it at a UsdSkel file, and scrub around, you can see we've got both linear blend skinning and blend shapes working correctly — and this is also using Hydra to draw directly into the viewport. If you want to bring the character into SOPs, the surface operators of Houdini, you have to use the USD Character Import SOP pointed at the LOP node. There are three outputs from it, and if you wire them up to a Joint Deform node — we're now in Houdini's KineFX character system — you now have linear blend skinning, but no blend shapes yet. So, partial support for blend shapes in Houdini, but not quite all the way working with SOPs. Let's take a
look at Unreal 5.2. I checked this out recently and was very happy to see that when I dragged the character into Unreal after loading the plugin, and just did a rotation, I hit play and got both blend shapes and linear blend skinning playing back just great. So props to Epic Games — they did a great job getting that working pretty well. Now, Unity
also has USD support — it's a preview package — and I gave it a try. There are a few more steps to using Unity, I find: you have to load the payload, which then creates a bunch of assets. You can see the texture showed up, but the display color on the hair didn't — so that's another thing being exercised: this asset has a mix of textures and vertex color, which not everyone supports. Then in Unity you basically use the Timeline editor: you drag the USD game objects into the Timeline, point that node back at the game object, and when you scrub in the timeline you get time-based playback. So animation and UsdSkel support are in Unity, but display color is not working and the eyes aren't open — still no blend shape support. That's something we hope is coming for Unity. NVIDIA Omniverse has done a
great job: both blend shapes and linear blend skinning work just fine in Omniverse. If you zoom in, there are some weird things happening with the eye refraction, but to be honest I don't know — we didn't really focus on the eyes, and it could easily be a mistake in the asset; I've never seen it fully refract in the OpenGL viewport. That said, it's pretty impressive that it works. So you can see there's already quite a lot of support for UsdSkel in the ecosystem, and we're excited to see it continue to expand — and hopefully include blend shapes more often. So that's a lot of exciting stuff; that's what's out there in public. Now let's focus back on how we use UsdSkel in the Pixar
pipeline. Now that I've shown how you can represent a character in UsdSkel — and I mentioned that Pixar's rigs are not exclusively skeletal; they're kind of these weird black boxes — how do we get our production rigs into this UsdSkel format that's useful for crowds and interoperability? We have a few systems. The first is called Frame Baker. What it does is exercise every degree of freedom in the rig and run a non-negative least squares optimization to solve for the skinning weights that best deform the rest geometry to closely match the example poses of the arbitrarily complex rig — it's kind of a rig-baking technique. On top are the training poses; on the bottom is the resulting mesh; on the right you can see the skinning weights it solved for, for each bone. For very athletic characters — like Elastigirl here, from The Incredibles — this actually works surprisingly well.
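The Frame Baker idea — pose the rig many ways, then solve a non-negative least squares problem for the weights that best reproduce the observed deformation — can be sketched on a toy one-dimensional problem. Everything below is invented for illustration (the solver, the data, the hidden "true" weights); Pixar's actual solver is not public, and projected gradient descent stands in for a real NNLS routine:

```python
# Toy Frame Baker sketch: recover non-negative skinning weights from
# example poses of a black-box rig by least squares with a w >= 0 constraint.

def solve_weights_nnls(A, b, iters=5000, lr=0.01):
    """Minimize ||A w - b||^2 subject to w >= 0 via projected gradient descent."""
    n = len(A[0])
    w = [0.0] * n
    for _ in range(iters):
        # residual r = A w - b
        r = [sum(A[k][j] * w[j] for j in range(n)) - b[k] for k in range(len(A))]
        # gradient g = 2 A^T r
        g = [2.0 * sum(A[k][j] * r[k] for k in range(len(A))) for j in range(n)]
        # gradient step, then project back onto the non-negative orthant
        w = [max(0.0, wj - lr * gj) for wj, gj in zip(w, g)]
    return w

# Each row is one training pose: the displacement each joint would impart
# to this vertex. The "rig" secretly blended them as 0.7*joint0 + 0.3*joint1,
# which is what the solver should recover.
poses = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, -1.0]]
observed = [0.7 * a + 0.3 * c for a, c in poses]

w = solve_weights_nnls(poses, observed)
print([round(x, 3) for x in w])  # ~[0.7, 0.3]
```

A production solver works per vertex over thousands of training poses and many candidate joints, but the shape of the problem is the same: an over-determined linear system with a non-negativity constraint on the weights.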
For fleshier characters like Sully from Monsters, Inc., it's a little trickier: you definitely get your classic linear blend skinning artifacts. There have been ideas to use pose-space deformation to create corrective shapes on top of the linear blend skinning to minimize the loss, and that works, but it tends to be more complicated than we typically need for our crowds operations — the body deformation differences are not as important as the faces. We'll get to faces next, but first, here's an example on Helen, a.k.a. Elastigirl, where the yellow shows the difference between the original and the baked version. It can actually be quite close for very athletic characters, but the face is where this technique gets tricky.
Before we had UsdSkel blend shapes, we actually tried adding face joints to the character — basically bones that track positions in the face; we were almost doing facial mocap on our own characters — and then adding those face bones to the Frame Baker calculation. When you compare the original to the approximation, it's close — the approximation is a little lumpier, but pretty close. What's trickiest is that the eyelids don't work super well. This is a quality test we did on Héctor from Coco, who has these very thin eyelids that, no matter how many bones we used, we couldn't get to approximate correctly. So there's a certain quality ceiling for face joints, and blend shapes helped us exceed it. That basically brought up the question: how do we learn and derive
blend shapes for facial animation that isn't necessarily blend-shape based? A lot of it uses wire and curve deformers and all sorts of techniques, plus subdiv projections, that don't necessarily combine linearly like blend shapes — so how do you approximate that? We actually went all out, and in 2020 we published a SIGGRAPH paper on FaceBaker, which of course uses machine learning: a neural network that maps animation controls to corrective shapes. That worked really well, but it turned out to be a bit of overkill for what we typically need for character interoperability. We compared FaceBaker against the simplest heuristic we could think of as a challenger, which was just to take the minimum and maximum value of every facial animation control, write a blend shape for each, and brute-force it with regular old blend shapes — and it actually looked pretty close to the machine learning version. So we ended up using the simple heuristic. The one thing that helped the heuristic was, rather than using the true min and max of each control, to use the min and max actually reached in practice by animators — narrowing the ranges a little makes it a little more accurate. We also tried in-between shapes, but they didn't seem to add much value; a simple min and max worked pretty well.
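That min/max heuristic is simple enough to sketch end to end. The "rig" below is a stand-in black box I made up (a 4-value 1D face); the point is the workflow: evaluate each control at its observed extreme, store the delta from rest as a blend shape, then approximate poses as a linear combination:

```python
# Toy sketch of the min/max blend shape baking heuristic: one delta shape
# per animation control, then poses approximated as rest + sum(w_i * delta_i).
# The black-box "rig" and its controls are invented for illustration.

REST = [0.0, 0.0, 0.0, 0.0]  # a 4-"vertex" face, kept 1D for brevity

def black_box_rig(controls):
    """Stand-in for an arbitrary rig mapping control values to vertex positions."""
    smile, brow = controls["smile"], controls["brow"]
    return [REST[0] + smile * 2.0,
            REST[1] + smile * 2.0 + brow,
            REST[2] + brow * 3.0,
            REST[3]]

# Ranges actually used by animators (narrower than the rig's true limits).
used_ranges = {"smile": (0.0, 1.0), "brow": (-0.5, 0.5)}

def bake_blendshapes():
    """Bake one delta shape per control: evaluate it at its max, others at rest."""
    zero = {name: 0.0 for name in used_ranges}
    shapes = {}
    for name, (lo, hi) in used_ranges.items():
        pose = dict(zero, **{name: hi})
        shapes[name] = [a - b for a, b in zip(black_box_rig(pose), REST)]
    return shapes

def approximate(controls, shapes):
    """rest + weighted deltas, with weight = control value / baked max."""
    out = list(REST)
    for name, value in controls.items():
        w = value / used_ranges[name][1]
        out = [o + w * d for o, d in zip(out, shapes[name])]
    return out

shapes = bake_blendshapes()
pose = {"smile": 0.5, "brow": 0.25}
print(black_box_rig(pose))        # ground-truth rig evaluation
print(approximate(pose, shapes))  # linear blend shape approximation
```

Because this toy rig happens to be linear, the approximation is exact; on a real rig with nonlinear deformers the two outputs diverge, and that residual is what FaceBaker's learned correctives were chasing.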
And here's how that worked on characters from Elemental. On this film there's a lot of deformation that's non-skeletal, like these watery features, so for this character we stripped out all the skeletal animation and left just the non-skeletal components. The original rig is on the left, and the UsdSkel blend shape approximation using that heuristic I mentioned is on the right — and it looks pretty close. Even zooming in, you can see it's not exact, but it's also not a bad match. So for crowds and interoperability this is great: we don't need to go all the way to machine learning to get it more accurate. That's how we represent rigs. Now that we have our rigs represented in UsdSkel, let's talk about how we use it in
Presto. We actually have a custom procedural model that we call Presto Crowds Foundation (PCF), an aggregate model for rigged crowds. It contains a rig that, rather than moving points around on a character, basically moves joint angles and sets blend shape weights en masse on a giant crowd — it both generates and animates a crowd with a rig. It's highly vectorized, can scale up to tens of thousands of agents, and is designed for kinematic — which is to say non-simulated — crowds. Here's a fun graph just showing how we vectorize the data. And here's a view of Presto that shows how the rig is live: we're just changing the distribution on the population, and you can see the casting update live in the Presto viewport. You can see how the orientations can be animated with a lookat, and you can see the agents playing back animation just fine in the system — this is a single looping state on a bunch of characters. One of the cool things is that even though the system is procedural and it's a rig, it supports direct manipulation, by having callbacks on the direct manip that drop in extra rigging for deforming agents — and that works even on top of motion; it's all just part of the rig stack. And here's how it works in practice: here's a viewport of Presto where we have this crowd
mode. Sorry — I think I need to hit play... there we go. All right, so this is crowd mode, where as you click in the viewport it's basically setting attributes on an "add" action in that rig, adding agents to the dynamic scene. It's using depth information from Hydra's framebuffer to place them on the geometry, so there's no need to preprocess the geo into some kind of nav mesh or anything — as you click, you place the agent. Notice that as you're clicking you're already getting poses and casting on these characters; that's because the rig has already been set up for this crowd population, and we're really just adding instances to it. As soon as you hit playback, the characters are already moving and ready to go. So this is sort of a parade scene. In this next demo I'm going to show how you can use
a curve generator to place characters along a curve, so you can have a parade going down the street — runners, people cheering. This is all stuff I did in a live demo at SIGGRAPH Asia 2021 in Tokyo; in about nine minutes you can create a crowd like that, because I did it live. That said, it was kind of a canned demo — I knew exactly what I was going to do ahead of time — but this is all how UsdSkel is ingested into Presto, for both scalability and control. Note that those variant sets I showed before on the open-source example have levels of detail, so you can have the full mesh or highly decimated versions. Blend shapes you can actually mix and match, so agents can have blend shapes from one clip on top of the poses from another, which is useful for changing characters' expressions, or for adding blend shapes to output from a system like Houdini, which at the time didn't support blend shape export. Speaking of Houdini, we have a USD
import node that's basically part of PCF, which can import crowds exported from Houdini. Here's an example of an agent-based crowd sim we did in Houdini and imported into PCF. Note that this simulation didn't make it into the film — it was considered a little too scary for Turning Red — so in the end we had a much tamer, shocked crowd rather than a fleeing crowd. But I still love that example, because it really showed off how we could have interoperability at a large scale. Now, a lot of why we're doing the UsdSkel work is also to get things out of Presto, not just into it. So here's an example showing the
playback of a USD file exported from our PCF crowd. Note that even though it's vectorized in Presto, for the sake of interoperability we export a single UsdSkel character per agent, to make it easier for other pieces of software to consume. To make that scale, we use scene graph instancing: all the non-varying data of the character is tagged as instanceable, which means USD composes that data only once and reuses it in multiple places. So even though we're no longer vectorized, we still cling to some scalability by using scene graph instancing — as well as, of course, those variant sets. And then, once you have it in usdview, this is just showing off how you can switch to a different Hydra renderer: this is going from the OpenGL Storm preview to path tracing with RenderMan XPU, so that same crowd is now being path-traced interactively in usdview. I know this isn't a terribly amazing demo compared to what other folks have shown recently with path tracing, but I wanted to show that once you have your characters in UsdSkel, in usdview you have a choice of many renderers — anything that supports Hydra.
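The sharing pattern behind scene graph instancing can be mimicked in plain Python. This is only an analogy, not the USD API: many agents hold a reference to one shared prototype for the non-varying data, so it exists once in memory, while per-agent animation stays unique.

```python
# Flyweight-style analogy for scene graph instancing: heavy, non-varying
# character data lives in one shared prototype; each crowd agent keeps only
# a reference to it plus its own lightweight animation state.
import sys

prototype = {
    # Heavy rest geometry and binding data, composed/stored once.
    "points": [(i * 0.1, 0.0, 0.0) for i in range(10_000)],
    "jointIndices": list(range(10_000)),
}

class Agent:
    __slots__ = ("prototype", "anim")
    def __init__(self, prototype, anim):
        self.prototype = prototype  # shared reference, never copied
        self.anim = anim            # unique per-agent animation state

crowd = [Agent(prototype, anim={"frame": i}) for i in range(1000)]

# Every agent points at the *same* prototype object: one copy in memory.
assert all(a.prototype is prototype for a in crowd)
print(len(crowd), "agents share", sys.getsizeof(prototype["points"]), "bytes of points")
```

In USD terms, the prototype plays the role of the instanceable prim hierarchy that the composition engine builds once, and the per-agent `anim` dict stands in for each agent's unique SkelAnimation.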
All right — we're about to wrap up. This brings us to the topic of more advanced rigging: there's a lot beyond just linear blend skinning and blend shapes. OpenUSD has announced OpenExec, a project in its early stages that will eventually put an execution engine inside USD, which opens the door to expressing rigs — potentially arbitrary rigs — in USD. It appears we're using Presto's rigging system as a framework for it, but don't hold your breath: I think this is going to take a couple of years to get out there. Still, that feels like the direction things are going. Another possibility, like the FaceBaker work I mentioned, is neural networks as a way to express arbitrarily advanced rigs that can run on many systems using inference — you can think of a neural net as potentially more portable than the original production rig. We're already seeing examples like the ML Deformer in Unreal Engine building on this concept, so I wonder whether, on the way to OpenExec, people will lean more on ML deformers as a way to move advanced rigs around. In the meantime, I'd just be happy to see more support for the lowest-common-denominator UsdSkel — but looking ahead, that's where I feel things are going. It's also worth mentioning grooms. You've noticed that none of the characters
had hair that I showed in USD scale um we transfer that information but don't draw it in USD view because Pixar we
have our own proprietary ha system so we keep the input parameters along with the asset and it renders at the end of the
day but for character and droper ability that doesn't help um note that basis Curves in USD scale uh can be rigged uh
they can have skinning weights just like meshes so that is one way to consider transfering Grooms is as rig curves um
You can also define the points per frame, but that can get really ugly. I've seen a file of roughly 20 megabytes become four gigabytes real quickly when you define the points every frame for hair. So while UsdSkel supports it, I still think the best, most compact way to represent grooms in USD is an open question, and there's room for discussion there. For now, if anyone really needs to get this done, you either want to skin some curves or do points per frame and eat the cost.

So yeah, that's pretty much what I've got. This slide isn't terribly interesting, so I'm going to stop the share.

Cool, yeah, thank you, Paul, great presentation; learned a lot about USD, and thank you for all the educational content. We do have some questions in the chat.
So, Leonard has asked: is there a preferred blend shapes system, algorithm, setup, etc.? Are the correct results documented someplace? This came up on the slide you presented earlier, where you first showed that there were a number of issues, before going through all the examples in the various commercial tools.

Okay, so you're talking about the issues with the facial bones?

I put it in chat because I knew I wouldn't remember the details. Okay, let's see; if you go back, it's probably around slide six-ish or so.

All right, I'm going to reshare. Okay, this is slide six. Nope, probably a couple down, two or three after that. No, no, not way down there, way up. Yep. Okay.
So, okay, it's after you talked about the pipeline and the geo-caching for your geometry, and then the examples. Look back up, before you do any of the imports. Oh, is this it? The UsdSkel example? One more, the next one. No, the other way, slide 20. That one.

Ah, okay, so you want to talk about all the edge cases.

Yeah, because you said that there are lots of different ways to do blend shapes and there doesn't seem to be general agreement on getting it done. I was wondering whether that's because people approach things differently and there are no standards, or they didn't agree on the standards, or there are multiple standards that each person is following individually. And then, for all of these, is there a correct result? Because you point out several of the different problems, so how does somebody determine whether the way they're doing it is correct or not?

Actually, yeah, I think maybe I misspoke about blend shapes. I think there's largely agreement on the Autodesk standard for blend shapes, which supports both min and max blend shapes, with in-betweens and weights between them. It's more that I feel there has always been inconsistency in support for them, and I don't exactly know why that is, to be honest.
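For concreteness, the model described here, full blend shapes plus optional in-betweens evaluated piecewise-linearly between bracketing samples, can be sketched in a few lines of plain Python (a toy evaluator with made-up sample data, not the actual UsdSkel implementation):

```python
# Toy blend-shape evaluator with in-betweens (illustrative sketch only).
# A shape stores per-point deltas at sample weights, e.g. an in-between at
# 0.5 and the full shape at 1.0; evaluation finds the bracketing samples
# and interpolates their deltas piecewise-linearly.

def eval_blendshape(base, samples, weight):
    """base: flat list of floats; samples: {sample_weight: delta_list}."""
    pts = {0.0: [0.0] * len(base), **samples}  # weight 0 contributes nothing
    keys = sorted(pts)
    lo = max(k for k in keys if k <= weight)
    hi = min(k for k in keys if k >= weight)
    t = 0.0 if hi == lo else (weight - lo) / (hi - lo)
    deltas = [a + t * (b - a) for a, b in zip(pts[lo], pts[hi])]
    return [p + d for p, d in zip(base, deltas)]

base = [0.0, 0.0]                # one 2D "point" for brevity
samples = {0.5: [1.0, 0.0],      # in-between: bulges sideways first...
           1.0: [0.0, 2.0]}      # ...then the full shape goes up
print(eval_blendshape(base, samples, 0.25))   # -> [0.5, 0.0]
print(eval_blendshape(base, samples, 0.75))   # -> [0.5, 1.0]
```

The core formula is this simple, which matches Paul's point that the interoperability problems tend to come from support gaps and space/normalization differences rather than from rival algorithms.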
Maybe it's just that there's added complexity to doing it, but I don't necessarily think there's a rival methodology for how to run blend shapes; it's just been on the complex side of characters and hasn't necessarily been supported. The other ones I mentioned were more things like transform support. That's where I've noticed it can be very explicit: if you import a character and all of a sudden one side of the body is missing, I know that negative scales aren't being supported; if the shoes aren't on, I know that bind transforms aren't supported. That said, I have noticed in macOS SceneKit there was blend shape support where the face exploded, so something went wrong there. There might be some normalization differences, like whether or not the blend shapes are in the transformed space of the character. But in terms of what the fundamental blend shape algorithm is, I think it's pretty standard out there, so I may have misspoken when I presented this.

Okay. And then also, either I zoned out or just missed it, but I didn't see anything
on Blender. Did you happen to look at Blender in all your example cases?

It's funny you mention that. We don't use Blender at Pixar, but a lot of folks do, including Disney Research, and I'm trying to get a Disney Research project working in our pipeline right now, getting interoperability, so I'm walking the walk on character interoperability at the moment, and I am looking into that. I tried just doing `from pxr import Usd` and that didn't run, so there's probably something I need to do to build Blender with USD, or some plugin I need to use. Does anyone here know, offhand?

I don't, but I know somebody who at least really understands Blender.

Okay, yeah, I'd like that contact info, because I'm looking into what it would take. In general, if Blender can import USD, I feel that even if they haven't done all the various import/export features, it wouldn't be hard to implement them on top of that, because Blender's got a great API from what I can tell. But yeah, I apologize for not having the latest state on that; I guess it's largely because we don't use it very much, even though I know a lot of folks do.

Actually, I'll give you the name: his name is Julian Dar. He does a lot of work with Khronos on the glTF importer, so I don't know what his USD knowledge is, but he understands the Blender side of things really well. I'll get your contact information and send you his.

Thank you so much, because right now I'm trying to get matrices out of bones in Blender and into Presto, and I am using UsdSkel as an intermediary for that, but I literally just started on this today, so I'm finding out as we speak.

Great question, Leonard. Thank you
Paul. On that, I had one question in terms of the use cases for USD. A lot of the things you showed kind of cater towards cinematics or movies. Have you seen, or do you see, any use cases for real-time rendering, like immersive experiences or games, using USD as well?

Yeah, possibly. Definitely virtual production, which kind of splits the difference between games and film: it's real time, but for capturing cinematics. USD very much appears to be used there, particularly, I think, with Unreal Engine; I know a lot of virtual production work is done in that, and ILM's StageCraft system, which is proprietary, does use USD. So there's definitely virtual production use. With games themselves, I hear from friends that the biggest challenge is that it's hard to include USD in the runtime of the game, so it can be used for import into the engine, but it can't necessarily persist all the way to the final product. I don't know whether that's being worked on or it's just how things are, but my impression is that it's used for import and export, not for runtime yet. I have seen proposals for more robust LOD standardization and support with games in mind, so I know there are folks looking into it; I just can't speak to the
specifics.

Got it, yeah, thank you for that information. Just one more follow-up question on that. You showed an example of the import across different game engines; I probably need to play around with that myself, but when you do that import, do all the assets stay grouped in that USD representation, or does the engine break it down into its native representation? For example, if you import an FBX-based character in Unreal, Unreal will do a good segregation of the skeleton, the mesh, the textures and all that. Does the same thing happen with USD, or is the USD seen as one single unit?

There are different
flavors depending on the engine. My understanding is that with Unity you get a USD game object that points to a USD file, and you ask it to load the data, but that then populates the content area with things like a skeletal actor and textures. So it's sort of like an importer, but you can continually reimport and change: it maintains a connection to the original USD file, though there is a translation happening. With Unreal, it looks like there are two flavors. One is called a USD Stage Actor, which is mostly just a view onto the USD file and lets you interact more at the USD level rather than with the underlying Unreal tools. Then there's an import step you can do after that, where you turn the Stage Actor into UAsset content and get all the guts. There's kind of a matrix on Epic's page of what is supported by the Stage Actor versus the full import to UAssets, and it looks like they have a roadmap to keep improving how much each supports. Those are the only two engines whose support I know about, and again I can't speak directly to it because I don't work with those companies, but my impression is that both Unity and Epic are actively improving the integration. It's probably a little bumpy right now; it's still called a preview package in Unity, and a beta plugin in Unreal.

Got it, okay, that makes sense. I'll check out the link you mentioned with the matrix. If you have it, feel free to share it in the chat as well so that other members can also access it.
Meanwhile, we're in the last three minutes. We will continue the session as a water cooler that will be off the recording, so folks can stay back and ask questions there, but there's probably room for one more question in the recorded session, so if anyone has one, please go for it.

Not another question, but a big thanks, Paul. That was awesome; really appreciate you taking the time to do that.

Oh, my pleasure. It was really fun, and I'm looking forward to hearing what everyone comes up with. Just so everyone knows, this invite came from Nick Porcino, who I know is tied in with the community, and it's very exciting what's going on here.

Yeah, and we're looking forward to making sure that hopefully all the pieces begin to align so we'll have an easier life.

Absolutely.