Acclaimed Polish game development studio CD Projekt Red came aboard as an early adopter of JALI’s technology, using it for their open-world RPG Cyberpunk 2077, which features a massive amount of complex, story-driven dialogue. JALI allowed the development team to accurately and comprehensively localize the game’s speech animation across 10 supported languages for players around the globe.
Since its inception, the JALI team has emphasized the importance of technical and creative collaboration across all aspects of production. As a result, they have joined the
Anim Revolution, an ongoing collaborative initiative focused on redefining the traditional content production pipeline. This trailblazing collective of software, hardware, and service provider companies has presented innovative demonstrations at SIGGRAPH (Special Interest Group on Computer Graphics and Interactive Techniques), GDC (Game Developers Conference), Unreal Fest, and other animation and computer graphics industry events.
CEO Sarah Watling—who brings over 15 years of experience in leadership and arts production to the award-winning JALI Research team—gave us an inside look into her career path, the development and evolution of the company’s innovative technology, and its ongoing collaborative initiatives across the industry.
JALI Research CEO Sarah Watling
Can you tell us about your career path?
Sarah Watling (CEO, JALI Research): I’m from Halifax originally. I wanted to be a music journalist, so I went to England and I worked at clubs and was a publicist for venues. I was always trying to angle into various publications. I ran out of money before I ran out of energy, which is I think a fairly concise way to express the reality of living in London.
The media landscape was completely different at the time. The way you communicated news to outlets was very different, let alone the way you developed stories and informed them. I went to shows and I reported on what I saw. I tried my best to meet the people behind the scenes, and it was great. I met a lot of cool musicians, which I think at that age was the icing on it all. I got to travel to a lot of crazy places, where you're like, “How did I get here?”
Then I came back to Nova Scotia. I was the executive director of the Jazz Festival in Halifax, which is incredibly small potatoes, but it was huge for me at the time because I was really young in my executive-level career. I came on originally with that organization as the communications manager, and then before that as a volunteer. Between all of these career shifts, I was always a bartender.
I would say that defining the development curve for all of those jobs, whether it was in hospitality, music, production, arts administration, or now in this iteration, I always just showed up and started doing whatever jobs weren't getting done—quickly learning how to make myself indispensable, and then eventually kind of running the show.
What brought you to your current position as CEO of JALI Research?
Sarah Watling: When the company first started, it was just a group of academics whose pedigree was impeccable. They’re all former Alias Wavefront R&D team members. Alias Wavefront was acquired by Autodesk, and that early development team were the inventors and developers of Maya 1.0, which is still the industry standard today.
The CTO, Pif Edwards, was doing his research in computational linguistics. That was the academic origin. He ended up as a student in Chris Landreth’s facial animation class, where Chris was an adjunct professor in the same department at the U of T [University of Toronto] DGP—Dynamic Graphics Project is the name of that research group. They formed a Padawan-master kind of vibe. At the same time, what Pif wanted to develop was kind of an affront to Chris’ sensibilities, because it suggested a degree of automation in a space that Chris was very precious about—and I would say a lot of artists are, and rightly so.
Pif wanted to create a computational model that would allow you to animate and direct complex character performances from script. That's still basically what we're doing now, but we had to start with speech. In a play or a game or wherever there's moments of drama, people are talking. So that's how we started making this lip sync and speech animation tool.
I came on board in the background to try to help make connections stick for them because they were getting all this interest. I don't know what they were saying back to these people that were emailing them saying, “We want to try this out,” but there was no response after that. I think it's not an uncommon thing for technically focused inventor or academic types to not also be sales front-facing types. So they said, “Can you go to these conferences for us and say the things and make people want to talk with us?” I was like, “Okay, I can do that.”
At the same time, I was starting to be privy to discussions about contract negotiations that were happening in other verticals. Then I managed their tech transfer out of U of T. At that point, I had hired some people for them and got their banking organized, and I was starting to help with creating their original brand—messaging and look and all that kind of stuff. So eventually we had the conversation around, “So maybe I should just do this?” And that's how that happened. It's an unorthodox path, but I'm here now.
Can you tell us about the publication of JALI’s SIGGRAPH paper and the early partnership with CD Projekt Red (CDPR)?
Sarah Watling: The paper was published at SIGGRAPH in 2016. I think one of the things that made it stand out amongst a lot of the other submissions was that ML [machine learning] was employed judiciously. This was when everyone was talking about ML, before they were even talking about AI or GenAI. So it got a lot of derision for that, and a lot of disbelief. We had to resubmit because we were told that we were falsifying results—that there was no way what we were showing was possible.
There were a lot of big players already picking up on it right away. Studios like EA, Epic—everyone was already at the door saying, not only is this impressive, but it's a clearly defined prototype. CDPR in Poland was also saying the same thing, and I think Pif and Chris just vibed with them better. I don't think it was particularly strategic, although it was clearly the best choice. They said, “We need this now and we can see it already works, so we'll pay you to develop it. Then when we release the game, we want to talk about this relationship and us finding you… We want that to be part of our story when we promote the game with the whole build-up, and then go sell it to the world after that. We don't want any interest in it or rights, or anything like that.”
That development process [for Cyberpunk 2077] took a long time. It was an exclusive relationship for the period until they released the game—that was the exchange. It was a pretty amazing and singular kind of relationship. Very hard to replicate. It was amazing, and they were very generous with their time and with their belief that we were actually a competent product development team. Pif has always been very product-oriented and Chris had the legacy accolades as an Academy Award winner, and deep knowledge of Maya at the rigging level and the animation level. I'd say Chris probably knows more about it than many people working today.
How did the name “JALI Research” originate?
Sarah Watling: It’s a contraction that represents the two dynamic modes that are at the core of the original technology. If you imagine this on an XY-plane, it’s motion that is happening in the jaw space and motion that's happening in the lip space. Jaw-lip, JA-LI [pronounced “jaw-lee”]. So that's where it comes from, and that was the name of the paper. Then the four co-founders started internally calling themselves “JALI Goodfellows,” so that kind of stuck.
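As a toy illustration of those two dynamic modes (the style names and values below are hypothetical, not the published model’s actual parameters), you can think of a point on that jaw-lip plane as a speaking style that rebalances how much of each mouth shape comes from jaw motion versus lip motion:

```python
# Toy sketch of the two JALI axes. The style names and numbers are
# hypothetical, for illustration only; they are not the published
# model's parameters.
STYLES = {
    # (jaw, lip) contributions, each 0..1 on the jaw-lip plane
    "mumble": (0.1, 0.2),      # low jaw, low lip: barely articulated
    "normal": (0.5, 0.5),
    "enunciated": (0.4, 0.9),  # precise lip shapes, moderate jaw
    "shout": (1.0, 0.6),       # wide-open jaw, looser lips
}

def apply_style(viseme_intensity, jaw, lip):
    """Scale a mouth shape's jaw and lip components by the style point."""
    return {"jaw": viseme_intensity * jaw, "lip": viseme_intensity * lip}

for name, (jaw, lip) in STYLES.items():
    print(name, apply_style(1.0, jaw, lip))
```

Moving that point around the plane changes how a line is delivered without changing what is being said.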
When I was at Gamescom, I met this guy who was regaling me about his 30-some-hour trek from Botswana. He was part of an African delegation of people in advocacy roles for animation as a growth opportunity in a young demographic. He was telling me that where he's from there's a mountain called Mount Jali, and the word in his culture means “elevated perspective.” Isn't that beautiful?
Can you tell us more about JALI’s technology and how it has evolved to where it is now?
Sarah Watling: When we first started developing the earliest iterations of the technology, it was a solution to automatically create animated 3D characters from audio analysis that could be described as fast, accurate, in-sync, and expressive. These were the sort of things that we were always striving to hit. Low processing footprint—fast, not just in terms of time to generate, but also in terms of total compute time. A tiny component of an overall game pipeline or production pipeline.
Because we had Chris and this OG Maya crew on our team, we had assumed that it would be feature animation and VFX that were going to be the earliest adopters. That was so not the case, like not at all. It was games. Games had the need for volume and for quality at volume. They had the need for localization and had the pipeline set up so that you could generate all of that animation as you're going.
With animation at the time, the reason the pipeline went render first, then send to dubbing or localization, is that the render process sits at the end of a linear pipeline. You're not rendering all the time in real time or in-engine, so you're not able to benefit from the idea of localizing that animation as you go.
We worked with a couple of brave animation studios who were really interested in the technology, but at the time, appetite for R&D for new production pipeline tools or overhauls just was not there. When I say that, I'm not talking about huge players. You're talking about your regular, margin-driven animation studio that's probably not developing their own IP. Or if they are, they only have one and they're mostly doing service work. They’re competing either on a price-per-minute of animation or on a number of animation minutes per animator per day, that kind of thing. So that's where we started.
The input is from the script. At the time we needed both audio and text to feed the analysis system. Those inputs are analyzed and broken down into their smallest composite parts—almost like their molecules—which would be phonemes and visemes. Our R&D has always wanted to go subatomic or submolecular, and that's the direction we continue to go: enriching the data source and making the output animation more editable and controllable.
That analysis is followed by an alignment phase where those composite pieces are matched up with each other: visual to audio, visemes onto phonemes. Then the output format is animation curves. The thing that has always made us competitive and continues to do so—alongside the development of AI-emergent or other ways of capturing and then producing facial animation data, like p-cap or performance capture—is the curves that our system outputs. If you were to look at that data in the graph editor in Maya, which a lot of animators work directly in, it's math and tangents. The curves are as though they had been key-framed, so they're very clean. The tangents on those curves are correctly constructed. You could just continue animating on them. The advantage is that there's no cleanup. Cleanup is an investment, a block of time that has to be spent, and that's still the same even with markerless capture of natural movement. When the source you're capturing is a human performing, that noise they talk about, which is all of these points per frame, is just natural motion.
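As a rough sketch of that phoneme-to-viseme-to-curve flow (hypothetical names, hard-coded alignments, and a toy mapping, not JALI's actual implementation), the stages could look something like this:

```python
from dataclasses import dataclass

# A forced aligner would normally derive these (phoneme, start, end) spans
# from the audio and the script; they are hard-coded here for illustration.
ALIGNED_PHONEMES = [
    ("HH", 0.00, 0.08),
    ("EH", 0.08, 0.20),
    ("L", 0.20, 0.28),
    ("OW", 0.28, 0.45),
]

# Hypothetical many-to-one phoneme-to-viseme mapping.
PHONEME_TO_VISEME = {
    "HH": "breath",
    "EH": "open_mid",
    "L": "tongue_up",
    "OW": "round",
}

@dataclass
class Keyframe:
    time: float    # seconds
    value: float   # blendshape activation, 0..1
    tangent: str   # "auto" stands in for cleanly constructed tangents

def viseme_curves(aligned):
    """Turn aligned phonemes into sparse, keyframe-like curves per viseme:
    ease in, peak mid-phoneme, ease out. A real system would also model
    coarticulation (neighboring sounds blending together); this sketch
    does not."""
    curves = {}
    for phoneme, start, end in aligned:
        viseme = PHONEME_TO_VISEME[phoneme]
        mid = (start + end) / 2.0
        curves.setdefault(viseme, []).extend([
            Keyframe(start, 0.0, "auto"),
            Keyframe(mid, 1.0, "auto"),
            Keyframe(end, 0.0, "auto"),
        ])
    return curves

for viseme, keys in viseme_curves(ALIGNED_PHONEMES).items():
    print(viseme, [(k.time, k.value) for k in keys])
```

Because each resulting curve is just a handful of keys with well-behaved tangents, an animator could keep keying on top of it directly in Maya's graph editor, which is the no-cleanup property described above.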
Most of the time in an animated scenario, you want a cleanliness to the motion. Think back to the examples of fully performance-captured 3D feature films: how odd they looked, because they had this very natural quality to the movement, which is inherently noisy and jittery and of humans, of real life. But in this animated, artificial world that's of someone's imagining, it just didn't jibe. That's the definition of uncanny: when something does what it's not supposed to do.
Now, with facial animation, the traditional pipeline is p-cap, then cleanup, then polish. With us, it's just: animate, polish. The out-of-the-box result is probably going to be of lower quality than, say, p-cap, but the time to polish, and so the time to the same end result, is much less. And it's integrated into Maya and Unreal, which lets game developers animate directly in the tools they already use.
A game is interesting because the whole thing has to be animated. Wherever the player's viewpoint lands has to be animated. If you look at the ceiling, the ceiling is animated. The same as if you look at a character—they have to be doing something. That's insanely expensive, which is why you get typical NPC [non-player character] movements. As long as they don't do something disruptively out of the ordinary, you don't notice. Immersion persists uninterrupted. Immersion and perfection don't have to be the same; it's just a lack of interruption.
We allow people to compute full game builds of crazy amounts of animation. It used to take a couple of days. With our most recent iteration of the product, it takes the equivalent of a long lunch to do all of those lines, to basically propagate changes globally across a game build. The benefit of that to animators and developers is they can return to the work when they've just published all of those changes that same day—actually within hours—which allows them to maintain creative continuity with their own thought processes. You know what it's like to be working on something and then put it away for the night and then come back to it, and you're like, “Yeah, what was I doing?” I think that applies across any creative exercise.
Can you tell us about the Anim Revolution collective?
Sarah Watling: The Anim Revolution is a group of companies that are friendly—as in, we are friendly technologies that complement one another, whether obviously or not so obviously. It would not be a surprise to find a tech stack where someone has implemented JALI but then also implemented, say, Ozone character rigs as their 3D rig assets, with JALI driving those rigs. Or House of Moves—which is our motion capture partner—might be on the same project, and we did find each other on similar projects. In some cases, that's how the relationships within the collective formed—and “collective” is not an official term. We're still looking for the right word for what it is.
Or it was just personal relationships over the years as industry professionals. At Unreal Fest in 2023, our team met Rich Hurrey from Ozone Story Tech. They are developers of some pretty exciting new rigging technology, which I believe in a year everyone will know about. It was just instant, “Okay, we're definitely going to work together.” Both on the personality level and then the immediate understanding that our technologies are complementary to each other. We can help grow opportunity for each other if we work together.
We did a mini collaboration at GDC last year, and then we followed that with a mini collaboration at Annecy immediately after. Then we went big at SIGGRAPH. After the success of those two collaborations, Rich brought to the table Jim Henson's digital puppetry team and House of Moves. Basically, he brought everybody. We had met Tom Mikota of CG Lumberjack at Annecy, and we ended up bringing him along.
Lenovo has always been a huge supporter of what we've been doing as our hardware sponsor. AMD was our original graphics and GPU sponsor. We now enjoy the support—depending on the show and what their marketing strategies and designations for their spend are—of AMD, NVIDIA, Xencelabs tablets, and Immersive Enterprise Labs, who actually do their own IP development. They are the perfect example of a company that would deploy all of us together in a pipeline. Their whole angle is about disrupting the linear workflow and adopting a more holistic, iterative workflow that ideally allows creators to fail faster and at lower cost, so that you can move more confidently toward your desired end goal.
Mark Andrews, Animation Director of 'Brave', speaks to the crowd at the Anim Revolution booth at GDC 2025
We decided to all go in and share the cost of a booth at SIGGRAPH, so that was the simple motivation at first. But then we were like, “What's going to be the story? What are we going to tell people? Why is this going to make sense that we're all together?” So we started working on some original assets and ways to bring them to the audience at SIGGRAPH in an interactive way that highlighted the contributions of each of the technologies. That required us all to work together in a way that's not that common, because discretionary budget for that type of R&D is almost always attached to a paying customer rather than to yourself. It wasn't forced—it happened really organically.
Then there was this open sharing of know-how and production support. What we wanted to show was that what we were saying was true—that in any given pipeline, we all seamlessly work together—and to show you how. Each company brings this pipeline sensibility, this common and inherently known understanding that you might be a tool or a technology in development, but in deployment and production use you are not in a silo any longer. You have to anticipate and be non-destructive to whatever is happening upstream of that process, and similarly downstream. You can't be clobbering or overriding—everything has an intention. I think a lot of development teams struggle with exactly that when they're looking at, “How do we optimize our pipeline?” or, “How can we improve this metric?”
There's tons of pressure, especially now, to explore emergent technologies. Is there any bite to any of this? Is this actually going to save us time? We've got to try it out—test it. I think one of the easiest ways to ensure that you get longer test times and a better run at turning any of that testing into business is to be pipeline- and production-minded—to have that sensibility that we all seem to share.
What did the Anim Revolution demonstrate at GDC this year?
Sarah Watling: We had assets that we had all been collectively working on, so we decided to push them further. That allowed us to start working on non-human characters. As a result, we worked with House of Moves to celebrate their grand opening at Stray Vista in Austin back in January. They commissioned Ozone to create a dragon rig, and then Mark Bristol to write and storyboard it. We all continued to work on those assets.
We built some rigging capability into that dragon to make him able to talk. We debuted the cinematic that we all worked on together, called “Outpost,” at House of Moves in Austin. It takes place on a snow-covered planet where two space marines are finishing their recon. Just as they're about to turn back to their ship, they detect signs of life. This ancient, gigantic, menacing dragon-type creature from a lost world or ancient universe comes up over a mountain ridge and blows them away with his fiery roar. It's pretty awesome. It's a beautiful cinematic.
That dragon was never really supposed to talk, but Ozone and I had always talked about wanting to work on non-human characters together. So I said, “Well, let's make this dragon talk. Let's give him an actor's personality.” He's cast in this cinematic, and then he's going to talk to the folks at GDC about what went into the project. So he moves from being his character in the cinematic—very menacing and as you'd expect—to his off-stage persona, which is very egocentric and diva-y. It's very silly, but it was super fun.
I loved the fact that we probably improvised how we brought these characters to GDC together like 18 times, so it was a real test of each other's production resilience. I think it just showed how tight we have started to become. At SIGGRAPH, we had to work really hard to communicate what made sense for all these teams to be demonstrating together. This time around, people were already picking up on the narrative that there's a moment that's going to happen as a result of this collaboration. There’s something that's emerging, which is exciting. It's nice to see how this is evolving.
Will you be bringing the Anim Revolution back to SIGGRAPH this year?
Sarah Watling: We're definitely evaluating our strategy there. I think we're looking to complement whatever we do as the collective with some speaking opportunities as well. I think we've got a lot to contribute on that front. We, as a company, have been focusing a lot of our energy on developing education and training modules.
We have a really deep bench of talent and experience in our founding team, and also in a lot of the younger team members we've hired since then. We have a lot to offer in terms of new technologies and their applications, but also some foundational principles and art, which I think is an important thing. I always say I'm STEAM [Science, Technology, Engineering, Art, and Mathematics], because the arts are a contextualizer, right? That's how all of what's happening in our world makes sense: through the lenses of other people. I think that's an asset that we can make much more front and center.