Immersive Technology Lands in the Spotlight Via the Metaverse
NAB Show Daily
Growth to take off as virtual interfaces transition from tech toys into tech tools
By Susan Ashworth, TV Technology
If you needed a specific definition for what the metaverse can do, you may be waiting awhile.
It’s an immersive embodiment of the internet. And it’s a shared, personalized experience. But it’s also an animated, interactive playroom, one that gives us the chance to experience our existence in ways we can’t in the physical world.
For broadcasters, game makers and content creators, the metaverse has the capacity to transform production across the industry.
By using affordable virtual reality technology, a media company could view, manage and interact with an ongoing production regardless of where it is happening. Or engage linear television viewers with an immersive companion experience.
What’s clear is that the metaverse is poised to be something big.
“In general, it feels like the industry is on the doorstep of taking major strides toward delivering on truly immersive entertainment,” said Chris Brown, NAB executive vice president and managing director of Global Connections and Events.
GROWING MARKET
In its Tech Trends 2023 report, Deloitte Insights found that the metaverse is expected to be an $80 billion market by 2024 as companies begin to use the technology to create an enriched alternative to the flat, two-dimensional world we currently access via video feeds, email and texts.
“In other words, the metaverse is best thought of as a more immersive incarnation of the internet itself,” the authors of the Deloitte report wrote. “[It is an] ‘internet plus’ as opposed to ‘reality minus.’”
Growth is expected to take off as virtual interfaces transition from technology toys into technology tools, with new business models following closely behind.
In a recent panel discussion about the metaverse, Deloitte Consulting Principal Jessica Kosmowski said that industries are just now at the cusp of exploring unique initial use cases of the metaverse.
“We are essentially looking at the next evolution of the internet,” Kosmowski said. “Every aspect of the tech, media and telecom ecosystem is in for a major change in the next few years. Media companies will need to develop new business models [and] engage consumers with new content and experiences. Products and services will be reimagined at every layer of the technology stack.”
NAB Show is tackling the issue with a series of sessions, roundtable discussions and exhibitor displays. Leaders from Microsoft and Dentsu will take an in-depth dive into the metaverse and explore how companies have already begun to create destinations and experiences during the Tuesday session “Secrets of Building Your Brand in the Metaverse.”
Another Tuesday session, “East vs. West: How Will the Metaverse Evolve and Converge Globally,” will explore the commonalities and obstacles that exist between the Western entrepreneurial model and the Eastern centralized model and what businesses can expect when it comes to building within this new interconnected universe.
CREATIVE USES
What are the possibilities of all this? Consider scented packs that could be connected to a virtual reality headset to mirror the lush, scent-filled environment a user is watching on screen. Or a hyperreal augmented reality shopping experience led by an AI-powered avatar. Or the use of sensitive, interactive haptic gloves that would give a user a sense of touch.
There’s already demand for blending physical and virtual worlds in the media industry.
Sinclair Broadcast Group and Deloitte recently announced plans to launch a new metaverse sports fan community driven by a 3D creation tool. Beyond simply viewing a live game, fans can engage before the season and before each game. Sinclair called the partnership a key step in driving new revenue streams and deepening engagement with its viewers by redefining the sports viewing experience.
Almost universally, experts are saying that those interested in what the metaverse has to offer should start with strategy, whether the main goal is to develop new streams of revenue or to improve production operations through an augmented work experience.
On the show floor, exhibitors will spotlight their work in the metaverse and related experiences like Web3, AI and data-driven personalization.
“New immersive content experiences are imminent, from pure AR/VR or mixed reality variations to the full-blown promise of new digital worlds with users as the central character,” said NAB’s Brown.
While there are certainly hurdles ahead, including the challenge of syncing all sides of the ecosystem (content development, content creation and distribution, and the consumer technology needed to deliver the ultimate user experience), the industry looks ready to take strides toward delivering deeply immersive entertainment, Brown said.
On The Main Stage: A Case Study — Color and Finishing in the Cloud
A Case Study: Color and Finishing in the Cloud Today | 12:45–1:30 p.m.
Jesse Kobayashi, VFX producer on The Lord of the Rings: The Rings of Power, will showcase how Blackmagic Design, Company 3 and AWS collaborated to create an entirely cloud-based infrastructure for conform, color grading and delivery on one of the largest television shows in history. The session will also explore how learnings and values from the production are leading to new use cases and opportunities for productions across the industry.
Kobayashi is a visual effects producer with more than two decades of experience in the industry. In addition to The Rings of Power for Amazon Studios, his credits in visual effects include Kong: Skull Island and Warcraft for Legendary Pictures and Krampus for Universal Pictures. Kobayashi has also served as director of visual effects at Legendary Pictures and as a post producer at both Warner Bros. and Laser Pacific.
The session is part of Post|Production World (P|PW), produced by Future Media Conferences. Held on the Main Stage, the session is open to all.
NAB Show: Learn How the Cloud Workflow… Worked on “The Lord of the Rings: The Rings of Power”
TL;DR
“A Case Study: Color and Finishing in the Cloud” is scheduled for April 16 at 12:45 p.m. on the Main Stage.
“The Lord of the Rings: The Rings of Power” VFX Producer and modern filmmaking consultant Jesse Kobayashi will share insights from that production.
Blackmagic Design, Company 3 and AWS collaborated to create a customized infrastructure, which Kobayashi will describe.
“The Lord of the Rings: The Rings of Power” VFX Producer Jesse Kobayashi will head to the NAB Show Main Stage to discuss how the production created a cloud-based infrastructure for conform, color-grading and delivery.
On April 16 at 12:45 p.m., he’ll deliver “A Case Study: Color and Finishing in the Cloud,” detailing how Blackmagic Design, Company 3 and AWS collaborated on this project. Kobayashi will also share best practices and takeaways from the production’s ways of working on “The Rings of Power.”
This keynote is billed as a free “bonus” Post|Production World session, and is open to all show attendees. (P|PW is produced by Future Media Conferences.)
Kobayashi has two decades of experience as a visual effects producer, with credits for “Kong: Skull Island” and “Warcraft” for Legendary Pictures and “Krampus” for Universal Pictures.
Kobayashi also works as a consultant and advocates for the adoption of new filmmaking technology.
Kobayashi has also served as director of visual effects at Legendary Pictures and as a post producer at both Warner Bros. and Laser Pacific.
He is a graduate of Azusa Pacific University, where he helped found its first film courses.
Post-production on the first season of “The Lord of the Rings: The Rings of Power,” which contained nearly 10,000 VFX shots, was enabled by AWS.
April 7, 2023
NAB Show: Generative AI, Bringing Together the “Why” and “How”
TL;DR
Generative AI (think ChatGPT and DALL·E) is poised to change the media and entertainment industry in myriad ways.
Yves Bergquist and Seyhan Lee AI Director Pinar Seyhan Demirdag will discuss how creatives can use the generative AI tools available today to facilitate their work at a NAB Show Create session on April 17 at 3 p.m.
A NAB Show panel discussion aims to separate the hype from the “how” and “now” of generative AI for M&E.
This panel, featuring AI & Neuroscience in Media Project Director Yves Bergquist and Seyhan Lee AI Director Pinar Seyhan Demirdag, will discuss how generative AI tools can help the media and entertainment industry in 2023, and consider how this technology might disrupt and augment workflows in 2024 and beyond.
Discover where Bard, Whisper and DALL·E might fit into your creative process, and learn about other AI tools that could soon automate microworkflows at a desk near you.
A NAB Show Conference Pass is required for this session. Register here.
Speakers
Yves Bergquist is a data scientist and the director of the AI & Neuroscience in Media Project at USC’s Entertainment Technology Center, where his team helps the entertainment industry accelerate the deployment of next-generation analytics standards and solutions, including artificial intelligence.
He is also the CEO of AI engineering firm Novamente, which applies neural-symbolic artificial general intelligence to large enterprise problems. Novamente is the AI developer behind Hanson Robotics’ “Sophia.” His team also built the world’s first fully autonomous AI-driven hedge fund, Aidyia, which is now defunct.
Before Novamente, Bergquist managed business development at analytics firms Bottlenose and Ranker in Los Angeles. He was part of the founding team at Singularity University, a joint venture between Google and NASA.
Pinar Seyhan Demirdag is an AI director, a multidisciplinary creator, an outspoken advocate for the conscious use of technology, and an opinion leader in generative AI.
In 2020, Demirdag and Gary Koepke founded Seyhan Lee, which has become the bridge between generative AI and the entertainment industry. Seyhan Lee created the first generative AI VFX for a feature film (“Descending the Mountain”) and the first brand-sponsored generative AI film (“Connections/Beko”).
In 2022, they announced Cuebric, a tool that combines several different AIs to streamline the production of 2.5-D environments for virtual production stages.
The panel will be moderated by NAB Amplify Senior Editor Emily M. Reigart.
It’s time! Come celebrate the 2023 NAB Show’s 100th anniversary.
Registration is now open for the 2023 NAB Show, taking place April 15-19 at the Las Vegas Convention Center. Marking NAB Show’s 100th anniversary, the convention will celebrate the event’s rich history and pivotal role in preparing content professionals to meet the challenges of the future.
NAB Show is THE preeminent event driving innovation and collaboration across the broadcast, media and entertainment industry. With an extensive global reach and hundreds of exhibitors representing major industry brands and leading-edge companies, NAB Show is the ultimate marketplace for solutions to transform digital storytelling and create superior audio and video experiences.
See what comes next! Technologies yet unknown. Products yet untouched. Tools yet untapped. Here the power of possibility collides with the people who can harness it: storytellers, magic makers, and you.
April 6, 2023
“The Last of Us” Creative Team Takes the Stage at NAB Show
TL;DR
HBO’s adaptation of the dystopian video game “The Last of Us,” starring Pedro Pascal as Joel and Bella Ramsey as Ellie, is a 2023 fan favorite that has also been hailed by critics for its artful storytelling.
The panel will feature show creator and showrunner Craig Mazin, as well as Timothy Good, ACE, and Emily Mendez; Ksenia Sereda; Alex Wang; and Michael J. Benavente.
“The Last of Us” showrunner and creative team will discuss HBO’s small-screen adaptation of the hit video game on the Main Stage of the 2023 NAB Show.
The Sunday morning panel, presented by American Cinema Editors, will discuss the editing, cinematography, VFX, and sound artistry that brought Ellie and Joel to life.
Executive Producer Craig Mazin will be joined on stage by editors Timothy Good, ACE, and Emily Mendez; cinematographer Ksenia Sereda; VFX supervisor Alex Wang; and sound supervisor Michael J. Benavente.
The conversation will be moderated by The Hollywood Reporter’s Carolyn Giardina.
In addition to his role as executive producer, Mazin is also the multiple Emmy award-winning co-creator, writer and director of “The Last of Us.” Previously, he served as the creator, writer and executive producer of HBO limited series “Chernobyl,” for which he won Golden Globe, BAFTA, Writers Guild, Producers Guild and Peabody awards.
During “The Last of Us,” Timothy Good, ACE, was primarily responsible for editing the first season finale and the third episode, featuring the love story of Bill and Frank. In addition to editing a wide variety of TV series and miniseries, including ABC’s “When We Rise,” Netflix’s “The Umbrella Academy” and Fox’s “Fringe,” he has also worked on the original “Gossip Girl” on the CW and Fox’s “The O.C.”
While working on “The Last of Us,” Emily Mendez rose from assistant editor to co-editor alongside Good for four episodes. She has also worked on the editorial teams for “The Umbrella Academy,” Fox’s “The Resident,” Hulu’s “Light as a Feather” and Fox’s “Rosewood.”
Sereda was listed as one of the “20 Cinematographers You Should Know at Cannes 2019” for her work on “Beanpole.” Before “The Last of Us,” she worked on films such as “Little Bird,” “Petersburg. A Selfie,” “House on Clauzewert’s Head” and “Acid.”
Wang is a 20-year veteran in the film and television industry. He has worked for VFX studios such as Digital Domain, DNEG and Industrial Light & Magic. Wang became a VFX supervisor for “Deadpool” in 2015 and counts “Jurassic World Dominion” as a recent project.
Benavente names Hulu’s “Under the Banner of Heaven” as one of his most recent projects. He sits on the Sound Branch Executive Committee of the Academy of Motion Picture Arts and Sciences.
Posted April 4, 2023
When It’s All an Action Sequence: Editing “John Wick Chapter 4”
TL;DR
Director Chad Stahelski wanted to work with an editor who came with no preconceptions about how a John Wick action film should be put together.
Editor Nathan Orloff talks about being able to accomplish a fantastic rhythm, but over a near three-hour run time.
Stahelski discusses cinematic influences including “The Good, The Bad and the Ugly” and MGM musicals.
With John Wick 2 and 3 editor Evan Schiff unavailable, franchise director and co-creator Chad Stahelski cast around for a new cutting room collaborator for Chapter 4. He alighted on Nathan Orloff (Ghostbusters: Afterlife), in part because Orloff had limited experience editing action movies.
“In my interview with Chad, we just really hit it off,” Orloff explains on the Next Best Picture podcast. “I found out many months later that one of the reasons he wanted to bring me on is because I don’t have extensive experience in action. He didn’t want someone to come in and do their thing that they’ve been doing on other action movies… because John Wick is sort of antithetical to how a lot of action movies are cut these days.”
To understand why, you have to appreciate that Stahelski’s vision for the fourth installment in the franchise was to expand the John Wick universe by bringing in multiple storylines and a longer run-time to let the action play out on screen, rather than having the editing dictate the action.
“The other films are very much like, you know, that John is on a direct rampage or running for his life. This film was intentionally designed to be more reflective and contemplating, that after his entire career as a hitman, he is forced to reckon with his past and what he’s done.”
Stahelski’s influences range from the lush visuals of Wong Kar-wai to the operatic staging of Sergio Leone westerns. As the director explained to Jim Hemphill at IndieWire: “I love the seventies movie style. I love four act operas. I love Kabuki theater. The Asian cinema kind of breaks a lot of rules that we adhere to in the three act version [of movies] and we’d like to think John Wick breaks a lot of those rules because we do go a little operatic.
“Lawrence of Arabia is a good example like that. That movie kind of flies by to me and it doesn’t feel like you need an intermission in it.”
The filmmaker’s homage goes so far as to mimic the famous “match cut” by editor Anne V. Coates in David Lean’s Lawrence of Arabia, in which Lawrence in profile blows out a match and Coates cuts to a blazing desert sunrise.
“I remember vividly when I went to set in Paris, Chad asked me ‘what’s the most famous cut in all of cinema?’ and said we’re going to do it our way,” Orloff relates to Next Best Picture. “I wanted to make sure we did the exact number of frames when the fire was blown out before cutting to the sunrise. You know, I wanted to do it justice.
“He told me he’d rather swing and miss than do the same thing over again. And so that match cut is indicative of [telling the] audience what we’re going for.”
Another acknowledged influence on the director’s action style is the classic MGM musical, particularly those featuring Fred Astaire. In films like Singin’ in the Rain or Top Hat, the camera generally stays static and in wide shot, with minimal edits, so the viewer can take in all the dancing brilliance performed by the film’s stars.
“I love Bob Fosse here, one of my huge inspirations,” Stahelski tells Indiewire. “You take Gene Kelly, the old Sunday Morning Sunday Parade or something like that. You watch Fred Astaire do his thing. And if you watch the way we shoot, it’s very simple. The way we train people [to perform stunts] is very, very, very dance oriented.”
Orloff elaborates on what this means to decisions in the cutting room.
“Musicals like back then were sort of like you edited around the dancing,” he says on an episode of The Rough Cut podcast. “You showed them dancing. They would do a move, finish, cut, start something else. And the way Chad talked about that really inspired me to do that with our characters and not use the editing to try to punch anything up.”
There are times when the stunt performance, or that of Keanu Reeves, isn’t quite perfect: “they slip or there’s something not great about the timing of this or that, but not being so obsessive about perfection makes it just so much more real. When you’re cutting less, you’re able to absorb everything more. You feel more empathy for the characters because you feel like you’re just there.”
John Wick: Chapter 4 clocks in at 169 minutes, more than an hour longer than the original. Stahelski explains why he wanted a movie of this length.
“In our heads we knew that we wanted to show this constant decreasing circle that spirals closer and closer as [the stories] come together. So every act brings us closer together. That was the plan. It sounds like a very genius plan, but you don’t know until you cut the whole thing together. Our first cut was 3 hours 45 minutes.”
So how did the edit team set about cutting that down, and knowing which killing to leave in or excise?
“When you have 14 action sequences, you can’t just edit that sequence,” the director explained. “You’ll never know if a five-minute car scene or a ten-minute car scene is good to watch in the two-and-a-half-hour movie.
“So the only way to truly know that you’re doing the right thing is to step back and take that half day. We’d edit all morning, but by four p.m. we’re like, ‘Let’s watch the movie.’ And my editorial staff probably hates me. We’ve watched it so many times, because even if we just took 30 seconds out of something, I’d make everybody watch the movie again. That’s the only way you know you have the right pace.”
He adds, “It’s the whole song that makes you rock out. I think that was a big learning experience for me and my editorial team, to constantly watch a two-and-a-half-hour movie and feel where the slow parts were and to work on those parts.”
Because John Wick is dispatching henchmen left and right in intricately planned and executed stunts, making the decision about what to cut was a tricky one, admits the editor.
“There is definitely sometimes overkill when something is too similar to something else,” Orloff told Next Best Picture, “but going back to the music was a huge help in creating different tones and alternating what we were doing to avoid the things feeling the same. And to Chad’s credit, especially in the last act when we go from street fight to a car chase to a lengthy overhead shot that, even though the audience has watched non-stop action for 30-45 minutes the movie is structured so skillfully that you’re seeing something you’ve never seen before.”
Posted April 4, 2023
Step Into the Ring: Kramer Morgenthau’s Cinematography for “Creed III”
TL;DR
Even though the “Creed” movies are part of an expanded “Rocky” cinematic universe, this is the first of the nine films that doesn’t have the original character as part of the plot.
Director and star Michael B. Jordan collaborated with “Creed II” cinematographer Kramer Morgenthau to reinvent how boxing scenes are shot.
The filmmakers aimed for a heightened visual style influenced by Japanese anime, including what they called “Adonis vision,” a subjective POV from Adonis Creed as he’s clocking each fight.
“Creed III” was shot in IMAX format with Panavised Sony Venice cameras and a lens package that included both anamorphic and spherical optics.
Like the story arc of most boxing movies, Creed III had a number of challenges to overcome on its production journey. First, even though the Creed movies are part of an expanded Rocky cinematic universe, this was the first of the nine films without the original character as part of the plot; Rocky had left the story.
Flipping this negative into the positive feel of a fresh start gave first-time director and returning star Michael B. Jordan a chance to reinvent how to shoot the boxing scenes in particular. An easy reference, subconscious or not, was Scorsese’s Raging Bull, whose fighting is stylistically different from everything else in the film.
Also, a new POV suited the storyline of a fight between a retired Adonis Creed and a significant person reappearing from his past with major issues to resolve.
Previous Creed II cinematographer Kramer Morgenthau and Jordan laid plans for a new “in ring” aesthetic as Max Weinstein explains in American Cinematographer. “Settling into his duties as a director, Jordan determined early in prep that he and Morgenthau would need to take two ‘big artistic swings’ to fully engross audiences in Donnie’s next chapter.”
The intention was to aim for a heightened visual style. “Michael is hugely influenced by Japanese anime — that’s completely his stamp on the movie. So, he brought that into the way we cover the fights,” Morgenthau says. “There’s this thing we call ‘Adonis Vision,’ where you’re seeing subjective point-of-view from Adonis as he’s clocking each fight, and that plays out in an anime style, with these hyper-real close-ups.”
For that, they switched to very wide-angle lenses, a 12mm Panavision H Series and a 14mm VA. “That again was part of Michael’s vision from the beginning. It’s very much an anime approach.”
But the action in general had to be seen from the inside, not the outside, which is a problem for most sports action movies.
The Panavision website described how the boxing was shot within reach of the fighters. On both Creed II and III, Morgenthau was joined in his corner by A-camera and Steadicam operator, Michael Heathcote. “Mike and I came in early during prep to work with [2nd-unit director and supervising stunt coordinator] Clayton Barber and [assistant stunt coordinator] Eric Brown to help design the moves for the fight choreography. There’s an arc to what happens in the fights and the stories happening in the corners and in the ringside seats. That was all carefully choreographed, like shooting a piece of dance.”
Working with Panavision Atlanta, Morgenthau chose to shoot Creed III with Panavised Sony Venice cameras and a lens package that included both anamorphic and spherical optics. “We shot all the dramatic scenes with T Series and C Series anamorphic lenses, and for the fights, which are in the 1.90:1 aspect ratio for IMAX, we used [prototype] spherical VA primes that we customized to add a bit more softness and help them match the look of our anamorphic lenses,” the cinematographer explains.
Morgenthau also shot certain sections of Donnie’s bouts with the Phantom Flex4K, whose high-speed capabilities enabled him to create an “ultra-slow-motion analysis of some of the major moments in the fights, where we wanted to be inside the boxers’ heads.”
Other cameras used included small prep cameras to rehearse moves: “We prepped by shooting each fight with small digital cameras, and shooting sketches of what it should be, figuring out the most impactful places to place a camera and trying to show what it’s like to be in the ring from a boxer’s perspective.”
The other big “artistic swing” was the unveiling of a new taller aspect ratio to give the fighters almost god-like stature. “In the film’s dramatic scenes, intimate glimpses of Donnie’s and Dame’s out-of-the-ring lives are framed for the 2.39:1 aspect ratio, but whenever a match is underway, the frame is expanded to 1.90:1 Imax. The filmmakers opted to shoot most footage for both aspect ratios with Sony Venice cameras certified by the ‘Filmed for Imax’ program,” Weinstein notes in American Cinematographer.
With up to 26% more picture, this third installment in the Creed franchise became the first sports-based film included in the “Filmed for IMAX” program.
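As a quick sanity check on that “up to 26% more picture” figure: at a constant image width, opening the frame up from 2.39:1 to 1.90:1 makes it taller by a factor of 2.39/1.90, roughly a 26% gain in picture area. A minimal sketch of the arithmetic:

```python
# Fractional gain in picture area when a frame of fixed width opens up
# from a wider (scope) aspect ratio to a taller (IMAX) one.
def extra_picture(scope: float, imax: float) -> float:
    # Aspect ratios are width/height, so at fixed width the frame height
    # scales by scope/imax; the area scales by the same factor.
    return scope / imax - 1.0

gain = extra_picture(2.39, 1.90)
print(f"{gain:.0%}")  # prints "26%", matching the cited figure
```

The same ratio explains the on-screen effect Morgenthau describes: the black bars recede and the image grows vertically, since only the height changes.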
“It was really exciting to be able to integrate the IMAX cameras into the filmmaking process, especially the way we used them to open the world up and to make it very immersive and visceral for the fight sequences,” says Morgenthau, according to a report by ReelAdvice. “And that’s how we chose to use it; there was just something very magical, especially the scene at Dodger Stadium, where MBJ is walking out onto the field and the image aspect ratio expands in shot and the black bars recede, and you get this really tall, beautiful, powerful image. It just elevates everything, there is just something hyperreal about it. And to be the first sports movie doing that, it was a creative high.”
Director and star Jordan, speaking to American Cinematographer, says “We were looking at these old photos of Muhammad Ali by Neil Leifer, and [we called] the shots that he would get of these outdoor fights ‘clouds to the canvas,’ where you can see everything in the frame. So, we just wanted to recapture that — get all that information up on the screen. Then, we’d ask, ‘Okay, when is it going to open up? When is it going to transition into that ratio?’ It was about picking those moments and balancing them.”
Morgenthau concluded with near-reverence for the sport and the fighters in an interview with Gary M. Kramer for Salon. “The way we photographed the bodies was like photographing sculpture. Their bodies are sculpted and beautiful, and covered in sweat and oil and very reflective. Shooting them was about how their bodies and faces were reflecting light, and honoring their performances was showing them in their ‘best light,’ so to speak,” he said.
“I studied paintings by George Bellows, and the Ashcan school of painting was an inspiration. There was an Eakins painting in a museum in Philadelphia that I was looking at, and I referenced great boxing photography, like some of the Ali color photographs. These images inspired how we lit the boxers.”
How “The Boy, The Mole, The Fox and The Horse” Won Hearts and Minds
TL;DR
Based on the bestselling illustrated book by Charlie Mackesy, the Oscar-winning animated short film “The Boy, The Mole, The Fox and The Horse” has been described as “‘The Little Prince’ for a new generation.”
The international animation team that brought the film to life spanned 20 different countries, with artists working remotely due to the pandemic.
The filmmakers wanted to retain the signature style of Mackesy’s ink and watercolor illustrations, with Mackesy closely involved in the process to ensure that the film stayed true to his vision.
The Oscar-winning short The Boy, The Mole, The Fox and The Horse is like receiving an “emotion bomb” when you first see it. If you have any pent-up sentiment left over from the pandemic, Charlie Mackesy’s animated story of a young boy and his animal friends might extract it from you, so be warned.
The award-winning animated story, now streaming on Apple TV+ and the BBC iPlayer, is the realization of Mackesy’s beautifully rendered ink and watercolor drawings, which were immortalized into an illustrated book that ended up topping the bestseller lists in both the United States and in the UK.
Filmmakers then approached Mackesy to take the story to the next level. But how do you turn highly characterful ink-and-watercolor drawings into moving images while keeping the artist’s signature style?
Initially, Mackesy’s intentions were less about the bottom line and more about spiritual and Christian ambitions. He explained to Ryan Fleming at Deadline that helping people was his driving force and he thought the film would add to that.
That the book even became a hit shocked him, Mackesy said. “When the book came out, I got so many emails, like thousands of emails, telling me how the book had moved them or helped them, particularly in the pandemic,” he said. “I felt like if the book had done that, could a film reach people in the same way?”
He soon had his answer. After reading Mackesy’s book in 2019, producer Cara Speller said she “completely fell in love with it and got in touch with Charlie and his partner, Matthew Freud, and talked to them about what we could potentially do in turning it into a short film.” After a discussion with the creators, Speller contacted Peter Baynton, who was ready to join as director.
Speller told Jérémie Noyer at Animated Views how important it was to have Mackesy front and center in the process. “It was always really important to me right from the start that Charlie be at the center of any team that we put together to make the film. You can tell immediately from the book that he has incredibly strong instincts about what works. To me, it didn’t make any sense to try and make that without having him so closely involved.”
The animation team worked remotely because of COVID, with a shared goal of creating a look that reflected as closely as possible the drawings in the book, which were ink and watercolor. “We wanted to make those drawings basically move but keep the spirit of the fluidity of the ink and the line and the varying thickness of line,” Mackesy says.
Director Peter Baynton underlines the connection between Mackesy’s style and his animation team: “Charlie’s drawing is underpinned by a great knowledge of anatomy. So, even though he draws extremely quickly and quite impressionistically, you can tell he knows horse or boy or fox anatomy so well. For the mole, it’s a little bit different.
“It was important not to lose that lovely loose quality and make things stiff. So, we came down to a system where we would animate quite tightly on detailed models, and then, on the ink stage, we encouraged the artists to find that looser way of inking. It was about finding that very fine line that sort of drifts around the characters.”
“It was a very international crew,” noted Speller, “coming from 20 different countries. We started the work on the film in the middle of the pandemic, so everyone was working remotely from their homes. We built the team in the same way you build any team on a production. You’re always looking for the most talented artists you can find; it doesn’t matter where they are in the world, as long as you think they’re the right fit for the project and for the team.”
“The style warrants movement,” said Gladstone, “but how did you achieve it? The line halo that goes around the drawings, how is that translated to movement?” Director Peter Baynton explained the significance of the halos: “Charlie describes those lines as thinking lines and they’re very characteristic of his drawings,” he noted.
“The process is that we start with pencil rough character animation to define the performance. Then it goes through a clean-up stage, where we adjust the drawing so everything looks on-model, and then we go to an inking stage where we do these key ink drawings. At that point we would add these lines, these thinking lines or ‘thinkies,’” he continued.
“It was a careful balance as sometimes that would feel too stiff and attached like a wire so we found a way of making them dissipate and reappear.”
Art director Mike McCain summed up Mackesy’s style and how it was transferred to motion. “Charlie has such a beautiful economy with ink, and the book has such a minimal approach to storytelling; it’s just what’s needed on the page,” he said. “As we were looking to bring that wilderness to life, the biggest challenge was finding how to add and not to over-add. Just put what’s needed on screen to make it feel like you’re surrounded by this world.”
Variety’s Peter Debruge calls the short “The Little Prince for a new generation.” He goes on to add, “Beautifully adapted from British illustrator Charlie Mackesy’s international bestseller. Those who know the book — a Jonathan Livingston Seagull-esque life preserver for many during the pandemic — will appreciate how the team managed to translate Mackesy’s unique ink-and-watercolor style, with its distinctive blend of thick brushstrokes and loose, unfinished lines.
“Isobel Waller-Bridge’s gentle score coaxes audiences into a receptive place, while the quartet of Jude Coward Nicoll (the Boy), Tom Hollander (the Mole), Idris Elba (the Fox) and Gabriel Byrne (the Horse) lend sincere voice to various affirmational ideas,” Debruge continues.
“Cynics may dismiss what one acquaintance called its ‘bumper sticker wisdom,’ but they miss how vital it is to plant ideas of this nature in the heads of young viewers: boosting their confidence and unpacking what it means to feel lost — or seen — before social media brainwashes them otherwise.”
The 2023 NAB Show will host a conversation with the creative team behind short film “The Boy, the Mole, the Fox, and the Horse.”
March 28, 2023
NAB Show Leads an Exploration of the Evolving Internet
TL;DR
The 2023 NAB Show will explore the impact Web3 and other internet advances are having on the media and entertainment industry.
Attendees can learn about the next generation of the internet through educational programming, demonstrations, special events, networking sessions, and exhibitor participation on the show floor.
NAB Show will also feature the Intelligent Content Experiential Zone that will serve as a home base for attendees interested in new internet technologies. The area will allow visitors to participate in collaborative workshops and presentations.
NAB Show will explore the impact Web3 and other internet advances are having on the media and entertainment industry.
Exploration of the next generation of the internet at the 2023 NAB Show will include educational programming, demonstrations, special events, networking sessions, and exhibitor participation on the show floor.
“Web3 and other emerging technologies like generative AI and the metaverse are opening an entirely new chapter for content creators,” said Chris Brown, NAB executive vice president and managing director, global connections and events. “The 2023 NAB Show is the ideal platform to dive into these new tools and ideas by meeting face-to-face with the experts, innovators and companies that are unleashing the possibilities pushing the limits of our imagination.”
The 2023 event, which marks 100 years for NAB Show, takes place from April 15-19 at the Las Vegas Convention Center.
Event educational sessions will span multiple conferences and tracks, looking at key trends surrounding the next era of internet tech. Topics covered include Web3, data and analytics, generative AI, metaverse, and blockchain and NFTs.
“Web3 is a rapidly evolving technology, and the most successful companies will be those that are willing to experiment with new approaches and collaborate with other industry players,” said Andrea Berry, head of development at Theta Labs and a member of the NAB Show Web3 Advisory Council.
The Web3 Advisory Council, which offers guidance and expertise on NAB Show programming regarding the next generation of the internet, will provide an update on April 17 on the state of Web3, the impact of technology and the current economic and cultural trends that are driving the next phase of content.
NAB Show will also feature the Intelligent Content Experiential Zone, which will serve as a home base for attendees interested in new internet technologies. The area will allow visitors to participate in collaborative workshops and presentations with products from companies such as Interra Systems, Shotshack, Veritone and Wiland. The zone will also feature roundtables, theaters, the AWS Partner Village, the Innovation Village and NABiQ.
In collaboration with StoryTech, NAB Show will offer attendees guided, curated tours. Options include the Data, Data, Data tour, focusing on data and metadata management; the New Production Modalities tour, covering Web3, virtual production solutions and immersive content creation tools; and the Evolution of Video tour, showcasing the current state of video.
A variety of companies will be exhibiting new Web3 and other next-gen internet tech and solutions at NAB Show. These companies include Amagi, AWS, Digital Nirvana, Microsoft, Oracle, SDVI, SSimWave, TSV Analytics, Veritone and Vistex.
It’s time! Come celebrate the 2023 NAB Show’s 100th anniversary.
Registration is now open for the 2023 NAB Show, taking place April 15-19 at the Las Vegas Convention Center. Marking NAB Show’s 100th anniversary, the convention will celebrate the event’s rich history and pivotal role in preparing content professionals to meet the challenges of the future.
NAB Show is THE preeminent event driving innovation and collaboration across the broadcast, media and entertainment industry. With an extensive global reach and hundreds of exhibitors representing major industry brands and leading-edge companies, NAB Show is the ultimate marketplace for solutions to transform digital storytelling and create superior audio and video experiences.
See what comes next! Technologies yet unknown. Products yet untouched. Tools yet untapped. Here the power of possibility collides with the people who can harness it: storytellers, magic makers, and you.
A comprehensive plan for cloud-to-edge computing and connectivity will be important for high-performance consumer metaverse experiences.
March 28, 2023
Resilience, Remote Collaboration, and Creativity on “The Boy, the Mole, the Fox, and the Horse”
Watch the NAB Show session above.
TL;DR
The 2023 NAB Show will host a Main Stage conversation with the creative team behind Academy Award-winning animated short film “The Boy, the Mole, the Fox, and the Horse.”
Open to all attendees, the session “How to Win an Oscar With a Fully Remote Creative Team” will take place Sunday, April 16 at 2:00 p.m. and will feature visual artists from the production.
Art Director Mike McCain and Animation Senior Support Specialist Ben Wood will chat with session host Dave Leopold, strategic development director at LucidLink, about how cloud workflows allowed the film’s creatives to collaborate.
“The Boy, the Mole, the Fox, and the Horse” first aired on the BBC in December 2022 to more than seven million live viewers.
McCain, who has worked with a variety of studios, has credits on “Spider-Man: Across the Spider-Verse” and “The Ghost and Molly McGee.” Before focusing on animation and painting, he directed video games.
Wood, who has more than nine years of visual effects industry experience, has worked at multiple VFX studios including Smoke & Mirrors, DNEG, and NoneMore Productions. He began his career as a post-house runner and then progressed to senior-level IT positions.
Leopold has held roles across the media and entertainment industry, including editor, motion graphics artist, producer and post supervisor. He most recently worked at ViacomCBS where he created content of all types. At LucidLink, he brings remote collaboration solutions to the global creative community.
NAB Show will look at how advanced technology is changing immersive storytelling experiences during a Main Stage session on April 18.
March 24, 2023
The Art of the Prompt
BY ROBIN RASKIN
TL;DR
Now that the initial knee-jerk reactions to having Generative AI as our companions have quieted down a bit, it’s time to get to work and master the skills so that Generative AI is working for us, not the reverse.
The Kevin Roose shockwave goaded every tech columnist to write something about how they broke AI through a combination of provocation and beta testing the hell out of publicly released platforms.
Educational institutions are trying to figure out whether to ban Generative AI or teach it to their students. We’re rolling up our collective sleeves for the human/machine beta test.
Now that the initial knee-jerk reactions to having Generative AI as our companions have quieted down a bit, it’s time to get to work and master the skills so that Generative AI is working for us, not the reverse. The Kevin Roose shockwave goaded every tech columnist to write something about how they broke AI through a combination of provocation and beta testing the hell out of publicly released platforms like Bing AI, Google’s Bard, and the wildly popular ChatGPT.
In the early days of ChatGPT’s general release, CNET had some faux pas, including plagiarism and misinformation seeping into its AI-generated journalism. This week Wired magazine very carefully spelled out its internal rules for how it will incorporate generative AI in its journalism. (No for images, yes for research, no for copyediting, maybe for idea generation.) Educational institutions are trying to figure out whether to ban it or teach it to their students. We’re rolling up our collective sleeves for the human/machine beta test.
Meanwhile, folks like Evo Heyning, creator/founder at Realitycraft.Live and author of a lovely interactive book called PromptCraft, have been doubling down to dissect, coach, and cheer us into the world of using Generative AI effectively. The book, co-written with a slew of AI companions like Midjourney, Stable Diffusion, ChatGPT, and more, looks at the art, science, and lots of iteration that will help get the most out of creative man/machine communications. You can watch some of her fast-paced PromptCraft lessons on YouTube. They’re something like an AI-generation answer to the art episodes of Bob Ross on PBS.
A Magic Mirror for Collective Intelligence
Heyning has worked in AI, as a coder, storyteller, and world-builder since the early days of experimentation. She’s also been a monk, chaplain, and just about everything else that defines a renaissance woman who thinks deeply about AI. “Are the models merely stochastic parrots that spit back our own model? Or are they giving us something that’s a deeper level of comprehension?” she asks.
“AI,” she continues, “is like querying our collective intelligence. Right now most of our chat tools are mirrors of everything that they’ve experienced. They’re closer to asking a magic mirror about collective intelligence than they are about any sort of unique intelligence.”
Our job is to learn the language of the query that coaxes the best out of the machine. “AI whisperers,” those who can create, frame, and refine prompts, are out of the gate with a valued skillset.
While prompts for generating text, images, movies, and music will vary, there are certain commonalities. “A prompt,” says Heyning, “starts by breaking down the big vision of what you’d like to see created, encapsulating it into as few words as possible.” She likens a lot of the process to a cinematographer calling the shots. “You’re thinking about what the focal point of your creation will be. The world of the prompt is about our relationships with AI, and it includes shifts in language that come from both sides, not just from the human side, but also from alternative intelligences.”
Five Easy Pieces
Heyning talks about her process of including five pieces in a prompt. They include the type of content, the description of that content, the composition, the style, and the parameters.
Content Type: In art prompts, the type of content might be a poster or a sticker. For text it might be a letter or a research paper.
Description: The description of the content defines your scene (a frog on a lily pond).
Composition: The composition is the equivalent of your instruction in a movie (frog gulping down a fly or in the bright sunshine).
Style: The style might be pointillism (or, for text, the style of comedy writing).
Parameters: Finally, the parameter might be landscape or portrait, or, for text a word count.
Providing context is also a key component. Details about the setting, characters, and mood help you get the image you had in your mind’s eye. “Negative weights,” things that should not be in your creation, can be important too. Heyning discourages using artists’ names, especially living artists’, in a prompt; these derivatives raise copyright questions. She also reminds us to use commas in our prompts to make them more intelligible to the machine: “They act as separators to help the generator parse a scene.”
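Heyning’s five pieces lend themselves to a simple template. As an illustration only, here is a small Python sketch that assembles a comma-separated prompt from the five components plus optional negative weights; the helper function and its defaults are my own, not from PromptCraft, and the `--no` and `--ar` flags follow Midjourney’s parameter style (other generators express negatives and aspect ratios differently):

```python
def build_prompt(content_type, description, composition, style,
                 parameters="", negatives=None):
    """Assemble an image prompt from the five pieces, comma-separated.

    Commas act as separators that help the generator parse the scene.
    """
    pieces = [content_type, description, composition, style]
    prompt = ", ".join(p for p in pieces if p)
    for n in negatives or []:
        prompt += f" --no {n}"      # Midjourney-style negative weight
    if parameters:
        prompt += f" {parameters}"  # e.g., aspect ratio for portrait/landscape
    return prompt

print(build_prompt(
    "sticker",                                # content type
    "a frog on a lily pond",                  # description: the scene
    "gulping down a fly in bright sunshine",  # composition: the "shot"
    "pointillism",                            # style
    parameters="--ar 2:3",                    # parameter: portrait aspect
    negatives=["people"],                     # negative weight
))
# sticker, a frog on a lily pond, gulping down a fly in bright sunshine, pointillism --no people --ar 2:3
```

The template is less about automation than about discipline: it forces each of the five decisions to be made explicitly before the prompt is sent.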
Heyning’s quite the optimist about how humans and AI will work together, even in much-debated areas like education. “Kids are learning about art history from reading prompts created using Midjourney,” she marvels. “They are introduced to impressionism, realism and abstract art. They’re using terms like knolling (knolling is the act of arranging different objects so that they are at 90-degree angles from each other, then photographing them from above), once relegated to the realm of trained graphic designers.”
What did I learn from my crash course in prompting? The power of a good prompt is the power of parsimonious thinking: getting to the essence of what you want to create. It’s similar to coding, but different, because you don’t need to learn a foreign language; this is a much more Zen-like effort, stripping away all that’s unnecessary, down to the perfect phrase. (P.S. If you prompt ChatGPT to tell you how to write the perfect prompt, you’ll read even more about the Art of the Prompt.)
Even with AI-powered text-to-image tools like DALL-E 2, Midjourney and Craiyon still in their relative infancy, artificial intelligence and machine learning are already transforming the definition of art, including cinema, in ways no one could have predicted. Gain insights into AI’s potential impact on Media & Entertainment in NAB Amplify’s ongoing series of articles examining the latest trends and developments in AI art.
Executives at the 2023 South by Southwest conference urge users to consider AI tools as helpers for human activities such as brainstorming.
March 23, 2023
Posted March 14, 2023
Blinding Lights: Creating Cinematic Beauty for The Weeknd’s Concert Special
TL;DR
Both nights of The Weeknd’s spectacular 90-minute show at Inglewood’s SoFi Stadium in LA were recorded for an HBO concert special, “The Weeknd: Live at SoFi Stadium,” which is now streaming on HBO Max.
Director Micah Bickham employed roughly 25 Sony Venice cameras outfitted with Angenieux and Canon Cine zoom lenses to capture footage of the live concerts.
The production team partnered with a company called Live Grain to add texture and grain to the concert footage to emulate 35mm film stock.
Last November The Weeknd, aka Abel Makkonen Tesfaye, put on a spectacular 90-minute-plus show at Inglewood’s SoFi Stadium in LA. Both nights were recorded for an HBO concert special, The Weeknd: Live at SoFi Stadium, which is now streaming on HBO Max.
It was the last stop of the first leg of the “After Hours til Dawn” tour, and Tesfaye pulled out all the stops to reinforce his performance-artist handle while still confounding the critics as to which music genre to place him in.
The special was directed by Micah Bickham, whose collaboration with the artist goes back to the Starboy era. He talked to NAB Amplify about how the show was created, recorded and broadcast.
“My focus with The Weeknd is particularly around live production,” Bickham said. “We have quite the partnership, really, from the Starboy era around 2015. It’s been a handful of years just to understand the world they’re creating from an album point of view and how that translates into live.”
SoFi Stadium was primarily chosen for the recording because The Weeknd was doing two nights there; both would be recorded and then cut together. “Just thinking how I wanted to shoot it and present it, we had to shoot across the two nights, plus a handful of pickups that we ended up doing. Also, being LA, it was just perfect.”
The discussion before the show about how they wanted the concert film to look took quite a few diversions, but a cinematic theme was always front and center. “We talked a lot about cinematic integrity. Yes, it’s a concert, and yes, it’s an artist performing these songs, but with a world being created and shaped inside of it,” he said.
“We talked about what the DNA and visual language of this film was but in the end for me it had a lot to do with how we presented it more like a film and less like a concert. What I mean by that is when you watch the film the pace and the tension that the pace creates is pretty unusual for a typical concert film.
“We wanted you to sit with the artist and digest what was happening right in front of you, not through an edit and cut that might pull you away too quickly. We wanted you to live in it; when you see it, there’s something that resonates differently than a typical concert film.”
The Weeknd’s live shows have already made headlines, especially his 2021 Super Bowl halftime show, which he had reportedly underwritten to the tune of several million dollars. That show went on to be nominated for Emmys for Outstanding Variety Special (Live), Outstanding Lighting Design/Lighting Direction for a Variety Special, and Outstanding Technical Direction, Camerawork, Video Control for a Special.
The SoFi concerts were specifically staged to let viewers see the most of The Weeknd. Tesfaye had the run of the center of the stadium, with an apocalyptic Toronto skyline at one end and a huge suspended moon at the other. There was no sign of the band, who were hidden out of sight. Tesfaye was on his own, apart from 33 dancers parading as red-shrouded sirens who walked as one.
Concert films can be free-running, sometimes allowing the camera positions to operate without timecode, picking and choosing shots as they go. Bickham wanted a tighter regime for SoFi. “For this particular project there were a couple of differences, just because of the scale of it. It was important to me early on that if I just monitored the board and did a pure shoot-for-capture and didn’t create a line cut, my feeling was that we weren’t going to be able to hold that many cameras accountable to each moment,” he said.
“So the way I directed it was a little bit of a hybrid in the sense that I did cut a line cut. When a director cuts a line cut there’s an immediacy that takes place from the operators that you’re working with. Everybody sits up a little straighter and there’s a little more tension than if I asked them to ‘just shoot and I’ll nudge you around.’
“Certainly there are times when that’s important and the best way to approach it. For this, I felt creating a little bit of tension and immediacy was important so everyone stayed focused. It’s a long show; top to bottom it’s just under two hours. It’s easy even for the best team to settle in and not necessarily be on top of every single moment throughout the show. So yes, we cut it, but with a series of pickups too.”
These were mostly single close-up Steadicam shots featuring The Weeknd and the dancers. Adding them to the two nights at the stadium gave Bickham a substantial editing job, but inevitably it was all about finding the show. “We wanted to break it apart and understand the shape of the narrative and how we could build it in the edit.”
With around 25 Sony Venice cameras in play — both first- and second-generation, but mostly second — there was a lot of footage to work with. Lenses were primarily Angenieux and Canon Cine zooms, with a couple of primes on handheld cameras. “They were all human-operated and all on my Multiview, so we’re cutting the full volume of the 25 throughout the night,” Bickham detailed. “From night one to night two we augmented the positions of some of those 25 cameras just to enlarge our coverage. That gave us a slightly different mindset going into night two. It would just accentuate what we had already done on the previous night.”
Designing the cut, the plan was always to let it breathe, especially around Tesfaye, who was mostly alone in such a huge space. Bickham explains the thinking: “It was partly because it gives the audience an opportunity to be on stage with him. That’s a very unusual experience, especially for a stadium film. Additionally, by doing that it creates a tension. The audience are expecting you to cut, expecting to be moved on to something wide or something different, and when you don’t do that and you stay locked into that position, something really interesting happens; it makes the next shot that much more impactful.
“So in other words, we kind of lingered even if the song ramped up and became more manic. We didn’t let the pace of the moment dictate the pace of the film.”
Bickham and Le Mar Taylor, The Weeknd’s creative director, had talked a lot about letting moments develop in front of the lens and not blasting through the coverage. “We wanted our performance to be more like a film edit.”
The concert film was meant to be a celebratory career moment for Tesfaye, and the means of capture was always up for discussion, with even IMAX put forward as a medium. “We did think about using 35mm film; in fact, through our discussion we ended up partnering with a company called Live Grain,” Bickham recounted. “We wanted the concert film to have a representation of the texture of film, to push into a space where you don’t particularly see it. So that was a huge part of our decision making, even through the grade and the finishing. It’s got a timelessness with this textural element to it and just feels different from a typical concert picture.”
The Live Grain process is usually applied to digitally shot movies. NAB Amplify previously reported on Steven Soderbergh’s No Sudden Move using the process, but for a live production it’s new.
“The cameras didn’t have any filtration in place, just to make sure the process wasn’t disrupted. Live Grain is essentially real-time texture mapping. A lot of great films that were shot digitally used Live Grain to make it feel like 35mm. In a multi-camera, almost two-hour production, 35mm itself isn’t really practical with the mag changes and the amount of film you use.”
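Live Grain’s actual pipeline is proprietary, but the general idea behind real-time texture mapping, blending a scanned film-grain plate over each digital frame, can be sketched simply. The following Python snippet is only an illustration of that general idea, not Live Grain’s method; the function name, the blend strength, and the synthetic arrays standing in for real grain scans are all assumptions:

```python
import numpy as np

def apply_grain(frame, grain_plate, strength=0.08):
    """Blend a film-grain texture over a digital frame.

    frame, grain_plate: float arrays in [0, 1] with the same shape.
    The plate is centered around zero so it adds texture without
    shifting the frame's overall brightness.
    """
    grain = grain_plate - grain_plate.mean()
    return np.clip(frame + strength * grain, 0.0, 1.0)

# Synthetic stand-ins for a graded frame and a scanned grain plate:
rng = np.random.default_rng(0)
frame = rng.random((270, 480, 3))   # one frame, normalized RGB
plate = rng.random((270, 480, 3))   # grain texture, same resolution
textured = apply_grain(frame, plate)
```

In a real-time system this per-pixel blend would run on the GPU at frame rate, with grain plates scanned from actual film stock rather than generated noise.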
The use of Live Grain was in fact introduced by HBO, which has an ongoing relationship with the company. “It’s been tested by them many times on films but our film was maybe the first time being used for a live concert application or certainly one of the first.”
The post effect of film grain really nails the timeless cinematic look, but was there ever an option to broadcast the concert live? “There was a time when we considered a one-day IMAX special, but when HBO got involved we realized we had a great partner for what we eventually wanted to do, and it tied in with the upcoming The Idol drama series.
“Ultimately we were able to bring a more caring approach to it, we could take our time.”
Looking to stay ahead of the curve in the fast-changing world of live production? Learn how top companies are pushing the boundaries of what’s possible in live events and discover the cutting-edge tools and technologies for everything from live streaming and remote workflows to augmented reality and 5G broadcasting with these fresh insights from NAB Amplify:
Kendrick Lamar’s “The Big Steppers: Live from Paris” employed multiple digital cinema cameras to deliver a livestreamed outdoor broadcast.
March 26, 2023
Posted March 10, 2023
Devoncroft Executive Summit Set For April 15 at NAB Show
TL;DR
The 2023 Devoncroft Executive Summit will take place April 15 in Las Vegas.
Running from 12 p.m. to 6 p.m. on the NAB Show Main Stage, the conference will feature speakers and moderated panel sessions with C-level executives.
This year’s event, with the theme “The Business of Media Technology”, will bring together thought-leaders from across the media technology sector.
The 2023 Devoncroft Executive Summit will take place April 15 in Las Vegas.
This year’s event, with the theme “The Business of Media Technology”, will bring together thought-leaders from across the media technology sector to discuss the issues facing the industry, share best practices and network with peers.
Running from 12 p.m. to 6 p.m. on the NAB Show Main Stage, the conference will feature speakers and moderated panel sessions with C-level executives from broadcasters, service providers, media technology vendors, and IT vendors.
The BEIT Conference will feature technical presentations geared toward broadcast engineers and technicians and media technology managers.
March 6, 2023
YouTube Unveils 2023 Priorities As Shorts Monetization Struggles, Plus TikTok’s Surprising New Feature for Teens
By JIM LOUDERBACK
TL;DR
Neal Mohan reveals YouTube’s creator-centric priorities while Shorts monetization lags.
TikTok rolls out time limits for teens while the U.S., Canada, U.K. and the EU ratchet up the pressure.
Twitch’s never-ending creator problems, the surprising upside of paid verification, a call to restrict AI research and new crypto and consumer research.
This Week: Neal Mohan reveals YouTube’s creator-centric priorities while Shorts monetization lags. TikTok rolls out time limits for teens while the U.S., Canada, U.K. and the EU ratchet up the pressure. Plus, Twitch’s never-ending creator problems, the surprising upside of paid verification, a call to restrict AI research and new crypto and consumer research. It’s the first week of March and here’s what you need to know. Oh, and how’s that “in like a lion” working out for you?
New YouTube Chief Lays Out Priorities: A few weeks late, but Neal Mohan has laid out YouTube’s 2023 priorities in a blog post. There’s not a lot new – the most important message was Mohan’s strong endorsement of getting creators paid. Mohan did announce new AI tools – about time – although with “guardrails”. I think that means it’ll be a while before we see anything useful.
Trouble at Twitch Amidst Abundance: Congrats to Kai Cenat for becoming Twitch’s biggest streamer. His month-long subathon – a throwback to the original Justin.TV mission – pushed him over 300,000 subscribers. But it also renewed calls for Twitch to properly compensate creators as Drake suggested he get a $50M payout. Cenat, who just signed with UTA, seemed to agree. Could this be a Ninja repeat all over again? Twitch is trying to do better by creators at least in some ways. For example, the new “experiments page” provides transparency to streamers and provides an interesting lens for the Twitch curious (like me) too.
The Upside of Paid Verification: Although many (including me) decried the paid verification initiatives at Twitter and Meta, a few experts see a silver lining. Brendan Gahan sees a lessening of sensationalist clickbait stories and a renewed focus on quality content and user experience. Alex Kantrowitz goes even further, positing that because most platforms are now dominated by professional creators, it’s time for them to pay for the privilege of making money. I think Gahan’s vision is idealistic but unrealistic, while Kantrowitz ignores the paltry creator middle class that will likely pony up for the check. Decide for yourself – both takes are worth reading.
Thanks so much for reading and see you around the internet. Send me a note with your feedback, or post in the comments! And see you at SXSW!
Feel free to share this with anyone you think might be interested, and if someone forwarded this to you, you can sign up and subscribe on LinkedIn for free here!
For more on Mohan’s priorities, TikTok’s teen time limits, Jellysmack’s OTT plans and AI’s copyright dilemma, check out this week’s Creator Feed – the weekly podcast Renee Teeley and I produce – get it on Apple Podcasts, Spotify or Stitcher!
Three different topics impacting the Creator Economy: the ban or sale of TikTok, Meta’s latest layoffs, and the release of GPT-4.
March 3, 2023
NAB Show’s BEIT Conference Dives Into Media Delivery
TL;DR
SMPTE President Renard Jenkins will deliver the opening keynote at the NAB Show Broadcast Engineering and IT (BEIT) Conference on April 15 at 10 a.m.
Running from April 15-18, the BEIT Conference will feature technical presentations geared toward broadcast engineers and technicians, media technology managers, contract engineers, broadcast equipment manufacturers and distributors, engineering consultants, and R&D engineers.
The conference is produced in partnership with the Society of Broadcast Engineers, the Society of Cable Telecommunications Engineers and the North American Broadcasters Association.
SMPTE President Renard Jenkins will deliver the opening keynote at the NAB Show Broadcast Engineering and IT (BEIT) Conference on April 15 at 10 a.m.
Jenkins, currently senior VP of production integration and creative technology services at Warner Bros. Discovery, has spent more than 35 years in the television, radio, and film industries.
Running from April 15-18, the BEIT Conference will feature technical presentations geared toward broadcast engineers and technicians, media technology managers, contract engineers, broadcast equipment manufacturers and distributors, engineering consultants and R&D engineers.
“The BEIT Conference is the place for media professionals to discover the latest breakthroughs helping to make the content pipeline more effective, efficient and expedient,” said Sam Matheny, NAB executive vice president, Technology and chief technology officer. “We are looking forward to an impressive lineup of presentations at NAB Show that will provide our community with real-world insights into keeping pace with the rapid evolution in how content gets delivered.”
The conference is produced in partnership with the Society of Broadcast Engineers, the Society of Cable Telecommunications Engineers and the North American Broadcasters Association.
BEIT will also feature the presentation of technical papers on topics including NextGen TV, artificial intelligence, data analytics, cybersecurity, media workflows, innovation in radio, media in the cloud, hybrid radio, sustainability, streaming, 5G and video coding, among others. The papers will be included in the BEITC Proceedings, which will also be released by PILOT, the innovation wing of the National Association of Broadcasters, on April 15. For the full schedule, click here.
For more information on the BEIT Conference, click here.
NAB Show is the preeminent event driving innovation and collaboration across the broadcast, media and entertainment industry.
March 3, 2023
NAB Show Is Immersed in… Immersive Storytelling
TL;DR
NAB Show will explore how advanced technology is changing immersive storytelling experiences during a Main Stage session on April 18 at 1 p.m. at the Las Vegas Convention Center.
The session, titled “Immersive Storytelling: Expanding Audiences with XR in Games, Education, and Location-Based Entertainment,” will feature leaders in advanced technology.
Panelists include Aaron Grosky, president and COO of Dreamscape Immersive and COO of Dreamscape Learn; Ted Schilowitz, futurist, Paramount Global; and Jake Zim, senior VP, virtual reality, Sony Pictures Entertainment.
NAB Show will explore how advanced technology is changing immersive storytelling experiences during a Main Stage session on April 18 at 1 p.m. at the Las Vegas Convention Center.
Immersive experiences have become easier to access than ever before. From headsets at home and in schools to location-based entertainment venues, consumers are embracing innovative ways to find their favorite stories.
Today’s entertainment technology has the ability to make every player the main character in their favorite worlds, expanding the universes they love and breathing new life into these stories and characters. We now have the ability to immerse our audiences’ minds into infinite architectures—from blasting ghosts in the Ghostbusters universe to teaching biology by having students solve the mystery of a dying species at an intergalactic wildlife sanctuary.
Sit down with our panelists as they discuss increasing convergence between traditional entertainment and advanced technology; how nostalgia fuels new technology adoption; and what’s next for VR/AR/XR in the entertainment industry.
Grosky oversees the creation of VR adventures for Dreamscape Immersive and Dreamscape Learn. The adventures are designed to give users the experience of watching a story unfold around them as they explore cinematic worlds, characters, and creatures. He previously served in strategic leadership and creative development roles for entertainment ventures focused on television, radio, music, online, and mobile productions.
Schilowitz, the first-ever futurist-in-residence for Paramount Global, works with the company’s technology teams to explore new and emerging technologies, including VR and AR. He previously served as consulting futurist at 20th Century Fox and worked on product development teams that have produced ultra-high resolution digital movie cameras, advanced hard-drive storage products, and desktop video software.
In his role at Sony Pictures Entertainment, Zim oversees global VR production and strategy for the motion picture group. He has produced a variety of interactive projects released across a spectrum of distribution channels. In addition, he works across business units to develop partnerships with technology and production companies in the emerging immersive entertainment space.
In the future, photoreal synthetic humans will be a regular part of our day-to-day lives, and in China we can already catch a glimpse of this in action.
From brand ambassadors to virtual live streamers and virtual tour guides, digital human beings have become commonplace in China, not only in cyberspace but also in real life where their presence is growing.
These digital avatars are known as “meta-humans.” The term should be distinguished from a “posthuman” or “transhuman,” defined as an individual who has enhanced their physical and cognitive abilities beyond what is considered normal for a human being.
Epic Games also has software for creating realistic digital humans called MetaHuman.
In China, meta-humans are described as digital characters of such photorealism that they are getting to the point of being indistinguishable from real life.
Dao Insights, owned by London and Shanghai-based creative agency Qumin Group, says such digitized humans are at the core of China’s metaverse ambitions.
The country’s first hyper-realistic meta-human is called AYAYI, created by Chinese tech company RM Group.
In an interview with Dao Insights, RM Group co-founder Nicky Yu explains that there are two main applications for virtual humans: functional ones that might serve as the automated face and voice of virtual assistants for companies in hotels or banks, for example; and those intended for more creative media and entertainment-based ends.
These so-called IP-oriented virtual humans include anime-based characters and hyper-realistic humans, like AYAYI.
According to Yu, the commercial model of IP-oriented virtual beings is rather unstable: “Just as every movie can’t be a hit at the box office,” he says.
The value of the more service-oriented avatars depends as much as anything on how capable their AI is, together with their cost, “whereas the appearance of the creation is less relevant,” according to Yu.
He describes creating a meta-human as similar to the production of a movie.
“We started with a script outlining the character’s persona and created a sketch based on those pre-set personalities and then modelled it. Once we were satisfied with the modelling, we launched a market survey, collecting feedback from the public to see if they think the appearance matches the persona we created. After that, we further polished and enriched the design of the character.”
It took RM Group just half a year, from initial design to finish, to deliver AYAYI.
One reason for the popularity of virtual stars in China, as perhaps in South Korea too, is that they are insulated from celebrity scandal.
“In recent years, there has been a sense of disappointment and betrayal arising amongst the fan base. As a result, some fans have stopped following stars,” Yu explains.
“Whereas the image of virtual influencers is more controllable and they are always free from scandals. Therefore, they are a much safer option compared to their human counterparts.”
Brands can exploit the malleability and scandal-free persona of a virtual “idol” to engage customers by engendering an “emotional touch and maintaining a strong loyalty amongst fans.”
This is a classic extension of digital marketing.
Yu emphasizes that it is the story and content curated around digital characters that bring real impact for brands on their target audience.
“For example, if a digital human being can create music, which is powered by AI and is liked by audiences, then people are more likely to endorse the virtual being because of the work. Here, AI-generated music is the medium where digital characters can communicate with the public and further establish a relationship with them.”
The metaverse industry in China’s financial hub Shanghai alone is set to hit 350 billion RMB ($52.3 billion) by 2025.
Industrial parks in the city include two dedicated to the metaverse, two focusing on the digital economy, and three designated for intelligent terminal technology, “creating a comprehensive ecosystem that would enable the facilitation of the multi-layered virtual world.”
Yu says RM Group plans to integrate digital assets closer with the physical world, “strengthening the connection between the virtual and real spheres,” and believes the concept of meta-humans has barely scratched the surface.
“I believe a digital life will be a crucial component of virtual human beings in the future,” he says. “When each of us has a digital twin who can understand us in cyberspace or a robotic likeness to conduct daily activities and socialize in the virtual realm, that’s when we can say the era of digital humans and the metaverse has come.”
The metaverse may be a wild frontier, but here at NAB Amplify we’ve got you covered! Hand-selected from our archives, here are some of the essential insights you’ll need to expand your knowledge base and confidently explore the new horizons ahead:
Could South Korean K-pop singers competing as digital avatars in a virtual universe point the way to the future of entertainment?
February 27, 2023
Jim Louderback: YouTube Makes Global Domination Easier for Creators, Just as the Extreme Dangers of Social Media Are Revealed
By JIM LOUDERBACK
TL;DR
YouTube makes it easier to add multiple language tracks to videos – great news for creators and viewers alike.
Social media may be unhealthy for teens, especially for girls, and regulators may step in. The industry needs to step up and address this.
AI-generated images are now essentially open source, mainstream media discovers creator-first brands and an AI video generator that actually works today.
This Week: YouTube makes it easier to add multiple language tracks to videos – great news for creators and viewers alike. However, social media may be unhealthy for teens, especially for girls, and regulators may step in. Also, AI-generated images are now essentially open source, mainstream media discovers creator-first brands, and an AI video generator that actually works today. Also, I’m excited to welcome “wndr” as our sponsor this week, a cool new app that helps travel creators monetize their passion. It’s the end of February and here’s what you need to know now!
YouTube Opens Up Videos To The World: Nuseir Yassin (Nas Daily) has been telling creators to embrace other languages for years, saying that “80% of the world doesn’t speak English, so if you only make content in English, you are only talking to 20% of the world.” Now YouTube is making it easier to add multiple language tracks to a video. Jimmy Donaldson’s (Mr. Beast) company has been a leader here as well. He worked with Unilingo on his first Spanish language channel, and subsequently ramped up his own internal dubbing capability. Great news also for Papercup, an early leader in delivering AI-generated translations that preserve the cadence and voice of the source material. As Nuseir says, “You should localize your content because a kid in Egypt deserves to hear you just as much as a kid in Wisconsin.” Props to YouTube for making that easier for creators AND viewers.
Social Media Unhealthy For Teens: We’re seeing more and more evidence that social media is really bad for teen girls. Given that tweens and teens live on social, this is a problem. The industry needs to step up and address this – but I expect regulators to step in as well. China is leading here, as the local version of TikTok limits kids 13 and under to 40 minutes a day – and online gaming is restricted as well. Pinterest CEO Bill Ready is taking the lead in the U.S. as the company builds on its reputation for being a safe space. Expect this issue to only grow over the coming months.
AI-Generated Images Are Open Source: That’s right – AI-created images cannot be copyrighted. Eric Farber, founder of Creators Legal, told me he wasn’t surprised, because “original works can be copyrighted if they are human created, not machine created.” This has huge ramifications for creators as it moves into chat results, video, 3D models and other areas. I also wonder just how much you would need to customize a GenAI creation to make it protectable. Farber responds that there’s “a lot left to play out. The most significant thing is that the copyright act hasn’t been truly updated in years and just doesn’t cover our world today.”
Mainstream Media Discovers Creator-First Brands: This Washington Post story on KSI and Logan Paul’s Prime brand concludes that community and cult will drive new brand development over the next 10 years. There are many more examples beyond Prime, but also beware of the cautionary tale that is Tesla. Elon Musk – arguably the world’s biggest influencer – drove Tesla to the top but is now destroying both Tesla and Twitter with his flailings. Cathy Hackl posts that creators shouldn’t be “afraid to launch new things”. But trust and community can be fleeting. If you launch a brand, be very careful that you don’t screw it up.
AI Generated Video Finally Arrives: Video generation platform Wochit released an AI experiment last week, which uses a ChatGPT-like AI to generate a surprisingly good video based on a 1-2 sentence description. This current version lacks a voice track but draws on Wochit’s decade-long storehouse of images, b-roll and attractive templates to build short but compelling videos that are ready for posting to Facebook, YouTube and other platforms. Read my deeper analysis and watch my first video for more insight. Full disclosure, I was on Wochit’s advisory board 7 years ago, but have no connection to the company today.
SPONSOR: Introducing wndr – the best way for travel creators to convert followers into hotel bookings. With wndr, creators can customize, personalize and connect their own travel storefront with over a million hotels worldwide, and offer discounted hotels to their followers directly from their social media profiles.
Wndr is revolutionizing travel creator monetization by democratizing the power of global booking platforms to creators, allowing them to generate up to 10% off every booking made on their wndr page.
Interesting essay on why fandom isn’t a job, with a bonus 8-year-old map of Tumblr. Oh, and Tumblr turned blue into green with a Twitter verification parody that hilariously actually made a bunch of money.
Tip of the Week: Setting up a Discord server is a non-trivial task with lots of pitfalls. But the Communityone newsletter just finished its three part series on how to set up Discord to perfection. Read Part 1 here (ht Brendan Gahan).
What I’m Watching Playing: Finally beat Pokémon Violet last week. Super fun for open-world gamers even if you’re not a Pokefan.
Thanks so much for reading, and see you around the internet. Send me a note with your feedback, or post in the comments! Feel free to share this with anyone you think might be interested, and if someone forwarded this to you, you can sign up and subscribe on LinkedIn for free here!
From Instagram’s new messaging platform to Susan Wojcicki’s YouTube exit, Jim Louderback has all the details.
February 22, 2023
Jim Louderback: Instagram’s Shockingly Awesome New Messaging Feature and the Platform That’s Turning Everyone Into a Creator!
BY JIM LOUDERBACK
TL;DR
Lots of chatter about YouTube’s long-time CEO stepping back last week. Some of the best: OG Creator Economy exec Leslie Morgan decries the loss of women leadership in Silicon Valley and the more bro-ish content direction YouTube has seen recently.
Roblox envisions everyone as a creator, says CTO Daniel Sturman. They plan on using generative AI to allow every user to create items, skins, clothing, and even full-on experiences on the platform.
We need a word for AI anthropomorphism. Because that’s what this article is. And that’s what Ben Thompson, author of Stratechery, has spent countless hours falling victim to.
This Week February 21, 2023: What Susan Wojcicki’s departure means for the creator economy, all about Instagram’s innovative new messaging feature that lets creators talk directly to their fans, how Roblox wants to turn everyone into a creator, the weird anthropomorphism roiling the chatbot wars, and TikTok’s efforts to keep creators and increase traffic. It’s the end of February 2023 and here’s what you need to know.
YouTube CEO Wojcicki Steps Down: Lots of chatter about YouTube’s long-time CEO stepping back last week. Some of the best: OG Creator Economy exec Leslie Morgan decries the loss of women leadership in Silicon Valley and the more bro-ish content direction YouTube has seen recently. Longtime YouTube exec Priscilla Lau shares her experiences working with Wojcicki over the past 15 years and anticipates uncertainty to come. I took a look forward at YouTube under its new leader, former head of product Neal Mohan. I’m an optimist here, but I share Morgan’s concerns too – and hope Mohan is as creator-forward as Wojcicki was.
Instagram Messaging Arrives: Matt Navarra calls it “the best new feature in years”, as Instagram adds “Broadcast Channels” to the platform. Now creators can broadcast directly to followers with telegram-style messaging. Dylan Harari thinks this is a game-changer for creators who want to own their audience, because it works where that community already congregates. I agree, as it leans into the private communities that seem to be supplanting big unwieldy social platforms for GenZ and GenA. Alas it’s not broadly available yet – I tried to join Zuck’s “Meta Channel” but I’m still outside looking in. Related – Instagram and Facebook will start charging for verification. Uggh.
Everyone Will Be a Creator: That’s Roblox’s vision, as laid out by CTO Daniel Sturman. They plan on using generative AI to allow every user to create items, skins, clothing, and even full-on experiences on the platform. It’s not easy, given that items will need properties and characteristics that allow them to operate in a 3D world. I love the vision of pairing users with AI tools to turn everyone into a creator. Roblox is crushing it – their quarterly results were on fire with spending on Robux up significantly. And as we talked about last week, their daily usage among kids is almost 2x TikTok – and 17-24 year olds are using it more too.
Uncovering the Chatbots of Dawn: We need a word for AI anthropomorphism. Because that’s what this article is. And that’s what Ben Thompson, author of Stratechery, has spent countless hours falling victim to. From Bing to Sydney to Venom and Fury, Thompson has been uncovering NPCs inside of Microsoft’s chat engine. We definitely need the GenAI version of Asimov’s three laws of robotics. Because even anthropomorphism can get the vulnerable into trouble. Done right, though, these chatbots of dawn could be a tremendous force for good. How quickly the backlash has swelled.
TikTok Moves to Keep Creators, Boost Traffic: As the U.S. rattles its sabers, TikTok doubles down on creators and hopes to forestall a traffic slump too. Kaya Yurieff posted a number of scoops last week, including how growth is slowing, the company is readying a new fund that promises higher payout and a new video paywall too. Creator Fund 2.0 will likely limit itself to mid-level and above creators and may add other requirements as well. And in a related development, TikTok is launching an HQ Trivia clone to help promote a new movie – and perhaps juice engagement too. My take? I’m not a fund fan and creator payout will likely disappoint. And probably won’t keep creators from defecting at scale. I’m also bearish on a $1 paywall, but perhaps longer videos will juice the ooze. I do like the trivia feature – one that Lionsgate probably paid a boatload for.
— SPONSORED: CREATOR SQUARED COMING TO NAB —
So excited to be working with Robin Raskin to bring a creator focus to this year’s NAB Show. Just as new tools and a creator-first focus are transforming broadcasting, creators are building infrastructure that borrows from traditional media but with an iterative and innovative twist. I’m creating workshops, roundtables and discussions for Creator Squared that connect the dots between the two worlds. And I’ll be emceeing live! Want to get involved? Contact Gigi@virtualeventsgroup.org or me!
New per-post revenue analysis reveals the surprising state of creator earnings. Twitch is the most lucrative.
February 19, 2023
Posted February 19, 2023
That’s How You Do It: Sam Pollard on Making “Bill Russell: Legend”
TL;DR
When former HBO Sports President Ross Greenburg approached Sam Pollard two years ago about doing a documentary on NBA legend Bill Russell, Pollard jumped at the chance.
Bill Russell: Legend premiered on Netflix February 8 and includes the last interview with Russell, an 11-time NBA champion with the Boston Celtics.
The two-part documentary, directed by Sam Pollard (MLK/FBI, Mr. Soul!), weaves interviews with archival footage and excerpts from Russell’s memoirs to tell the basketball legend’s story.
When former HBO Sports President Ross Greenburg approached Sam Pollard two years ago about doing a documentary on NBA legend Bill Russell, Pollard jumped at the chance.
“I didn’t hesitate. I said yes, because I grew up in the ’60s,” Pollard told WNYC’s Allison Stewart. “I was very familiar with Bill Russell. I was familiar with the rivalry between Bill Russell and Wilt Chamberlain. I was excited to jump in and do this documentary.”
Bill Russell: Legend premiered on Netflix February 8 and includes the last interview with Russell, an 11-time NBA champion with the Boston Celtics. Russell died during the filmmaking process at his home in Mercer Island, Washington on July 31, 2022. He was 88.
The two-part documentary, directed by Pollard (MLK/FBI, Mr. Soul!), weaves interviews with archival footage and excerpts from Russell’s memoirs to tell the basketball legend’s story. Corey Stoll narrates while Jeffrey Wright reads the excerpts from Russell’s memoirs.
Pollard told Clint Krute during an episode of the Film Comment podcast that one of the biggest challenges “was to say to ourselves, ‘when do we have too much basketball? When do we need to stop and go to something that he was doing off the court?’ And then when we got to his activities off the court, the question we had to ask ourselves was, ‘how long did we stay with that material before we get back to the basketball?'”
The director added that the classic narrative structure they originally had after the tease was to follow Russell’s life chronologically. But Pollard said they decided to show Russell getting drafted in 1956 by the Celtics instead “to create the drama.”
“[Y]ou [Pollard] play with a bit of the established or traditional sports documentary time structure by reversing what we would usually think was gonna happen after the tease that we would start with the origin story narrative,” scholar Samantha Sheppard said during the Film Comment podcast with Pollard and Krute. “But you move us and shift us along and then take us back to more of a familial historical narrative in that sense. And I think that in that way, in watching this, it’s like a trick. Often it does feel quite traditional. It feels even with the time change, still quite chronological at times.”
Sheppard, an associate professor of cinema and media studies in the Department of Performing and Media Arts at Cornell University, authored Sporting Blackness: Race, Embodiment, and Critical Muscle Memory on Screen, which explores sports documentaries and how they represent blackness.
“It [sports documentaries] finally gives these athletes larger context. It lets them speak, it lets them be culturally and critically framed, and it lets them, it lets us as audiences see their sport not divorced from the sociality in which they live,” said Sheppard. “So it’s not a narrative of shut up and dribble, it’s actually ‘tell us more and also show us the sport at the same time.’ So these films become really, really important as a way to provide a greater context to black athletes in ways that we have not seen them on the court, and more particularly off the court in terms of their social or cultural impact.”
Russell played for the Boston Celtics from 1956-1969. During his career, he amassed a long list of achievements, including 11 NBA championships (two of those as a player/coach), five NBA Most Valuable Player awards, 12 NBA All-Star selections, two NCAA championships, and an Olympic gold medal.
“What’s interesting about Russell is from one perspective, he seems like this imposing, 6’9 center for the Boston Celtics. Winner, winner, winner, right? But there’s the other side to Bill Russell where he’s extremely thoughtful,” Pollard told Esquire’s Alex Belth. “He’s extremely nuanced about everything in life, not only as a basketball player but as a Black man in America. And he had opinions about everything.”
Off the court, Russell was deeply involved in the Civil Rights movement, attending the 1963 March on Washington with Dr. Martin Luther King Jr. and the 1967 Cleveland Summit, as well as speaking out on Boston’s school busing crisis.
“This man was a real activist,” Pollard told NECN’s Clinton Bradford. “He didn’t just want to be known as a great basketball player, which he was, he wanted to be known as a human being who was well rounded, who had other things on his mind and other issues he wanted to articulate and talk about.”
Pollard wouldn’t have been able to tell Russell’s story without the mountain of archival footage, stills, and articles dug up by archival producer Helen Russell.
“Documentary filmmaking is really being like an anthropologist,” Pollard told Film Comment’s Krute. “You’re doing a deep dive, you’re doing a tremendous amount of research. And the more research you do, the more you find gold, you really find gold.”
But because Russell played in the 1950s and 1960s, some of that footage wasn’t the greatest.
“The one challenge that we as documentarians always face is that when you see this old footage, you say, ‘It looks pretty crappy, and there wasn’t a lot of coverage,'” Pollard told Variety’s Addie Morfoot. “So, you have to sort of take a leap of faith. [We looked at the archives] and would say – ‘Is that Bill Russell?’ But we also knew that we were never going to get the same kind of coverage and quality we see today.”
Even with the at times grainy footage, Pollard still managed to weave a narrative that makes Bill Russell: Legend stand out.
“What helps set the documentary apart is that Pollard has assembled a treasure trove of vintage game footage and vintage interviews, as well as a wealth of new or new-ish interviews with Russell, [Bill] Cousy, Satch Sanders and many of their contemporaries including the aforementioned [Jerry] West, Bill Bradley, Walt Frazier and more,” wrote The Hollywood Reporter’s Dan Fienberg. “There’s a very good balance between the game footage, which accentuates Russell’s grace and athleticism, and the interviews, which concentrate on his intensity and, perhaps more than anything, his intellect.”
Russell was a student of the game, spending hours studying.
“[H]e understood that the game of basketball is just not about being physical, it’s about being mental,” Pollard told WNYC’s Stewart. “It’s about understanding how to position yourself and play against other players, where you should be, where one of your teammates should be to get the ball to take it down the court to get a basket, to know when to get a rebound and where to get the rebound and how to use that. When he was at USF with his future teammate, K.C. Jones, they came up with the strategies. That’s why they would call themselves rocket scientists.”
Pollard added: “They were really thinking about the physics of basketball. It just goes to show you that athletes are very intelligent people, they’re not just jocks, they’re very intelligent. Bill took it to another level in terms of understanding the science and the physics of the game and how to use the game to his advantage.”
Alex Belth in his introduction to his interview with Pollard summed the documentary and Russell up: “Bill Russell: Legend reminds us that in the world of team sports, the biggest team player of them all was also perhaps the most singular individualist, too.”
As the streaming wars rage on, consumers continue to be the clear winners with an abundance of series ripe for binging. See how your favorite episodics and limited series were brought to the screen with these hand-picked articles plucked from the NAB Amplify archives:
Adam McKay’s new docudrama for HBO, “Winning Time: The Rise of the Lakers Dynasty,” shows how the Lakers changed the way basketball is played.
February 19, 2023
Posted February 18, 2023
Roundtable Discussion: How Will AI Impact Filmmakers and Other Creative Professionals?
TL;DR
Innovations like ChatGPT and DALL·E 2 highlight the incredible advances that have taken place with AI, causing professionals in countless fields to wonder whether such innovations mean the end of thought leadership or whether they should instead focus on the opportunities presented by such tools.
What do filmmakers and other creative professionals really think about these developments, though? What are the top concerns, questions and viewpoints surrounding the surge of AI generative technologies that have recently hit the open market? Should we be worried, or simply embrace the technology, forge ahead and let the bodies fall in the wake?
“As the saying goes – with great power comes great responsibility, and sadly, I think that may not end well for many developers who can’t control the who/where/how the end users utilize these amazing technologies,” writes ProVideo Coalition (PVC) contributor Jeff Foster.
“AI has already generated huge legal and ethical issues that I suspect will only grow larger. But the genie is out of the bottle – indeed he or she emerged at the Big Bang itself – so let’s work together to figure out how to work with this fast-emerging reality to continue to be storytellers that speak to the human condition,” writes PVC contributor Mark Spencer.
What do filmmakers and other creative professionals really think about these developments, though? What are the top concerns, questions and viewpoints surrounding the surge of AI generative technologies that have recently hit the open market? Should we be worried, or simply embrace the technology, forge ahead and let the bodies fall in the wake?
Below is how various PVC writers explored those answers in a conversation that took shape over email. You can keep the discussion going in the comments section or on Twitter.
I’m definitely not unbiased, as I’m currently engaging with as much of it as I can get my hands on (and have time to experiment with) at the user level, sorting the useful from the useless noise so I can share my findings with the ProVideo community.
But with that said, I do see some lines being crossed where there may be legitimate concerns that producers and editors will have to keep in mind as we forge ahead and not paint ourselves into a corner – either legally or ethically.
Sure, most of the tools available out there are just testing the waters – especially with the AI image and animation generators. Some are getting really good (except for too many fingers and huge breasts) but when it gets indistinguishable from reality, we may see some pushback.
So the question arises whether people generating AI images IN THE STYLE OF [noted artist] or PHOTOGRAPHED BY [noted photographer] are in fact infringing on those artists’ copyrights/styles or simply mimicking published works.
It is already being addressed in the legal system in a few lawsuits against certain AI tool developers, which will eventually shake out exactly how their tools gather the diffusion data they use (it’s not just copy/paste). That will either settle the direct copyright infringement argument brought by artists, or it will be a nail in the coffin for many developers and forbid further access to available online libraries.
The next identifiable technology that raises potential concern, IMO, is the class of AI tools that will regenerate facial imagery in film/video for dubbing and ratings control, which could easily be misused for misinformation.
On that note, I’ve mentioned ElevenLabs in my last article as a highly advanced TTS (Text To Speech) generator that not only allows you to customize and modify voices and speech patterns reading scripted text with astounding realism, but also lets you sample ANY recorded voice and then generate new voice recordings with your text inputs. For example, you could potentially use any A-list celebrity to say whatever marketing blurb you want in a VO or make a politician actually tell the truth (IT COULD HAPPEN!).
But if you could combine those last two technologies together, then we have a potential for a flood of misuse.
I’ve been actively using AI for a feature documentary I’ve been working on the past few years, and it’s made a huge difference on the 1100+ archival images I’ve retouched and enhanced, so I totally see the benefits for filmmakers already. It does add a lot of value to the finished piece and I’m seeing much cleaner productions in high-end feature docs these days.
As recently demonstrated, some powerful tools and (rather complex) workflows are being developed specifically for video & film, to benefit on-screen dubbing and translations without the need for subtitles. It’s only a matter of time before these tools are ready and available for use by the general public.
As the saying goes – with great power comes great responsibility, and sadly, I think that may not end well for many developers who can’t control the who/where/how the end users utilize these amazing technologies.
I am not sure we will see a sudden shift in the production process regarding AI and documentary filmmaking. There is something about being on location with a camera in hand, finding the emotional thread, and framing up to tell a good story. It is nearly impossible to replace the person holding the camera or directing the scene. I think the ability of a director or photographer to light a scene, light multi-camera interviews, and be with a subject through times of stress is irreplaceable.
Yet, AI can easily slip into the pre-production and post-production process for documentary filmmaking. For example, I already use Rev.com for its automatic transcription of interviews and captions. Any technology that makes collaboration easier and increases the speed of editing will run through the post-production work like wildfire. I can remember when we paid production assistants to log reality TV footage. Not only was the transcription tedious, it was also expensive to pay for throughout the shoot. Any opportunity to save a production company money will be used.
Then we get to the type of documentary filmmaking that may require the recreation of scenes to tell the story of something that happened long before the documentary shoot. I could see documentary producers and editors turn to whatever AI tool to recreate a setting, scenes or even an influential person’s voice. The legal implications are profound, though, and I can see a waterfall of new laws giving the families of notable people intellectual property rights in a relative’s image and voice no matter how long ago they passed, or at the very least 100 years of control of that image and voice. Whenever there is money to be made from a person’s image or voice, there will be bad actors and those who ask for forgiveness instead of permission, but I bet the legal system will eventually catch up and protect those who want it.
The rights issues are extremely knotty (I’ve recently written about this). On one hand, the extant claims that a trained AI contains “copies of images” are factually incorrect. The trained state of an AI such as Stable Diffusion, which is at the centre of recent legal action, is represented by something like the weights of interconnections in a neural network, which is not image data. In fact, it’s notoriously difficult to interpret the internal state of a trained AI. Doing that is a major research topic, and our lack of understanding is why, for instance, it’s hard to show why an AI made a certain decision.
It could reasonably be said that the trained state of the AI contains something of the essence of an artist’s work and the artist might reasonably have rights in whatever that essence is. Worse, once an AI becomes capable of convincingly duplicating the style of an artist, probably the AI encompasses a bit more than just the essence of that artist’s work, and our inability to be specific about what that essence really is doesn’t change the fact that the artist really should have rights in it. What makes this really hard is that most jurisdictions do not allow people to copyright a style of artwork, so if a human artist learns how to duplicate someone else’s style, so long as they’re upfront about what they’re doing, that’s fine. What rubs people the wrong way is doing it with a machine which can easily learn to duplicate anyone’s work, or everyone’s work, and which can then flood the market with images in that style which might realistically begin to affect the original artist’s work.
In a wider sense this interacts with the broad issues of employment in general falling off in the face of AI, which is a society-level issue that needs to be addressed. Less skilled work might go first, although perhaps not – the AI can cut a show, but it can’t repair the burst water main without more robotics than we currently have. One big issue coming up, which probably doesn’t even need AI, is self-driving vehicles. Driving is a massive employer. No plans have been made for the mass unemployment that’s going to cause. Reasonable responses might include universal basic income but that’s going to require some quite big thinking economically, and the idea that only certain, hard-to-automate professions have to get up and go to work in the morning is not likely to lead to a contented society.
This is just one of a lot of issues workers might have with AI and so the recent legal action might be seen as an early skirmish in what could be a quite significant war. I think Brian’s right about this not creating sudden shifts in most areas of production. To some extent the film and TV industry already does a lot of things it doesn’t really need to do, such as shooting things on 65mm negative. People do these things because it tickles them. It’s art. That’s not to say there might not likely be pressures to use more efficient techniques when they are available, as has been the case with photochemical film, and that will create another tension (as if there aren’t already a lot) between “show” and “business”. As a species we tend to be blindsided by this sort of thing more than we really should be. We tend to assume things won’t change. Things change.
I do think that certain types of AI information might end up being used to guide decision-making. For instance, it’s quite plausible to imagine NLE software gaining analysis tools which might create the same sort of results that test screenings would. Whether that’s good or not depends how we use this stuff. Smart application of it might be great. Allowing it to become a slave driver might be a disaster, and I think we can all imagine that latter circumstance arising as producers get nervous.
While AI has a lot to offer, and will cause a great deal of change in our field and across society, I don’t think it’ll cause broad, sweeping changes just yet. Artificial intelligence has been expected to be the next big thing for decades now, and (finally!) some recent breakthroughs are starting to have a more obvious impact. Yet, though ChatGPT, Stable Diffusion, DALL·E and Midjourney can be very impressive, they can also fail badly.
ChatGPT seems really smart, but if you ask it about a specialist subject that you know well, it’s likely to come up short. What’s worse than ChatGPT not knowing the answer? Failing to admit it, but instead guessing wrong while sounding confident. Just for fun, I asked it “Who wrote Final Cut Pro Efficient Editing?” because that’s the modern equivalent of Googling yourself, right? It’s now told me that both Jeff Greenberg and Michael Wohl wrote the book I wrote in 2020, and I’m not as impressed as I once was.
Don’t get me wrong: if you’re looking for a surface level answer, or something that’s been heavily discussed online, you can get lucky. It can certainly write the script for a very short, cheesy film. (Here’s one it wrote: https://vimeo.com/795582404/b948634f34.) Lazy students are going to love it, but it remains to be seen if it’s really going to change the way we write. My suspicion is that it’ll be used for a lot of low-value content, as AI-based generators like Jasper are already used today, but the higher-value jobs will still go to humans. And that’s a general theme.
Yes, there will be post-production jobs (rotoscoping, transcription) done by humans today which will be heavily AI-assisted tomorrow. Tools like Keyper can mask humans in realtime, WhisperAI does a spectacular job of transcription on your own computer, and there are a host of AI-based tools like Runway which can do amazing tricks. These tasks are mostly technical, though, and decent AI art is something novel. Image generators can create impressive results, albeit with many failures, too many fingers, and lingering ethical and copyright issues. But I don’t think any of these tools are going away now. Technology always disrupts, but we adapt and find a new normal. Some succeed, some fail.
A saving grace is that it’s easy to get an AI model about 95% of the way there, but the last 5% gets a bit harder, and the final 1% is nearly impossible. Now sometimes that 5% doesn’t matter — a voice recording that’s 95% better is still way better, and a transcription that’s nearly right is easy to clean up. But a roto job where someone’s ears keep flicking in and out of existence is not a roto job the client will accept, and it’s not necessarily something that can be easily amended.
So, if AI is imperfect, it won’t totally replace humans at all the jobs we’re doing today. Many will be displaced, but we’ll get new jobs too. AI will certainly make it into consumer products, where people don’t care if a result is perfect, but to be part of a professional workflow, it’s got to be reliable and editable. There are parallels in other creative fields, too: after all, graphic designers still have a livelihood despite the web-based templated design tool Canva. Yes, Canva took away a lot of boring small jobs, but it doesn’t scale to an annual report or follow brand guidelines. The same amount of good work is being done by the same number of professionals, and there are a lot more party invitations that look a little better.
For video, there will be a lot more AI-based phone apps that will perform amazing gimmicks. More and better TikTok filters too. There will also be better professional tools that will make our jobs easier and some things a lot quicker — and some, like the voice generation and cleanup tools, will find fans across the creative world. Still, we are a long, long way from clients just asking Siri 2.0 to make their videos for them.
Beyond video, the imperfection of AI is going to heavily delay any society-wide move to self-driving cars. The world is too unpredictable, my Tesla still likes to brake for parked cars on bends, and to move beyond “driver assistance”, self-driving tech has to be perfect. A capability to deal with 99.9999% of situations is not enough if the remaining 0.0001% kills someone. There have been some self-driving successes where the environment is more carefully mapped and controlled, but a general solution is still a way off. That said, I wouldn’t be surprised to see self-driving trucks limited to predictable highway runs arrive soon. And yes, that will put some people out of work.
So what to do? Stay agile, be ready for change. There’s nothing more certain than change. And always remember, as William Gibson said: “The Future is Already Here, it’s Just Not Very Evenly Distributed.”
AI audio tools keep growing. Some that come to mind are Accusonus ERA (currently being bought), Adobe Speech Enhancement, AI Mastering, AudioDenoise, Audo.ai, Auphonic, Descript, Dolby.io, Izotope RX, Krisp, Murf AI Studio, Veed.io and AudioAlter. Of those, I have personally tested Accusonus ERA, Adobe Speech Enhancement, Auphonic, Descript and Izotope RX6.
I have published articles or reviews about a few of those in ProVideo Coalition.
There’s a lot of use of AI and “smart” tools in the audio space. I often think a lot of it is really just snake oil – using “AI” as a marketing term. But in any case, there are some cool products that get you to a solid starting point quickly.
Unfortunately, Accusonus is gone and has seemingly been bought by Meta/Facebook. If not directly bought, then they’ve gone into internal development for Facebook and are no longer making retail plug-ins.
In terms of advanced audio tools, Sonible is making some of the best new plug-ins. Another tool to look at is Adobe’s Podcast application, which is going into public beta. Their voice enhancement feature is available to be used now through the website. Processing is handled in the cloud without any user control. You have to take or leave the results, without any ability to edit them or set preferences.
AI and Machine Learning tools offer some interesting possibilities, but they all suffer from two biases. The first is the bias of the developers and the libraries used to train the software. In some cases that will be personal biases and in others it will be the biases of the available resources. Plenty has been written about the accuracy of dog images versus cat images created by AI tools. Or that of facial recognition flaws with darker skin, including tattoos.
The second large bias is one of recency – mainly the internet. Far more data, both general and specific, is available from the last 10-20 years of internet resources than from before. If you want to find niche information from before the advent of the internet, let’s say before 1985, it can be a very difficult search. That won’t be something AI will likely access. For example, if you tried to have AI mimic the exact way that Cinedco’s Ediflex software and UI worked, I doubt it would happen, because the available internet data is sparse and it’s so niche.
I think the current state of the software is getting close enough to fool many people and could probably pass the famous Turing test criteria. However, it’s still derivative. AI can take A+B and create C or maybe D and E. What it can’t do today (and maybe never), is take A+B and create K in the style of P and Q with a touch of Z. At least not without some clear guidance to do so. This is the realm of artists to be able to make completely unexpected jumps in the thought process. So maybe we will always be stuck in that 95% realm and the last 1-5% will always be another 5 years out.
Another major flaw in AI and Machine Learning – in spite of the name – is that it does not “learn” based on user training. For instance, Pixelmator Pro uses image recognition to name layers. If I drag in a photo of the Eiffel Tower it will label it generically as tower or building. If I then correct that layer name by changing it to Eiffel Tower, the software does nothing to “learn” from my correction. The next time I drag in the same image, it still gets a generic name, based on shape recognition. So there’s no iterative process of “training” the library files that the software is based on.
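That missing feedback loop is not hard to sketch at the application level: cache the user’s override, keyed by a fingerprint of the image, and consult it before falling back to the generic recognizer. Here is a minimal Python illustration of the idea; the fingerprint function and the recognizer are stand-ins, not any real Pixelmator Pro API:

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash of the image content
    return hashlib.sha256(image_bytes).hexdigest()

class LabelingAssistant:
    """A generic recognizer plus a memory of user corrections."""

    def __init__(self, recognize):
        self.recognize = recognize   # the app's built-in shape recognizer
        self.corrections = {}        # fingerprint -> user-supplied label

    def label(self, image_bytes: bytes) -> str:
        # A remembered correction always wins over the generic guess
        key = fingerprint(image_bytes)
        return self.corrections.get(key, self.recognize(image_bytes))

    def correct(self, image_bytes: bytes, user_label: str) -> None:
        self.corrections[fingerprint(image_bytes)] = user_label

# The second import of the same photo now keeps the user's name
assistant = LabelingAssistant(recognize=lambda img: "tower")
photo = b"...eiffel-tower-pixels..."
assistant.label(photo)                   # generic guess: "tower"
assistant.correct(photo, "Eiffel Tower")
assistant.label(photo)                   # now: "Eiffel Tower"
```

A shipping product would want a perceptual hash so that near-duplicate images also match, but the principle is the same: user corrections become part of the system’s behavior instead of being thrown away.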
I do think that AI will be a good assistant in many cases, but it won’t be perfect. Rotoscoping will still require human finesse (at least for a while). When I do interviews for articles, I record them via Skype or Zoom and then use speech-to-text to create a transcript. From that I will write the article, cleaning up the conversation as needed. Since the software is trying to create a faithful transcription of what the speaker said, I often find that the clean-up effort takes more time and care than if I’d simply listened to the audio and transcribed it myself, editing as I went along. So AI is not always a time-saver.
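Part of that transcript cleanup is mechanical and scriptable; the editorial judgment is not. As a rough illustration (the filler list here is purely illustrative), a tiny Python pass can strip the verbal tics a faithful speech-to-text engine preserves, leaving the real rewriting to a human:

```python
import re

# Illustrative list of verbal tics a verbatim transcript preserves
FILLERS = {"um", "uh", "er", "ah", "like", "you know", "sort of", "i mean"}

def strip_fillers(transcript: str) -> str:
    """Remove common verbal tics; everything else is left for a human editor."""
    # Longest phrases first so "you know" is matched before shorter fillers
    pattern = r"\b(" + "|".join(
        re.escape(f) for f in sorted(FILLERS, key=len, reverse=True)
    ) + r")\b[,]?\s*"
    cleaned = re.sub(pattern, "", transcript, flags=re.IGNORECASE)
    # Collapse any doubled spaces the removals left behind
    return re.sub(r"\s{2,}", " ", cleaned).strip()

strip_fillers("Um, so we, uh, shot it on, you know, two cameras.")
# -> "so we, shot it on, two cameras."
```

A pass like this only handles the rote part; restoring capitalization, fixing false starts and deciding what the speaker actually meant is exactly the clean-up work that still takes human time and care.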
There are certainly legal questions. At what point is an AI-generated image an outright forgery? How will college professors know whether the student’s paper is original versus something created through ChatGPT? I heard yesterday that actual handwriting is being pushed in some schools again, precisely because of such concerns (along with the general need to have legible writing). Certainly challenging ethical times ahead.
I think that in the world of film we have a bit of breathing room when it comes to advances in AI bringing significant changes and perhaps a bit of an early warning of what might be to come. Our AI tools are largely technical rather than creative, and the creative ones less well developed compared to the image and text creation tools, so they don’t yet pose much of a challenge to our livelihoods and the legal issues aren’t as complicated. For example, AI noise reduction or upscaling – they are effectively fixing our mistakes – and there isn’t much need for the models to be trained on data they might not have legal access to (though I imagine behind the scenes this is an important topic for them, as getting access to high quality training data would improve their product).
I see friends who are writers or artists battling to deal with the sudden changes in the AI landscape. I know copywriters whose clients are asking whether they can’t just use ChatGPT now to save money, and others whose original writing has been falsely flagged as AI-generated by an AI analysis tool; I’m sure the irony is not lost on them, but it doesn’t lessen their stress. So in terms of livelihoods and employment I think there are real ethical issues, though I have no idea how they can be solved, aside from trusting that creative people will always adapt. That takes time, though, and the suddenness of all this has been hard for many.
On the legal side, I feel like there is a massive amount of catching up to do and it will be fascinating to see how these current cases work out. It feels like we need a whole new set of legal precedents to deal with emerging AI tools, aside from just what training data the models can access. Looking at the example of deepfakes, I love what a talented comedian and voice impersonator like Charlie Hopkinson can do with it – I love watching Gandalf or Obi-Wan roasting their own shows – but every time I watch, I wonder what Sir Ian McKellen would think – though somehow I think he would take it quite well. Charlie does put a brief disclaimer on the videos, but that doesn’t feel enough to me. I would have thought the bare minimum would be a permanent disclaimer watermark, let alone a signed permission from the owner of that face! I think YouTube has put some work into this, focusing more on the political or the even less savoury uses, which of course are more important, but more needs to be done.
I think we in the worlds of production and post would be wise to keep an eye on all the changes happening so we can stay ahead and make them work to our advantage.
I have been experiencing a sense of excitement and wonderment over the most recent developments in AI.
It’s accelerating. And at the same time, I’m cynical – I’ve read/watched exciting research (sometimes from SIGGRAPH, sometimes from some smaller projects) that never seems to see the light of day.
About six years ago, I did some consulting work around machine learning and have felt like a child in a candy store, discovering something new and fascinating around every corner.
Am I worried about AI from a professional standpoint?
Nope. Not until they can handle clients.
If the chatbots I encounter are any indicators? It’s going to be a while.
For post-production? It’s frustrating when the tools don’t work. Because there’s no workaround that will fix it when it fails.
ChatGPT is an excellent example of this. It’s correct (passing the bar, passing the MCAT), until it’s confidently incorrect. It gave me answers that just don’t exist/aren’t possible. How is someone supposed to evaluate this?
If you use ChatGPT as your lawyer, and it’s wrong, where does the liability live?
That’s the key in many aspects – it needs guidance, a professional who knows what they’re doing.
In creating something from nothing, there are a couple of areas in the crosshairs:
Text2image. That works sorta well. Video is a little harder.
Music generation. I totally expect this to be a legal nightmare. When the AI generates something close to an existing set of chords, who (if anyone) gets a payment? If you use it in your video, who owns the rights to that synthetic music?
Speech generation. We’ve been cloning voices decently for a while (see Descript’s Lyrebird and the newer ElevenLabs voice synthesis). ElevenLabs has at least priced it steeply, but suddenly audiobook generation with different voices for different characters will make it more difficult to make a living as a voice artist.
Deepfakes. It’s still a long way from easy face replacement.
These tools excite me most in the functional areas instead of the “from scratch” perspective.
Taking painful things and reducing the difficulty.
That’s what good tools should always do, especially when they leave the artist the ability to influence the guidance.
OpenAI’s Whisper really beats the pants off other speech-to-text tools. I’m dying just to edit the text. Descript does this, which is close to what I want.
Colourlab.ai‘s matching models – 100% what I’m talking about. Different matching models, a quick pick, and you’re on your way. (Disclaimer: I do some work for Colourlab.)
Adobe’s Remix is a great example of this. It’s totally workable for nearly anyone and is like magic. It takes this painful act of splicing music to itself (shorter or longer) and makes it easy.
The brightest future.
You film an interview. You read the text, clean it up, and tell a great story.
Except there’s an issue – something is unclear in the statements made. You get clearance from the interviewee about the rephrasing of a statement. Then use an AI voice model of their voice to form the words. And another to re-animate the lips to look like the subject said it.
This is “almost here.”
The dark version of this?
It’s society-level scary (but so are auto-driving cars that can’t really recognize children, which one automaker is struggling with.)
Here’s a scary version: You get a phone call, and you think it’s a parent or significant other. It’s not. It’s a cloned voice and something like ChatGPT trained on content that can actually respond in near-real time. I’ll leave the “creepy” factor up to you here.
Ethical ramifications
Jeff Foster brings up this question – what happens when we can convincingly make people say what we want?
At some level, we’ve had that power for over a decade. Just the fact that we could take a word out of someone’s interview gives us that power. It’ll just make that easier/more accessible. As well as “I didn’t say that; it was AI” being a defense.
It’s going to be ugly because our lawmakers, and our judicial system, can’t educate themselves quickly enough if the past is any indication.
Generative AI isn’t “one-click”
As Iain pointed out in the script he had ChatGPT write, it did the job, it found the format, but it wasn’t very good.
I wonder how it would help me around writer’s block?
Generative text is pretty scary – and may disrupt Google.
Since Google’s ranking is based on inbound/outbound links, blog spam is going to explode even more very soon, and it’ll be harder to tell what content is well written and what is not.
Unless it comes from a specific person you trust.
And as Oliver pointed out, it’s problematic until I can train it with my data – it needs an artist.
The lack of being able to re-train will mean that failures will consistently fail. Then we’re in workaround hell.
Personally I believe that AI technologies are going to cause absolutely massive disruption not just to the production and post-production industries, but across the entire gamut of human activity in ways we can’t even imagine.
In the broadest sense, the course of evolution has been one of increasing complexity, often with exponential jumps (e.g., Big Bang, Cambrian explosion, Industrial Revolution). AI is a vehicle for another exponential leap. It is extraordinarily exciting and terrifying, fraught with danger, yet it will also create huge new opportunities.
How do we position ourselves to benefit from, or at least survive, this next revolution?
I’d suggest moving away from any task or process that AI is likely to take over in the short term. Our focus should be on what humans (currently) do better than AI. Billy Oppenheimer, in his article on The Coffee Cup Theory of AI, calls this Taste and Discernment. Your ability to connect to other humans through your storytelling, to tell the difference between the great and the good, to choose the line of dialog, the lighting, the composition, the character, the blocking, the take, the edit, the sound design…and use AI along the way to create all the scenarios from which you use your developed sense of taste to discern what will connect with an audience.
AI has already generated huge legal and ethical issues that I suspect will only grow larger. But the genie is out of the bottle – indeed he or she emerged at the Big Bang itself – so let’s work together to figure out how to work with this fast-emerging reality to continue to be storytellers that speak to the human condition.
(These words written by me with no AI assistance :-))
AI-generated art has advanced by leaps and bounds, but studios and audiences alike might not yet be ready for its Hollywood closeup.
South Korea’s Synthetic Pop Stars: What Do “They” Mean for the Metaverse?
TL;DR
South Korea is the world’s testing ground for tech, so when K-pop singers compete in a virtual universe what does this tell us about the future of entertainment?
The popularity of synthetic pop stars in South Korea may be peculiar to that culture. Or is it?
Could the merger of virtual with the real create a new genre of content?
With its highly digital, device-literate, young and ultra-competitive society, South Korea is looked on as the world’s petri dish for future media. The current vogue for K-pop stars who use avatars, and the popularity of entirely virtual singers and influencers, mean the country is one to watch.
South Korean tech company Kakao Entertainment, for example, is billing Mave, its artificial band, as the first K-pop group created entirely within the metaverse. It is using machine learning, deep fake, face swap and 3D production technology.
To give them global appeal, the company wants the “girls” of Mave to eventually be able to converse in, say, Portuguese with a Brazilian fan and Mandarin with someone in Taiwan, fluently and convincingly.
The idea, the project’s technical director, Kang Sung-ku, tells Jin Yu Young and Matt Stevens at The New York Times, is that once such virtual beings can simulate meaningful conversations, “no real human will ever be lonely.”
Kakao also runs the virtual world called “Weverse” or simply “W.” In part of this world there’s a game show called Girl’s Re:verse that features 30 singers, eliminated over time, until the last five standing form a band. All are members of established K-pop bands or solo artists. But they are all masked as animated avatars.
Strictly speaking, this is not a metaverse, says the NYT. They are instead proprietary platforms users have to log in to, accepting terms of service and with no sign of any cryptographic features.
But the complete blurring of the virtual with the real is surely one core trait of what will become the metaverse.
Another example is a TV reality show not dissimilar to The Masked Singer, but with a difference. Avatar Singer is a 15-episode music competition that ran live on Korean TV channel MBN. It features celebrity competitors masked as digital 3D avatars complete with superpowers.
As explained by one of the vendors behind the project, the show used live motion capture, facial capture, Unreal Engine and augmented reality. These enabled the team to “expand the conventional stage into an evolved universe.”
Compared with their Korean counterparts, media companies in the United States have only engaged in “light experimentation” with the metaverse so far, Andrew Wallenstein, president and chief media analyst of Variety Intelligence Platform, tells The New York Times.
Countries like South Korea “are often looked at like a test bed for how the future is going to pan out,” Wallenstein said. “If any trend is going to move from overseas to the US, I would put South Korea at the front of the line in terms of who is likeliest to be that springboard.”
Already Korean “virtual influencers” like Rozy have Instagram followings in six figures and promote real brands like Chevrolet and Gucci.
“We want to create a new genre of content,” said Baik Seung-yup, Rozy’s creator, who estimates that about 70% of the world’s virtual influencers are Korean.
“From a Western perspective, it can seem strange,” Enrique Dans writes in a blog post on Medium. “The [virtual pop] groups all look pretty similar (the manga-style avatars have huge eyes and heart-shaped faces), and are deeply rooted in the cultural codes of the country’s youth.”
He adds, “Young Koreans follow their favorite bands, attend concerts, and celebrate their bands’ rise to popularity as a reflection of their competitive society, where they must gain access to certain schools and universities if they want to find a good job.”
Son Su-jung, a producer for the show, also says that part of the point was to give K-pop singers — “idols,” as they are called — a break from the industry’s relentless beauty standards, letting them be judged by their talent, not their looks.
“Idols in the real world are expected to be a product of perfection, but we hope that through this show, they can let go of those pressures,” she said.
The metaverse may be a wild frontier, but here at NAB Amplify we’ve got you covered! Hand-selected from our archives, here are some of the essential insights you’ll need to expand your knowledge base and confidently explore the new horizons ahead:
Synthetic media, sometimes referred to as “deepfake” technology, is already impacting the creative process for artists (and non-artists).
February 17, 2023
What Does Susan Wojcicki’s Exit Mean For YouTube?
BY JIM LOUDERBACK
TL;DR
With Susan Wojcicki stepping back at YouTube, it’s certainly the end of an era. But I’d rather look forward than back.
YouTube’s new leader Neal Mohan has led product at YouTube since 2015, but he arrived at Google with the DoubleClick acquisition in 2008.
The new YouTube chief has an impressive background in strategy, operations and product – all essential to chart the future path of YouTube.
With Susan Wojcicki stepping back at YouTube, it’s certainly the end of an era. But I’d rather look forward than back. For insight into what it means for YouTube, study her replacement, Neal Mohan.
First off, I am a Mohan fan. He’s been a regular speaker at VidCon and I’ve always been appreciative of his expertise and insight into YouTube’s future product direction.
Mohan has led product at YouTube since 2015, but he arrived at Google with the DoubleClick acquisition in 2008. While at DoubleClick he grew and ran the business, ultimately running strategy and leading the sale to Google.
At Google he built the display and video ad business and grew it into the industry leader it is today. Interestingly, Mohan also worked as a strategy intern at Microsoft while getting his Stanford MBA.
The new YouTube chief has an impressive background in strategy, operations and product – all essential to chart the future path of YouTube. His strong strategy expertise should lead to a heightened focus on the opportunities for YouTube to continue to become the global uber-video app, while also leaning into more ways for YouTube to grow revenue for creators and ultimately for Alphabet.
The tide has shifted at YouTube, with first Robert Kyncl and now Wojcicki out as senior leaders. Tellingly, it seems Mohan won’t take on the “CEO” role but will remain as senior VP while becoming head of YouTube.
I’ve frequently referenced interviews and articles about and by Mohan in my “Inside the Creator Economy” newsletter. Over the last few years each February Mohan penned an annual look at upcoming product, tools and features on the Inside YouTube blog.
The cultural impact of creators is already surpassing that of traditional media, but there’s still a stark imbalance of power between proprietary platforms and the creators who use them. Discover what it takes to stay ahead of the game with these fresh insights hand-picked from the NAB Amplify archives:
The ‘70s-Inspired Visuals of Benjamin Caron’s “Sharper”
TL;DR
For his debut feature “Sharper,” director Benjamin Caron wanted cinematographer Charlotte Bruus Christensen to be the “Princess of Darkness” in homage to cinematographer Gordon Willis.
Willis famously shot “The Godfather,” “Klute” and other movies in next to no light; in the case of “The Godfather” that creative choice was driven by Marlon Brando’s makeup.
“Sharper” is a grifter movie that revels in the use of shadows and underexposed long takes.
Prior to “Sharper,” Caron had notable success directing episodes of “The Crown” and Disney’s Star Wars episodic “Andor.”
Not knowing what will happen is the ultimate tease for a grifter movie like Sharper — the darkness just adds to the mystery.
The British director of Sharper, now streaming on Apple TV+, wanted his DP Charlotte Bruus Christensen to become the “Princess of Darkness” in homage to cinematographer Gordon Willis, who famously shot The Godfather and other movies in next to no light.
Rather obviously, Vanity Fair’s Richard Lawson takes a romantic view of using film, unkindly describing the digital alternative’s look as “the plastic dullness of a toss-off digital Netflix thriller.” With Bruus Christensen’s film aesthetic, however, he warmly welcomed “the grain and light of what movies used to look like.”
In truth, Willis’ approach to lighting — particularly in the initial scene of The Godfather — occurred to him only at the last minute as a means to counter the strange makeup Marlon Brando was using. Just 20 minutes prior to the shoot, the only technique he could think of was to use a top light. Ultimately, this decision sealed the look of the movie from that point on. But maybe the die was already cast with his moody aesthetic for Klute, which he shot the year before, in 1971.
But Lawson’s coupling of the use of film with an old-fashioned con artist tale is understandable, clumsy as it might be, as Sharper is a thriller that revels in the use of shadows and underexposed long takes.
The director, Benjamin Caron, was new to feature films but had notable success in directing episodes of The Crown and Disney’s brilliant Star Wars episodic Andor. But for Sharper, he had asked Bruus Christensen “to think about these sophisticated compositions of using light and darkness,” as he told SlashFilm’s Ben Pearson. “But probably one of the biggest reference points for me was Klute. There was just something about the atmosphere of that film that I’ve always loved.”
Describing Willis’ work, Caron says, “He just basically infused every frame with meaning and atmosphere, and there was a beautiful delicacy to it. So it was a heavy leaning into the feeling of that film.” (As an aside, this 1971 film has been having a hell of a cultural resurgence as of late, BJ Colanelo notes at SlashFilm, with director Matt Reeves also citing the film as a massive influence on The Batman.)
Caron also referenced The Color of Money, Drive and especially Fincher’s Seven. “What I loved about that film is that you were so claustrophobic for such a long period of time. You were held in that city. It was all mainly shot at night and it was rain, but then right at the very end of the film, you suddenly had this big desert expanse where there was nothing else.”
He could see that same scenario working for Sharper, he told Pearson. “We had all these characters penned into Manhattan, where the sight lines are limited and you can rarely see the horizon. But then, as in Seven, I love the end where suddenly you’re in this open space where you can see nothing but sky, and ultimately the characters have nowhere to hide.”
Apple’s own description of Sharper does harken back to thrillers of the past: “No one is who they seem. A neo-noir thriller of secrets and lies, set amongst New York City’s bedrooms, barrooms and boardrooms. Characters compete for riches and power in a high stakes game of ambition, greed, lust and jealousy that will keep audiences guessing until the final moment.”
Pete Hammond’s review of the film for Deadline describes the pull of this new swindler story. “Seeing the nifty grifter drama Sharper reminded me how rarely we encounter this kind of clever cat-and-mouse game that might fall into the noirish genre but really relies on diving into a world filled with characters who reveal slices of their lives that keep changing moment to moment,” he writes.
“It is the kind of movie I find enormously difficult to review because its ultimate success for a viewer is just watching it unfold, beat by beat, never quite knowing exactly where it is heading but still glued to the screen to find out,” Hammond continues.
“Written in a non-linear style and separated by chapters identified on the screen with characters’ names, the focus keeps changing as we see events unfold, and eventually intertwine, as the story takes twists and turns and then twists right back again.”
Julianne Moore in “Sharper.” Cr: Apple TV+
Sebastian Stan, Julianne Moore in “Sharper.” Cr: Apple TV+
Sebastian Stan, John Lithgow in “Sharper.” Cr: Apple TV+
Briana Middleton in “Sharper.” Cr: Apple TV+
John Lithgow in “Sharper.” Cr: Apple TV+
Justice Smith, Briana Middleton in “Sharper.” Cr: Apple TV+
But it is director Caron, in his first feature, who kept the lid on what the characters were thinking, not wanting to clue the audience into the deceit. “Deception is definitely the defining feature of this film, and I’m always interested in characters’ motivations and how people talk or flirt or lie or impersonate in terms of getting what they want,” he said to Pearson.
“I thought it was really important in this film that we never had a nod and a wink to the audience at any moment that something was about to happen. Sometimes I think there’s a tendency, whether it be from the storyteller or even from the performer, to show too much.
“And I think right from the very beginning, even in conversations with the actors, we wanted to hold all of that back. Because I really remember reading the script and I really remember those moments where I was floored and I was genuinely shocked and surprised. So it was really important they held onto that integrity.”
From the latest advances in virtual production to shooting the perfect oner, filmmakers are continuing to push creative boundaries. Packed with insights from top talents, go behind the scenes of feature film production with these hand-curated articles from the NAB Amplify archives:
Cinematographer Felix Wiedemann uses the ARRI Alexa LF to create a naturalistic look for Netflix’s hit psychological thriller series.
June 21, 2023
Posted February 7, 2023
Creating the Lo-Fi, VHS-Vibe Visuals for “Skinamarink”
TL;DR
In a world of crystal-clear 4K smartphone videography, the detuned aesthetic of indie horror feature “Skinamarink” is even more distinct.
Working under a no-budget budget of just $15,000, writer-director-editor Kyle Edward Ball found that micro-budget limitations fueled his creative vision.
Ball used his short film “Heck” to develop a technique the indie filmmaker calls “filming by implication.”
This technique demanded a set of steadfast rules: “We never see someone’s face. We avoid showing people on screen for too long. Whatever dialogue is delivered is always delivered off-screen. We never go outside. We never leave the house.”
The trailer for Skinamarink shows just how much work was involved in making the indie horror film look so bad. In this world of crystal-clear 4K smartphone videography, a detuned aesthetic is even more distinct and perhaps welcome. Writer-director-editor Kyle Edward Ball drains any sense of clarity from his movie, visually and psychologically. This could be put down to the no-budget budget (which reached a final tally of $15,000), but in fact it proclaims the skills of the indie filmmaker and his small crew.
Skinamarink has been acquired by horror streamer Shudder and is currently in theaters via IFC Midnight. It will debut on Shudder later in 2023.
Filmmaker Magazine’s Natalia Keogan describes the incredibly loose narrative. “It follows young siblings Kevin (Lucas Paul) and Kaylee (Dali Rose Tetreault) as they patter around their family’s strikingly ordinary middle-class house in the dead of night circa 1995,” she writes.
“Their parents are nowhere to be found, all of the doors have mysteriously vanished and the lights eventually stop working. While these phenomena are enough to chill any child, their well-being is most threatened by a supernatural presence that beckons the siblings to obey increasingly disturbing requests.”
Keogan’s description continues in nightmarish terms: “Skinamarink does not rely on typical genre conventions, barely even showing the protagonists in full, opting for shots of disjointed limbs and obscured faces. The film’s bone-numbing terror comes from somewhere deeper and more genuine than a cheap jump-scare, like an early childhood nightmare extracted from our collective subconscious, transferred to a VHS tape and screened on an old CRT television set at three a.m.”
In his interview with Ball for RogerEbert.com, Isaac Feldberg was keen for the filmmaker to unveil his production techniques. “Ball found that micro-budget limitations fueled his creative vision, necessitating all manner of trick photography and unconventional angles to mimic a child’s-eye view.”
His short film Heck was a proof-of-concept exercise for what was to come. “Through doing my YouTube series, I developed a technique of filming by implication, instead of showing. So, instead of showing actors, I was doing point-of-view shots or filming different parts of the room while we had audio off-screen. And, after a while, I thought, ‘Maybe I could do a feature like this…’”
Ball also detailed some of his steadfast shooting and framing rules, a practice not uncommon in episodics. “I set rules in place that I wasn’t allowed to break. We never see someone’s face. We avoid showing people on screen for too long. Whatever dialogue is delivered is always delivered off-screen. We never go outside. We never leave the house. We’re always in the house.”
With those visual constraints in place Ball looked to the audio to seal the horror. “I didn’t just want Skinamarink to look like an old movie,” he told Feldberg. “I wanted it to feel and sound like one. I wanted to go really [hard] with that. I didn’t just want to make the dialogue sound like it was recorded on an old microphone. I wanted the audio to feel like an old, scratched-up re-taping of a film that wasn’t preserved from the ‘70s — lots of hiss, lots of hum.”
Writer-director-editor Kyle Edward Ball’s “Skinamarink.” Cr: Shudder
Ball’s idea for the visuals was to shoot as near to darkness as possible and then, with his DP Jamie McCrae, grade the footage to further distress it. He explained to Lex Briscuso at Inverse how they created the look. “When I was doing my YouTube channel, I was also gravitating toward the lo-fi look. I thought, ‘Why can’t I make a movie like it’s from the ‘70s? Or the ‘50s? The ‘30s?’ It evolved into, ‘What if I did an entire movie in this style?’ So I started writing my script,” he said.
“Working with my amazing director of photography, Jamie McCrae, I said, ‘OK let’s get a really good camera that’s really good in low light and see if we can just use practicals.’ I set some rules for myself. We can only use practical lights: flashlights, light coming off a TV, a lamp.”
Another big issue was the scenes set in pitch black, Ball told Briscuso. “Obviously, we couldn’t shoot 100% pitch black unless we used infrared, so we developed this technique of putting a sun gun on top of the camera, putting a blue filter over it, and grading with it,” he said.
McCrae selected the Sony FX6 as the main camera, Ball recounted, adding, “I forget what lenses we used, but the great thing about a modern digital camera — and that one in particular — is that it almost sees in the dark, almost better than the human eye, with somewhat minimal artifacting or grain.”
But when Ball reached the post-production stage, he discovered that he couldn’t edit the film and then age the material after the fact. “I had to do it in tandem,” he told Briscuso. “The mood is so intrinsically tied to the lo-fi aspect of it that it was impossible. So I did it step by step; that’s really why the editing took four months.”
To make the footage appear old, Ball employed a package of 16mm film grain overlays he already had on hand. “In editing, I picked different overlays, graded and played with the levels shot-by-shot, and I just did that until it looked right and read well. It wasn’t just one overlay I looped a hundred times. I took my time to make sure there were enough varieties, so you didn’t subconsciously say, ‘Oh, I’ve seen this overlay before.’
“As far as the special effects, a lot of it was just simple old Hollywood tricks that you can get away with if you’re using a layer of grain over it. There’s a few parts where things appear on the ceiling, floating. That was literally just me holding it up and photoshopping myself out. The doors and windows, I just Content Awared them out.”
Sam Theilman’s review of the movie for Slate is perhaps the most discerning. “I think Skinamarink is the first movie I’ve seen that is shot in such a way as to show only what its child protagonist can understand. I can’t imagine another film doing this successfully, or even wanting to see this particular film again, but it’s a remarkable achievement,” he writes.
“It evokes the nameless dread of barely verbal childhood so thoroughly and uncompromisingly that it remains frightening long after it ends, not because it forces us to question the rational world, but because it makes us remember a time before we could understand anything at all.”
Variety’s William Earl has the scoop on what’s next for Ball. “He’s currently kicking around two ideas that both sound like a logical extension of Skinamarink. One is a take on the Pied Piper legend, the other about three strangers who all see the same house in a dream.”
From the latest advances in virtual production to shooting the perfect oner, filmmakers are continuing to push creative boundaries. Packed with insights from top talents, go behind the scenes of feature film production with these hand-curated articles from the NAB Amplify archives:
Jordan Peele taps cinematographer Hoyte van Hoytema to build a custom IMAX camera rig to capture the wide-open landscapes of “Nope.”
October 15, 2023
Posted February 7, 2023
“Kendrick Lamar Live in Paris” Brings Cinematic Production to a Streamed Event
TL;DR
The video production of the recent Kendrick Lamar concert in Paris employed multiple digital cinema cameras in a livestreamed outside broadcast.
The production relied heavily on Sony equipment, including the company’s digital cine flagship Venice camera in both Super 35 and full-frame 6K configurations.
Other equipment included an ARRI Trinity rig spanning the area from the stage to the floor, a spidercam, and a robotic rail-cam system “that acted like a sniper,” able to boom up and boom down precisely while maintaining a beautiful frame above stage height.
Camera technology that started out in the upper echelons of cinema has become so accessible that digital cine cameras and lenses are now being used to photograph sports and music concerts too.
Normally, such cameras are used sparingly, for cinematic depth-of-field cutaways in live sports or in glossily post-produced concert footage.
The video production of the recent Kendrick Lamar tour took this to another level by using multiple digital cinema cameras in a livestreamed outside broadcast.
Perhaps that isn’t surprising given an artist of Lamar’s caliber. The Big Steppers: Live From Paris, part of Lamar’s “Big Steppers Tour,” was streamed live exclusively on Amazon Music and Prime Video from the Accor Arena in Paris this past October.
Kendrick Lamar’s The Big Steppers Tour LIVE from Paris.
“We didn’t want to just use a prefab camera plot,” Ritchie explains. “We really wanted to understand what would be dynamic, what would be a great storytelling device, what lenses would feel more immersive versus objective.”
The amount of technology used for the shoot was astonishing, as detailed in the Sony case study. An ARRI Trinity went from the stage to the floor for specifically choreographed moments. Two additional Steadicams, one on stage for fluid live moments, and one in the audience, captured moments with fans. They had a robotic rail-cam system “that acted like a sniper,” able to boom up and boom down precisely while maintaining a beautiful frame above stage height.
Kendrick Lamar’s “The Big Steppers: Live from Paris” livestream concert event. Cr: Amazon Music
They also had a spidercam for very specific cinematic moments, plus a 25-foot tower camera and a Technocrane that glided slowly over the audience, capturing waves of hands as it made its way to the stage.
Principal photography came from 16 Sony Venice cameras and Sony’s new cinematic pan-tilt-zoom camera, the FR7.
Ritchie used the Venice at 6K in full-frame, along with lenses like Signature Zooms or Fujinon Premistas and primes.
“The beauty of full-frame is you can see a nice wide shot of a stadium or an arena, but stay focused on the person right there in front of you,” he said. “To be able to control someone’s attention with more shallow depth of field in certain moments is critical to the narrative. I can show you 80,000 people and a massive stage, and by using a shallow depth of field I can ensure the audience stays laser focused on the artist while still offering an epic sense of depth and grandeur.”
He also used Venice in Super 35 mode, allowing him to employ longer cinema zooms and converted broadcast lenses that can offer both tight and wide coverage from all angles.
“One of the biggest challenges in live spaces is distance to the subject,” says Ritchie. “Feature films happen between eight and 20 feet. However, it’s often challenging to maintain the inner ring of close coverage in a live space, especially when you have massive stages and catwalks in excess of 120 feet, while trying not to impede the audience experience. Having that second ring of coverage is crucial to maintain coverage throughout the film.”
Live Grade LUTs were applied, adjusting exposure and black levels and accounting for any variances between lenses and the environment, which as you can imagine means battling with constantly changing extreme contrasts, bright LED screens, and highly saturated lighting.
“We’re doing that with 16 to 20 cameras in the live space where every one of these needs to be as close to perfect as possible,” adds Ritchie.
“When you’re shooting for a film, you have the luxury of time and an edit. You can just shoot Log and tweak the exposure and color later. But in the live space it’s real-time. In-line LUT boxes apply our base look and our truck RCPs control iris as well as subtle variances between cameras. The cinematographer, DITs, LD and video engineers are all working in perfect sync, safeguarding the image through every crucial step.”
Posted January 30, 2023
“Poker Face”: The Sunday Mystery Movie But It’s Streaming
TL;DR
“Knives Out” and “Glass Onion” director Rian Johnson talks about his exciting new Peacock case-of-the-week series “Poker Face,” starring Natasha Lyonne as a mystery-solving fugitive.
Johnson discusses the challenges of writing a mystery series where the main character has the superhuman ability to recognize when someone is lying, and the importance of crafting standalone TV episodes even in an increasingly serialized era of TV.
Johnson calls this mystery subgenre a “howcatchem,” where it’s very much about the detective versus the guest star of the episode.
To make new television, it helps if you’ve watched a lot of old television. That’s a lesson evident in Poker Face, the crime-thriller series created by Rian Johnson and starring Natasha Lyonne, which makes its debut January 26 on Peacock.
Lyonne — creator and star of Netflix series Russian Doll — plays Charlie Cale, a woman employed by a casino with a preternatural ability to tell when people are lying.
As Johnson, the writer and director of Knives Out and Glass Onion, explained to Dave Itzkoff of The New York Times, the self-contained installments of Poker Face are a deliberate throwback to a style of TV storytelling that Johnson grew up with in the 1970s and ‘80s.
“That’s when I had control of the television,” Johnson said. “And it was typically hourlong, star-driven, case-of-the-week shows.”
They weren’t only detective programs like Columbo and Murder, She Wrote, he said, but also adventure series like Quantum Leap, The A-Team, Highway to Heaven and The Incredible Hulk, which were notable for “the anchoring presence of a charismatic lead and a different set of guest stars and, in many cases, a totally different location, every single week.”
Those ever-changing elements kept things fresh and surprising, he said.
In an interview with Alison Herman for The Ringer, Johnson was asked about similarities between Poker Face and Columbo. “I’d include The Rockford Files and Quantum Leap, but also Highway to Heaven and [the 1978 TV series] The Incredible Hulk,” he told Herman. “It’s kind of got the DNA of all that stuff. And that’s the stuff that I was sitting on the rug in front of my family’s TV watching reruns of every single afternoon as a kid. It’s the TV that I was raised on.”
“The ‘DNA’ of these shows is certainly there when you watch an episode of Poker Face play out,” Ernesto Valenzuela writes at SlashFilm. “Bill Bixby’s man-on-the-run character of Bruce Banner in The Incredible Hulk, who helps bring justice to whatever town he ends up in, is recreated with a much less gloomy angle with Charlie’s fugitive status. And much like Jim Rockford in The Rockford Files, Charlie is also down on her luck, living in a mobile home in the first episode where she is very much not an officer of the law. Poker Face thrives off of its influences, and the structure’s repetitive nature isn’t a detriment to the show — it’s actually a big reason viewers should tune in every week.”
The show’s resemblance to Columbo is a good thing for fans, Amos Barshad notes at Wired. Johnson, says Barshad, “tiptoed” around the issue at first but the jig was up following an interview in Vulture with The Mountain Goats’ John Darnielle: “I was probably bugging [Johnson] about something and he texted, ‘Want to talk to you about this TV show I’m doing with Natasha Lyonne. It’s basically Columbo with her as the detective,’” Darnielle said.
“To everyone who loves Columbo, this is a great thing,” Barshad writes. “Even beyond the unique format, with the murderer reveal happening first — which means it’s a howcatchem, not a whodunit — Poker Face embodies the rough, throwback, blaringly uncool charms of its spiritual antecedent. Like Columbo, Cale is often going after the rich and powerful, the kind of people who think they shouldn’t have to atone for their sins. Like Columbo, she’s constantly underestimated, a trait she finesses to her ends. Peter Falk’s portrayal of the fumbling detective is an all-timer; the way he pivots from buffoon to razor-sharp gumshoe is a thing of beauty and joy, which means Lyonne has her work cut out for her if she wants to put Cale up on the mantle with Columbo in the TV Sleuths Hall of Fame. But in the handful of episodes available so far (a new one dropped today), it’s clear Lyonne — salty, resilient, irrationally confident — is presenting a very unique kind of crimefighter.”
In his review for The New York Times, chief television critic James Poniewozik calls Poker Face “the Best New Detective Show of 1973,” noting that Lyonne “has one of TV’s most distinctive presences, with an old-soul rasp and a hipster-next-door bearing that’s simultaneously down-to-earth and cosmic.”
He adds: “The logo may say Peacock — the streaming service that premieres the series on Thursday — but the vibe says NBC weeknights in the 1970s.”
The series, Poniewozik says, draws you in with its retro style, “but the vintage echoes are also deeply thematic. The ‘70s loved a beautiful loser, like James Garner’s Jim Rockford, the ex-con private eye whom the world gave the bum’s rush no matter how many cases he cracked.”
Poker Face is not a whodunit but an “open mystery” because the audience starts out each episode by seeing who did it, how, and why, before Charlie begins to investigate. Johnson himself calls this mystery subgenre a “howcatchem,” where it’s very much about the detective versus the guest star of the episode, as Johnson also confirms to Brandy Clark at Collider: “These are not whodunits, these are howcatchems. Show the killing, and about Natasha [Lyonne] vs. the guest star.”
As Clark points out, the benefit of these types of shows is that a viewer can jump in at any time, without wondering or worrying if they need to see the previous episodes to understand the story or the plot.
Of course, Columbo is the key reference point and an acknowledged part of Daniel Craig’s character Benoit Blanc in the Knives Out mysteries. Johnson told Rolling Stone’s Alan Sepinwall that he binged the entire series during lockdown.
“My big revelation from bingeing it is, I wasn’t coming back for the mysteries. Although the mysteries are fun, I was coming back to hang out with Peter Falk. And in that way, I feel like those shows have as much in common with sitcoms as they do anything else.”
He added, “It’s not really about the story or the content. It’s about just hanging out with somebody that you like, and the comforting rhythms of a repeated pattern over and over with a character that you really liked being with. That’s kind of what I saw when I watched Natasha in Russian Doll, that made me think this could be interesting.”
Lyonne also said that she loved characters such as Columbo, Elliott Gould’s Philip Marlowe in The Long Goodbye and Dennis Franz’s Andy Sipowicz in NYPD Blue, as reported by Deadline’s Peter White.
Speaking at NBCUniversal’s TCA press tour, Lyonne said that Charlie is “floating above a situation trying to crack a riddle, but also an everyman who has their nose to the grindstone and figuring out the sounds of the street.”
Once Johnson had decided to make her a human bullshit detector, rather than a detective or a mystery writer, he realized he had a problem, but this became the key to unlocking how the show might unfold.
“How was the show just not over within the first five minutes, if she can tell when people are lying?” he told Rolling Stone. “I had her give a speech in the pilot about how it’s less useful than you think because everyone’s always lying. It’s about looking for the subtlety of why is somebody lying about a specific thing. And we found really fun ways to play that at different episodes going forward.”
Natasha Lyonne as Charlie Cale in “Poker Face.” Cr: Peacock
Hong Chau as Marge in “Poker Face.” Cr: Peacock
Natasha Lyonne as Charlie Cale and Chelsea Frei as Dana in “Poker Face.” Cr: Peacock
Brandon Micheal Hall as Damian in “Poker Face.” Cr: Peacock
Benjamin Bratt as Cliff Legrand in “Poker Face.” Cr: Peacock
Simon Helberg as Luca in “Poker Face.” Cr: Peacock
John Hodgman as Narc/Dockers in “Poker Face.” Cr: Peacock
Natasha Lyonne as Charlie Cale and Benjamin Bratt as Cliff Legrand in “Poker Face.” Cr: Peacock
Lil Rel Howery as Taffy in “Poker Face.” Cr: Peacock
Judith Light as Irene Smothers and S. Epatha Merkerson as Joyce Harris in “Poker Face.” Cr: Peacock
Dascha Polanco as Natalie in “Poker Face.” Cr: Peacock
John Darnielle as Al, Chloë Sevigny as Ruby Ruin and G.K. Umeh as Eskie in “Poker Face.” Cr: Sara Shatz/Peacock
Chuck Cooper as Deuteronomy and Natasha Lyonne as Charlie Cale in “Poker Face.” Cr: Sara Shatz/Peacock
Chloë Sevigny as Ruby Ruin and G.K Umeh as Eskie in “Poker Face.” Cr: Sara Shatz/Peacock
Natasha Lyonne as Charlie Cale in “Poker Face.” Cr: Sara Shatz/Peacock
Natasha Lyonne as Charlie Cale in “Poker Face.” Cr: Phillip Caruso/Peacock
POKER FACE — “The Orpheus Syndrome” Episode 108 — Pictured: Natasha Lyonne as Charlie Cale — (Photo by: Karolina Wojtasik/Peacock)
Luis Guzman as Raoul and Natasha Lyonne as Charlie Cale in “Poker Face.” Cr: Karolina Wojtasik/Peacock
Adrien Brody as Sterling Frost Jr. and Benjamin Bratt as Cliff Legrand in “Poker Face.” Cr: Karolina Wojtasik/Peacock
Natasha Lyonne as Charlie Cale in “Poker Face.” Cr: Karolina Wojtasik/Peacock
Natasha Lyonne as Charlie Cale in “Poker Face.” Cr: Phillip Caruso/Peacock
Natasha Lyonne as Charlie Cale in “Poker Face.” Cr: Sara Shatz/Peacock
Jack Alcott as Randy, Charles Melton as Davis, and Natasha Lyonne as Charlie Cale in “Poker Face.” Cr: Phillip Caruso/Peacock
Danielle MacDonald as Mandy in “Poker Face.” Cr: Karolina Wojtasik/Peacock NUP_197591_00014-
Lil Rel Howery as Taffy in “Poker Face.” Cr: Karolina Wojtasik/Peacock
Adrien Brody as Sterling Frost Jr in “Poker Face.” Cr: Phillip Carus/Peacock
Dascha Polanco as Natalie and Natasha Lyonne as Charlie Cale in “Poker Face.” Cr: Phillip Caruso/Peacock
Natasha Lyonne as Charlie Cale in “Poker Face.” Cr: Evans Vestal Ward/Peacock
Megan Suri as Sara in “Poker Face.” Cr: Evans Vestal Ward/Peacock
Natasha Lyonne as Charlie Cale and Megan Suri as Sara in “Poker Face.” Cr: Evans Vestal Ward/Peacock
Natasha Lyonne as Charlie Cale in “Poker Face.” Cr: Phillip Caruso/Peacock
Although Johnson is red hot and you’d think people would be biting his hand off to work with him, he says pitching a more old-fashioned TV format got pushback.
“I was unprepared for the blank stares. And then the follow-up questions of, ‘Yes, but what’s the arc over the season?’ I think there is right now this odd assumption that that’s what keeps people watching, just because there’s been so much of that in the streaming world that I think people equate the cliffhanger at the end of an episode with what gets people to click ‘Next.’ But TV, until incredibly recently, was entirely in this episodic mode. So I know it can work because I grew up tuning in every day for it.”
One reason it’s harder to do episodic case-of-the-week stories is the expense and the production challenge. For example, you have to keep bringing in new guests and visiting new locations.
“Holy crap, it was a headache,” Johnson admits to Rolling Stone. “I don’t think we even realized what we’re up against. No standing sets. No recurring characters besides Natasha and occasionally Benjamin Bratt. But we’re very purposefully going for the Columbo approach of big fish guest stars. So every single one of these episodes, we try and get somebody very exciting to play either the killer or the victim. And it was a lot.”
Indeed, the cast list across the season includes Adrien Brody, Ellen Barkin, Nick Nolte, Stephanie Hsu, Joseph Gordon-Levitt, Ron Perlman, Chloë Sevigny, Lil Rel Howery, Clea Duvall, Tim Blake Nelson, and many more.
Asked during a Q&A panel at the Winter Television Critics Association Presentation whether he writes specifically to those guest stars, he replied: “In the room, sometimes we’d have a placeholder actor, and it would end up being them, or surprisingly someone else. A benefit of this subgenre is that it is the guest star’s episode, and you see them go head-to-head with Natasha.”
Johnson continued to sing the praises of television in front of the ballroom full of television reporters and critics — saying he preferred the “pace” of this newfound process vs. film. Each hour-long Poker Face episode took about three weeks (one for prep, two for shooting) to complete. Compare that with making one film over the course of “several years,” as he put it.
“I loved that in each episode we’re in a different environment, it’s a whole new cast— it’s like making 10 mini movies,” Johnson told IndieWire’s Tony Maglio. “I literally dove into it like it was one of my movies. I really jumped completely into the deep end of the pool.”
Johnson has previously directed for TV, notably three episodes of Breaking Bad, including the acclaimed “Ozymandias.” Episode two of Poker Face, which he directed, was shot in Albuquerque.
“I haven’t been back there since we shot ‘Ozymandias,’” he told Rolling Stone. “It was so much fun being back in town. A lot of the same Breaking Bad crew were on our crew, and it felt like a little homecoming.”
Johnson explained to Angela Watercutter at Wired that while Poker Face does have a throughline, any given episode is a standalone. That was “a hugely conscious choice,” he said, “something that I had no idea was gonna seem so radical to all the people we were pitching it to. The streaming serialized narrative has just become the gravity of a thousand suns to the point where everyone’s collective memory has been erased. That was not the mode of storytelling that kept people watching television for the vast history of TV. So it was not only a choice, it was a choice we really had to kind of fight for.”
Johnson discussed director Robert Altman’s influence on the pilot episode of Poker Face in a Q&A with Joshua Encinias for MovieMaker Magazine:
“Altman’s Nashville was definitely a big reference point for me when I was approaching how to shoot the pilot. I will say each one of the episodes very much has its own personality. Columbo was set in Los Angeles, but he was diving into a different profession every single time, based on what the killer did. There’s an anthropological element to it, where you’re doing a little deep dive into a different world every time. That’s very much a part of the show going forward and we allow ourselves, tonally, to give ourselves over to that. There’s an episode, for instance, set in a regional dinner theater with Ellen Barkin and Tim Meadows that’s absolutely hilarious and very comedic in tone, almost like a Noises Off style. The one I did with Joseph Gordon-Levitt is set in this snowed-in motel within the Rockies and it’s almost more like a Coen Brothers horror movie. But yes, absolutely, in the pilot I was looking at a lot of Altman. Also in terms of the looseness of the style of shooting. It seemed like a fun route to go.”
Another Altman film also left its mark on Poker Face. “Janicza Bravo directed one of our episodes and I think she put a very subtle, intentional California Split reference in her episode,” Johnson said. “We talk about that movie a lot on set.”
As the streaming wars rage on, consumers continue to be the clear winners with an abundance of series ripe for binging. See how your favorite episodics and limited series were brought to the screen with these hand-picked articles plucked from the NAB Amplify archives:
Editor Bob Ducsay on the layers of structure and sleight-of-hand behind writer-director Rian Johnson’s “Glass Onion: A Knives Out Mystery.”
“Troll:” Norway’s Motion Blur Makes a Modern (Ancient) Kaiju
TL;DR
Pushing aside “RRR” in the global marketplace, Norway’s “Troll” has become Netflix’s best-performing non-English film.
Partly inspired by “King Kong” and Godzilla films, “Troll” employs many classic monster movie tropes with a distinctly Norwegian spin.
The character design for the titular troll was inspired by paintings by Theodor Severin Kittelsen, one of Norway’s most popular artists.
Espen Horn, producer and CEO of production company Motion Blur, said it was important that the production use Norwegians as crew, SFX and VFX vendors as much as possible, “because we wanted to show the world that this was genuinely a Norwegian or Nordic film.”
Netflix’s Troll has posted highly impressive viewing figures since its arrival on the platform and quickly became its best-performing non-English film. This breakdown comes from Naman Ramachandran at Variety: “With a total of 128 million hours viewed and still counting, the film has taken the top spot on the non-English Netflix Top 10. It is in the Top 10 in 93 countries including Norway, France, Germany, the US, the UK, Japan, South Korea, Brazil and Mexico.”
Monster movies have always had a wide fan base, and Troll has all the attractions and tropes those fans like — cityscape destruction, believable and well-executed VFX, a credible folklore backstory, and a monster with feelings and a purpose — something Renaldo Matadeen picked up on in his review for Comic Book Resources.
“The remains of his tribe got left in a palace under the Royal Palace, which means the troll king’s domain has been desecrated. So, he’s stomping his way to Oslo to destroy the place for what happened to his family and to crush the symbol of Christianity, politics and corruption.”
Yes, Troll was partly inspired by King Kong, as well as Godzilla vs. Kong, but don’t forget Cloverfield with its clever monster-in-a-city conceit. One of the most important aspects of the production, though, was to keep it very Norwegian notwithstanding the monster action at its core. Espen Horn, producer and CEO of Motion Blur, explained the vision: “It was a big and important dream for us that we should use Norwegians as crew, SFX and VFX vendors as much as possible, because we wanted to show the world that this was genuinely a Norwegian or Nordic film,” he said.
“That was very important. Even as the film has a classic monster genre formula to it, as some people claim, it was important to us to maintain originality in terms of the characters, the mythology and the nature of how we are as people. I think the audience was happy that we kept the Norwegian originality.”
[Photo gallery] Ine Marie Wilmann as Nora Tidemann, Mads Sjøgård Pettersen as Captain Kristoffer Holm, Kim S. Falck-Jørgensen as Andreas Isaksen, Gard B. Eidsvold as Tobias Tidemann and Anneke von der Lippe as Berit Moberg in writer-director Roar Uthaug’s “Troll.” Cr: Netflix
Half-jokingly, Espen suggested that owning the Troll story offered some gentle payback for a fellow Nordic country’s appropriation of another folklore legend. “It was after Finland ‘stole’ the copyright of Santa Claus from Norway. Finally, we could copyright the Troll and insert some Norwegian DNA into the story,” he said.
“It was essential that the people as well as the Troll were the heroes. Very often when you watch a monster movie it feels like explosions and the fights are more important than the love of the characters or the creature.”
Reviews, too, bought into the sympathy for the monster — this from Noel Murray at the Los Angeles Times: “As with many other films about lumbering beasties, Troll alternates between making the big guy terrifying and sympathetic. It’s to the credit of Uthaug and his special effects team (as well as the refreshingly unfussy Espen Aukan screenplay) that this troll inspires such conflicted emotions and isn’t merely menacing or laughably goofy.”
It’s also to the production’s credit that the positive environmental messages baked into the story survived without feeling like spoon-fed propaganda and without diluting the monster thrill ride. “We tried to do it in our own Norwegian modest fashion,” Espen demurred.
Jesse Hassenger at Polygon likewise welcomed the absence of a ponderous Hollywood third act in his review: “But there are plenty of advantages to shedding Hollywood-approved bloat while maintaining a kind of gee-whiz energy. Specifically, it resembles Emmerich’s 1998 version of Godzilla, reconfigured for greater speed and efficiency.”
Motion Blur, the production company behind Troll, has spent the last ten years making films and TV, including two Netflix titles that preceded Troll: the series Post Mortem and the feature Kadaver. Espen, however, was sure it was the right time to attempt a huge monster movie with high-quality Scandinavian VFX houses like Denmark’s Ghost and Copenhagen Visuals, the Norwegian Gimpville and Sweden’s Swiss International. “We realized that with these Scandinavian facilities we had the ability to realize our dreams of creating such a monster; it seemed like the right time to do it.”
Espen describes the origins of the Troll project. “Around ten years ago we were developing a Troll story with another director. Roar Uthaug, the eventual director of Troll, was also developing a similar idea for a film. For various reasons both productions had to be stopped.
“But eventually we got together around three-and-a-half years ago to make a Troll movie. He had a very particular idea of how the story should evolve and had carved out the story and started working with screenwriter Espen Aukan.” The script was written fairly quickly and Motion Blur started to finance it.
Originally Troll was to be financed as a movie theater event with support from the Norwegian Film Institute. “We started to finance it as a typical cinema movie, but then Netflix came on board — they bought so much into our vision for the film and how to accomplish it. They were very accommodating in fulfilling this vision; it was actually a fairly easy choice to go along with Netflix and not do a theatrical release.”
It was a 55-day shoot, which isn’t very long, but compared to Norwegian films is on the high side. “Shot in seven different locations including Oslo, so for the production it was very much a road movie in that sense. In one place for three or four days then move the 150 crew and cast to the next location. All were quite difficult to reach. Either up in the mountains or down deep inside a tunnel or a cave. But we got so much help from the local community including neighbors, farmers, engineers, even helping with extras.
“To shoot in rural Norway was a fantastic experience. It was extremely rewarding that when you come to a small place the whole community gathers up and are so supportive. We also had fantastic help from the Norwegian army who were very accommodating in helping us first of all get it right in terms of language and rules and regulations as well as uniforms, guns and helmets, tanks etc.”
But what was Motion Blur’s inspiration for the creation of the Troll? They didn’t want the small, blundering cave troll from Lord of the Rings; in fact, Espen derided such creatures as “hairless and stupid trolls.”
“For VFX we used Norway’s Gimpville, Ghost and Copenhagen Visuals in Copenhagen, and Swiss International in Stockholm. The monster itself was partly derived from folklore. There is a famous painting by [Theodor Severin] Kittelsen, who was one of Norway’s most popular artists; it was the painting that Roar Uthaug had as his inspiration. He always thought, ‘What would happen if we got a real troll walking into Oslo down Karl Johan? How would everyone respond?’ ”
He worked with Norwegian artist Einar Martinsen, and together they started conceptualizing the troll around that image of Oslo in crisis from a giant creature.
But Kittelsen’s paintings set the scene: “The old Norwegian trolls had trousers, pine trees sticking out of their heads and extremely large noses, and were clumsy and a little bit stupid. We wanted a troll that looked badass but also had warm, tender eyes. We wanted him to have memory, to show feelings and emotions, and the ability to camouflage itself.”
They presented their Troll design to Netflix and the streamer loved it. “We then started to work on the troll with Ghost, who did most of the CGI on it. It was important that he originated from Norwegian folklore — important to Norwegians and to Netflix. It was important that it had good heritage from the old Kittelsen painting and from the old fairy tales of Asbjørnsen and Moe from the 19th century.”
Big content spends, tapping emerging markets, and automated versioning: these are just a few of the strategies OTT companies are turning to in the fight for dominance in the global marketplace. Stay on top of the business trends and learn about the challenges streamers face with these hand-curated articles from the NAB Amplify archives:
Director S.S. Rajamouli’s breakout Tollywood hit “Rise Roar Revolt” is the first and only Telugu-language film to smash the US box office.
“M3GAN:” James Wan, Gerard Johnstone, and Jason Blum Know What You Want
TL;DR
Hit movie “M3GAN” has busted the $100 million worldwide ticket sales barrier on a $12 million production budget.
Director Gerard Johnstone was inspired by horror-comedies like Edgar Wright and Simon Pegg’s rom-zom-com “Shaun of the Dead.”
New Zealand actress Amie Donald played the demented AI and ended up doing her own stunts.
Perhaps it’s our underlying fear of what AI will lead to, or a horror jolt that we needed to kickstart our year, but the hit movie M3GAN is busting the $100 million worldwide ticket sales barrier on a $12 million production budget. Also, a generous PG-13 rating has lured in the teenage market with even younger kids finding a way into theaters to catch horror-comedy at its best.
Vanity Fair’s Julie Miller looked into the toy slayers and analyzed the genre. “The killer doll trope is nothing new — 60 years ago, a pigtailed doll in ribbons and ruffles named ‘Talky Tina’ took out an evil stepfather in a Twilight Zone episode,” she writes.
“In the decades since, there have been knife-wielding dolls, deranged puppets, demonic fetish figures, and diabolical porcelain dolls fronting horror films.” But maybe the effect is easily explained by Frank McAndrew, a psychologist who has researched the concept of creepiness.
“They have eyes and ears and heads and all of the things that normal human beings have,” McAndrew explains. “But there’s something off — the deadness in their eyes, their blank stares. They’re cute and they’re supposed to be for children,” he says, but the human realism causes “our brain to give off conflicting signals. For some people that can be very discomforting.”
McAndrew further observes that dolls are especially effective horror-movie antagonists because a murderous streak seems so unlikely in a child’s toy.
But perhaps the most interesting aspect of M3GAN is how a seemingly CGI-laced movie was made for only $12 million. The mid-sized budget was perhaps a consequence of shooting in New Zealand during COVID — since at the time the country hadn’t yet been exposed to the pandemic. But it was also due to the skills of a young local actress, Amie Donald, who played the demented AI and ended up doing her own stunts.
[Photo gallery] Amie Donald as M3gan, Allison Williams as Gemma, Violet McGraw as Cady and Ronny Chieng as David in director Gerard Johnstone’s “M3GAN.” Cr: Blumhouse
Jen Yamato at the Los Angeles Times tracked down the actress’s movement coaches. “Casting local performer and international competitive dancer Donald, now 12, to physically embody M3GAN turned out to be fortuitous. Although it was her first film role, the actor, who has also since appeared on Sweet Tooth, was off book within a week and loved doing her own stunts. ‘She was just extraordinary,’ says director Gerard Johnstone,” Yamato reports.
“Working with movement coaches Jed Brophy (The Lord of the Rings) and Luke Hawker (Thor: Love and Thunder) and stunt coordinator Isaac ‘Ike’ Hamon (Black Adam), she developed M3GAN’s physicality, which becomes more humanlike the longer she’s around humans. She adopted barely perceptible movements — a slight cock of the head, a step a bit too close for comfort — to maximize the unsettling effect M3GAN has on people.”
Donald proved to the director how well she could do her own stunts; on the first day of shooting she nailed the all-fours forest move seen in the trailer, having perfected it at home. “All of a sudden we get this video from her mother, where Amie had just figured out how to do this on the carpet at home,” said Johnstone. “And she could run on all fours!”
CGI was definitely minimized in the movie, but WETA Workshop contributed additional designs to the film, and Oscar-nominated Adrien Morot and Kathy Tse of Montreal-based Morot FX Studios were entrusted to smooth out the joins of animatronics, puppets, posable and stunt M3GANs, as well as Donald herself.
Director Gerard Johnstone was also keen to bring a level of humor to the movie and find ways to echo his own experience of parenthood, as he told Gregory Ellwood at The Playlist. “But what I brought to it was definitely my own sense of humor and my own experiences as a parent. I wanted to put as many of my own struggles and anxieties and frustrations that I was having as a parent into this movie. Parenting in the age of AI and iPads isn’t easy.”
Speaking to Valerie Ettenhofer in an interview for Slash Film, Johnstone cited Edgar Wright and Simon Pegg’s rom-zom-com Shaun of the Dead as teaching him a significant lesson in style. “My big lesson from them when I first watched Shaun of the Dead… was just how seriously they took both genres,” the director shares. “If I was going to do this, I had to treat the horror as seriously as I did the comedy.”
Johnstone struck a balance between horror and comedy with his first film, Housebound, which he continued with M3GAN, Ettenhofer notes, “a movie that offsets its most violent and unsettling scenes with moments in which the titular android does a hair-twirling dance or breaks into spontaneous song.”
Johnstone also namedrops a few other greats that he considers fun horror touchstones. “I’m a big fan of Sam Raimi, Drag Me to Hell and The Evil Dead trilogy.” He also commends Wes Craven, plus the “very deadpan” films of Joel and Ethan Coen, which he says employ “just a very dry tone, but you can tell they’re secretly making comedies.”
All the film references in the world mean nothing, however, when your movie becomes a litany of internet memes, which M3GAN quickly did. Karla Rodriguez at Complex put it to the director that once a part of your movie or its trailer becomes a meme, you know you’ve struck gold.
“And they were amazing,” Johnstone picked up, “and I just couldn’t believe how many of them there were. But I thought they were giving too much away in the trailer of the dance scene. I was like, ‘I just want a hint of it, something weird happening to tease people.’ And Universal said, ‘You don’t know what you’re talking about.’ And I didn’t know what I was talking about, clearly, because people just took it, recut it, put it to different music and it was just the gift that kept on giving.”
So where does that leave the psychotic M3GAN doll? A scary range of merch, maybe, but definitely at least one sequel because, like artificial intelligence, we just can’t get enough of her. Producer Jason Blum has already said as much: he did something he’d never done in his nearly 30-year career and publicly admitted his desire to make a sequel before the movie even opened in theaters, Rebecca Rubin reports at Variety. “He just felt certain that audiences would instantly fall in love with M3GAN, short for Model 3 Generative Android, whose chaotic dance moves, pithy one-liners and killer tendencies turned her into an internet icon as soon as Universal debuted the first trailer.”
“We broke our cardinal rule,” he says. “I felt so bullish that we started entertaining a sequel earlier than we usually do.”
From the latest advances in virtual production to shooting the perfect oner, filmmakers are continuing to push creative boundaries. Packed with insights from top talents, go behind the scenes of feature film production with these hand-curated articles from the NAB Amplify archives:
In a world where the wealthy and greedy thrive, the working class are fighting back, and the power dynamics are reversed — on screen, at least.
Class warfare has come out into the open with the release of “Glass Onion,” “The Menu,” and “Triangle of Sadness” as the gap continues to widen between the rich and everyone else in a meme that will no doubt be solidified in Netflix’s “Squid Game 2.”
One critic thinks these films don’t go far enough and — tongue in cheek — calls for a mainstream movie to go all the way without pulling any punches.
Given that $26 trillion of new wealth created since the start of pandemic went to the richest 1%, reports charity Oxfam, that billionaire Donald Trump’s organization was found guilty of tax fraud but fined a paltry $1.6 million, and that Elon Musk made history by losing a record $165 billion but is still worth $178 billion, you’d be forgiven for hating the rich just a little bit. Hollywood is banking on it.
With seemingly little irony — given the wealth of senior studio execs and owners at streamers like Amazon and Apple — it is open season on the ultra-rich.
Several recent movies, and at least one TV show, set their sights on the oligarchy pulling the strings of the world, “promising brutal, if only imagined, comeuppances that us plebs could cheer on from the pit,” Richard Lawson notes in Vanity Fair.
The main projects being called out for this meme are the 2022 trio of Knives Out sequel Glass Onion, The Menu, and Triangle of Sadness, all of which depict outsiders unseating the so-called elites for our viewing pleasure.
“The consequences they suffer in these films feel like the world is beginning to right itself,” Kimber Myers at Mashable suggests, “a triumph seemingly impossible off screen. Throughout each movie, the filmmakers create feelings of disgust at these archetypes of privilege and power. We don’t feel jealousy of their success; it’s righteous anger at the unfairness in how they achieved it and delight at their fall from grace.”
True enough, but hardly new. You could read 2000’s Gladiator, itself a retread of sorts of Spartacus, as the working class heroically fighting back against the privileged and oppressive. For which also read the populist narrative of RRR, in which plucky Indians defeat the British Raj in style.
N.T. Rama Rao Jr. as Komaram Bheem and Ram Charan Teja as Alluri Sitarama Raju in “Rise Roar Revolt.” Cr: Netflix
Gladiator director Ridley Scott is reportedly well advanced on a sequel to his Oscar-winning Roman epic, so look for more of the same.
One to watch before then is Squid Game 2, the follow-up to the Korean satire that took the world by storm in 2021. The show was a naked assault on capitalism in which very few winners of the game of life actually survive.
Also out of Korea was Parasite, garlanded with the Best Picture Oscar (much to Donald Trump’s displeasure) in 2020. This was a transparent metaphor for the underclass taking revenge on those complacent enough not to see their riches as reason enough for attack. The director, Bong Joon-ho, had form: Snowpiercer (2013), his movie set on a train — later adapted into a TNT TV series — was an us-against-them attack on the layers of class and privilege that extend throughout every society.
Edward Norton as Miles Bron and Daniel Craig as Detective Benoit Blanc in writer/director Rian Johnson’s “Glass Onion: A Knives Out Mystery.” Cr: Netflix
In Glass Onion, The Menu, and Triangle of Sadness the ultra-rich squander their privilege. The villain of Glass Onion escapes the pandemic by holing up on his own private island, in a literal bubble of his own making. Miles Bron is, of course, a thinly veiled Musk-style techpreneur who is revealed as being not that bright after all.
Ruben Östlund’s Cannes Palme d’Or winner Triangle of Sadness targets the relationship between money, power, and beauty, getting quite ugly in the process, Myers found. It’s never subtle, but its most direct condemnations of greed are voiced by the superyacht’s American captain (Woody Harrelson). As passengers gorge on truffles, sea urchin, and heaping spoonfuls of caviar, he has a hamburger.
“The central set piece, an operatic spew of vomit and other fluids on a doomed private cruise ship, is grotesquely amusing — even cathartic,” finds Lawson.
Charlbi Dean as Yaya, Alicia Eriksson as Alicia, Sunnyi Melles as Vera, Woody Harrelson as The Captain, Vicki Berlin as Paula, Zlatko Buric as Dimitry, Harris Dickinson as Carl in director Ruben Östlund’s “Triangle of Sadness.” Cr: NEON
Mark Mylod’s targets in The Menu are customers who think nothing of paying more than $1,000 for lunch. Meanwhile, in the real world, groceries cost 11% more than they did a year ago. The Chef (Ralph Fiennes) plots the deaths of his guests as they quite literally get their just desserts.
An example on TV is HBO’s deliciously entertaining The White Lotus, which took its second season to a fabulous resort in sun-drenched Sicily.
Creator Mike White aired an interesting theory that his show is concerned with the psychology of being astronomically rich. Here, the rich are eating themselves.
“When you’re wealthy and you don’t have situational problems that have to do with money, then your problems become existential,” White told NPR’s Terry Gross during a recent episode of Fresh Air.
“You have all of the tools to figure out your life, and you can’t figure out your life,” he said, adding that “if you’re in paradise and you feel like something’s missing or you’re melancholy or you’re tortured, you know it’s not the ambient nature of what’s going on — it’s something in you.”
Jennifer Coolidge in season two of “The White Lotus,” courtesy of HBO
For all the rage against the machine, most of these stories don’t actually leave the billionaires in tatters. In Squid Game it is the dog-eat-dog world of capitalism that sees the working class killing itself for a rich master’s enjoyment.
Vanity Fair’s Lawson also finds the results less than satisfying.
“I’ve no doubt that Triangle of Sadness despises witless, unfeeling wealth as much as it says it does, but it has disdain for everyone else too,” he says. “That’s not really the righteous us vs. them fantasy I went looking for. I realize that may be the point, but still.”
Fiery as the finale of The Menu may be, it feels awfully narrow to Lawson, “even safe,” he says. “The film strides up to the idea of bloody rebellion and then gets scared of its deepest implications.”
Glass Onion is too “Twitter-speak snarky to register as anything truly condemnatory,” he critiques. It’s a goof: teeming with pop-culture references to imply urgency, but never transgressive.
And The White Lotus, he says, is less concerned with skewering the rich since it is “also guiltily glad to be along for the trip” in terms of showing the audience Instagram-friendly luxury and Love Island-like bodies.
Lawson calls for a show that truly upends the status quo rather than simply gesturing toward it. “I want to see the rich really eaten, chased from their mansions, and reduced to rubble,” he says.
Perhaps something like The Purge (2013), in which a wealthy family is attacked from without and within by an unleashing of violence, mixed with Barbarian, Disney’s breakout 2022 horror in which a smug Hollywood star gets his comeuppance in the underworld of Detroit.
After a decade of streaming, TV delivered online looks remarkably like cable, writes New York Times TV critic James Poniewozik.
Enticing viewers to binge watch or dropping episodes weekly enables creators to extend story arcs and go deeper into character, but sometimes the dramatic tension gets lost in the process.
Interactive experiments like Netflix’s “Kaleidoscope” are dismissed as not being the revolution streaming once promised.
The biggest impact streaming TV has had on the business and aesthetics of television could be its ability to tell stories over longer arcs — but that’s not always a good thing.
Assessing a decade of streaming, The New York Times TV critic James Poniewozik says binging has transformed storytelling and viewing habits but we may be starting to hit that transformation’s limits.
“Giving viewers the option to binge when they please has encouraged a form of storytelling more focused on the season and less on the episode,” Poniewozik writes, adding, “I think something more nuanced is going on: Decision by decision, TV is collectively feeling its way toward figuring out which viewing experience works best for which kind of series.”
He cites two examples: Game of Thrones, though its episodes only occasionally focused on single stories, “might not have become as big a phenomenon without the weekly hype cycle.”
On the other hand, FX on Hulu’s The Bear, whose entire season dropped at once last summer, prompted more buzz and discourse than many of FX’s weekly series. “It may be that this kind of dramedy — character-based, relatively short, not driven by big plot detonations — is better taken in one gulp,” Poniewozik suggests.
Either way, though, streaming TV is remarkably the same as cable shows of old since in almost every case “you progress, scene by scene, episode by episode, through a narrative order chosen by a creator, not by you or by the roll of some automated dungeon master’s eight-sided die.”
Poniewozik is referring to experiments in non-linear storytelling, which puts the onus on the viewer to chop and change story order and endings.
Netflix heist drama Kaleidoscope is the most recent example, but there have been several others. Netflix’s interactive film/show/game Black Mirror: Bandersnatch was perhaps the most successful in allowing viewers to choose the path the story followed. So did the Unbreakable Kimmy Schmidt special Kimmy vs. the Reverend; the animated Cat Burglar added a trivia-game element. Netflix was not alone in this either, with Steven Soderbergh going the choose-your-own-adventure route in the HBO series/app Mosaic.
Giancarlo Esposito as Leo Pap and Tati Gabrielle as Hannah Kim in episode “Green” of “Kaleidoscope.” Cr: Netflix
Giancarlo Esposito as Leo Pap in episode “Blue” of “Kaleidoscope.” Cr: Netflix
Peter Mark Kendall as Stan Loomis, Paz Vega as Ava Mercer, Jai Courtney as Bob Goodwin and Rosaline Elbay as Judy Goodwin in episode “Blue” of “Kaleidoscope.” Cr: Netflix
Rufus Sewell as Roger Salas in episode “Blue” of “Kaleidoscope.” Cr: Netflix
Giancarlo Esposito as Leo Pap in episode “Pink” of “Kaleidoscope.” Cr: Netflix
Giancarlo Esposito as Leo Pap in episode “Green” of “Kaleidoscope.” Cr: Netflix
Giancarlo Esposito as Leo Pap and Tati Gabrielle as Hannah Kim in episode “Yellow” of “Kaleidoscope.” Cr: Netflix
Paz Vega as Ava Mercer and Giancarlo Esposito as Leo Pap in episode “White” of “Kaleidoscope.” Cr: David Scott Holloway/Netflix
Giancarlo Esposito as Leo Pap in episode “Yellow” of “Kaleidoscope.” Cr: Netflix
Rufus Sewell as Roger Salas in episode “Blue” of “Kaleidoscope.” Cr: Netflix
Tati Gabrielle as Hannah Kim in episode “Yellow” of “Kaleidoscope.” Cr: Netflix
Giancarlo Esposito as Leo Pap and Tati Gabrielle as Hannah Kim in episode “White” of “Kaleidoscope.” Cr: Netflix
Giancarlo Esposito as Leo Pap and Paz Vega as Ava Mercer in episode “Red” of “Kaleidoscope.” Cr: Netflix
Rosaline Elbay as Judy Goodwin in episode “Pink” of “Kaleidoscope.” Cr: Netflix
Peter Mark Kendall as Stan Loomis and Rosaline Elbay as Judy Goodwin in episode “Pink” of “Kaleidoscope.” Cr: Netflix
Rosaline Elbay as Judy Goodwin in episode “Orange” of “Kaleidoscope.” Cr: Netflix
Jordan Mendoza as RJ in episode “Orange” of “Kaleidoscope.” Cr: Netflix
Paz Vega as Ava Mercer in episode “Orange” of “Kaleidoscope.” Cr: Netflix
Paz Vega as Ava Mercer and Niousha Noor as Nazan Abassi in episode “Orange” of “Kaleidoscope.” Cr: Netflix
Rosaline Elbay as Judy Goodwin and Peter Mark Kendall as Stan Loomis in episode “Pink” of “Kaleidoscope.” Cr: Netflix
Peter Mark Kendall as Stan Loomis in episode “Green” of “Kaleidoscope.” Cr: Netflix
Giancarlo Esposito as Leo Pap and Peter Mark Kendall as Stan Loomis in episode “Green” of “Kaleidoscope.” Cr: Netflix
S.J. Son as Liz in episode “Blue” of “Kaleidoscope.” Cr: Netflix
The critic dismisses Kaleidoscope with a shrug, calling it “not especially noteworthy, except for one gimmick,” and more broadly says attempts at interactivity “have gotten no more traction than Smell-o-Vision, maybe in part because our culture already has a popular and relatively young form of interactive amusement, the video game.”
Instead, TV’s dominant format continues to be the static season, in which episodes are served up in a set progression. Often — even on streaming — they arrive once a week.
“The only choosing viewers do is what to watch, when to watch and whether to fill their couch-side snack bowl with chips or pretzels.”
The downside is the flabby nature of some shows, which are padded out to retain viewers over more hours (or weeks, if gradually released) than the story needs.
I’m a fan of Amazon Prime’s adaptation of William Gibson’s The Peripheral, but the show could have benefited from a tighter runtime over fewer episodes. Arguably, Andor, the Star Wars prequel series for Disney+, got its ratio of runtime to emotion right. Importantly, its episodes ranged around the 40-minute mark. In an interview with Rolling Stone, showrunner Tony Gilroy dismissed “the idea that you have to wrap up every episode in a bow” and defended the series’ slow-burn start as a necessary “investment.”
The three-season arc of His Dark Materials, produced by Bad Wolf for the BBC, is a superb example of treating the source material and the audience with respect, devoid of unnecessary padding; in other hands you can imagine the storylines being strung out and the drama dissipating.
As Poniewozik concludes, streaming has “added to TV’s bag of tricks, giving creators the option of making more unitary long-form works.” Other times it imposes the expectation of length where it isn’t needed.
Seamlessly combining physical and virtual elements, virtual production comprises a range of tools and techniques. LED walls synced with cutting-edge real-time graphics software take up one part of the spectrum, while other methods go back nearly as far as cinema itself.
With VP becoming more prevalent than ever, there’s no excuse for not learning the basics. Kelsey Opel, on MediaSilo’s blog, has compiled an excellent primer on virtual production techniques, including how we can expect these technologies to shape the future of filmmaking.
“Everything You Need to Know About Virtual Production,” courtesy of disguise
“The innovative advancements in virtual production have led to more seamless collaboration among different departments,” she notes. “Crews can effectively carry out the creative vision from pre-production all the way to completion.”
As an example, she cites the FX series Snowfall, which began using LED walls during production of Season 5. Series star Damson Idris, who also served as producer on Season 5, employed an LED wall to create real-time backdrops of Los Angeles.
“The show has saved up to $49,000 an episode by reducing shooting time, transportation between locations, and crew quantity. Shooting on a virtual stage can also reduce the production’s carbon footprint.”
Pre-Visualization
One of the most central components of virtual production is pre-visualization, or previs. “As part of the pre-production process, creative teams implement storyboards and digital software to plan the design of animated characters and virtual locations. Any complex scenes or intricate camera movements can also be blocked out before shooting even begins.”
During the previs stage, productions can also perform virtual set scouting, which employs augmented reality technology for exploring virtual environments. Wearing VR headsets or using customized “meta-human” avatars, crew members can virtually “walk” through locations to plan shots, lighting, and more.
Virtual scouting is especially useful for crews tasked with building interior sets, says Opel, as well as backlot sets or even full-scale virtual environments developed to serve as digital assets while filming. These digital assets can also be seamlessly altered throughout the production process, making previsualization a vital tool for virtual production.
The production team for HBO’s Game of Thrones partnered with visualization studio The Third Floor on virtual set scouting for the hit show’s eighth season. TTF generated virtual copies of the sets for the art department to survey before beginning physical construction:
Post-Visualization
Pre-visualization allows filmmakers to design their vision for their projects, but post-visualization, or postvis, allows creative teams to actually carry out their plans to completion, even while the camera is still rolling.
“Like previs, postvis is meant as a guide to represent the creative team’s vision on set. Once the shot is approved, the digital assets are handed over to the editorial team for final animation and compositing,” Opel says.
“Cameras can be calibrated to sync with digital assets to give accurate perspectives in the shot. The virtual set, or animated characters, can be observed on monitors. This advanced technology aids the cinematographer in accurately setting the frame. The director can also give feedback in association with their vision.
“VR trackers help align the camera with the virtual world so that when the camera moves, the digital elements move as well.”
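The alignment those trackers perform boils down to simple linear algebra: the tracked pose of the physical camera is inverted into a view matrix that the graphics engine applies to its virtual camera, so digital elements move in lockstep with the real lens. A minimal sketch in Python (illustrative only; real systems such as Vive Mars deliver full six-degree-of-freedom poses through the engine’s own plugins, and the pose format below is an assumption for brevity):

```python
import numpy as np

def pose_to_view_matrix(position, yaw_deg):
    """Build a 4x4 view matrix from a tracked camera pose.

    Toy version: position plus yaw only. A real tracker also
    reports pitch and roll, handled the same way.
    """
    t = np.radians(yaw_deg)
    # Camera rotation about the vertical (Y) axis.
    rot = np.array([
        [ np.cos(t), 0.0, np.sin(t)],
        [ 0.0,       1.0, 0.0      ],
        [-np.sin(t), 0.0, np.cos(t)],
    ])
    # The view matrix is the inverse of the camera's world transform.
    view = np.eye(4)
    view[:3, :3] = rot.T
    view[:3, 3] = -rot.T @ np.asarray(position)
    return view

# Each frame, the tracked pose updates the virtual camera, keeping
# on-screen digital assets registered to the physical camera move.
view = pose_to_view_matrix(position=[1.0, 1.7, -3.0], yaw_deg=30.0)
```

One sanity check of the math: transforming the camera’s own world position through its view matrix must land at the origin, since the virtual camera sits at the eye point.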
Watch Film Riot’s Ryan Connolly demonstrate how he used Unreal Engine coupled with the Vive Mars CamTrack system to sync his physical camera with his virtual camera in order to view digital assets on-screen while filming live-action sequences, which helped him successfully execute camera movements while remaining immersed in the virtual environment:
Giving filmmakers the ability to bring fantastical animated creatures and characters into live-action environments, motion capture technology outfits performers with a motion capture suit (or mo-cap suit), a wearable device that records the body movements of the user. This data is combined with digital character assets so the production team “can view the animated character on the monitors during filming. After wrap, the assets are sent to post-production to finalize the character’s animation.”
Live Projection
Live projection just might be the most cost-effective VP solution for independent filmmakers, and can be utilized to add color and texture to shots, or to create a reference for actors to enhance their eye-line.
There are two types of live projection within virtual production, as Opel explains: front projection, which reflects light off the screen, and rear projection, which diffuses light from behind the screen.
While live projection offers filmmakers more creative options and can provide more control on set, “it’s important to ensure that the lighting matches both the projected background and the live-action filming. The closer those elements match, the more realistic your shot will look. When set up correctly, live projection can look practical and add simplicity to your project.”
In the video below, Indy Mogul visits Charles Haine, a professor of cinematography at the Feirstein Graduate School of Cinema and a writer at NoFilmSchool, to learn how filmmakers can use the same projection techniques employed for Damien Chazelle’s First Man (2018), Alfonso Cuarón’s multi-Oscar-winning Gravity (2013) and Joseph Kosinski’s Oblivion (2013) using an $80 projector:
Green Screen
Green screen technology, or chroma key compositing, is a visual effects and post-production technique for layering two images or video streams together based on color hues. As the most traditional form of virtual production, chroma keying has its roots in double exposure techniques, which were used to introduce elements into a scene not present in the initial exposure. In 1903, Edwin S. Porter’s The Great Train Robbery famously used double exposure to add background scenes to windows that were black when filmed on set.
In modern filmmaking, green screen technology made a major leap forward with Star Wars: Episode V — The Empire Strikes Back, when VFX supervisor Richard Edlund created a quad optical printer able to quickly and cheaply interweave images from multiple reels.
“Now, the crew can use virtual production software on set to observe the digital assets on the monitor. News stations replace green screens with live weather reports while on air. Creative teams can view the imaginary world on the monitor during filming,” says Opel.
“Virtual production has taken the guesswork out of green screen technology with more accuracy in camera movements and realistic elements, saving your production time and money.”
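At its core, chroma keying is a per-pixel decision: wherever the camera image is dominantly the key color, substitute the background plate. A toy sketch of that decision (the threshold logic here is hypothetical; production keyers add spill suppression and soft-edge mattes):

```python
import numpy as np

def chroma_key(foreground, background, threshold=40):
    """Composite foreground over background wherever the
    foreground pixel reads as green screen.

    A pixel counts as "screen" when its green channel clearly
    dominates both red and blue.
    """
    fg = foreground.astype(np.int16)  # avoid uint8 underflow
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    is_screen = (g - np.maximum(r, b)) > threshold
    out = foreground.copy()
    out[is_screen] = background[is_screen]
    return out

# Toy frames: a pure-green 4x4 "screen" with one red subject pixel,
# keyed over a flat gray background plate.
fg = np.zeros((4, 4, 3), dtype=np.uint8)
fg[..., 1] = 255                              # green screen
fg[1, 1] = [200, 0, 0]                        # subject
bg = np.full((4, 4, 3), 90, dtype=np.uint8)   # background plate
frame = chroma_key(fg, bg)
```

The hard-edged binary mask is what gives cheap keys their telltale fringing; the soft mattes Opel alludes to replace the boolean test with a graded alpha.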
LED Walls
Opel calls LED walls “the most advanced form of virtual production,” pointing to The Mandalorian’s use of this groundbreaking technology to bring the Star Wars spinoff’s visual effects to the next level.
The use of LED walls and LED volumes can be traced back to the front- and rear-projection techniques common in filmmaking throughout much of the 20th century in classic films such as Alfred Hitchcock’s 1959 spy thriller North by Northwest.
For production of The Mandalorian, ILM unleashed StageCraft, its end-to-end solution supporting all aspects of virtual production. Central to ILM’s StageCraft is The Volume, a 360-degree customized dome environment comprised of LED panels synced with a real-time graphics engine.
While certainly the most expensive of virtual production solutions, LED walls solve many of the problems inherent in green screen techniques, Opel observes.
“An LED wall also creates realistic lighting and reflections that match your digital assets, while green screens can cast shadows and spill over additional green light. A green screen requires more time in post-production, while there is a quicker turnaround time with an LED wall.”
However, she notes, “if your production involves explosives or dangerous special effects, a green screen would be the better option to avoid damaging an expensive LED wall.”
A Brief Voyage Through the History of Virtual Production
From season 3 of “The Mandalorian.” Cr: Lucasfilm and Disney+
While virtual production is definitely having a “moment” in Hollywood and beyond, VP technologies and techniques have by no means just appeared overnight, cinematographer Neil Oseman observes in a recent blog post. The use of LED walls and LED volumes — a major component of virtual production — can be traced directly back to the front- and rear-projection techniques common throughout much of the 20th century, he notes.
Oseman takes readers on a trip through the history of virtual production from its roots in mid-20th century films like North by Northwest to cutting-edge shows like Disney’s streaming hit, The Mandalorian. Along the way, he revisits the “LED Box” director of photography Emmanuel Lubezki conceived for 2013’s VFX Academy Award-winner Gravity, the hybrid green screen/LED screen setups used to capture driving sequences for Netflix’s House of Cards, and the high-resolution projectors employed by DP Claudio Miranda on the 2013 sci-fi feature Oblivion. Oseman also includes films like Deepwater Horizon (2016), which employed a 42×24-foot video wall comprising more than 250 LED panels, Korean zombie feature Train to Busan (2016), Murder on the Orient Express (2017), and Rogue One: A Star Wars Story (2016), as well as The Jungle Book (2016) and The Lion King (2018), before touching on more recent productions like 2020’s The Midnight Sky, 2022’s The Batman and Paramount+ series Star Trek: Strange New Worlds.
ETC@USC’s Erik Weaver discusses the making of “Fathead,” a new proof-of-concept for virtual production and cloud-based workflows.
January 22, 2023
Posted January 19, 2023
The Bonkers Format for Peacock Series “Paul T. Goldman”
TL;DR
Peacock’s docuseries straddles an uneasy line between real life, true crime and satire. “Borat Subsequent Moviefilm” director Jason Woliner tells the incredible and possibly unbelievable story of one man and his very bad second marriage.
The series dramatizes a real-life story and alternates between the polished narrative footage and documentary footage of the production itself, along with interviews and a fly-on-the-wall cinéma vérité style.
The project took 10 years to get off the ground, in large part because the show’s format and content are so uniquely bizarre.
“In 2012, a man named Paul T. Goldman tweeted at me,” is how director Jason Woliner (Borat Subsequent Moviefilm) begins, discussing his new Peacock series. “He said that he had an incredible story to tell and had written a book — and a screenplay — about it. He asked for my help bringing it to the screen.”
A decade later and the resultant Peacock series Paul T. Goldman is judged to be “hard to describe, impossible to forget, and one wild ride,” by Consequence TV’s Liz Shannon Miller, who says the bar for the “weirdest TV show of 2023” has been set pretty damn high.
Ostensibly about a man’s failed marriage and claims that he was a victim of his wife’s scam, this is less a shocking tale of sex and crime and more a fascinating portrait of a man and his ambitions: his desire for fame, for revenge. The series depicts its central character “through a lens that is alternately dark, strange, bizarre, and, more often than not, very funny,” Shannon Miller adds.
From the Peacock series, Paul T. Goldman as Paul T. Goldman, Jason Woliner as Jason Woliner. Photo by: Evans Vestal Ward/Peacock
Paul T. Goldman and Jason Woliner on the set of Peacock series “Paul T. Goldman.” Cr: Tyler Golden/Peacock
MovieWeb’s Matthew Mahler calls it “a mind-melting blend of cringe comedy, character study, and meta documentary.”
Ben Pearson of SlashFilm is not the only critic to call the show “bonkers.” He adds, “Since the rise of streaming, many shows have felt as if they were designed by an algorithm and stretched out with the sole intention of keeping viewers engaged with a platform for as long as possible. Not this one.”
First of all there is the show’s odd format, which combines fiction with sort-of reality, and scripted drama with a behind-the-scenes “making of” docuseries.
“Quirky and odd, the show’s main point feels like the fact we’re all the heroes of our story, at least in our own highly subjective eyes,” Brian Lowry reviews for CNN. “It’s honestly hard to know where to begin in describing the program.”
Initially conceived as a feature, the project was then due to be made for Quibi — Jeffrey Katzenberg’s ill-fated shortform mobile video platform. When it folded, Peacock picked the idea up and made it into a six-part limited series backed by Seth Rogen’s production company.
Filmed on and off for a decade, with much of the shooting crammed into 15 days last summer, the series saw Woliner bring on documentarian Jason Tippet in 2017 for his fly-on-the-wall style.
“He’ll find a spot and plant the camera and walk away and just kind of roll until something interesting happens,” the director explained to SlashFilm. “So we decided early on to make him that third camera, he’s the part of the process. Sometimes he would just roam around the set and basically follow Paul and be far enough away that people didn’t feel like they were on camera. But everyone on set knew that was the deal, that they were mic-ed and we were recording behind the scenes.”
Goldman had written a book, “Duplicity: A True Story of Crime and Deceit,” about the events, along with a screenplay for a film or TV deal that was ignored by everyone except Woliner.
“Every page had mind-blowing things on it,” Woliner tells MovieWeb. “It’s just kind of an amazing peek into this person’s mind and his experience and his perspective, which in many ways was completely different from my own. And then I would find parts of his book that were completely relatable at its core, being about a desire to be loved and lead what you’d consider a normal life.”
PAUL T. GOLDMAN — “Chapter 5: The Chronicles“ Episode 105 — Pictured: Paul T. Goldman as Paul T. Goldman — (Photo by: Evans Vestal Ward/Peacock)
Paul T. Goldman as himself and Frank Grillo as Dan Hardwick in “Paul T. Goldman,” directed by Jason Woliner. Cr: Peacock
“Paul T. Goldman,” directed by Jason Woliner. Cr: Peacock
“Paul T. Goldman,” directed by Jason Woliner. Cr: Peacock
Melinda McGraw as Audrey Munson in “Paul T. Goldman,” directed by Jason Woliner. Cr: Tyler Golden/Peacock
“Paul T. Goldman,” directed by Jason Woliner. Cr: Peacock
“Paul T. Goldman,” directed by Jason Woliner. Cr: Peacock
“Paul T. Goldman,” directed by Jason Woliner. Cr: Peacock
“Paul T. Goldman,” directed by Jason Woliner. Cr: Peacock
“Paul T. Goldman,” directed by Jason Woliner. Cr: Peacock
From the Peacock series, Paul T. Goldman as Paul T. Goldman. Photo by: Evans Vestal Ward/Peacock
Woliner not only decided that he would direct a documentary about his production of Goldman’s story, but that Goldman would write and star in all of it. “It was just like, we are filming his writing, and we’re going to see what it reveals,” he said.
“It really was just this kind of falling in love with his mind and then trying to figure out how to translate that into a series, but in a newer thing that is separate from bad or good.”
A tension Woliner was keen to exploit is how he, as a documentarian (and a director of comedy), is telling the story of Paul telling his story. In his interview with Mahler, he calls the filmmaker the “villain of a documentary,” but admits that the show is ultimately his version.
“It’s me telling the story of him telling his story, but it is all filtered through my own perspective. And at the end of the day, I’m the one controlling the edit and not Paul.”
A documentary filmmaker, he says, is “this person who has descended upon the life of a real person and use their life to explore something, to make a point about the human condition or whatever, but they’re the one with all the power, and there is always an imbalance. I hope Paul is happy with the show. I know if he controlled it fully, it would be a very different show.”
As the streaming wars rage on, consumers continue to be the clear winners with an abundance of series ripe for binging. See how your favorite episodics and limited series were brought to the screen with these hand-picked articles plucked from the NAB Amplify archives:
Hulu’s new feature documentary examines the rise and fall of a “disruptive” $47 billion unicorn led by hippie-messianic figure Adam Neumann.
January 11, 2023
“Aftersun:” How Do You Remake Memories?
TL;DR
The emotional weight of the debut feature from Scottish filmmaker Charlotte Wells has been lauded by critics.
Wells discusses how she baked certain visual choices into her script and discovered others on set or during the edit.
The indie film is produced by Barry Jenkins’ production company Pastel and bears some of the hallmarks of his Oscar-winning film “Moonlight.”
Sight and Sound, the prestigious international film magazine, selected Charlotte Wells’ debut feature Aftersun as the best film of 2022.
Inspired by, but not based on, the director’s experiences as the child of young parents, the ‘90s-set film stars newcomer Francesca Corio as Sophie, an 11-year-old girl on a package holiday to Turkey with her father Calum (Paul Mescal).
The film, which also won seven British Indie Film Awards, is described by the magazine as an “exquisitely subtle yet deeply affecting and honest depiction of mental illness, father-daughter love, and memory.”
Developed and produced with the support of the BFI Film Fund, using funds from the National Lottery, Aftersun was one of the most talked about films at last year’s Cannes Film Festival and was picked up for international distribution by A24.
IndieWire’s Eric Kohn judged it “the most evocative look at an adolescent gaze coming to terms with the adult world since Moonlight.”
Several critics compare the way Aftersun paints its characters’ interior lives to that of Moonlight director Barry Jenkins. Not coincidentally, perhaps, Jenkins and his producing partner Adele Romanski served as producers on the film.
The 35-year-old was born and raised in Edinburgh, but moved to the US in 2012 to study film at NYU. There, her standout short films including Laps and Blue Christmas caught the attention of Romanski, who encouraged Wells to develop the script.
“Her short films were pretty fucking brilliant,” Romanski tells Kohn. “I was curious to hear what she was working on and how the storytelling style for her shorts would translate into that longer format. Then we waited patiently for years.”
That was in 2018. Wells finally retreated into a two-week writing frenzy in 2019, but held onto her first draft for another half a year before sending it to Romanski. “I spent six months pretending to rewrite but in actual fact just spellchecking it over and over again,” she said.
Her film is very much about memory — how certain moments stay with us forever, but also how our interpretation of events can differ from what actually happened. The story’s “beautiful elusiveness — its accumulation of seemingly inconsequential fragments that gradually accrue in emotional power,” per Tom Grierson in the Los Angeles Times, makes it a difficult movie to encapsulate, even for its maker.
Deadline’s Damon Wise isn’t the only interviewer to observe Wells appearing “somewhat shell-shocked by her film’s progress in the world.” “I’m actually a little in awe of the fact that this film has — and could — reach so many people,” she adds.
That’s perhaps because, as she tells Marshall Shaffer at Slant Magazine, “Mental health struggles are messy, symptoms overlap and diagnoses are often [incorrect]. It’s incredibly difficult to pinpoint many mental illnesses.”
Frankie Corio as Sophie and Paul Mescal as Calum in writer-director Charlotte Wells’ debut feature, “Aftersun.” Cr: A24
Frankie Corio as Sophie and Paul Mescal as Calum in writer-director Charlotte Wells’ debut feature, “Aftersun.” Cr: A24
Frankie Corio as Sophie in writer-director Charlotte Wells’ debut feature, “Aftersun.” Cr: A24
Paul Mescal as Calum in writer-director Charlotte Wells’ debut feature, “Aftersun.” Cr: A24
Paul Mescal as Calum in writer-director Charlotte Wells’ debut feature, “Aftersun.” Cr: A24
Frankie Corio as Sophie and Paul Mescal as Calum in writer-director Charlotte Wells’ debut feature, “Aftersun.” Cr: A24
Frankie Corio as Sophie and Paul Mescal as Calum in writer-director Charlotte Wells’ debut feature, “Aftersun.” Cr: A24
Writer-director Charlotte Wells on the set of her debut feature, “Aftersun.” Cr: A24
Writer-director Charlotte Wells on the set of her debut feature, “Aftersun.” Cr: A24
Of the film’s deliberate ambiguity Wells says to Alex Denney of AnotherMag, “I think inherent in whatever style it is that I have there is space for people to bring their own experiences. It’s both conscious and not: I think when you avoid a certain kind of exposition it does create ambiguity and people will fill that ambiguity with their own experiences, their own reference points that they enter the cinema with.”
Withholding information “is kind of the point of the film” she tells IndieWire. “I think the ambiguity is inherent in the subtlety and my aversion to exposition. But for me, the answers are all in the film.”
Her reticence to talk in concrete terms about her work is also a warning not to label it an autobiography. “It’s very much fiction, but rooted in experience and memory,” she reveals to Denney. “It’s personal in that the feeling is mine and I allowed my own memories and anecdotes through all of childhood to form the kind of skeleton outline [of the first draft]. But after that point it did become very much about the story I was trying to tell, and that frequently required pushing it away from my own experience.”
Cinematographer Gregory Oke records on lush 35mm and partly masks Calum’s appearance throughout the film, rendering him as a semi-ghostly presence.
“We worked hard to keep Calum at arm’s length, to keep more physical distance between him and the camera in order to create the feeling that he is in some sense unknowable,” Wells tells Denney.
Interspersed throughout the narrative is a jarring dreamlike rave sequence, which finds the adult Sophie confronting her father under strobe lights on the crowded dancefloor.
“In a lot of ways, there was a mystery to the process of discovering exactly what this was,” Wells explains to IndieWire. “So much of the process found its way into the film. The process of rooting through the past and memories and allowing some to rise to the surface, transforming them or reframing them.”
Noting Aftersun’s impressionistic style, Deadline’s Wise wonders whether Wells achieved that by taking things away in the edit, or scripting it.
“Both,” is her reply. “I didn’t shoot anything I didn’t want to be in the film. But there is plenty that isn’t in the final cut, that was lost in service of the edit. There were discoveries in the edit that were originally just strategies that we used to solve problems but which ended up being quite a meaningful strategy in terms of creating a sense of memory.”
The way Aftersun deceptively drifts from scene to scene — punctuated by meditative cutaways like a shot of a person’s hand or a random passerby yelling at their kid — is painstakingly crafted.
“Some of [those shots] were whole scenes reduced to an image,” Wells tells IndieWire’s Kohn. “Some were details in the script, and some were discovered on set based on months, if not years, of conversations with my cinematographer.”
When it’s suggested the deft execution of Aftersun feels like a magic trick, she demurs. “I don’t have an answer as to what it is,” she says. “We didn’t set out to pull off an emotional heist.”
“All the Beauty and the Bloodshed:” Art and Activism (on Both Sides of the Camera)
TL;DR
Documentary feature “All the Beauty and the Bloodshed” is about how photographer and artist Nan Goldin became an activist to provide support for people dealing with opioid addiction, and to protest the “artwashing” of the Sackler family, owners of Purdue Pharma.
Specifically, the film charts the ways that Goldin has leveraged her position in the art world to pressure museums and galleries to refuse future funding from the pharmaceutical giant and to take the family name down from their walls.
Oscar-winning documentary filmmaker Laura Poitras says she was determined to make a film about the unfiltered artist that wasn’t a normal biopic but one that showed how Goldin’s life and work intersect with her activism.
All the Beauty and the Bloodshed, an unconventional biopic of artist and activist Nan Goldin, is as much societal critique as a portrait and illustrates the extent to which the personal is political.
Its director, Laura Poitras, said to The Guardian’s Nadia Khomami that Goldin’s story is “a challenge to other artists” to use their power to expose “the toxic philanthropy and whitewashing of blood money and institutions.”
The film examines the life and career of Goldin and her efforts to hold Purdue Pharma, owned by the billionaire Sackler family, accountable for the opioid epidemic. Opioid addiction has been linked to more than 500,000 deaths in the US over the last two decades.
Nan Goldin, always documenting. Images courtesy of Nan Goldin and Neon
All the Beauty and the Bloodshed won the Golden Lion at the Venice International Film Festival, prompting HBO Documentary Films to acquire it for US television and streaming rights, Matthew Carey reports at Deadline. HBO will premiere the documentary — the only film to play at the Venice, Telluride, Toronto and New York film festivals in 2022 — on March 19 at 9 p.m. ET/PT. The documentary will also stream on HBO Max.
Goldin, a photographer whose work documented LGBTQ+ subcultures and the AIDS crisis, founded the advocacy group PAIN (Prescription Addiction Intervention Now) in 2017 following her own addiction to OxyContin. The group puts pressure on museums and other arts institutions to end collaborations with the Sacklers, who have long been financial supporters of the arts.
Goldin herself said she sought to demand accountability. “They have washed their blood money through the halls of museums and universities around the world,” she told Rolling Stone’s David Fear, and made good on her promise to make the political personal.
She succeeded, too: PAIN-led protests forced The Met, the Guggenheim, the Louvre, and other art institutions to stop accepting Sackler money and take the family name off their walls, leaving only a few institutions as hold-outs and the Sackler name permanently tarnished.
Photograph of Nan Goldin from “All the Beauty and the Bloodshed.” Cr: Nan Goldin/Neon
A self-portrait of Nan Goldin in “All the Beauty and the Bloodshed.” Cr: Nan Goldin/Neon
But All the Beauty and the Bloodshed is about more than her fight with the Sackler family. Goldin originally planned for the documentary to just tell the story of PAIN, but after contacting Poitras to make it, she was persuaded to weave her own personal life into the picture.
Poitras is best known for her Academy Award-winning 2014 documentary Citizenfour about NSA whistleblower Edward Snowden. She also made Risk, a documentary about another social and political pariah, Julian Assange.
The filmmaker started documenting Goldin’s contemporary activism, but soon found herself wanting to talk more about the rest of Goldin’s life. “There was a shift,” Poitras tells Anne Thompson at IndieWire. “As every film happens, you start to learn more, and then, ‘Oh, we need to talk about other things.’ ”
That’s when Poitras made a deal with Goldin to do a series of candid audio interviews in which the artist opened up about her history of drug abuse and domestic violence, her previously undiscussed sex work, her sister’s suicide as a teenager, and her art career — including the controversial AIDS exhibition, “Witnesses: Against Our Vanishing,” which was censored by the National Endowment for the Arts.
To illustrate these aspects of Goldin’s life, the film draws heavily on the artist’s own work, as explained by Luke Hicks at Paste Magazine. Known best for her slideshows, Goldin flips through hundreds of pictures and tells story after story — each one gripping, culminating, well-delivered, giving way to an eagerness for the next — often returning to her most famous collection of over 700 photos on 35mm from 1983-2022, titled “The Ballad of Sexual Dependency.”
“It’s my story told through my photographs — there’s not a lot of footage shot by other people,” Goldin told Artnet’s Sarah Cascone. “Poitras is telling my story in my voice, but it’s not exactly my version as I would tell it.”
Nonetheless, Goldin had final cut. “Nan and I could speak really freely,” Poitras tells Thompson, “and she would have an opportunity later before it would be shared with anyone wider to see if there was anything that went too far.”
The editing team, led by Joe Bini (We Need to Talk About Kevin), “had these ideas of the interweaving of past and present and an inner and outer world,” Poitras continues. “It was very [challenging] to keep the drama and the subtlety and subtext and that storyline going. And pointing the blame where it belongs: to the Sackler family, and a society that doesn’t hold people accountable, or provide health care for its citizens.”
The film is also a snapshot of New York City, where Goldin rubbed shoulders with the likes of John Waters and Jim Jarmusch. “She’s as much a chief creative force as Poitras on the outcome of the film (a ‘collaboration’ they called it at the New York Film Festival premiere), especially when you consider how much of it is Goldin’s slideshows,” Hicks notes.
Documentary filmmaker Laura Poitras’ “All the Beauty and the Bloodshed.” Cr: Neon
Nan Goldin in documentary filmmaker Laura Poitras’ “All the Beauty and the Bloodshed.” Cr: Neon
Giving it an A+ rating, Sophie Monks Kaufman at IndieWire says audiences are “given a whistle-stop tour through the subculture, with anecdotes from Tin Pan Alley, a bar where only women worked and Nan was ‘the dominatrix.’ Each vignette comes with its own colorful detail or punchline. It turns out that Goldin the orator cuts through the fugue of conformity with the same wallop as Goldin the photographer, and Poitras is there to give her the sharp edit that she deserves.”
According to Esther Zuckerman at The New York Times, the title of the film — conceived by Poitras — comes from the hospital records of Goldin’s sister Barbara, who died by suicide at 18. The director found that the phrase, taken from a report about what Barbara interpreted on a Rorschach test, encompassed the tragedies on display on-screen but also the celebration of resistance.
Fear says that the documentary is a “portrait of someone who’s taken family trauma, inspiration from her fellow outliers and the scars of a bohemian life, then used them to fuel a body of work that’s akin to a four-alarm fire.
“But it’s also a portrait of an activist and a major work of protest art in and of itself, sharing bio-doc screen time with footage of Goldin’s guerilla warfare against Big Pharma and calling out of bullshit. One is an extension of the other.”
Director Alek Keshishian and Selena Gomez Get Real in “My Mind & Me”
“Selena Gomez: My Mind & Me.” Cr: Apple TV+
TL;DR
In the Apple TV+ documentary “Selena Gomez: My Mind & Me” we see the toll fame has often taken on the star’s mental health.
We also learn what director Alek Keshishian felt was important while making a documentary of her life: the intrusion of social media and the inane questions thrown at the singer on press tours.
The 93-minute feature was made from more than 200 hours of vérité and archival material.
Though Disney channel star turned pop singer and TV producer Selena Gomez hasn’t shied away from speaking publicly about her mental and physical health struggles over the years, the new Apple TV+ documentary is deeper, darker, and more specific about these incidents. In Selena Gomez: My Mind & Me we see the toll this has often taken on her mental health.
The film is directed by Alek Keshishian, whose previous credits include the acclaimed music documentary Madonna: Truth or Dare (1991), which broke new ground in presenting a fly-on-the-wall snapshot of the public and private effects of fame on an artist and those around them.
Keshishian and Gomez struck up a friendship, and “she asked if I would consider doing a tour doc with her. I said, ‘I don’t think you really want me to do a tour doc with you, because I don’t make the sort of tour docs that everyone’s been doing in your lifetime.
“I shoot cinéma vérité and I’m spoiled because my first experience was with Madonna who gave me access to everything all the time.’ ”
Despite that knock-back, Gomez seemed even more keen to work with Keshishian. So they agreed to a trial.
It didn’t go well.
Gomez and Keshishian while filming “Selena Gomez: My Mind & Me.” Cr: Apple TV+
“I brought in my crew and we shot for two weeks, then I cut it down to a five-minute [short] so she could see the kind of film I would make. She was like, ‘Wow, it’s beautiful, but could you not show me crying? I don’t want my fans to see me break down like that.’ And so I said I didn’t think it was the right time for me to make a documentary with her. We agreed to just shelve the footage.”
What changed was a charity trip that Gomez made in 2019 to Kenya. Keshishian agreed to go “because it was for a good cause,” and found in the course of shooting in Africa and then press events with the singer in Europe that there was a doc to be made inspired by the conflicting way people treated her fame and her reaction to that.
Gomez herself tells Rachel Handler at Vulture that in Kenya she “realized that people in every part of the world are dealing with the same thing: their minds. Your mind is everything. It provides for your body, for your soul. But when I got to London, I gotta be honest, I was kind of frustrated and didn’t even want anyone to film anything. I was just a little frustrated with some of the questions… the press-tour moments in London and Paris. Those questions were shitty.”
To Keshishian this was the story. “I was more interested in some levels of implicating the paparazzi who are unbelievably cruel,” he told Variety’s Jazz Tangcay. “I showed unrelenting interest [of Gomez] in the press. I wanted to show how cruel some of that stuff that’s yelled to a 24-year-old girl is, how brutal it is. On another level, there’s this misogyny, that the woman is always somehow the dumped one, and that the woman should be jealous.”
To Handler he added, “There’s a real part of me that wanted to make a statement to young people that pursuing the artifice of fame and whatever — it isn’t a bunch of roses. It’s not perfect, and in some ways, it can prevent actual human connection. That’s what you see in London and Paris. She’s not connecting with human beings after connecting so deeply with human beings in Kenya. That’s really the shock to her system. That’s what makes her feel sad.”
It would seem there is a darker side to fame than thirty years ago, and the always-on pressure of social media is to blame.
“Selena Gomez: My Mind & Me.” Cr: Apple TV+
“You’re constantly working on presentation,” he tells Mia Galuppo at The Hollywood Reporter. “I’m talking about people who are really doing social media. So, I wanted to indict fame to a certain degree. I wanted to make people realize this is not all fun and games. She’s not in Paris having a great time. Granted, these are first world problems, you can say, but if you want to know what it does to the mental health of somebody, that level of isolation — it doesn’t make people joyful.”
He whittled down more than 200 hours of footage for the 93-minute feature, explaining to Tangcay at Variety that much of that was archival footage. “I knew I had to tell parts of this story through archival, which is very time-consuming. It took us six months just to do the string-outs, which meant taking each scene and shortening it.”
He could have released a two-hour, 30-minute cut, “and pleased her fans who would never tire of her,” he added to Galuppo. “But I wanted us to mean something for people down the line who aren’t her fan. One of the things I always kept telling my editors is: I’m not looking to make a room spray of Selena Gomez. I want the most distilled and concentrated version of this story so that you spend 93 minutes and hopefully you come out feeling differently about your own life as well as Selena’s.”
As Handler says, many musicians have done their versions of “personal documentaries,” in which there is a sense that they’re still controlling the final product — that there’s a level of PR machinations going on behind the scenes. So did Gomez want final cut?
“There are a lot of things that I didn’t put into this,” Keshishian tells Tangcay. “It’s a potent experience, but it doesn’t have everything.”
“The Banshees of Inisherin:” Martin McDonagh Tells a Wonderful/Terrible Tale
TL;DR
Writer-director Martin McDonagh fuses his trademark dark humor with something altogether more profound about the nature of friendship, creativity and mortality in his new drama, The Banshees of Inisherin.
The Banshees of Inisherin follows a soured friendship between the cheerful but dim Pádraic (Colin Farrell) and the more tortured, artistic Colm (Brendan Gleeson).
Cinematographer Ben Davis used the landscapes of two Irish islands, Achill Island and Inishmore Island, to convey the dueling personalities of the film’s two main characters.
With The Banshees of Inisherin, writer-director Martin McDonagh has fused his trademark dark humor with something altogether more profound about the nature of friendship, creativity and mortality.
It follows a soured friendship between the cheerful but dim Pádraic (Colin Farrell) and the more tortured, artistic Colm (Brendan Gleeson), who summarily tells Pádraic one morning that he no longer wants to be pals. Over the course of the film, Pádraic’s initial bafflement curdles into resentment, while Colm’s attempts to stay away from him in their tiny community repeatedly fail.
On the face of it, a relationship breakup is a thin plot on which to hang a film, but this was McDonagh’s starting point.
“I just wanted to tell a very simple breakup story,” he told Deadline’s Joe Utichi. “And to see how far a simple comedic and dark plot could go.”
For all its comedy, the drama is best described as a melancholic ballad. McDonagh, who won best screenplay at the Venice Film Festival, tried to imbue the friends’ breakup “with all of the sadness of the breakup of a love relationship… because I think we’ve all been both parties in that equation,” he told Miranda Sawyer at The Guardian. “And there’s something horrible about both sides. Like knowing you have to break up with someone is a horrible, horrible thing as well. I’m not sure which is the best place to be in.”
Depicting that sadness accurately was his intent, he explained to AV Club’s Jack Smart: “It was about painting a truthful picture of a breakup, really. A sad breakup, a platonic breakup, which can be as heavy and sad and destructive as a divorce, as a sexual or loving relationship coming to an end.”
Colin Farrell as Pádraic Súilleabháin and Brendan Gleeson as Colm Doherty in “The Banshees of Inisherin.” Cr: Searchlight Pictures
Colin Farrell as Pádraic Súilleabháin in “The Banshees of Inisherin.” Cr: Jonathan Hession/Searchlight Pictures
Writer-director Martin McDonagh on the set of “The Banshees of Inisherin.” Cr: Aidan Monaghan/Searchlight Pictures
Brendan Gleeson and writer-director Martin McDonagh on the set of “The Banshees of Inisherin.” Cr: Aidan Monaghan/Searchlight Pictures
Kerry Condon on the set of “The Banshees of Inisherin.” Cr: Jonathan Hession/Searchlight Pictures
Brendan Gleeson as Colm Doherty and Colin Farrell as Pádraic Súilleabháin in “The Banshees of Inisherin.” Cr: Aidan Monaghan/Searchlight Pictures
Brendan Gleeson as Colm Doherty in “The Banshees of Inisherin.” Cr: Jonathan Hession/Searchlight Pictures
Brendan Gleeson as Colm Doherty in “The Banshees of Inisherin.” Cr: Searchlight Pictures
Kerry Condon as Siobhan Súilleabháin and Barry Keoghan as Dominic Kearney in “The Banshees of Inisherin.” Cr: Searchlight Pictures
Colin Farrell as Pádraic Súilleabháin and Barry Keoghan as Dominic Kearney in “The Banshees of Inisherin.” Cr: Searchlight Pictures
Writer/director Martin McDonagh and Colin Farrell on the set of “The Banshees of Inisherin.” Cr: Jonathan Hession/Searchlight Pictures
Colin Farrell as Pádraic Súilleabháin and Kerry Condon as Siobhan Súilleabháin in “The Banshees of Inisherin.” Cr: Jonathan Hession/Searchlight Pictures
Kerry Condon as Siobhan Súilleabháin in “The Banshees of Inisherin.” Cr: Jonathan Hession/Searchlight Pictures
Jon Kenny as Gerry, Brendan Gleeson as Colm Doherty, Colin Farrell as Pádraic Súilleabháin, and Pat Shortt as Jonjo Devine in “The Banshees of Inisherin.” Cr: Searchlight Pictures
Colin Farrell as Pádraic Súilleabháin in “The Banshees of Inisherin.” Cr: Searchlight Pictures
Colin Farrell as Pádraic Súilleabháin and Brendan Gleeson as Colm Doherty in “The Banshees of Inisherin.” Cr: Jonathan Hession/Searchlight Pictures
There’s more to the film than this. Setting the story in Ireland in 1923, with the Irish Civil War playing out in the background, spins the tale into a wider metaphorical web.
“You don’t need any knowledge of Irish history,” McDonagh told The Atlantic’s David Sims. “All you need to know, really, is that [the civil war] was over a hairline difference of beliefs which had been shared up until the year before. And it led to horrific violence. The main story of Banshees is that, too: negligible differences that end up, well, spoiler alert, not in a good place.”
The divide between the one-time friends spirals into violence so quickly that the original relatively mild cause for dispute is forgotten. “I think that’s what was interesting about this story, that things unravel and get worse and worse, sometimes without, oftentimes without intending to,” McDonagh told UPROXX’s Mike Ryan. “And then become unforgivable and irreparable. And I guess that’s true of wars as much as is true of this little story about the two guys.”
There are other layers too. Not least of which is what IndieWire’s Eric Kohn describes as McDonagh’s “deep questions about national identity,” both within the film and in his own personal identity. Despite writing Irish characters (in this film and his debut, In Bruges) and setting previous theatrical plays in the country, McDonagh hails from London, although his parents are indeed from Ireland’s west coast.
McDonagh’s last movie set in the country was the 2004 short Six Shooter, which won an Academy Award. McDonagh’s first trilogy of plays, starting with The Beauty Queen of Leenane in 1996, took place in Galway. His second trilogy — which remains unfinished — took place on the Aran Islands, and Banshees was shot on Inishmore and Achill, two islands off Ireland’s west coast.
Inisherin itself is fictional, partly to put the real events of the civil war at one remove from the events onscreen, and also because he and cinematographer Ben Davis use the landscapes of the two islands to convey the dueling personalities of his two main characters.
“All in all, it certainly seems like McDonagh wants to grapple with the history and personality of the country after setting it aside for almost two decades,” Kohn notes.
At the same time, the filmmaker’s depiction of Ireland risks backlash. “There’s a certain degree of unease in Ireland about McDonagh’s post-modern, heightened versions of Irishness,” shares Irish film critic Donald Clarke. “The films and plays do well here. But there is a tension in Ireland about his treatment of the country.”
Critics also point to supposed southern stereotypes in the Oscar-nominated Three Billboards Outside Ebbing, Missouri. Kohn indicates that McDonagh was often lambasted on the promotional tour of that movie for depicting a racist police officer (Sam Rockwell) with some measure of empathy.
“His characters are exaggerated to an almost allegorical degree in order to comment on the society around them, which has led some American audiences to see his view of the country as naïve,” Kohn writes. “Banshees burrows into the stereotype of Irish people at pubs, guzzling pints to the tune of ebullient folk music, and molds it into an emotionally resonant character study.”
That character study is also linked to a meditation on death and how an artist should make best use of their time. In the film, Colm is a musician and wants to use the rest of his days creatively, rather than sitting in the pub with Pádraic talking nonsense. Which raises several questions, including: do you have to be selfish and cruel in order to create? Can an artist be nice?
That is accompanied by a threat: if Pádraic doesn’t leave him alone, then Colm will start lopping off his own fingers.
“I thought it was interesting that an artist would threaten the thing that allows him to make art,” McDonagh said. “Does that thing make him the artist?”
It’s clearly something that preys on McDonagh’s mind. “I’m 52. You start thinking, Am I wasting time? Should I be devoting all my time, however much is left, to the artistic?” he commented to Sims. “That’s something that’s always going on in my head — the waste of time, the duty to art, all that. So you start off being on [Pádraic’s] side and understanding the hurt, but you have to be completely truthful to the other side… You should feel conflicted.”
McDonagh says he has decided that he’s going to spend what creative time he has left — he reckons “around 25 years” — making films rather than plays. His reasoning? Films are quicker.
“I always used to think they took longer than plays, but with this one we were filming it a year ago, and now it’s out,” he tells Sawyer. “But if you’re lucky enough to have successful plays… to get that right with each move, to cast it and take care of it, go to rehearsals, that’s five years of your life.”
It was also clearly nagging at him to unleash the genie of Gleeson and Farrell’s chalk-and-cheese interplay that audiences lapped up in the 2008 cult hit In Bruges.
“It feels like it was two days ago that we made In Bruges together but time passes so quickly,” he said in response to The Playlist’s Gregory Ellwood wondering if there will be a third collaboration. “None of us are getting any younger. I don’t have an idea now, but just that little ticking bomb is somewhere in me. So, I do want to get them back together.”
In The Banshees of Inisherin, McDonagh reunites the pair only to break them up in the first scene. “A delectable bit of cruelty for the audience,” observes Sims.
Although he made In Bruges to his satisfaction, the director apparently faced pressure from execs at Focus Features at every turn. He now insists on having the final cut, which he got for Banshees, a movie produced by Disney-owned Searchlight. Kohn points out that his four movies have all been made for around $15 million, a manageable scale by studio standards that lets McDonagh get away with creative freedom.
“That is the reason why the films are singular,” McDonagh said. “It is all me. It hasn’t been watered down, for good or bad.”
From virtual celebrities and MetaHumans to deepfakes and voice clones, novel forms of “synthetic” media blur the distinction between physical and digital environments and will radically accelerate the process of content creation and delivery. With it come important questions about privacy and ethical dilemmas.
An article by future tech strategist and entrepreneur Mark van Rijmenam runs through where synthetic media is today, where it might be going, and what pitfalls we should be aware of.
“We are entering a new age where more people will be exposed to synthetic media,” he says. “It’s a mass social experiment, and we have no idea what the consequences of this medium might be. If we cannot predict or study its impact accurately, there is little hope of protecting ourselves against its dangers.”
But what is synthetic media? The definition given by van Rijmenam is “virtual media produced with the help of artificial intelligence (AI). It is characterized by a high degree of realism and immersiveness.” He continues, “Furthermore, synthetic media tends to be indistinguishable from other real-world media, making it very difficult for the user to tell apart from its artificial nature. It is possible to generate faces and places that don’t exist and even create a digital voice avatar that mimics human speech.”
Examples that exist right now include virtual celebrities like Lil Miquela, an online persona who does not exist in the real world, but has become one of the world’s most popular influencers on Instagram, with three million followers.
“She (it?) and other virtual influencers are becoming increasingly popular and will continue to do so,” says van Rijmenam.
The ability for anyone to create their own avatar to populate the metaverse is being driven by Epic Games, whose MetaHuman Creator tool does just that.
In Epic’s words: “MetaHuman Creator enables you to create fully rigged photorealistic digital humans in minutes, in real-time, for use in video games, virtual and augmented reality content, architectural visualizations, and more.”
Van Rijmenam himself says he is now “transitioning” to a MetaHuman character, a digital twin of himself. That’s an interesting choice of word given today’s controversies surrounding gender identity.
Unreal Engine’s MetaHuman Creator
On the audio side, artificial voice technology, such as text-to-speech and voice cloning, has become very popular. Companies here include Respeecher, Voiseed and Resemble.ai, which allows you to clone your voice to create digital avatars and use them in movies. As described by van Rijmenam, Voiseed makes audio content more human with a voice interface that communicates in authentic, natural language using emotion and intellect.
Synthetic media tools allow for creating complex data visualizations, or even videos, using only a spreadsheet. Analysts and researchers often use these to present findings to a broader audience. Art directors also use them to mock up ideas before bringing them to life in development.
Synthetic music has the potential to generate sounds that are indistinguishable from a human-produced track. “AI generates ever-shifting soundscapes for relaxation and focus, powers recommendation systems in streaming services, facilitates audio mixing and mastering, and creates rights-free music,” van Rijmenam writes.
You can find AI-generated and copyright-free music using platforms like Icons8 and Evoke Music. Synthetic images are already being used for creating NFT art and generating realistic stock photos, while synthetic videos combine the worlds of photography and videography. Synthetic videos have taken on many forms, but one of the most popular types is deepfakes. These are face swaps, where one person’s face replaces another’s, or face reenactment in which the source actor controls the face of the target actor.
GANs
This is all made possible by advances in neural networks, specifically Generative Adversarial Networks (GANs), to use the jargon. Because GAN outputs can look natural and indistinguishable from original photos, they enable synthetic media that is difficult to tell apart from the real thing, particularly in computer vision and image processing applications.
“At the same time, advances in machine learning and deep learning have made it possible to train computer vision algorithms on large datasets of images,” van Rijmenam says. “As a result, today’s neural networks can see things in photos that humans can’t even detect with our own eyes.”
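The adversarial setup behind GANs can be sketched in a few lines. The toy example below is an illustration of the general technique, not anything from van Rijmenam's article: a one-dimensional generator (a simple affine map of noise) is trained against a logistic-regression discriminator, alternating the two updates exactly as a full image GAN would, just without deep networks or pixels. All parameter names, learning rates, and the target distribution are arbitrary choices for the sketch.

```python
import numpy as np

# Toy 1-D GAN: generator g(z) = a*z + b with noise z ~ N(0,1);
# discriminator D(x) = sigmoid(w*x + c), i.e. logistic regression.
# D is trained to tell real samples from fakes; G is trained to fool D
# (the "non-saturating" generator objective: maximize log D(fake)).

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0    # generator parameters (scale, shift)
w, c = 0.1, 0.0    # discriminator parameters (weight, bias)
lr = 0.05

for step in range(2000):
    real = rng.normal(4.0, 1.25, size=32)   # "real" data: N(4, 1.25)
    z = rng.normal(size=32)
    fake = a * z + b                        # generator samples

    # Discriminator ascent step: maximize log D(real) + log(1 - D(fake))
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent step: maximize log D(fake) w.r.t. a and b
    d_fake = sigmoid(w * fake + c)
    g_x = (1 - d_fake) * w                  # gradient of log D(x) at fakes
    a += lr * np.mean(g_x * z)
    b += lr * np.mean(g_x)

print(f"generator shift b = {b:.2f}; real data mean = 4.0")
```

The same alternating loop, with convolutional networks in place of the affine map and the logistic regressor, is what produces photorealistic faces; the scale changes, the adversarial dynamic does not.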
What is Synthetic Media Good For?
As with most AI technology, there are pluses and minuses. The sheer volume of content required to build the worlds of the metaverse goes beyond all traditional computer-assisted graphics construction, whereas AI-driven media can be created rapidly. One only has to look at the impact of the fledgling text-to-picture AI DALL-E 2 to realize the benefit this approach is going to have in kickstarting artistic creation on the 3D internet.
Such technology is only going to advance. Text-to-picture is evolving into text-to-video and eventually will lead to the creation of full-scale feature films. There have been several experiments in creating long-form AI-composed content, but what distinguishes the next generation of synthetic media is that it will be indistinguishable from the real thing.
“It can create an illusion of authenticity,” says van Rijmenam. “This media type allows companies to connect with their audiences without paying actors or hiring professional photographers or videographers.”
“Synthetic media tends to be indistinguishable from other real-world media, making it very difficult for the user to tell apart from its artificial nature. It is possible to generate faces and places that don’t exist and even create a digital voice avatar that mimics human speech.”
Mark van Rijmenam
If certain jobs in media production are going to be automated out of existence, new types of jobs will emerge to take their place.
“The primary focus is interacting with AI to help it become more intelligent and capable,” says the futurist. “The skill of those who work with AI is important. If employees do not stay updated on technological advancements and improve their knowledge, they could be forced out of their jobs — no matter how hard they try to avoid automation.”
As AI becomes ever more sophisticated, so do the ethical challenges it raises. The biggest concern is ensuring that algorithms do not engage in abusive or unethical practices toward humans, and vice versa. Text and video can be generated to spread misleading, false, or fabricated information (fake news).
“When it comes to deepfake issues, journalism cannot escape the fact that its old forms of reporting are under pressure due to the rise of digital information. Therefore, we need media literacy and verification to report on these deepfake videos and regarding worldwide disinformation or propaganda.”
Personal rights and intellectual property laws will also be challenged. The legality of AI-generated counterfeit content is often unclear, making it difficult to know where your rights lie.
“Copyright law protects original intellectual property from copying,” van Rijmenam explains. “However, in an era of exponential growth, it will soon be unable to distinguish between ‘real’ and ‘fake’ text.”
Who should own the rights to a synthetic movie where all the actors are created digitally? The studio, or the creators of the algorithm that generated the characters? These questions are still to be explored and framed in law.
“Synthetic media will need to be regulated by law and policy, so we’ll need new rules to determine ownership and licensing,” he says.
July 14, 2022
Tiny Is Beautiful: The Magic of Making “Marcel the Shell With Shoes On”
It took more than a decade for the hybrid stop-motion/live-action “Marcel the Shell with Shoes On” to move from his DIY origins to a full-length feature. Cr: A24
Stop-motion animation proceeds at a snail’s pace at the best of times, but it took over a decade for Marcel the Shell with Shoes On to move from his DIY origins to a full-length feature.
When director Dean Fleischer Camp and his co-writer and star Jenny Slate made their first short featuring the one-eyed mollusk, in 2010, they couldn’t have imagined its impact: more than 32 million views to date on YouTube.
Dean Fleischer Camp and Marcel (Jenny Slate) in “Marcel the Shell with Shoes On.” Cr: A24
Those shorts have now been adapted into a feature film, a process that Camp worried could rob the project of its lo-fi, low-budget charm.
“I think especially with processes that are really technical,” he told Jason Bailey at The New York Times. “You can very easily lose the authentic, organic thing.”
As Marcel’s short films went viral online, Hollywood studios and networks approached Camp and Slate. The duo took the meetings but were wary of attempts to attach Marcel to a more familiar tentpole template.
“I remember somebody suggested that we partner Marcel with Ryan Reynolds so they could fight crime,” Camp recalls to Carlos Aguilar at the Los Angeles Times. “I’m not saying I wouldn’t watch that movie, but I just knew that was not the right avenue to pursue for him. We knew after that round of meetings, ‘If we’re going to expand Marcel, it needs to be made independently.’ ”
Nana Connie (Isabella Rossellini) and Marcel (Jenny Slate) in director Dean Fleischer-Camp’s “Marcel the Shell with Shoes On.” Cr: A24
Camp was committed to preserving the authentic, loose-sounding audio and documentary texture of the originals, he explained in an interview with Jack Smart at the AV Club. “So we kind of had to invent a new production model in order to do that.”
Marcel (Jenny Slate) in director Dean Fleischer-Camp’s “Marcel the Shell with Shoes On.” Cr: A24
Joining creative forces with co-writer Nick Paley, the team wrote a long treatment and started hosting recording sessions where Slate would give form to Marcel’s dialogue with spur-of-the-moment ingenuity. Based on what those improvisation meetings yielded, Camp and Paley slowly polished and rerouted the plot.
Director Dean Fleischer-Camp’s “Marcel the Shell with Shoes On.” Cr: A24
“They would put together a patchwork of transcription of recorded audio and then write new scenes. Then we would record off of those written new scenes and improvise off of them too,” Slate told Bailey. “Most of the film is highly improvised, while some parts were word-for-word written out, depending on what Dean and Nick decided to do.”
Marcel (Jenny Slate) in director Dean Fleischer-Camp’s “Marcel the Shell with Shoes On.” Cr: A24
Almost nothing is recorded inside a studio except a few lines as pickups. “The path most Hollywood projects take is: you write a screenplay and then you make the movie,” Dean tells the AV Club. “I’ve always felt like that robs us of so much that can happen in the way that people interact non-verbally. So with Marcel the Shell with Shoes On, you can hear it in the audio. You would never write certain lines if they hadn’t been in the same room.”
Marcel (Jenny Slate) in director Dean Fleischer-Camp’s “Marcel the Shell with Shoes On.” Cr: A24
During this time, they found backing at indie darling A24, which helped bring aboard arthouse icon Isabella Rossellini to voice a key character.
The writing, recording and iterating process took two-and-a-half years, but audio was only part of the challenge. Whereas the camera for the shorts was kept fairly static, Camp knew that a feature-length narrative would require more movement, more locations, and more interactions with other characters. Plus, they would stick to the stop-motion/live-action hybrid format.
Jenny Slate, Nick Paley, Dean Fleischer Camp, and Isabella Rossellini on the set of “Marcel the Shell with Shoes On.” Cr: Marcel The Movie LLC
This type of production has been around since the earliest days of cinema, but very few features are made this way, partly because of how complex it is. The filmmaker essentially has to commit to shooting every shot in the film at least twice, first in live action and then in stop motion. In the final edit, the two elements combine, provided both sets of footage are a perfect match.
Nana Connie (Isabella Rossellini) in director Dean Fleischer-Camp’s “Marcel the Shell with Shoes On.” Cr: A24
“It’s so complex and labor-intensive, but I felt committed to it for that reason,” Camp explains in the film’s production notes. “The constraints that make stop-motion so hard can actually result in more textured, emotional performances because it’s such an imprecise, human process. That fallibility translates into a kind of warmth.”
Lesley Stahl in director Dean Fleischer-Camp’s “Marcel the Shell with Shoes On.” Cr: A24
So, when shooting the live-action portion, stop-motion cinematographer Eric Adkins took notes on every minute detail — lens, depth of field, the distance and angle of the camera to the characters, the distance to props, every source of lighting, and anything reflective near the character that might bounce light onto him — so the team could recreate it all exactly on the animation stages. If any tiny detail was off, the animation wouldn’t mesh with the live-action plate.
From “Marcel the Shell with Shoes On” courtesy of A24
“You should see his iPad, it’s just like, every time I glanced down at it, it was like A Beautiful Mind scratchings of equations and measurements,” Dean recounts to the AV Club.
Marcel (Jenny Slate) and Nana Connie (Isabella Rossellini) in director Dean Fleischer-Camp’s “Marcel the Shell with Shoes On.” Cr: A24
Marcel the Shell with Shoes On premiered at the 2021 Telluride Film Festival and recently went on nationwide release. “As with the shorts, the tweeness factor is high,” reviews Rolling Stone’s David Fear, “though mileage may vary on whether you add ‘unbearably’ as a descriptive there.”
Marcel (Jenny Slate) in director Dean Fleischer-Camp’s “Marcel the Shell with Shoes On.” Cr: A24
But even the harshest critic finds an “extreme sense of melancholy and isolation” that resonates with the hardest of hearts.
Fear says: “The ache of loneliness pulses at the center of this labor of love.”
Posted June 8, 2022
Because Science! Here’s How a Multiverse Is Totally Possible
In The Double Life of Veronique, the enigmatic 1991 feature from the great Polish director Krzysztof Kieślowski, a woman thinks she sees her doppelgänger; a little later she dies, as if the shock to her heart were too much.
That film is a far more profound take on identity and possibility than the manic multiverse hopping of the Marvel Cinematic Universe or A24’s crazed but fun Everything Everywhere All At Once.
It speaks to the loneliness within all of us: how stressful it is at times to navigate the world as sole individuals, and how wonderful, strange, horrific and mind-blowing it would be if there were another person just like us somewhere out there.
Something at the core of our human being would like to believe, perhaps instinctively knows, that we are not alone. It’s about love — that there is someone out there just for us. It’s about death — that there is another life with us in it out there. It’s about lost chances — onto which we project the possibility that we can adjust the past to make the present better.
There’s also some hard science behind it — that grounds the MCU and any other time-travel and parallel universe fiction in some sort of reality.
“Many scientists claim that mega-millions of other universes, each with its own laws of physics, lie out there, beyond our visual horizon,” writes cosmologist George Ellis in Scientific American. “They are collectively known as the multiverse.”
Real-life multiverse theories include everything from branching timelines to exact copies of our world. Physicist Max Tegmark has arranged four distinct “levels” of multiverse into a hierarchy, where each type of universe grows progressively different from our own.
The most straightforward multiverse scenario is the Level I Multiverse. In this scenario, space is so mind-blowingly big that, eventually, it just has to repeat itself. This includes the existence of perfect doppelgängers.
“Nearly all cosmologists today (including me) accept this type of multiverse,” says Ellis.
From “Spider-Man: No Way Home,” courtesy of Sony Pictures Entertainment
In a Level I Multiverse, Jess Romeo notes at JSTOR Daily, Tom Holland’s Spider-Man could certainly exist alongside Andrew Garfield’s and Tobey Maguire’s Spider-Men.
The Level II Multiverse is trippier. As the vacuum of space continues to expand and spawn other universes, “some regions of space stop stretching and form distinct bubbles, like gas pockets in a loaf of rising bread,” Tegmark explains.
When Dr. Strange travels to an unfamiliar, psychedelic dimension, he may have popped into one of the Level II Multiverse bubbles.
At Level III, the multiverse forms around us, as random events cause the timeline to split. Imagine rolling a die: instead of landing on a single number, it lands on all values at once. We can “conclude that the die lands on different values in different universes,” writes Tegmark. Six new branches of reality are formed. This mind-bending model is called the “many-worlds interpretation.”
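The bookkeeping of that branching is easy to sketch. As a toy illustration (pure counting, not physics), each die roll multiplies the number of branches by six, so histories pile up exponentially:

```python
from itertools import product

def branches(n_rolls):
    """Enumerate every possible history of n six-sided die rolls, one per 'world'."""
    return list(product(range(1, 7), repeat=n_rolls))

# One roll splits reality into six branches; each further roll
# multiplies the count by six, so n rolls yield 6**n histories.
print(len(branches(1)), len(branches(3)))  # 6 216
```

Three rolls already give 216 distinct worlds, which is why the many-worlds picture gets unmanageable so quickly.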
It may seem familiar if you’ve watched the Marvel show Loki, where time-traveling agents work to prune the branching timeline, and avert random events that could cause it to split out of control.
President Loki (Tom Hiddleston) in Marvel Studios’ “Loki.” Cr: Marvel Studios
Once you reach the final level, Level IV, all bets are off. It comprises multiverse models that don’t obey even the most fundamental laws of nature. Tegmark calls it “the ultimate type of parallel universe.”
Existing outside of space and time, the universes in this model are almost impossible to visualize; “the best one can do is to think of them abstractly, as static sculptures that represent the mathematical structure of the physical laws that govern them.”
The handy thing about prevailing scientific methodology is that all possibilities are on the table until proven otherwise. This paradigm was enshrined a century ago by Karl Popper, who argued that scientific knowledge is provisional — the best we can do at the moment. He suggested that for a theory to be considered scientific, its hypotheses must be testable and capable of being proven false.
Theories about the existence of parallel universes and the nature of time and space are being tested all the time through work on black holes, dark matter and the origins of our universe in the Big Bang.
(Recently, NASA announced its astronomers had found a potential “missing link” between the universe’s first supermassive black holes and young, star-forming galaxies. What’s more it was “hiding in plain sight.”)
Meanwhile, scientists have restarted the supercollider at CERN in Switzerland in a bid to hunt for the elusive dark matter that lies beyond the visible universe. The CERN lab previously found the Higgs boson particle — once theorized and now proven to exist.
“There are new universes born beyond our cosmic horizon,” he says. “Multiverse theories have proliferated, hoping to answer the deepest questions about what we, and the entire cosmos is.”
The hour-long video is separated into five parts: How Big is the Universe?, The Bubble Multiverse, The Cyclical Multiverse, Branes in the Bulk, and Many Worlds.
By thoroughly examining the history of the multiverse and what we know about it today, Lewis makes a complex topic easier to understand.
Posted April 12, 2022
“Navalny:” When Your Documentary Ends Up as a Spy Thriller
Following Russian opposition leader Alexei Navalny in his quest to identify the men who poisoned him in August 2020, director Daniel Roher’s ‟Navalny” won both the US Documentary Audience Award and the Festival Favorite Award at the 2022 Sundance Film Festival. Cr: Sundance Institute
‟Navalny” won the Oscar for Best Documentary Feature at the 95th Academy Awards and is currently streaming on HBO Max.
https://youtu.be/I8utTvWW6B0
The Sundance Film Festival doesn’t ask for much to let a film enter competition; the name of the film is usually a given. But in 2022’s US Documentary section, a mysterious last-minute competitor entered under the codename ‟Untitled LP9.” It was, in fact, ‟Navalny,” still steeped in secrecy even up to the point that director Daniel Roher aired his pre-recorded introduction.
In a statement on the Sundance blog, festival director Tabitha Jackson addressed the level of secrecy over ‟Navalny:” “We have known about this film for months and we haven’t said anything about it to allow the team the time to work on it the way they needed to.”
The documentary ostensibly follows Russian opposition leader Alexei Navalny in his quest to identify the men who poisoned him in August 2020. ‟Navalny” is a fly-on-the-wall documentary that is also a study of Navalny the man — a portrait of a leader intent on reform who will not be cowed by anything, including his own poisoning.
From “Navalny”
The post-premiere Q&A at the Sundance Film Festival released the tension that had built up from the day Navalny consented to making the film back in 2020. Asked why they had used such a peculiar codename, one of the producers answered with a laugh, “The FSB (Russia’s secret service) had a code name for [Konstantin] Kudryavtsev — ‘Love Potion No. 9’ — so that became our fake title.”
Kudryavtsev had played an unwitting part in ‟Navalny.” He was the target of a sting operation run by CNN, Navalny and the investigative journalism outlet Bellingcat. Amazingly, Navalny himself had called Kudryavtsev pretending to be a Kremlin higher-up wanting to know why the assassination didn’t go as planned.
Variety explains what happened on the phone call, quoting Bulgarian journalist and hacker Christo Grozev: “Amazingly, one of the men — Konstantin Kudryavtsev, a chemist who helped to orchestrate the operation — is fooled by Navalny’s ruse and admits, right over the phone (and on camera), to all of it. (‘We did it just as planned, the way we rehearsed it many times. But in our profession, as you know, there are lots of unknowns and nuances.’)
“He’s basically confessing to state-sanctioned murder — and in doing so, he’s incriminating his boss, Vladimir Putin. One of Navalny’s associates claps her hand over her mouth in disbelief.”
Alexei Navalny as himself in “Navalny” Cr: Dogwoof Films
It was Grozev who had initially contacted Navalny with the idea of the documentary while he was still recuperating in Germany from the poisoning. Grozev had done his digging, as The Guardian described, “After Navalny’s poisoning, Grozev began looking for clues about who might have been behind the hit.
“Having bought telephone and flight records on the Russian dark web, he found a group of eight men from the FSB security services who appeared to have been following Navalny on trips across Russia for several years.”
Alexei Navalny as himself in “Navalny” Cr: Dogwoof Films
Grozev contacted Navalny and journeyed to Germany to meet with him and share the information he had found. Director Roher came with him and kept filming. It turned out Navalny and his team had already been thinking about making a film, and a collaboration began.
Roher tells the story of when he first met Navalny, “Christo and I were crossing the Austria-Germany border and we drove to this cinematic sleepy town called St. Blasien in the Black Forest and I met Alexei. From the moment we met, I felt Alexei’s presence and energy in a very real way. He was disarming, his smile was warm, he was charming.
“He found us compelling enough to say, ‘Okay let’s start.’ Alexei — who’s a mastermind of media strategy and strategist — understood that if the story was this unfolding murder mystery, then we had to start right away.”
Over the next three-and-a-half months this film evolved into a very intimate portrait of one man, his family, his staff, and what they were willing to sacrifice for the values they believe in. All of the filming was done in secret, with a small crew and limited resources.
“It wasn’t until late December that we finally were able to re-emerge and strategize before Alexei’s return in January 2021.”
Navalny went back to Russia in January 2021 and, on the basis of trumped-up corruption charges, was immediately apprehended and thrown in jail, where he now faces a potential 20-year sentence.
Alexei Navalny as himself in “Navalny” Cr: Dogwoof Films
Concluding clips of Navalny in handcuffs and behind bars, flashing peace signs to supporters, loved ones and cameras, imply that he hasn’t given up the fight for freedom, human rights, and justice in his homeland.
The Daily Beast calls the documentary “a thrilling nonfiction ride” and its depiction of Putin as a “cruel autocrat who’s willing to achieve his ends by any merciless means necessary.”
Director Daniel Roher and Dasha Navalny during the Q&A following the virtual premiere of documentary feature “Navalny.” Cr: Sundance Institute
The Guardian’s review finds that the documentary hit its target: “‘Navalny,’ a 98-minute documentary from Canadian director Daniel Roher, details in cogent, stressful, riveting fashion just how scared the Kremlin is of Navalny, arguably the biggest threat to Vladimir Putin’s power at home.” The paper’s conclusion reads like a story straight from the movies:
“Doughy, dopey agents who followed Navalny for three years and poisoned him on a filming trip to Siberia with the nerve agent novichok, a poison which essentially shuts down the body and then dissipates, making death appear to be from natural causes.”
Director Daniel Roher, Dasha Navalny, Maria Pevchikh, Christo Grozev, producers Diane Becker, Shane Boris, Melanie Miller, and Odessa Rae, director of photography Niki Waltl, and editors Langdon Page, Maya Daisy Hawke, and Edmund Stenson during the Q&A session following the virtual premiere of “Navalny.” Cr: Sundance Institute
But for Navalny, was it all worth it as he now sits in a Moscow prison? Roher is not hopeful: “I think he’s going to be in prison for a very long time. Whether it’s five years or 10 or 20 I’m not sure.
“But I don’t think he gets out until Putin is forced out and Putin is in fine shape. There’s no incentive for them to release him. He mortally offended them several times and then he went back.”
Additional Recognition
In addition to its 2022 Sundance wins, ‟Navalny” scored the Oscar for Best Documentary Feature Film at the 95th Academy Awards, which was also the first win for production company CNN Films, according to ABC7News.com.
Learn about the making of other best documentary nominees on NAB Amplify:
Documentary filmmaker Laura Poitras’ “All the Beauty and the Bloodshed” shows how Nan Goldin’s life and work intersect with her activism.
March 27, 2022
Complete Chaos Theory: Building the (Messy) Multiverse of “Everything Everywhere All At Once”
Writers and co-directors Daniel Kwan and Daniel Scheinert (collectively known as the Daniels) take audiences on a heart-wrenching trip through the multiverse in “Everything Everywhere All at Once,” starring Michelle Yeoh as Evelyn Wang. Cr: A24
Everything Everywhere All At Once lives up to its title. The Oscar-winning sci-fi comedy feature “takes the red-pill mind-screw of The Matrix and multiplies it by infinity,” according to Variety’s Peter Debruge, but that doesn’t necessarily mean audiences can handle the “gnarly three-dimensional sudoku puzzle” that results.
The movie stars Michelle Yeoh as a woman named Evelyn Wang who learns that she can experience endless dimensions simultaneously, and uses that power to attempt a reconciliation with her estranged daughter.
The internal logic of the film is complex, but the gist is that different decisions cause splinters in time, and, somewhere out there, anything that could have happened actually did. That means there is a timeline where Evelyn is not the lowly laundromat owner we first meet but a huge Hong Kong action star, an opera singer, a maid, a teppanyaki-style chef… ad infinitum.
Writers and co-directors Daniel Kwan and Daniel Scheinert — collectively known as the Daniels — made the equally absurd Swiss Army Man, about a man who befriends Daniel Radcliffe’s semi-sentient corpse. This time around, the creative duo has the Russo Brothers (Avengers: Infinity War) producing, with A24 and Ley Line Entertainment, IAC and Josh Rudnick exec producing.
The film rushes headlong into unruly anarchy: Evelyn is plunged into the metaphysical world of “verse-jumping,” veering from the mundane dreariness of an IRS building to the palatial lair of a nihilistic villain, to the flashing lights of Hong Kong red carpets, to a deserted canyon where sentient rocks have an extended on-screen chat.
Critics say the unhinged imagination on display here will leave viewers exhausted, but that could be intentional. The filmmakers believe they’re saying something about the impact of the metaverse on our ability to truly see those near us.
“We could say a million things about [the film], but the most simple, honest thing is it’s about a mom learning to pay attention to her family in the chaos,” Kwan says. “The biggest seed that drove us through — that felt like a metaphor for what we’re going through right now in society — is this information overload.”
He adds, “People keep saying ‘empathy fatigue’ set in with Covid, but I feel like even before Covid we were already there — there’s too much to care about and everyone’s lost the thread. That was the last key, turning this into a movie about empathy in the chaos.”
Information Overload
In an era of information overload, extreme polarization, and mass existential dread, the struggle to connect with family might feel less like a banal, everyday experience, and more of an increasingly confounding battle between a loved companion and a mortal enemy.
“In a lot of ways, the movie is just a family drama,” Scheinert says, “and then we came up with some of the most insane, enormous, overcomplicated hyperbolic metaphors for generational gaps, along with communication errors and ideological differences within a family.”
Michelle Yeoh as Evelyn Wang in “Everything Everywhere All At Once,” directed by Daniel Kwan and Daniel Scheinert. Cr: Allyson Riggs/A24
The film slyly tweaks the “hero’s journey” story beats that audiences have come to expect, squishing and stretching a three-act structure as if the movie itself were jumping through a fracturing multiverse.
“The result is a mess,” says Debruge, “but a meticulously planned and executed mess, where every shot, every sound effect and every sight gag fits exactly as the Daniels intended into this dense and cacophonous eyesore, which endeavors to capture the staggering burden of trying to exist in a world of boundless choice.”
The Daniels wanted that sense of infinity — of all of the possible worlds, the depthless rabbit holes, and all of the tiny moving pieces underneath it — to remain top-of-mind for the audience, even if that meant fraying their minds.
“There are enough ideas in Everything to fuel a dozen movies, or else a full-blown TV series, but the Daniels have shoehorned it all into a bombastic, emotionally draining 139 minutes,” writes Debruge.
“Moviegoers with limber imaginations may well appreciate the lunatic ambition and nutso execution of this high-concept hurricane, which ricochets like a live-action cartoon for most of that duration. But less versatile viewers will emerge frazzled, like Wile E. Coyote after swallowing a stick of dynamite: their heads charred, blinking blankly as smoke wafts from their ears.”
The idea of generational love grounds the wildly chaotic narrative of Everything Everywhere All At Once, as Charles Pulliam-Moore notes at The Verge.
“It became our guiding light very early on,” Scheinert told Pulliam-Moore, describing how making the film about a family gave the Daniels free rein to experiment with some of their more outrageous ideas. “But the litmus test would be like, does that complement the journey of this family? A surprisingly weird array of things still complement the story of this family because they’re all distractible, and so anything that distracts them in a new and interesting way becomes a potential path,” he explains.
Stephanie Hsu as Joy Wang, Ke Huy Quan as Waymond Wang, Michelle Yeoh as Evelyn Wang, and James Hong as Gong Wang in “Everything Everywhere All At Once,” directed by Daniel Kwan and Daniel Scheinert. Cr: Allyson Riggs/A24
“I think one of the things about generational love that we kind of came to while making this was empathy,” Scheinert continues. “We tried to make an empathetic story about how hard it is for our parents’ generation to understand our generation. We tried not to oversimplify that idea by doing an ode to how beautiful it is when someone who grew up in a completely different way goes on the brave journey of trying to understand and support someone so different from themselves.”
“One of the things we realized was, like, we are going to be the old people soon,” Kwan added. “If progress is to happen, the older generation has to be willing to listen, and hopefully, they will listen in the way that they wish they were listened to. And the young generation will have to be kind and patient in the way that they hope that the next generation will be kind and patient with them. It’s obvious to say, of course, but it’s… I think it’s one of the hardest things any human has to do, and I’m hoping that this movie creates space for that kind of conversation because we’re in the middle of it right now. We need that kind of conversation.”
NPR film critic Justin Chang agrees: “For all its cosmic craziness, Everything Everywhere All At Once has a simple emotional message: It’s about how the members of this immigrant family learn to cherish each other again,” he notes in his review.
“It’s also about making peace with the life you’ve lived — and the ones you haven’t. And that sort of sums up how I feel about this funny, messy, moving and often exasperating movie: There may be a better, more focused version of it in some other universe, but I’m still grateful for the one we’ve got.”
Most of the visuals for Everything Everywhere were captured in a warehouse in Simi Valley. “It was big enough that we could wreck one part of the building, then walk away and just go somewhere else in the complex to continue filming while our team restored the initial part of the building,” says production designer Jason Kisvarday.
The film was shot by Swiss Army Man DP Larkin Seiple and edited with manic intensity by Paul Rogers using split screens and blurry overlay effects.
“The directors don’t shy away from the use of dizzying flashing lights, or rapidly shifting light sources that disorient the viewer,” Aurora Amidon notes in her review for Paste. “They also aren’t afraid to implement over-the-top images, like a person’s head exploding into confetti or a butt-naked man flying in slow-motion toward the camera. At the same time, movement between ‘verses feels seamless through Rogers’ meticulous editing, as does the effortless fashion in which different aspect ratios melt into one another.”
Rogers, who has also edited the feature films The Death of Dick Long and You Cannot Kill David Arquette, and the TV series Dream Corp LLC and The Eric Andre Show, discussed how Everything Everywhere was assembled with The Art of the Cut podcast host Steve Hullfish.
“They pitched it to me as ‘We want to make a film, and then we want to break that film, and then we want it to rebuild itself,’” he recounted during the episode.
“It’s hard to describe,” he adds. “It just reaches a point where you have to let go of trying to hold it together as a viewer in your mind, and trying to make sense of everything and fit everything together. You’re going on this journey with Evelyn, the character, and part of her journey is learning to let go.”
During editing, Rogers used Adobe Premiere Pro to create split screens and other effects. “Premiere has always been wonderful because it disappears when I’m using it,” he says.
“I love to split the screen and combine performances or just change the timings between actors, make someone react or speak over a line versus waiting their turn. And I love to be able to do that in a two shot or a wide shot, but that’s just not how the scene plays out.
“So I use Premiere all over this film to do that, to the point where I just didn’t tell them. When they got into finishing, VFX supervisor Zak Stoltz said there were about 30 VFX shots that he wasn’t aware of. There are all these little split-screen things, or changing out an extra in the background, or even a prop.
“Sometimes the way a prop was placed was better in one shot than another shot, so I would throw that in. I can temp together VFX in Premiere in a heartbeat.”
Listen to the full episode in the player below:
Watch Rogers discuss the collaborative, cloud-based post workflow for Everything Everywhere All At Once at the 2022 NAB Show with Adobe’s Meagan Keene:
You can also watch Rogers and Keene in conversation on the NAB Amplify stage at the 2022 NAB Show:
Infinite Storytelling Possibilities
While Everything Everywhere offers a treasure hunt of eclectic cinematic references — ranging from 2001: A Space Odyssey and In the Mood For Love to Ratatouille — Kwan insists their voice is far from that of a cinephile; it was honed instead through things like YouTube videos, Tim and Eric sketches, and the form-breaking anarchy of Japanese anime movies.
“We would put our stuff online, and the algorithm would push it because it was so insane, and then we’d get attention and that positive reinforcement,” Kwan recalls. “We were like, oh, I guess we should be more insane.”
The film is also their response to other multiverse sci-fi movies that annoyed them. Star Trek’s 2009 reboot by JJ Abrams might have featured two Spocks, but they felt the twist didn’t make nearly enough of the mind-bending opportunity.
“My pet peeve is time travel when you introduce it and just do a tiny bit like it’s no big deal,” Scheinert told Eric Kohn at IndieWire. “It would be such a big deal! Like if logic broke down and time didn’t move forward and a million people could go back in time a million number of times there’d be absolute chaos.”
“This movie is 100 percent a response to The Matrix, obviously,” Kwan added. “We wanted to make our version of it.”
Michelle Yeoh as Evelyn Wang in “Everything Everywhere All At Once,” directed by Daniel Kwan and Daniel Scheinert. Cr: Allyson Riggs/A24
Michelle Yeoh as Evelyn Wang and Jamie Lee Curtis as Deirdre Beaubeirdra in “Everything Everywhere All At Once,” directed by Daniel Kwan and Daniel Scheinert. Cr: Allyson Riggs/A24
He’s specifically talking about the original Matrix film’s iconic fighting scenes. “There’s something so entertaining and visceral about it, and we wanted to try to take that kind of energy and satisfying filmmaking and point it towards love and understanding,” Kwan says. “We don’t know how to do that, but we want to see it on the big screen.”
The film also satirizes the trajectory of Marvel’s ever-expanding universe. The MCU is increasingly folding in on itself, with actors playing the same (or different versions of) characters from past movies (Spider-Man: No Way Home) and portals opening into other storylines (the entire Loki series — which the Daniels apparently turned down an offer from Marvel to make), not to mention explicit references such as the forthcoming Doctor Strange in the Multiverse of Madness. It’s clear that the multiverse will be a recurring theme for years to come.
“It’s no wonder that the idea of the multiverse is so popular within the sci-fi genre when there are infinite storytelling opportunities,” Gavin Spoors observes at Space.com. “When films like Spider-Man: Into the Spider-Verse come around, it proves how powerful and exciting the multiverse can be. With infinite universes come infinite possibilities.”
Want more? In this episode of Vanity Fair’s “Notes on a Scene,” Everything Everywhere star Michelle Yeoh and directors Daniel Kwan and Daniel Scheinert discuss how they filmed this action-packed fight scene between Evelyn and Deirdre:
Talking to Insider, stunt coordinator Timothy Eulich and fight choreographers Andy Le and Brian Le break down how the movie’s most impressive action sequences came together. From classic Hong Kong movies and Jackie Chan action figures to break dancing and parkour, Eulich and the Les share what inspired them and the various references they used for Everything Everywhere.
In an episode of NPR’s Short Wave podcast with host Emily Kwong, the Daniels break down how an indie film about laundry and taxes blends the arts with sciences:
Watch the Daniels break down the scene where Waymond (Ke Huy Quan) uses his fanny pack to take down a phalanx of security guards.
“Every good action movie has to have a good kickoff action sequence,” Kwan says, adding that the directors wanted to “show off what this movie is going to be, but in a way that is mysterious, in a way that hopefully causes you to ask a lot of questions and want to keep watching.”
Confused about why Evelyn Wang places a googly eye on her forehead during the film’s climactic sequence? In this video essay, Accented Cinema explains it all, breaking down the hidden metaphors within Everything Everywhere All At Once:
The Daniels also sat down with visual effects artist Zak Stoltz to discuss how the small budget for the film forced them to get creative, especially when it comes to the special and visual effects, in order to create something unique:
Want to learn more about the sound of Everything Everywhere All At Once? In an interview with Dolby, sound designer and sound effects editor Andrew Twite and re-recording mixer and sound supervisor Brent Kiser are joined by the Daniels to discuss their creation.
“We knew we wanted to create something that somehow bridged the gap between big blockbuster action films and really intimate risk-taking indie films,” says Kwan. “And we wanted to find a way to do both at the same time. To carve out space for independent films in theaters, because that’s something that’s slowly being carved out more for these big, big IP blockbuster films.”
If you’re looking for more candid moments with the duo, Alamo Drafthouse asked the Daniels to pitch a concept on the spot for a Don’t Talk/Don’t Text PSA that goes… very sideways:
Exploring the Cinematic References of “Everything Everywhere All At Once”
From The Matrix and Ratatouille to the films of Wong Kar-wai, the hit indie sci-fi feature from the Daniels is filled with “trippy cinematic homages,” writes Vanity Fair’s Yohana Desta.
By Jennifer Wolfe
Over at Vanity Fair, Hollywood writer Yohana Desta takes a deep dive into Everything Everywhere All At Once, examining the abundance of movie homages the Daniels managed to pack into their hit indie sci-fi feature.
Characterizing the film as “a giant nerd,” Desta says, “It’s also a whirlpool, a fun house, an unpredictable Russian doll of ideas and feelings and things, dense with trippy plot points about parallel universes and metaphors about familial love and acceptance. But it’s also a movie about movies, layered with references that not only pay homage to classic films of the past, but also reanimate them with unhinged verve.”
The Matrix
“This movie is one hundred percent a response to The Matrix, obviously,” Daniel Kwan said ahead of the film’s SXSW premiere, Desta notes. “We wanted to make our version of it.”
The film’s press notes detail an afternoon double feature of The Matrix and Fight Club Kwan saw in 1999, and how the experience helped reinvigorate his love of cinema. “I was like, man, if I could just make something half as fun as The Matrix is, but with our own stamp and our spirits, I would just die happy,” he said.
Kung Fu Classics
Everything Everywhere All At Once contains countless nods to martial arts, as Desta observes, including classic kung fu movies such as Clan of the White Lotus, and more recent movies like Quentin Tarantino’s Kill Bill series.
“The biggest nod arrives in the form of Evelyn’s kung fu master, played by Li Jing,” Desta writes. “With her fluffy white eyebrows, mustache, and beard, she’s a direct play on the classic Bak Mei, or Pai Mei character, a kung fu master whose likeness has been portrayed in numerous 1970s and 1980s-era Hong Kong films like Clan of the White Lotus and Executioners From Shaolin.”
2001: A Space Odyssey
Everything Everywhere riffs on the classic opening sequence of Stanley Kubrick’s 2001: A Space Odyssey, which shows early hominids discovering the monolith featured in the sci-fi classic.
“But the scene also delivers the viewers into a rather peaceful moment in the otherwise chaotic film,” Desta points out: “In one of her parallel lives, Evelyn is a prehistoric rock, silently observing and enjoying the natural world… for a few moments, anyway.”
Ratatouille
“Nowhere is Everything Everywhere more chaotically unbridled than in the ‘Racacoonie’ scenes,” Desta writes, pointing to the film’s hilarious send-up of Pixar’s animated hit Ratatouille, which has Evelyn fighting a rival hibachi chef controlled by a raccoon hiding under his toque.
“But the Daniels don’t just float the concept as a comic idea,” she says. “They follow through, returning again and again to the subplot as Chad loses Racacoonie to villainous pest control and, with the help of Evelyn, rescues his furry friend. Rescuing Racacoonie becomes part of the narrative, showing Evelyn’s strength and dogged pursuit of being the hero.”
In the Mood for Love
From director Wong Kar-wai’s “In the Mood for Love.”
“In terms of style and narrative, the Daniels could not be further from someone like Wong Kar-wai,” Desta hypothesizes, “And yet, during a handful of scenes in Everything Everywhere, the duo pays painstaking homage to the director’s oeuvre, particularly his 2000 classic In the Mood for Love.”
One of Evelyn’s parallel lives is that of an action movie hero, much like the film’s star, Michelle Yeoh, and Everything Everywhere employs real-life sequences of Yeoh on the red carpet to depict her counterpart. Some of the most moving moments occur between movie star Evelyn and her husband, Waymond (Ke Huy Quan), evoking the lush, moody visuals of the Hong Kong director’s 2000 art house hit.
“…In that parallel life, Evelyn learns that her success is only possible because she chooses not to be with Waymond, forging a different path ahead,” writes Desta. “Waymond somehow finds his way to the premiere of her latest movie, and the two go outside for a one-on-one conversation. It’s moody and dark, stylized with the same romantic energy as the corridor scenes in In the Mood for Love, in which Tony Leung and Maggie Cheung play two neighbors who can never be together, and must resort to exchanging longing looks that no one else can catch.”
“Everything Everywhere All At Once” Is an Ode to the Internet
By Abby Spessard
If Everything Everywhere All At Once reminds you of the internet, that’s not an accident. The Daniels — filmmaking duo Daniel Kwan and Daniel Scheinert — wanted their film to reflect the tumultuous, rapid-fire zeitgeist of the vast global network that comprises the World Wide Web.
The movie portrays the “uncanny delirium of digital life,” Alex Pasternack writes for Fast Company, complete with “doomscrolling and context-collapsing and flipping between wildly divergent and often confounding realities — between videos of war and wildfires and an awards-show slap — and whatever all of this is doing to our brains and our relationships.”
Speaking with Pasternack, the Daniels shared how they questioned whether a film set in the absurd chaos of infinite universes could have any meaning. Because if everything is possible, does any of it matter?
As it turns out, yes. We create our own meaning, just as we do online. The internet is a form of multiverse in itself, Pasternack argues, capable of creating unique realities for every individual person who uses it. “In the metaverse as in the multiverse, you can see things and perspectives you never knew existed — or you can live in whatever world you please, all others be damned,” he comments. “The characters of Everything don’t mention the internet, but it looms over everything.”
The Daniels conjure Evelyn Wang (Michelle Yeoh) and Joy Wang (Stephanie Hsu), a mother and daughter duo struggling with their own multiple identities. Joy’s alter ego, Jobu Tupaki, is “a bizarro, omnipotent version of Evelyn’s daughter,” as Pasternack writes, who personifies the worst parts of the internet, creating an Everything Bagel of doom that threatens the very existence of the multiverse.
The film’s high stakes echo the equally high stakes we experience in real life.
“Specifically for us: we are millennials, we grew up on the internet, we were the first generation to do so, and our parents didn’t,” Kwan explains. “And so I think that made that gap just a chasm. And so the movie uses the multiverse almost as a metaphor for how the internet has destroyed our minds. And how our parents are trying to figure out how to fix this.”
After more than a year’s worth of buzz, “Everything Everywhere All at Once” fulfilled the promise of its critical acclaim at the 2023 Academy Awards. The A24 film was the biggest winner of the evening, taking home Oscars in seven categories, making it the “most-awarded best picture winner since 2008’s ‘Slumdog Millionaire,’” per Variety.
EEAAO’s wins:
Best Picture
Best Actress (Michelle Yeoh)
Best Supporting Actor (Ke Huy Quan)
Best Supporting Actress (Jamie Lee Curtis)
Best Director (Daniel Kwan and Daniel Scheinert)
Best Original Screenplay (Daniel Kwan and Daniel Scheinert)
Best Film Editing
It was also nominated for Best Costume Design and Best Original Score, while the film’s Stephanie Hsu was put forward in the Best Supporting Actress category but lost to her co-star Curtis (and her hot dog fingers).
“Winning Time” Is a Wild Ride: Here’s How To Remake History
Produced and directed by Adam McKay, HBO’s new docudrama “Winning Time: The Rise of the Lakers Dynasty” shows how the Lakers changed the way basketball is played. Cr: Warner Media
In the ’70s, the business of basketball was overshadowed by show business itself, especially in Los Angeles, where Hollywood ruled the roost. Meanwhile, the NBA languished in the background, having all but lost its way.
In fact, Noel Murray’s review of the new HBO docuseries Winning Time: The Rise of the Lakers Dynasty for AV Club called it a niche league with a lack of “likeable players — or, to put it in the terms the era’s television executives were using behind closed doors, a lack of white players.”
John C. Reilly as Jerry Buss in episode 1 of “Winning Time: The Rise of the Lakers Dynasty.” Cr: Warner Media
Quincy Isaiah as Magic Johnson and DeVaughn Nixon as Norm Nixon in episode 1 of “Winning Time: The Rise of the Lakers Dynasty.” Cr: Warner Media
Jason Clarke as Jerry West in episode 2 of “Winning Time: The Rise of the Lakers Dynasty.” Cr: Warner Media
But, by the close of the decade, things started to reverse. More than a decade of “showtime” — the phrase used to describe the team’s relentless style of play and the razzle-dazzle, sports-as-entertainment atmosphere — began, with the Lakers winning five NBA championships along the way. As Murray recounts, “The 1979 NCAA basketball championship broke viewership records, because of the marquee matchup of the intense Indiana farm boy Larry Bird and the flashy Michigan kid Earvin ‘Magic’ Johnson, both of whom were about to be NBA rookies.”
But the switch-up wasn’t due just to Bird and Magic. Sopan Deb at The New York Times drilled down to pick out other catalysts of change. “The team crossed over into pop culture consciousness in a way no NBA franchise had. It spurred discussions about the place of money, race, celebrity and sex in the game. With their brash new-money owner, Jerry Buss, the Lakers challenged what was then the status quo — which included poor attendance and ratings. They helped save the league.”
HBO’s newest docudrama is based on the book Showtime, by sports journalist Jeff Pearlman, but it is executive producer and pilot director Adam McKay (The Big Short, Don’t Look Up) who arguably gives the show its edge.
Deb describes the style as “chatty, fast-paced and fourth-wall-breaking. Cuts are frenetic, needles drop hard, and characters frequently deliver commentary and exposition straight to the camera. Grainy film and glitchy video mix with real and faux archival footage, adding to the vintage vibes.”
But McKay had a vested interest. “I was a hardcore Celtics fan,” he told Entertainment Weekly’s Derek Lawrence. “I hated the Lakers in the ‘80s — they were the villains. It wasn’t until later that I realized, no, the Celtics were the villains, and the Lakers were actually incredible; they changed the way basketball is played, the way it related to the culture, and the way celebrities were created out of the sport.”
“It involves not kings and queens, but celebrities, entrepreneurs, and visionaries who were changing culture,” said showrunner Max Borenstein, who adapted the Showtime book for television.
Josh Spiegel, at Slash Film, celebrated the cast, especially John C. Reilly as Lakers owner Jerry Buss, a role that by all accounts was previously promised to Will Ferrell. “Will Ferrell was initially set to play Buss, but was recast due to not looking as much like the late billionaire; the choice frankly makes sense,” Spiegel concluded.
Other standout performances come from Adrien Brody and Jason Segel, as Lakers coaches Pat Riley and Paul Westhead, along with Quincy Isaiah as Magic Johnson and Solomon Hughes as Kareem Abdul-Jabbar.
Quincy Isaiah as Magic Johnson and Tamera Tomakili as Cookie Keely in episode 2 of “Winning Time: The Rise of the Lakers Dynasty.” Cr: Warner Media
“It could all be too showy and distracting, but the performances pull you in, and keep everything afloat,” says NPR’s David Bianculli. He praises actors who “can play for comedy and for drama with equal effectiveness”, while those portraying the well-known Lakers stars “pull off their portrayals with exceptional flair, both off and on the court.”
“Another nice surprise is how much attention Winning Time devotes to its women,” Bianculli adds. “From company employees to players’ mothers, wives and girlfriends, they’re all given their own chances to shine, and to have their say.”
In a wide-ranging discussion on The Hollywood Reporter podcast, TV’s Top Five, Borenstein opens up about his pitch to turn Pearlman’s book into a TV show and balancing fact versus fiction when it comes to the portrayal of Johnson and Abdul-Jabbar.
“Sometimes we’re compositing characters for convenience and dramatic ideas or we’re creating composite characters,” says Borenstein. “Sometimes it’s a matter of we know certain tips of the iceberg — who had a relationship with Magic — and we know aspects of that relationship but they’re not a public figure, so we’d create someone new.”
Deadspin’s Lee Escobedo interviewed Borenstein, who said he was keen to “avoid the cinematic pitfalls of other basketball-centric movies by focusing on the cultural epoch the Lakers were reborn into.”
Borenstein explained how he saw this bigger picture: “To me, it is the perfect way to approach this incredible epic about the American Dream. It’s a moment about a cultural transformation in the modern NBA. A lens we can use to look at this incredible era in our country’s history and our recent history.”
The showrunner encapsulated what he meant, depicting the famous two rookies in terms that the TV advertisers could work with. “In comes Magic Johnson. And on the other coast, Larry Bird. Two guys who were perceived to be as different as two people could be. Magic is massively charismatic. Built for cameras. Instantly gregarious to reporters. And, who happened to be black. And you had a guy in Larry Bird who happened to be white, who was an equally great player, but dour, not interested in media and didn’t have natural charisma with the media that Magic did,” Borenstein said.
“It’s a story about the business of professional sports, that is an epic, not about a single season but rather a dynasty.”
Quincy Isaiah as Magic Johnson and Solomon Hughes as Kareem Abdul-Jabbar in episode 10 of “Winning Time: The Rise of the Lakers Dynasty.” Cr: Warner Media
Quincy Isaiah as Magic Johnson and Solomon Hughes as Kareem Abdul-Jabbar in episode 7 of “Winning Time: The Rise of the Lakers Dynasty.” Cr: Warner Media
Jason Segel as Paul Westhead in episode 4 of “Winning Time: The Rise of the Lakers Dynasty.” Cr: Warner Media
Borenstein also had an unusual comparison to make for his show, as he told Jamie Burton at Newsweek. “I’ve compared it, not tonally or in any other way, to The Crown or a show like The Crown, in the sense that it’s based in fact.”
He continued: “It’s inspired by a true story. Obviously, there are liberties taken because it’s a dramatization and we have an incredible cast playing these characters.”
Producer Rodney Barnes is sure that they are giving the fans what they want. He told The Hollywood Reporter’s Lacey Rose that the moments fans wanted are all there: “…the seminal moments,” he said, “the hug between Magic and Kareem, the championships, the passes, the skyhook.”
Solomon Hughes as Kareem Abdul-Jabbar in episode 2 of “Winning Time: The Rise of the Lakers Dynasty.” Cr: Warner Media
“I’m old enough to have seen a lot of sports-themed movies and TV shows. And more often than not, the players are relegated to a one-dimensional idea,” Barnes tells Chris Koseluk in an interview for the Motion Picture Association’s The Credits. “That’s the funny one. That’s the bad one. That’s the surly one. And the narrative is about the coach, the owner, or a particular player. Here, we got an opportunity to really get into the nuance of the human part of being a professional athlete.”
At the same time, the writers were careful not to do a disservice to some of basketball’s biggest stars, Barnes tells Koseluk: “We’re fans of these guys. We appreciate what they accomplished. You’re trying to make this a love letter — a show of appreciation more so than anything else. So, it’s a delicate balance of storytelling, while still being true to the times and respectful at all times.”
Adrien Brody as Pat Riley in episode 3 of “Winning Time: The Rise of the Lakers Dynasty.” Cr: Warner Media
Vulture‘s review by Jen Chaney seems to get hung up on the visual aesthetic. “The visual patina might best be described as Late-’70s Sepia Haze. It is purposely grainy as a nod to its era and washed out frequently by the blinding L.A. sun. Even if you watch Winning Time in HD, it will still look like an old glitchy videotape or a 1980 broadcast coming through with cloudy reception on a TV with rabbit ears.”
But Chaney doubles back when the actual basketball begins. “I love that this series devotes minutes to observing strategy sessions and Laker practices where the guys try to undo years of playing traditional, slower basketball.
“Winning Time shows us the late nights spent studying plays, early morning shootarounds, and locker-room arguments that constitute a day at the Lakers’ office. That is much appreciated.”
Jason Clarke as Jerry West in episode 1 of “Winning Time: The Rise of the Lakers Dynasty.” Cr: Warner Media
The “glitchy, 1980s” look of the series is of course very deliberate, but actually has its roots in the production lacking any NBA permission to use basketball archival footage. IndieWire’s Bill Desowitz takes an in-depth look at the process that McKay, co-cinematographers Todd Banhazl (Hustlers) and Mihai Malaimare Jr. (The Harder They Fall), and Oscar-nominated editor Hank Corwin (Don’t Look Up) used to “evoke the ‘80s as a cultural snapshot” in the series.
According to Desowitz, Banhazl decided to emulate the look of Kodak’s long-defunct Ektachrome for the present day ‘70s and ‘80s, and the even older Kodachrome look for the ‘50s and ‘60s. “Playing into Corwin and McKay’s mix-and-match archival style, the DP shot a variety of film stocks — 35mm, 16mm, and 8mm color and black-and-white — and even incorporated long outdated tube video technology (Ikegami ITC-730A and HL-79 cameras),” says Desowitz.
Banhazl push processed all the 35mm color film to make it grainier and add contrast, underexposed the negative so that it was dirtier, instructed the film lab not to dust-bust the negative before scanning (thus leaving dirt particles on the neg), then pushed the look even further digitally in the grade with Company 3 colorist Walter Volpatto. They also shot a lot on 8mm, sometimes putting a pistol grip on the 8mm camera to give footage a home movie look.
“They took this even further by intercutting 8mm or Ikegami footage during present-day scenes to extend the archival look for greater intimacy and vulnerability,” says Desowitz.
DeVaughn Nixon as Norm Nixon and Quincy Isaiah as Magic Johnson in episode 3 of “Winning Time: The Rise of the Lakers Dynasty.” Cr: Warner Media
Banhazl was recently recognized for his work on Episode 5, “Pieces of a Man,” with an Emmy nomination for Outstanding Cinematography for a Single-Camera Series (One Hour).
“Our scripts had a kaleidoscopic, maximalist bravado and we wanted that reflected in the images. We based the looks on the dominant advertising styles of the 1960s, ‘70s, and ‘80s, to reinterpret our collective memory of what America looked like to Americans at the time,” he tells IndieWire’s Erik Adams and Chris O’Falt.
“Our main look was an older 35mm Ektachrome reversal film, and mixing into that we used 8mm to recreate a sense of time and place, documentary-style 16mm for basketball and for emotional accents within scenes, as well as vintage Ikegami tube video cameras from the 1980s to recreate the famous basketball games on TV as well as during narrative scenes to see our characters in a more vulnerable way contrasting with the more bold 35mm. We also used black and white film for special shots within scenes for jazzy accents.
“The idea was to blur the line between documentary realism and the iconic mythic worlds that these characters inhabited. I always thought of the visual style of the show as a collage of textures, images, and ideas: an American culture mixtape.”
Quincy Isaiah as Magic Johnson and Solomon Hughes as Kareem Abdul-Jabbar in episode 5 of “Winning Time: The Rise of the Lakers Dynasty.” Cr: Warner Media
It all goes to show that necessity can often be a wellspring for creativity. One has to wonder what the show would have looked like had there been cooperation from the subjects involved.
Not that Borenstein seems too bothered. According to Alexandra Del Rosario at Deadline, the series co-creator addressed the lack of cooperation from the Lakers themselves and the Buss family during a CTAM session, saying, “We made this show as fans with a tremendous amount of respect and love for all these characters of the NBA and Lakers and I think it hopefully shows on screen. I can only imagine how strange it must be to have a movie made about your life, or show made about any aspect of your life so I never presume what people will or won’t do but on our end, this was made with great love and appreciation.”
John C. Reilly as Jerry Buss, Quincy Isaiah as Magic Johnson, and Kirk Bovill as Donald Sterling in episode 1 of “Winning Time: The Rise of the Lakers Dynasty.” Cr: Warner Media
IndieWire’s Samantha Bergeson digs deeper, reporting that the show’s team has made clear that the NBA and athletes featured in the series aren’t profiting from Winning Time. “HBO confirmed that NBA league lawyers have reached out to the network regarding the use of official NBA logos and trademarks,” says Bergeson, while a Lakers representative has said: “We have no comment as we are not supporting nor involved with this project.”
“The real-life Johnson previously said he was ‘not looking forward’ to the premiere of Winning Time, and is instead focusing on his own upcoming four-part Apple TV+ docuseries, They Call Me Magic,” she adds.
“Former teammate Abdul-Jabbar also noted that ‘the story of the Showtime Lakers is best told by those who actually lived through it,’” Bergeson continues. “Both Abdul-Jabbar and Johnson are participating in a ‘Lakers-sanctioned’ Hulu docuseries to be released in late 2022.”
Star Quincy Isaiah and co-creator Jim Hecht are keen to counter this judgement from the athletes.
John C. Reilly as Jerry Buss and Quincy Isaiah as Magic Johnson in episode 1 of “Winning Time: The Rise of the Lakers Dynasty.” Cr: Warner Media
“I just know what we put into it, the respect and the admiration that we have, and I just hope that that comes across,” Isaiah tells Kirsten Chuba at The Hollywood Reporter. “It’s about 1979 and a rookie at 20 years old moving from Michigan to L.A. I hope that people understand that and that who we’re talking about in the show isn’t the man he is today.”
“We wanted to have a perspective that was objective, and there was always this thought that if you go to one person it becomes their story,” Hecht tells Chuba. “I love The Last Dance, love it, but that’s Michael [Jordan]’s story, and this… is a lot of people’s story and has a different perspective on it. That’s what makes it a good drama because you should root for all of our characters when they’re in competition with each other, when only one can win, and they’re all coming from some place real.”
Want more? In the video below you can watch former Laker Rick Fox in conversation with Quincy Isaiah and executive producers Adam McKay and Max Borenstein about the first episode of the series:
Posted December 17, 2021
Sounding the Depths of BBC/Peacock Submarine Drama “Vigil”
Suranne Jones as Amy Silva in season 1 episode 6 of “Vigil.” Cr: NBCUniversal
BBC/Peacock murder mystery Vigil, set aboard a nuclear submarine, was the broadcaster’s biggest domestic ratings hit of 2021. Now that the show is available via BBC and ITV streamer BritBox, among other outlets, it’s time to take a deep dive into how it was made.
The indie outfit behind the six-part drama has pedigree. World Productions, part of ITV Studios, produces the hugely popular police thriller series Line of Duty, and also made the political thriller Bodyguard, which was the highest-rated BBC drama for three years until Vigil broke that record, reaching 13.4 million viewers earlier this year.
Lauren Lyle as Jade and Rose Leslie as Kirsten Longacre in season 1 episode 2 of “Vigil.” Cr: NBCUniversal
Martin Compston as Craig Burke in season 1 episode 1 of “Vigil.” Cr: NBCUniversal
Shaun Evans as Glover in season 1 episode 1 of “Vigil.” Cr: NBCUniversal
Rose Leslie as Kirsten Longacre and Suranne Jones as Amy Silva in season 1 episode 1 of “Vigil.” Cr: NBCUniversal
The show is set in Scotland (where the UK’s nuclear submarine fleet is based), and the premise is that a land-based detective is tasked to investigate the death of one of the crew of the HMS Vigil while the sub is at sea. Things get murkier from there.
Given the nature of the subject matter, the Royal Navy did not offer much in the way of cooperation. Nuclear subs are shrouded in secrecy.
“I don’t think that they were interested in engaging with us,” Matt Gray, BSC, noted in an interview with British Cinematographer. “[Nuclear submarines] are just short of one and a half football pitches long, eight double decker buses deep, generate their own airflow, and have unlimited power. They are designed to be invisible. We also had to piece together the world of the Navy base which was a combination of visual effects and the clever use of locations.”
The Hunt for Red October was a natural touchpoint for designing the visual grammar of the show, along with sci-fi films like Alien for creating a claustrophobic environment deep underwater.
“It’s completely artificial… and what it does to your senses and mind,” says Gray, who was hired for Vigil by series director James Strong. “We tried to create a sense of depth. You are on different decks and the way that piece of engineering is created you have your nuclear tubes and reactor, and the human element fits in around those components.”
Season 1 episode 1 of “Vigil.” Cr: NBCUniversal
Stephen Dillane as Shaw in season 1 episode 1 of “Vigil.” Cr: NBCUniversal
Paterson Joseph as Newsome in season 1 episode 1 of “Vigil.” Cr: NBCUniversal
Connor Swindells as Hallow in season 1 episode 2 of “Vigil.” Cr: NBCUniversal
Anjli Mohindra as Doc Doc in season 1 episode 3 of “Vigil.” Cr: NBCUniversal
Series creator Tom Edge had done a lot of research, but a large part of the show’s credibility stems from the production design of the submarine. Edge and production designer Tom Swayer talked to former submariners and scoured the internet for similar vessels. The space had to be big enough to contain the story action and flexible enough to work in and yet retain all the claustrophobia of a real submarine. For the interior of the HMS Vigil, LED Astera tubes were linked back to an iPad via Wi-Fi.
“We did some testing in the submarine set, but some adjustments had to be made so that the Steadicam was able to take full advantage,” Gray tells Definition. “We wanted the camera to be constantly moving and flowing through the environment — never letting it get too static, not to distract from the pace of the story.”
A lot of the tension in the series comes from the twin tracks of story — one on the submarine and the other following another detective investigating on land.
Gray gave each track a different but complementary color scheme, as he explained to British Cinematographer: “We were working with more man-made industrial colors inside of the sub like acidic yellows, greens and reds. There were different states so when the sub was on different levels of power that was denoted by the way the lighting would adjust. When on land, we tried to have natural interpretations of the same colors.”
Anjli Mohindra as Doc Doc and Suranne Jones as Amy Silva in season 1 episode 5 of “Vigil.” Cr: NBCUniversal
Shaun Evans as Glover in season 1 episode 3 of “Vigil.” Cr: NBCUniversal
Suranne Jones as Amy Silva in season 1 episode 5 of “Vigil.” Cr: NBCUniversal
Anjli Mohindra as Doc Doc and Paterson Joseph as Newsome in season 1 episode 6 of “Vigil.” Cr: NBCUniversal
He shot with a pair of ARRI Alexa LF cameras in 4K HDR, delivered in a 2:1 aspect ratio and graded in ACES. The lenses were a combination of Zeiss Master Anamorphic primes, a 75 mm Kowa anamorphic, and Tokina Vista sphericals.
Goodbye Kansas Studios delivered 180 VFX shots for the series, including a detailed 150-meter model of the Trident submarine — again having to create a convincing model through extensive research.
Rose Leslie as Kirsten Longacre in season 1 episode 3 of “Vigil.” Cr: NBCUniversal
Gary Lewis as Robertson and Rose Leslie as Kirsten Longacre in season 1 episode 3 of “Vigil.” Cr: NBCUniversal
Rose Leslie as Kirsten Longacre and Gary Lewis as Robertson in season 1 episode 4 of “Vigil.” Cr: NBCUniversal
Suranne Jones as Amy Silva and Rose Leslie as Kirsten Longacre in season 1 episode 3 of “Vigil.” Cr: NBCUniversal
Detailing the process for Befores & Afters, VFX supervisor Jim Parsons said, “We even went so far as talking to a former Navy officer… obviously without breaking any official secrets! The next challenge was to submerge HMS Vigil into the ‘digital North Sea’, developing each shot to make the submarine look like more than a long object in a dark ocean. We created a thickness to the water that allowed pools of light through it, creating a sinister and ominous mood, with every shot of the submarine adding to the atmosphere of the show’s mystery.”
Some important scenes feature a fishing trawler, which, due to the complexity of the sequences, had to be shot in many different locations, including in a bay and at a stationary dock.
“A lot of our work involved removing external scenery, creating the illusion that it was nowhere near land,” Parsons explained. “Some underwater scenes with actors were also filmed at the Pinewood Studios water tank, which involved having to remove the external scenery in the edit and create VFX surroundings of a lake in the Scottish Highlands.”
Also taking a remote look at the production was Glasgow-based post-production facility Blazing Griffin Post. Describing the finishing process on Vigil on the Sohonet blog, Niran Sahota said that the production might have been sunk without the use of remote review and collaboration tool Sohonet ClearView Flex.
The first block of three episodes was graded by colorist Colin Brown with Gray in attendance at Blazing Griffin, but when a COVID surge in Scotland towards the end of 2020 forced more local lockdowns and restricted travel, the post process turned to the live streaming solution.
Brown, who worked on developing the show LUTs with Gray, was initially skeptical about remote finishing: “Grading is collaborative and trying to pitch a look that satisfies the DP, director, producers and executives can be tricky enough at times in the suite let alone doing so remotely — but the pandemic forced our hand,” he said. “Matt and I had great grading sessions in my suite in Glasgow, but we didn’t have any final VFX at that time. Episode 1 had a lot of important VFX sequences which we had to get right.”
All the key decision makers were brought together over Sohonet ClearView Flex to offer real-time input to refine the grade, which Brown carried out in DaVinci Resolve.
For the second block of episodes, directed by Isabelle Sieb and lensed by DP Ruairí O’Brien, Brown sent out iPads specially calibrated at Blazing Griffin to enhance the review process.
“As long as Isabelle and Ruairí were in a dark enough room, the results would match my suite when streamed on ClearView Flex,” recalled Brown. “I have an iPad in my suite too, so I was confident that the output matched my Grade 1 monitor. We worked our way through the grade, chatting on Zoom, as if sitting in the suite. There really was no difference with it being remote and it felt just like a typical grading session.”
Director Strong described the opening 20 minutes of the series as the most audacious, complex and exciting footage he has ever shot.
“We had to film the sinking of a boat in the middle of the North Sea and then helicopter our hero onto a moving submarine 200 miles off the Scottish coast,” he said to the BBC. “It took months and months of planning, breaking it down shot by shot and deciding how to do each frame, utilizing all the different cinematic tools, kit and techniques available. It was a monumental effort from all the departments involved and I’m truly thrilled with the end results.”