Past and Present Intersect in Steve McQueen’s “Occupied City”
Adrian Pennington
TL;DR
Academy Award-winning director Steve McQueen’s film “Occupied City” is a four-hour documentary that provides two simultaneous portraits of Amsterdam: one current, the other a record of atrocities during the Nazi occupation of the Netherlands in the early 1940s.
“Occupied City” is the second recent feature film, following “The Zone of Interest,” to address the Holocaust without resorting to overused archival imagery.
McQueen based “Occupied City” on a book written by his wife, the Dutch historian and filmmaker Bianca Stigter. They describe it as a collision of the ghosts of Amsterdam’s past and the reality of the city’s present.
Occupied City is the second recent feature film, following The Zone of Interest, to address the Holocaust without resorting to overused imagery. This four-hour feature documentary by British director Steve McQueen concerns the Nazi occupation of Amsterdam during World War II but doesn’t use archive footage or talking heads, nor does it dramatize any scenes.
The film is based on Atlas of an Occupied City: Amsterdam 1940-1945, a historical encyclopedia written by McQueen’s wife, the Dutch historian and filmmaker Bianca Stigter.
“Bianca had written this extraordinary book, and it’s all her research over the last 20 years or more,” the director explained to A.frame. “It’s not the first book you’d ever think we’d translate into a movie. It’s not an obvious choice.”
Using the text of Atlas as narration, McQueen (who won Best Picture with 2013’s 12 Years a Slave) juxtaposes the history of the city and explanatory narration by Melanie Hyams with footage of life in Amsterdam today, which he shot over the course of several years beginning in 2019 and through the pandemic lockdowns.
“What I wanted was, as you would do in a city, you get lost,” McQueen told IndieWire’s Filmmaker Toolkit podcast, adding that the film was a bit like an English garden. “Unlike a French garden, which is all about the avenues; it’s very symmetrical, very formal. An English garden [has] more to do with wandering and the contemplating and lots of ideas come from those places of wandering and pondering.”
Stigter describes the film as more of a free wandering through the city, while the book is set up more practically, like a guidebook.
In one scene, the elderly owner of an apartment where Occupied City filmed shows the crew her country line-dancing. Set against Hyams’ narration of what happened there during the war, the owner’s joyful dancing suggests that she, too, might have her own story of the Nazi occupation.
“There’s something excessive about the movie because — besides from what you see, you also think, ‘What do these people [we’re seeing] have in their heads [from that time]?’” Stigter told IndieWire.
McQueen, who lives in Amsterdam with his wife, found the experience of living in a city once occupied by the Nazis an unsettling one.
“My daughter’s school was once an interrogation center. Where my son went to school was a Jewish school, so these things were in my every day,” he told A.frame. “When it’s sinking into your pores, you start thinking about it. Coming from London, not having grown up in an occupied city but being here now, it felt like I was living with ghosts. It’s almost like an archaeological dig. This is recent history within the last 85 or 90 years, and I thought this could be fascinating. It is two existences: My presence and another presence.”
Initially, McQueen thought he’d find archive footage of Amsterdam in WWII to project on top of the present-day footage, but he then decided to use narration based on Stigter’s text and merge the two together.
“There’s optimism in [Hyams’] voice, even though there was a dispassionate sort of description of what was going on,” he told NPR’s Asma Khalid. “And that was because I didn’t want to manipulate the audience. It was about the audience bringing the information, receiving the information for the first time.”
He described the process of shooting on 35mm — his favored medium — as a ritual. “It’s so precious, this footage, and it actually adds to the tension of being careful about how you approach the moment,” he shared with the audience at the New York Film Festival.
“It was shooting without a tightrope, in a way,” he added to A.frame. “Young people today shoot digitally; they spray the whole area, shooting for 60 hours and cutting it down to half an hour. You can’t do that with film. The process of making a film and working with Lennert Hillege, the DP, the sound people, and others, it was a beautiful ritual every time we took the camera. I think that was extremely helpful in capturing things, because everyone was very focused.”
Addressing the length of the film, McQueen said it couldn’t be told in an hour and a half. “It needed that contemplation, needed meditations to sort of get into the psyche of the cinema experience, and that time was very important for us,” he told NPR.
Speaking again to A.frame, Stigter said, “It’s essential to have ways to bring history to the fore. We have documentaries, books, and feature films, and this is trying to tell you things about the past in a different way. That’s also why the length is important. It turns it more into a meditation or an experience than a history lesson.”
McQueen, who began his career making video installation art, is also preparing a “36-hour sculptural version” as an art piece. “There are 36 hours of edited footage,” he informed A.frame. “From that 36 hours of edited footage, we took out these four hours, because making a feature film is a very different experience than making the sculptural element of it. Certain things are repeated in that, but you don’t want to do that in a feature film. In some ways, after a particular moment, it condenses itself, and then you decide what you want to keep in and what you want to take out to make it a certain kind of journey.”
Occupied City ends with a bar mitzvah ceremony because it was important to McQueen and Stigter to show the persistence of Jewish life in Amsterdam.
Speaking at the New York Film Festival, Stigter said, “For me the last scene is also very important to show something of contemporary Jewish life in the city, and that was a very beautiful and hopeful conclusion for the movie.”
“I often think watching a movie is like a religious experience,” McQueen added to A.frame. “You’re trying to create meaning in what you see. In this case, the more you know, the less you know.”
He continued this theme with NPR, saying, “When you go to the movies, people try to connect the dots and try to make sense of things. But the lessons learned from this situation is that nothing makes sense. How can you even fathom or sort of get to an understanding of how, for example during this war, six million people died. Try and make sense of that.”
AI Is Here — and Everywhere: Researchers Look at the Challenges for 2024
2023 was an inflection point in the evolution of artificial intelligence and its role in society. The year saw the emergence of generative AI, which moved the technology from the shadows to center stage in the public imagination. It also saw boardroom drama in an AI startup dominate the news cycle for several days. And it saw the Biden administration issue an executive order and the European Union pass a law aimed at regulating AI, moves perhaps best described as attempting to bridle a horse that’s already galloping along.
We’ve assembled a panel of AI scholars to look ahead to 2024 and describe the issues AI developers, regulators and everyday people are likely to face, and to give their hopes and recommendations.
Casey Fiesler, Associate Professor of Information Science, University of Colorado Boulder
2023 was the year of AI hype. Regardless of whether the narrative was that AI was going to save the world or destroy it, it often felt as if visions of what AI might be someday overwhelmed the current reality. And though I think that anticipating future harms is a critical component of overcoming ethical debt in tech, getting too swept up in the hype risks creating a vision of AI that seems more like magic than a technology that can still be shaped by explicit choices. But taking control requires a better understanding of that technology.
One of the major AI debates of 2023 was around the role of ChatGPT and similar chatbots in education. This time last year, most relevant headlines focused on how students might use it to cheat and how educators were scrambling to keep them from doing so — in ways that often do more harm than good.
However, as the year went on, there was a recognition that a failure to teach students about AI might put them at a disadvantage, and many schools rescinded their bans. I don’t think we should be revamping education to put AI at the center of everything, but if students don’t learn about how AI works, they won’t understand its limitations — and therefore how it is useful and appropriate to use and how it’s not. This isn’t just true for students. The more people understand how AI works, the more empowered they are to use it and to critique it.
So my prediction, or perhaps my hope, for 2024 is that there will be a huge push to learn. In 1966, Joseph Weizenbaum, the creator of the ELIZA chatbot, wrote that machines are “often sufficient to dazzle even the most experienced observer,” but that once their “inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away.” The challenge with generative artificial intelligence is that, in contrast to ELIZA’s very basic pattern matching and substitution methodology, it is much more difficult to find language “sufficiently plain” to make the AI magic crumble away.
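To make Weizenbaum’s point concrete, here is a minimal, hypothetical sketch in Python of the kind of pattern matching and substitution methodology ELIZA relied on. It is an illustration only, not Weizenbaum’s original program; the rules and responses are invented for the example.

```python
import random
import re

# A tiny ELIZA-style rule table: a regex pattern plus canned response templates.
# "{0}" is filled with the matched fragment after simple pronoun reflection.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
    (r"(.*)", ["Please go on.", "I see. Can you tell me more?"]),
]

# Flip first- and second-person words so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}


def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())


def respond(sentence: str) -> str:
    text = sentence.lower().strip()
    for pattern, responses in RULES:
        match = re.match(pattern, text)
        if match:
            fragment = reflect(match.group(1)) if match.groups() else ""
            return random.choice(responses).format(fragment)
    return "Please go on."


print(respond("I need a break from AI hype"))
# Possible output: "Why do you need a break from ai hype?"
```

Laid out this plainly, the “magic” does crumble away; the open question is whether any comparably plain explanation exists for a model with billions of learned parameters.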
I think it’s possible to make this happen. I hope that universities that are rushing to hire more technical AI experts put just as much effort into hiring AI ethicists. I hope that media outlets help cut through the hype. I hope that everyone reflects on their own uses of this technology and its consequences. And I hope that tech companies listen to informed critiques in considering what choices continue to shape the future.
Kentaro Toyama, Professor of Community Information, University of Michigan
In 1970, Marvin Minsky, the AI pioneer and neural network skeptic, told Life magazine, “In from three to eight years we will have a machine with the general intelligence of an average human being.” The singularity, the moment artificial intelligence matches and begins to exceed human intelligence, is not quite here yet, so it’s safe to say that Minsky was off by at least a factor of 10. It’s perilous to make predictions about AI.
Still, making predictions for a year out doesn’t seem quite as risky. What can be expected of AI in 2024? First, the race is on! Progress in AI had been steady since the days of Minsky’s prime, but the public release of ChatGPT in 2022 kicked off an all-out competition for profit, glory and global supremacy. Expect more powerful AI, in addition to a flood of new AI applications.
The big technical question is how soon and how thoroughly AI engineers can address the current Achilles’ heel of deep learning — what might be called generalized hard reasoning, things like deductive logic. Will quick tweaks to existing neural-net algorithms be sufficient, or will it require a fundamentally different approach, as neuroscientist Gary Marcus suggests? Armies of AI scientists are working on this problem, so I expect some headway in 2024.
Meanwhile, new AI applications are likely to result in new problems, too. You might soon start hearing about AI chatbots and assistants talking to each other, having entire conversations on your behalf but behind your back. Some of it will go haywire — comically, tragically or both. Deepfakes, AI-generated images and videos that are difficult to detect, are likely to run rampant despite nascent regulation, causing more sleazy harm to individuals and democracies everywhere. And there are likely to be new classes of AI calamities that wouldn’t have been possible even five years ago.
Speaking of problems, the very people sounding the loudest alarms about AI — like Elon Musk and Sam Altman — can’t seem to stop themselves from building ever more powerful AI. I expect them to keep doing more of the same. They’re like arsonists calling in the blaze they stoked themselves, begging the authorities to restrain them. And along those lines, what I most hope for 2024 — though it seems slow in coming — is stronger AI regulation, at national and international levels.
Anjana Susarla, Professor of Information Systems, Michigan State University
In the year since the unveiling of ChatGPT, the development of generative AI models is continuing at a dizzying pace. In contrast to ChatGPT a year back, which took in textual prompts as inputs and produced textual output, the new class of generative AI models is trained to be multi-modal, meaning the data used to train them comes not only from textual sources such as Wikipedia and Reddit, but also from videos on YouTube, songs on Spotify, and other audio and visual information. With the new generation of multi-modal large language models (LLMs) powering these applications, you can use text inputs to generate not only images and text but also audio and video.
The deluge of synthetic content produced by generative AI could unleash a world where malicious people and institutions can manufacture synthetic identities and orchestrate large-scale misinformation. A flood of AI-generated content primed to exploit algorithmic filters and recommendation engines could soon overpower critical functions such as information verification, information literacy and serendipity provided by search engines, social media platforms and digital services.
The Federal Trade Commission has warned about fraud, deception, infringements on privacy and other unfair practices enabled by the ease of AI-assisted content creation. While digital platforms such as YouTube have instituted policy guidelines for disclosure of AI-generated content, there’s a need for greater scrutiny of algorithmic harms from agencies like the FTC and lawmakers working on privacy protections such as the American Data Privacy & Protection Act.
A new bipartisan bill introduced in Congress aims to codify algorithmic literacy as a key part of digital literacy. With AI increasingly intertwined with everything people do, it is clear that the time has come to focus not on algorithms as pieces of technology but on the contexts in which those algorithms operate: people, processes and society.
A senior scientist at IBM likens the breakthrough year of gen AI to the appearance of the first web browsers, but, just as with the internet, power and wealth may accrue to only a few companies or people.
Darío Gil, SVP and director of Research at IBM, said that at the birth of the Web browser “people imagined experiencing the internet. All of a sudden there was this network of machines, and content can be distributed, and everybody can self-publish.”
Perhaps this should instead be termed a “tipping point,” a phrase popularized by Gil’s onstage interlocutor, the author and culture commentator Malcolm Gladwell.
“Previously, AI systems were behind the scenes, [computing] your search results, or translation systems,” Gil said. “Now people can touch it. Fundamentally, this moment is when the number of people that can build and use AI has skyrocketed.”
Gladwell asked the computer scientist to place AI on a list of the biggest technological achievements of the last 75 years.
Gil said he would put it first.
“Since World War II, undoubtedly, computing has reshaped our world. And within computing, I would say, the role that semiconductors have had has been incredibly defining. I would say AI is the second example of that as a core architecture, that is going to have an equivalent level of impact.”
The third leg will be quantum information, he said. “I like to summarize that the future of computing is bits, neurons, and qubits. It’s that idea of high-precision computation — the world of neural networks and AI and quantum. The combination of those things is going to be the defining force of the next 100 years in computing.”
Leverage AI — and Add Value
However, the distribution of power when it comes to AI will not be equal, Gil predicted.
“It will likely be true that the use of AI will be highly democratized, meaning the number of people who have access to its power to make improvements in terms of efficiency and so on will be fairly universal, and also that the ones who are able to create AI may be quite concentrated.”
Gil elaborated, “If you look at it from the lens of who creates wealth and value over sustained periods of time, then just being a user of AI technology is an insufficient strategy.
“And the reason for that is you will get the immediate productivity boost of API which will be a new baseline for everybody. But you’re not accruing value in terms of representing your data inside the AI in a way that gives you a sustainable competitive advantage.
“So what I always try to tell people is, don’t just be an AI user; be an AI value creator.”
As a business, he warned, “It would be kind of a mistake to develop strategies that are just about usage. There will be a lot of consequences in terms of the haves and have-nots that will apply to institutions and regions and countries.”
At the same time, predictions about how, where and to what extent AI will have an impact are tricky because of the way that AI draws its information from across systems and networks.
“In this very high-dimensional representation of information that is present in neural networks, we may find amazing adjacencies or connections of themes and topics in ways that the individual practitioners cannot describe.”
We are going to suffer from not knowing the root cause of something impacted by AI, he argued.
“One of the unsatisfying aspects [of AI] is that it may give you answers but not give you good reasons for where the answers came from.”
From this, Gladwell drew implications for how educational and medical institutions would have to change their learning methodologies. He said he would encourage institutions not to teach how to use individual AI tools but to embed AI across the curriculum.
“Understanding how we do problem solving in the age of data and data representation means it needs to be embedded in the curriculum of everybody and certainly in the context of medicine and scientists.”
Students entering college are going to know more about AI than the academics do, he said. “That alone is a Herculean people problem.”
What Hollywood Studios Don’t Understand
Institutions of all sorts will have to be at the forefront of integration in order to unlock the full power of AI thoughtfully and responsibly, Gladwell said. “Even Hollywood is being forced to figure this out.”
The writer popularized the idea that if you practice something for 10,000 hours you will achieve world-class expertise in any skill. But with AI on tap to automate and radically speed up the process of achieving goals, where does this leave the creative process?
It is the studios that should be concerned for their future, not so much the writers and actors, Gladwell said.
“In the strikes, the frightened ones were the writers and not the studios. Wasn’t that backwards? It should be the studios who are worried about the economic impact of AI. Doesn’t AI, in the long run, put the studios out of business long before it puts the writers out of business?
“I only need the studio because the costs of production are as high as the sky and the costs of production are overwhelming.
“Whereas if I have a tool which introduces massive technological efficiencies to the production of movies, then why do I need a studio?”
“May December” and Its Many Mirrors: How the Cinematography Tilts and Shifts Perception
TL;DR
Netflix’s “May December,” the new film from director Todd Haynes, examines how little people understand about how they appear to others.
The story of real-life teacher Mary Kay Letourneau, who married the much-younger object of her desire after serving time for second-degree rape of a child, serves as a springboard for the film.
As with his other films, Haynes employs stylistic techniques to remind audiences that they are watching an artistic construct.
The film was shot using ARRI’s new Alexa 35 by DP Christopher Blauvelt rather than longtime Haynes collaborator Edward Lachman.
Blauvelt appreciated the increased latitude of the Alexa 35, which allowed him to shoot in bright daylight without overexposure.
Few people really know what they sound or look like or how they appear to others. That is among the essential ideas that director Todd Haynes’ new film on Netflix, May December, examines. The film concerns Gracie (Julianne Moore), a woman who has married and raised children with Joe (Charles Melton) — a pairing that began when she was already married and in her 30s and he was just 13. We quickly learn that their relationship landed her in prison and resulted in a national scandal similar to that of real-life teacher Mary Kay Letourneau, who married the much-younger object of her desire after serving time for second-degree rape of a child. May December is clearly meant to summon audiences’ memories of Letourneau’s situation as its springboard.
The story kicks off when Elizabeth (Natalie Portman), an actress set to portray Gracie in an indie film, spends time with Gracie, Joe and their children in an attempt to absorb useful information about the character she’s preparing to play. Gracie sees this as an opportunity to shape Elizabeth’s performance into the sympathetic portrayal she feels is truthful, while Elizabeth is focused on gathering any detail, the more sordid the better, that she can use in her performance.
As is almost always the case with Haynes’ filmmaking, the director isn’t looking to make the audience feel like they’re watching something “real,” or to forget that they’re watching a movie. Instead, he searches for ways to remind audiences that they are watching an artistic construct, whether by using the stylistic techniques of a ‘50s melodrama in Far from Heaven, the approach of a sensationalist exposé, as in his well-known short Superstar, in which a Barbie doll stands in for singer Karen Carpenter, or a mid-century romance in Carol.
As Haynes told American Cinematographer back in December 2002, following the release of Far from Heaven, which was stylized to look like a Douglas Sirk “women’s picture” of the 1950s: “I think the best movies are the ones where the limitations of representation are acknowledged, where the filmmakers don’t pretend those limitations don’t exist. Films aren’t real; they’re completely constructed. All forms of film language are a choice, and none of it is the truth. … We’re not using today’s conventions to portray what’s ‘real.’ What’s real is our emotions when we’re in the theater. If we don’t have feeling for the movie, then the movie isn’t good for us. If we do, then it’s real and moving and alive.”
Among the techniques he uses in May December to achieve this type of aesthetic distancing: scenes framed by mirrors; coastal Georgia locations captured through large amounts of diffusion; grain laid in during post for additional texture; and a seemingly discordant music track, repurposed from 1971’s torrid romance The Go-Between, from director Joseph Losey and writer Harold Pinter.
Distancing techniques notwithstanding, the film also avoids traditional filmmaking tropes that cue the viewer about how to feel about what they’re watching. Could there be some truth in Gracie’s notion that her relationship was born of true love and evolved into normal family life? Is Elizabeth’s attempt to cut through Gracie’s public front a search for truth or just another form of exploitation? In that sense, May December could be said to be this year’s Tár, with final judgment ultimately left to the viewer.
As Haynes summarizes in his director’s statement from Netflix, “All lives, all families, are the result of choices, and revisiting them, probing them, is a risky business. But it’s hard to think of more volatile romantic choices than these, and all the more so when so many defenses have been called upon to shut out such unanimous contempt and judgment from the world.”
He adds, “But as Elizabeth observes and studies Gracie and her world, and gets to know her husband Joe, her reliability as narrator begins to falter. The honest portrait she hopes to erect, her own investment in revealing truths, becomes clouded by her own ambitions and presumptions, her own denials.”
Haynes originally intended to work with frequent collaborator Edward Lachman to shoot the film, but those plans were disrupted when the cinematographer of his award-winning features, including Far from Heaven, Carol and I’m Not There, suffered an injury that prevented his participation. Cinematographer Christopher Blauvelt, who knew both Haynes and Lachman quite well, stepped in during the brief prep period and handled the cinematography for the rapid-paced 23-day shoot in and around Tybee Island, just outside of Savannah, Georgia.
In conversation with playwright Jeremy O. Harris at the New York Film Festival, Haynes recounted that he had no compunction about working with Blauvelt, who’d shot several of the films of respected indie director Kelly Reichardt, including First Cow, Showing Up and Certain Women. “Kelly Reichardt is a dear, dear friend and one of the great independent filmmakers working in the world today. She, her last, what, five, six films were all shot by Chris Blauvelt… I’ve known Chris for years because he worked under Harris Savides, who was one of the greats.”
It had already been determined that May December would be shot with ARRI’s new Alexa 35, which offered some features that would be conducive to the project’s fast pace and limited lighting crew.
“I was immediately interested in this camera because [I understood] that the latitude was even more than the [previous Alexas] and I had never used it before,” Blauvelt explains to Nick Newman at The Film Stage. “When I went to test it at Keslow Camera in Atlanta… we were in a warehouse with a giant door, and I had a person in there that I was shooting for my tests with some string lights and a chart and the other things you have at a camera test. But I had this door open, so I had sunlight out of the back, and I kept opening and opening and opening that door and [the camera] maintained [definition in] the clouds, like, forever!” he says.
“It felt like I couldn’t clip. I couldn’t make it overexposed! So that was what I needed. And I was really happy to have that much latitude going into [this] film, knowing that I would be stuck in bright daylight without the tools to slow things down. It was a tremendous help.”
As Blauvelt explained to Newman, “I think there’s a big interest in finding these older, beautiful [lenses] that we used to use because the digital can be super-clinical. You know high definition is not flattering if you shot everything clean and right to your sensor. You’re looking, now, at pores on skin and it doesn’t lean into a ‘cinematic look’ — like from the past — that we all are inspired by and love. So, there’s people that have been rehousing these old lenses to match all of our gears and make them more user-friendly.”
Of the 1930s-era Baltar glass, he says, “You can’t crack them open because it’s toxic — like poisonous gas — because they’ve been encapsulated for so long, and the materials they use was, like, pine tar to make the gears work. And so what they do now is: they cover them. Like the rehousings are just built over the old lenses. So, you can look at a lens that’s built to be this big, to be user-friendly with big marks and everything for the focus and aperture, and you look inside, and the lens is, like, this big. [Spreads hands]
“There’s a characteristic of each lens, right? Like, we tested Cooke Panchros; we tested Super Baltars, normal Baltars, Cooke S4s — which are the most contemporary ones I would use. But even still: I say that and it’s funny because those lenses were made 35 years ago. [Laughs] But those, to me, are as sharp as I’ll get because I’m always trying to find a way to sort of disarm the eye for perfection of digital.”
Blauvelt spoke to Vanity Fair’s David Canfield about Haynes’ desire to avoid crisp, clinically clear images for his brand of visual storytelling.
When he got to the location, he recalls, “Todd was showing me all these images and there was this inherent sea-worn glass, this sort of haziness on things because of the ocean air. I could tell that was just a natural occurrence. It reminds me of Todd. Todd has this old, really shitty phone, and he would take a photo of a set with it, and it would already look like that. [Laughs]
“So they were showing me images already discolored — it just became this throughline. This very texturized filmic look comes from a lot of the inspirations that Todd had already had. To me, we were all on the same page in regard to finding these places and these frames and the way we lit.”
Further elaborating to ICG Magazine, Blauvelt says, “We wanted it to feel texturized. We wanted to give the feeling of this place where the windows are covered in a marine layer, and there’s all this haze, and sunlight warming things, and leaving moisture between window panes. We embraced it and never cleaned a window. We were shooting through screen and brush, which helped to give a filmic look.”
ICG Magazine’s Valentini also reports that Blauvelt made use of heavy diffusion from Schneider Optics Radiant Soft filters in front of the lens, in strengths from 1/4 up to five, sometimes stacking more than one for the right effect.
Another feature of May December is the use of mirrors to frame the action, which simultaneously reinforces the theme of characters’ limited ability to see themselves and others accurately and adds more of those layers of distance between the viewer and the characters.
In portions, the camera takes the position of a bathroom mirror in which Elizabeth studies Gracie’s approach to applying makeup. In one scene, Elizabeth delivers an extended monologue into a mirror, again with the camera pointing at her. Shots like these simply use the proscenium as if it’s a mirror and the actors perform directly to the lens. Some other scenes that actually show mirrors within the shot were more complex to execute.
In a scene that has been widely referenced in articles and reviews, Gracie’s daughter Mary (Elizabeth Yu) tries on dresses to wear to her high school graduation. Gracie and Elizabeth sit outside the store’s dressing rooms, and in an extended oner the camera points directly at the two women sitting side by side amid mirrors, framed to show Elizabeth flanked by Gracie on one side and Gracie’s reflection on the other. The dramatic point of the scene is to observe Gracie’s offhanded and crushing response to her daughter modeling the sleeveless dress she wants. But acquiring the shot as envisioned presented the problem of hiding a camera pointing directly at a mirror.
To accomplish this, Blauvelt, the camera and crew were placed behind a two-way mirror — one that is a typical mirror on one side and clear on the other. Haynes explains to Vanity Fair, “The challenge was how to hide the camera, and which angles the mirrors were going to be; when you have any mirror on any set, it’s difficult because you’re hiding lights and stands and everything. I always stare at the little vanity over Natalie’s shoulder because that’s where the camera is hidden. Also, it’s great conceptually. When I watch the film and see how it works and integrates into our multiplicity of what’s happening within the story, it makes so much sense. Your eye can go in any direction. We play it mostly as a one-er, and so it relies a lot on their performances, which are just immaculate.”
Haynes elaborates to Adam Chitwood at TheWrap, saying his initial idea for the shot was much simpler, but it evolved from there. The performers are surrounded by mirrors and the camera had to be positioned just right so it wouldn’t catch any errant reflections of the set or crew. It was one of the most complicated scenes in the entire shoot, and Blauvelt said it was a true team effort to nail it.
“It’s not exclusive to me, or even the departments, it’s like a collective that goes all the way back to the genius of the writing, and the characters, and Julianne Moore and Natalie Portman and Elizabeth Yu,” he continues. “When that happens, and all the pistons are firing and you know that we got there from everybody really understanding the intent and building something like that, it’s the best feeling you can have as a filmmaker.”
“The Zone of Interest”: Ways to Film the Unfilmable
TL;DR
Director Jonathan Glazer went to great lengths to pursue an immersive naturalism in his screen depiction of the Holocaust, “The Zone of Interest,” removing the artifice and conventions of filmmaking.
The filmmakers gave their actors freedom to improvise by rigging multiple cameras for long takes with the actors often unaware if the cameras were even rolling.
Cinematographer Łukasz Żal also deployed a thermal imaging camera to juxtapose black and white scenes of energy and hope with the bleak world of color.
Writer-director Jonathan Glazer refuses to be drawn into comparisons between his new film The Zone of Interest and other Holocaust screen depictions.
“I don’t like getting involved in a genocide-off,” he told Giles Harvey of The New York Times, commenting on repeated attempts by the press to get him to talk about why he felt his approach differs from the likes of Schindler’s List, Son of Saul or the documentary Shoah.
He went on to clarify that his decision to tackle this highly sensitive subject was rooted in his family history. Glazer’s grandparents were Eastern European Jews who fled the Russian Empire in the early 20th century. He also went to a Jewish state school in London.
The British director had not yet finished Under the Skin in 2013 when he told his longtime producer, James Wilson, about his idea for the next project.
“He did not want to do another, quote-unquote, ‘Holocaust movie.’ Jon has a very small filter when it comes to doing something that’s never been done before,” Wilson told Rolling Stone’s David Fear. “But neither of us knew what that something would be.”
Glazer’s idea was galvanized in 2014 by reading about the latest novel by the late Martin Amis, a story told from the viewpoint of a fictional commandant who ran a Nazi concentration camp and lived next door to it.
In Amis’ book, the Dolls were loosely based on Rudolf Höss, the real-life commandant of Auschwitz, and his family. Glazer’s first big call was to revert to the originals. Before starting work on the script, he spent two years researching the Hösses, during which he came across a staggering data point: The garden of their villa shared a wall with the camp. What feats of denial, he wondered, would it have taken to live in such proximity to the damned?
“I wanted to dismantle the idea of them as anomalies, as almost supernatural. I wanted to show that these were crimes committed by Mr. and Mrs. Smith at No. 26,” he told the NYT.
“I looked at the darkening world around us, and had a feeling I had to do something about our similarities to the perpetrators rather than the victims,” Glazer elaborated to Rolling Stone. “When you say, ‘They were monsters,’ you’re also saying: ‘That could never be us.’ Which is a very dangerous mindset.”
Instead, he began to see the Hösses as “non-thinking, bourgeois, aspirational-careerist horrors” who’d simply normalized evil.
“There were two givens on the film. … That it would be in its native languages — German and some Polish, obviously — and that we would film it in Poland. And Jon really wanted to film it in the real house,” producer James “Jim” Wilson told Gold Derby’s Charles Bright.
Wilson describes visiting the Höss family home and experiencing its proximity to Auschwitz (which he says is “kind of holy”) as generating “one of the lightbulb moments.”
“The Höss house is still there, and the proximity to the camp is striking,” Glazer told The Hollywood Reporter’s Scott Roxborough. “I imagined myself at one point as a prisoner, imagining hearing the sounds of the Höss children splashing and laughing in their swimming pool on the other side of the wall. The idea of the film became about that wall, about how that wall is a direct manifestation of how we ourselves as human beings compartmentalize the things we were happy to indulge in and surround ourselves with and the things — sometimes horrible things — we want to disassociate ourselves from. That became the axiom of the whole endeavor.”
Because the Höss home is the heart of the film, Glazer tasked production designer Chris Oddy with building a fully functional set. Oddy deemed the actual building’s “80 years of decrepitude” insurmountable and instead opted to renovate a nearby building crafted in a similar style, according to Roxborough’s reporting. Glazer mandated that the end result should look as if it had been built yesterday.
In fact, Oddy says, “The only scene shot in the original house comes late in the film, where Rudolf walks from his office through the real underground tunnel that connects the camp with his home.”
Another key set piece, the family garden, required a full-year head start to adequately mature before filming began, Oddy says.
The production goal was an immersive naturalism, and Glazer went to great lengths pursuing it, telling Vanity Fair’s David Canfield that he sought to “remove the artifice and conventions of filmmaking that lead you down a road which didn’t feel relevant here: screen psychology. The way that cinema fetishizes, glamorizes, empowers—in this context, none of those were appropriate.”
Instead, as Fear notes in Rolling Stone, The Zone of Interest uses suggestion and sound — what Glazer refers to as “ambient evil” — to conjure up how human beings could look upon the methodical killing of other human beings as background noise in their lives rather than a profound tragedy.
Oddy’s recreation of the Höss home at Auschwitz was rigged with 10 hidden cameras that would roll simultaneously.
“Cinema is at odds with atrocity,” Glazer explained to the NYT. “As soon as you put a camera on someone, as soon as you light them, or make a decision about what lens to use, you’re glamorizing them.”
Cinematographer Łukasz Żal (Ida and Cold War) made some initial studies of the house. Glazer told him they were “too beautiful.” He wanted the images to seem “authorless.”
The two-time Oscar-nominated Polish DP explained to John Boone at A.frame, “You have to forget completely all the tricks you’re carrying with you as a cinematographer and all your knowledge and everything you were taught in your career. The whole idea of this film was just to put the cameras in the places where you can see what is happening in the most objective way, and that’s it. Very simple.”
This method gave the actors — principally Christian Friedel (Babylon Berlin) as Rudolf and Sandra Hüller (Anatomy of a Fall) as his wife, Hedwig — the freedom to improvise; they were often unaware if the cameras were even rolling.
Hidden Cameras
“There was nobody on set [except] the actors,” Żal informed A.frame. He remotely monitored from a shipping container outside the house. “The actors were just living their lives, and because we had 10 setups at the same time and were shooting everything, after one or two hours we had all the setups we needed. We had continuous action, and the sun was changing, the light was changing, the clouds were coming and going, the dog was running through the house, and everything was captured on the 10 cameras.”
Sony VENICE cameras were chosen partly because their sensor blocks and lenses could be detached from the main camera body, making it possible to hide them all over the set, including in the garden, which is the Hösses’ pride and joy.
“The whole idea was to create a space for actors to just be there, be in the situation and for us to witness with no interruption,” Żal told fellow DP Mandy Walker in an interview for the ASC. “We were able to attach those cameras to the walls, hide them in cardboard, in the bushes, try and cover them with fabric. We’re just placing them in different spaces in the garden, in the house. Everything was hardwired. The five focus pullers were in the basement of the house, and we were in the shipping container behind the wall. That was our mission control and we were just shooting the whole scene without interruption, continuously.”
The film intersperses the desaturated color of the Höss family life with stark black and white footage of a girl leaving apples in a work camp for the Jewish prisoners. This effect was captured at night using a special thermal imaging camera, as Żal explained to Vanity Fair.
“We spent a lot of time adjusting this for filming in terms of focus and image and also software, because it’s not so easy to use this camera and get the image we would like to have.”
Glazer explains that the camera is recording heat, not light, adding, “there’s something very beautiful and poetic about the fact that it is heat, and she does glow. It reinforces the idea of her as an energy.”
In post, they used an algorithm to upscale the footage from its native 1K resolution to 4K and to match it as closely as possible to the 6K footage from the VENICE.
While the visuals deliberately refrain from showing the inside of the extermination camp, the audience is not spared the harrowing sounds emanating from behind the wall. Neither are the Hösses, even though for them life seems to continue as normal.
Sound designer Johnnie Burn recalls to A.frame the original directive from Glazer. “He said, ‘It is going to be mandatory that we don’t go in the camp, and we don’t see the atrocities. We’re going to just hear them. It will all be sound.’ I panicked. I started reeling, because I realized that would be a leap of faith. And also, where are we going to get the sounds from? Jon said, ‘Well, that’s what you’re going to figure out.’”
Burn compiled a 600-page research document and spent the year before filming, the shoot itself and post-production building the sound library with his team. According to A.frame, he recorded the industrial rumble of textile workshops and incinerators, boots marching on gravel, period-accurate gunfire, and death itself.
“It’s the sound of murder, and it has to be credible but we didn’t want to be sensational,” Burn said. “Anything sensationalized in the sound wouldn’t work, so understanding the difference between someone acting pain and actually being in pain at point of death, that’s to do with literally the cadence of the way people scream.”
The idea was to create an immersive experience in which performances could dissolve into people simply going about their daily routines, and the cast was free not only to explore the environments but to lean into boring, mundane everyday life — a contrast to the horror literally happening just beyond their backyard.
“Some takes were up to 45 minutes,” Sandra Hüller told Rolling Stone. “You didn’t know what was being filmed from what angle, or from where. The crew and monitors were in a separate building, so if they didn’t tell us to cut, we’d just restart a scene and it would end up being completely different.”
It was a concept that Glazer hoped to make explicit with the film’s ending, in which viewers are momentarily dropped into Auschwitz in the 21st century — a flash-forward that the director says came from his experience wandering around the grounds one morning and noticing the cleaning crew picking up litter and vacuuming in front of the exhibits.
“It was like they were tending graves,” Glazer says. “You know, Höss is long gone. He is ash. But the museum, and the importance of such museums, they are still there.”
The future of entertainment extends well beyond Hollywood. Social media creators — otherwise known as influencers, YouTubers, TikTokers, vloggers and live streamers — entertain and inform a vast portion of the planet.
For the past decade, we’ve mapped the contours and dimensions of the global social media entertainment industry. Unlike their Hollywood counterparts, these creators struggle to be seen as entertainers worthy of basic labor protections.
Platform policies and government regulations have proved capricious or neglectful. Meanwhile, creators’ bottom-up initiatives to collectively organize have sputtered.
Living on the Edge
Industry estimates regarding the size and scale of the creator economy vary. But Citibank estimates there are over 120 million creators, and an April 2023 Goldman Sachs report predicted that the creator economy would double in size, from US$250 billion to $500 billion, by 2027.
According to Forbes, the “Top 50 Creators” altogether have 2.6 billion followers and have hauled in an estimated $700 million in earnings. The list includes MrBeast, who performs stunts and records giveaways, and makeup artist-cum-true crime podcaster Bailey Sarian.
The windfalls earned by these social media stars are the exception, not the norm.
The venture capital firm SignalFire estimates that less than 4% of creators make over $100,000 a year, although YouTube-funded research points to a rising middle class of creators who are able to sustain careers with relatively modest followings.
These are the users who find themselves most vulnerable to opaque changes to platform policies and algorithms.
Platforms like to “move fast and break things,” to use Meta CEO Mark Zuckerberg’s infamous expression. And since the creator economy relies on social media platforms to reach audiences, creators’ livelihoods are subject to rapid, iterative changes in platforms’ features, services and agreements.
Yes, various platforms have introduced business opportunities for creators, such as YouTube’s advertising partnership feature or Twitch’s virtual goods store. However, the platforms’ terms of use can change at the flip of a switch. For example, in September 2022, Twitch changed its fee structure. Some streamers who were retaining 70% of all subscription revenue generated from their accounts saw this proportion drop to 50%.
In 2020, TikTok, facing rising competition from YouTube Shorts and Instagram Reels, launched its billion-dollar Creator Fund. The fund was supposed to allow creators to get paid directly for their content. Instead, creators complained that every 1,000 views translated to only a few cents. TikTok suspended the fund in November 2023.
Bias as a Feature, Not a Bug
The livelihoods of many fashion, beauty, fitness and food creators depend on deals brokered with brands that want these influencers to promote goods or services to their followers.
Yet throughout the creator economy, people of color and those identifying as LGBTQ+ have encountered bias. Unequal and unfair compensation from brands is a recurring issue, with one 2021 report revealing a pay gap of roughly 30% between white creators and creators of color.
Along with brand biases, platforms can exacerbate systemic bias. Creator scholar Sophie Bishop has demonstrated how nontransparent algorithms can categorize “desirability” among influencers along lines of race, gender, class and sexual orientation.
Then there’s what creator scholar Zoë Glatt calls the “intimacy triple bind”: Marginalized creators are at higher risk of trolling and harassment, they secure lower fees for advertising, and they are expected to divulge more personal details to generate more engagement and revenue.
Couple these precarious conditions with the whims and caprices of volatile online communities that can turn beloved creators into villains in the blink of a text or post, and even the world’s most successful creators live on a precipice of losing their livelihoods.
Rumblings of Solidarity
Unlike their counterparts in the legacy media industries, creators have neither taken easily nor well to collective action as they operate from their bedrooms and fight for more eyeballs.
Yet some members of this creator class recognize that the bedroom-boardroom power imbalance is a bottom line matter that requires bottom-up initiative.
The Creators Guild of America, or CGA, which launched in August 2023, is but one of many successors to the original Internet Creators’ Guild, which folded in 2019. Paradoxically, CGA describes itself as a “professional service organization,” not a labor union, yet claims to offer benefits “similar to those offered by unions.”
There are other movements afoot: A group of TikTok creators formed a Discord group in September 2022 to discuss unionizing. There’s also the Twitch Unity Guild, a program launched in December 2022 for networking, development and celebration that includes a dedicated Discord space. In response to the rampant bias in influencer marketing, creator-led firms like “F–k You Pay Me” are demanding greater fairness, transparency and accountability from brands and advertisers.
Twitch streamers are already seeing some of their organizing efforts pay off. In June 2023, after a year of repeated changes in streamer fees and brand deals, the company capitulated in response to the backlash of their top streamers threatening to leave.
None of these initiatives has yet attained the legal status of unions such as the Writers Guild of America. Meanwhile, efforts by the Screen Actors Guild-American Federation of Television and Radio Artists to recruit creators have proved limited. Legal scholar Sara Shiffman has written about how SAG-AFTRA provides creators with health and retirement benefits, but offers no resources to ensure fair and equitable compensation from platforms or advertisers. Nonetheless, while on strike, SAG-AFTRA threatened creators that partnered with studios with a lifetime ban from joining the union.
And despite these bottom-up efforts, the tech behemoths refuse to recognize creators’ fledgling organizations. When a union for YouTubers formed in Germany in 2018, YouTube refused to negotiate with it. Nonetheless, you’ll see companies trot out their biggest stars when they find themselves under regulatory scrutiny. That’s what happened when TikTok sponsored creators to lobby politicians who were debating banning the platform.
An Invisible Class of Labor
Meanwhile, most governments have failed to provide support for — or even recognition of — creator rights.
Within the US, creators “barely exist” in official records, as technology reporters Drew Harwell and Taylor Lorenz recently pointed out in The Washington Post. The US Census Bureau makes no mention of social media as a profession; it is invisible as a distinctive class of labor.
To date, the Federal Trade Commission is the only US agency to introduce regulation tied to the work of creators, and it’s limited to disclosure guidelines for advertising and sponsored content.
Even as the European Union has operated at the forefront of tech and platform policy, creators rate scant mention in the body’s laws. Writing about the EU’s 2022 Digital Services Act, legal scholars Bram Duivendvoorde and Catalina Goanta criticize the EU for leaving “influencer marketing out of the material scope of its specific rules,” a blind spot that they describe as “one of its main pitfalls.”
The success of the 2023 Hollywood strikes could be just the beginning of a larger global movement for creator rights. But in order for this new class of creators to access the full breadth of their economic and human rights — to borrow from the movie Jaws — we’re gonna need a bigger boat.
M&E Technologists Look Towards 2024
TL;DR
NAB PILOT asked the NAB Technology staff and other broadcast industry luminaries for their predictions for 2024.
In 2024, broadcasters will continue to play a vital role in the local news ecosystem as data journalism becomes even more important. FM radio will see a digital power increase, with 2024 set to be a breakout year for terrestrial broadcast radio in connected cars.
After a year of observing developments in artificial intelligence, the broadcast industry will begin to leverage AI in new and innovative ways.
The adoption and development of new video compression technologies will enable eco-friendly and high-quality content delivery.
The ATSC 3.0 rollout will continue to ramp up, with broadcasters seeking to enter the wireless market.
The entire industry wants to know what broadcast technologists are most excited about and expect or hope to see in 2024. NAB PILOT, the National Association of Broadcasters’ innovation arm, asked the NAB Technology staff and a few other industry luminaries if they had anything to share. They absolutely delivered. As we look ahead into the new year, here’s a compilation of predictions for 2024.
Data Journalism Grows
Chris Jansen, Head of Broadcast News Partnerships, Google, and PILOT Member
In 2024, broadcasters will continue to play a vital role in the local news ecosystem. As compute power becomes more cost efficient, data journalism becomes even more important. From using tools like Trends to discover what people are curious about, to tools like Pinpoint to mine through mountains of public documents with a few clicks, the stories communities need to know are out there.
2024 – the Year for an FM Radio Digital Power Increase
David Layer, Vice President, Advanced Engineering, NAB
NAB and its broadcaster partners have been studying, testing and advocating for relaxed FM radio digital power rules for over a decade. 2023 saw the FCC release a Notice of Proposed Rulemaking that tentatively adopts the changes we have been advocating for, including a modified FM radio digital power formula that will allow more stations to increase their power to -10 decibels below carrier (dBc), enabling better signal penetration into buildings and better replication of a station’s analog coverage by the digital signal. All the pieces are now in place for the FCC to issue an Order adopting these changes in 2024.
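As a quick reader aid (this is the standard decibel-to-linear conversion, not language from the FCC proceeding), a digital signal specified in decibels below carrier corresponds to the following fraction of the analog carrier power:

$$\frac{P_{\text{digital}}}{P_{\text{carrier}}} = 10^{\text{dBc}/10}, \qquad -10\ \text{dBc} \Rightarrow 10^{-1} = 10\%, \qquad -20\ \text{dBc} \Rightarrow 10^{-2} = 1\%.$$

In other words, a station operating at the proposed -10 dBc level would radiate digital power equal to one-tenth of its analog carrier power.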
AI Begins to Fuel Industry Growth
Jeremy Sinon, Vice President, Digital Strategy, Hubbard Radio, and Vice Chair, NAB Digital Officer Committee
In 2023, broadcasters and the world at large became aware of AI for the first time. I believe much of the last 12 months has been spent “cautiously observing” and scrambling to play defense. Defense that is absolutely necessary for us to keep playing.
However, it’s time for us to also play offense.
In 2024 I think we’ll start to see some smart, impactful real world solutions from broadcasters and from broadcast partners. Solutions that will enhance our efficiency, knowledge and profitability. With machine learning and AI, our small teams can be empowered to do more. Adding AI to our mix will help us automate what is currently manual, which will feel like adding an extra copywriter, an extra producer, an extra developer, etc., to our teams. We can also use this technology to help us learn more about our audiences, build strategies to super-serve them and target like audiences in our marketing.
We are embarking on a whole new chapter of our industry, and in 2024 we graduate from observing in the stands to playing on the field.
Broadcast Radio Dominates the Connected Car Media Landscape
Joe D’Angelo, Senior Vice President, Broadcast Radio and Audio, Xperi, and PILOT Member
2024 is setting up to be a breakout year for terrestrial broadcast radio in connected cars. Thanks to DTS AutoStage, the support of tens of thousands of radio stations and over 15 OEMs, hybrid radio is going mainstream. The industry will experience exponential growth in DTS AutoStage-enhanced radio listening as more OEMs launch with the platform around the world, and the impact will be measurable thanks to our Broadcaster Analytics platform. Broadcasters will be able to better understand their audience and the impact of their programming across a wide-ranging mix of cars, markets and formats. These insights and the enhanced user experience are sure to delight both advertisers and listeners in 2024 and for years to come.
“It was the best of times, it was the worst of times.”
John Clark, Senior Vice President, Emerging Technology, NAB
Everyone is talking about artificial intelligence, and everyone has an opinion. Usually, those opinions fall into one of two buckets: good and bad. We’ll continue to see examples that easily fit into those extremes. However, we’ll see more use cases emerge in the messy middle. The more we understand how the new technology can be used, the more we’ll wrestle with how it should be used and how to combat the ways it’s used inappropriately. This will force us to think about AI as more than a binary choice of good or bad, and it will force us to put the technology to use in service to our audiences. Doing nothing is not an option.
Sustainable Media: Adoption and Development in Video Compression Technologies for Eco-Friendly and High-quality Content Delivery
Ling Ling Sun, Chief Technology Officer, Nebraska Public Media, and Chair, NAB Broadcast Engineering and IT Conference Committee
For 2024, I anticipate a notable increase in the adoption and development of more efficient video compression technologies, in line with the media industry’s sustainability goals. As video currently constitutes more than half of network traffic, the potential to halve this load through enhanced compression would be a substantial stride toward eco-friendly practices. Beyond network traffic, compression technologies also play a crucial role in conserving data storage.
However, with the increasing efficiency of compression comes complexity. There is a clear demand for new chips capable of substantially reducing computational power requirements. If these chips effectively cut power consumption in half, we could witness more energy-efficient compression processes. Moreover, artificial intelligence is positioned to play a pivotal role in adaptive compression and beyond, promising more efficient semantic communication.
This ecological approach to developing a sustainable media industry is indispensable for enhancing user experiences with technologies such as Ultra High Definition (UHD) and immersive videos. Minimizing resource usage and delivering high-quality content are pivotal for both environmental consciousness and heightened user satisfaction in the dynamic media landscape. As bandwidth demands for high-quality content increase, the contribution of efficient compression becomes even more significant.
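Sun’s arithmetic is easy to make concrete. The minimal Python sketch below uses an assumed 60% video share of network traffic and an assumed 50% bitrate reduction from newer codecs; neither number comes from her prediction, which says only that video is more than half of traffic and that the load could be halved.

```python
# Hypothetical illustration of the sustainability arithmetic above.
# Assumed inputs (not from the article): video is 60% of network traffic,
# and a more efficient codec cuts video bitrates in half at the same quality.

video_share = 0.60        # assumed share of total traffic that is video
bitrate_reduction = 0.50  # assumed saving from a next-generation codec

overall_saving = video_share * bitrate_reduction
print(f"Total network traffic would drop by roughly {overall_saving:.0%}")
# Total network traffic would drop by roughly 30%
```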
Radical changes are happening in numerous areas of the broadcast TV ecosystem, but perhaps most pointedly with the ATSC 1.0 platform transitioning to ATSC 3.0. Since its regulatory green light in 2017, ATSC 3.0 has progressed steadily in the marketplace, albeit at times a little unpredictably and at a varying rate of change. A quick Google search and a query to ChatGPT led me to understand that acceptance of major business changes typically follows an adapted version of the familiar Kübler-Ross stages of grief. For business changes, industries typically go through four stages of acceptance: denial, resistance, exploration and commitment.
With denial and resistance largely in the rearview mirror, over the coming year I think we are likely to see a mass movement of stakeholders migrating across the exploration–commitment boundary of the transition — and not looking back. In practice, commitment to ATSC 3.0 means more than just infrastructure changes and ATSC 3.0 lighthouses going on the air. It means significant investment in enhancing programming and services that can benefit most from the new infrastructure and will most impress and attract consumers. Look for rollouts of High Dynamic Range (HDR) and native 1080p production programming, more audio options, compelling broadcast app implementations, enhanced emergency information services and new non-television use cases to emerge. Philosopher and author Jean-Paul Sartre once said in an interview that commitment is an act, not a word. 2024 should be the year of action on ATSC 3.0 commitment.
Broadcasters Enter the Wireless Market
Mike Kelley, Vice President and Chief Information Security Officer, The E.W. Scripps Company, and Chair, NAB Cybersecurity Advisory Group
2024 will be the year when broadcasters redefine their market position, delving into the realm of wireless data transmission previously dominated by telecoms and internet service providers. We will witness experimentation with ATSC 3.0, exploring use cases that capitalize on the benefits of a one-to-many architecture and identifying areas where hyper-localized precision is necessary. This one-two punch marries our industry expertise with a modern technology standard, offering broadcasters a lifeboat for navigating treacherous economic headwinds.
ATSC 3.0 will establish the broadcast industry as a new entrant to the wireless data market, offering a wide array of novel services and engaging audiences in unprecedented ways. The new standard is not just an upgrade; it’s a strategic pivot that could redefine the broadcast industry’s role in a data-driven future.
Best of the Old and Best of the New
Roswell Clark, Executive Director, Radio Engineering, Cox Media Group, and Vice Chair, NAB Radio Technology Committee
2024 promises continued evolution of the public-facing benefits of radio, from AM in the dashboard to the advancement of features in hybrid radios. How broadcasters support the technologies that maximize these areas for consumers and public safety will be interesting and exciting. 2024 also looks to be the year that advancements in the next generation of radio architecture will potentially close the few remaining gaps in technology solutions to provide greater reliability, management and scalability.
2024 Will Be Highly Dynamic
Sam Matheny, Chief Technology Officer, NAB
High Dynamic Range (HDR) will begin to be deployed at numerous television stations across the country. This technology brings a noticeable improvement to the picture quality in video and is enabled within the ATSC 3.0 standards. In layman’s terms, it provides brighter whites and deeper, darker blacks. And it does so without costing a fortune in bandwidth the way sending more pixels (think 4K) does. The Ultra HD Forum has demonstrated HDR/SDR single master workflows, and I think some of those efforts will translate into real-world deployments. I believe we’ll see many stations and perhaps multiple networks adopt some form of HDR technology to improve their NEXTGEN TV services.
Broadcast Positioning System (BPS)
Tariq I. Mondal, Vice President, Advanced Technology, NAB
Broadcast Positioning System (BPS), an ATSC 3.0 application, will get serious attention from the government agencies who are trying to solve the GPS vulnerability and dependency problems for civil GPS use. I am hopeful that the U.S. government will fund a BPS market trial and evaluate its performance in 2024. A few broadcasters will deploy the system in test markets, and the ATSC 3.0 equipment manufacturers will take notice of this potential opportunity. BPS will also get traction in South Korea in 2024 in terms of research and development. U.S. TV broadcasters will consider BPS to be a strategically important application to advance ATSC 3.0 deployment.
Show organizers say 2024 trend highlights include AI, “streaming universes,” virtual production, and the creator economy. The event will also emphasize live events and advertising.
New this year will be the Creator Lab destination, curated by Jim Louderback and Robin Raskin, and the Propel ME startup program.
Creator Lab is focused on the creator economy and features interactive workshops and networking events. Programming will cover topics including AI’s role in production, the utilization of video for corporate messaging and the establishment of a resilient infrastructure for the content ecosystem.
Propel ME will launch as a digital forum for startups (here on NAB Amplify!) in January. Then, during the April Show, Propel ME will be transformed into a live experience.
“Creator Lab and Propel ME epitomize the spirit of innovation we will be spotlighting at the 2024 NAB Show, providing a dynamic intersection where creativity meets technology,” said Chris Brown, executive vice president and managing director, NAB Global Connections and Events.
“Amid the swiftly transforming media landscape, it is important that the Show provide unique activations like these to act as a guide to the tech and companies that are redefining the horizons of the entertainment industry.”
Returning programming includes the Devoncroft Executive Summit, Post|Production World, Streaming Summit, TVNewsCheck’s Programming Everywhere and Core Education Collection, which features the Broadcast Engineering and Information Technology conference, Small and Medium Market Radio Forum and the Focus on Leadership Speaker Series.
Is It Technology Dread or Imminent Apocalypse? (Both?) Asking Sam Esmail
TL;DR
Writer-director Sam Esmail discusses his new film, “Leave the World Behind,” which sees Julia Roberts, Ethan Hawke and Mahershala Ali facing the end of the world.
The film offers pointed commentary on society’s dependence on technology, a dependence that will only grow as we continue to incorporate AI into our lives.
Esmail includes cheeky digs at Tesla and at Netflix, the studio that funded the film.
From the dystopian science fiction thriller Mr. Robot through Amazon’s military mystery Homecoming to his new film, Leave the World Behind, writer-director Sam Esmail’s thematic obsession is the impact of technology on society.
The film examines the reliance we have on technology as an apocalyptic series of events cuts off all communication.
“I’m not a technophobe,” Esmail insists in a Google Talk moderated by Josh Lanzet. “I think technology is agnostic, it has no morality to it. It’s the human side that I’m more fascinated with. I really do think that it’s our sort of complicity, or how we use tech that will, in the case of the film, kind of offer a cautionary tale of what could happen to our world if we go one way or the other with it.”
Can we still have a functioning community without technology? he asks.
“Ultimately, technology is a double-edged sword,” he said during an interview on the ReelBlend podcast produced by CinemaBlend. “When I think about… the positives, it gives us access to information, to people, to media, to content that we want to explore. I think it’s a tool like anything else [and] it’s what we do with it.”
Based on Rumaan Alam’s 2020 novel, Leave the World Behind is set mainly in a country house outside of New York City, where a couple played by Julia Roberts and Ethan Hawke travel with their children for a weekend getaway. On their first night there, two strangers, played by Mahershala Ali and Myha’la, arrive at the door, declare they are the owners of the house and ask to be let in, citing a blackout in the city. Distrust and paranoia ensue as Esmail uses the tropes of the disaster movie to explore dynamics of race and class.
The Towering Inferno, Earthquake and The Day After Tomorrow were among the influences, but the idea that touched a nerve was how people can lose sight of their common humanity in the face of a crisis.
“It’s pretty relevant today given what’s going on in the world,” Esmail told Matt Zoller Seitz at Vulture. “The other thing that interested me is that this book does the inverse of what a typical disaster film does. The disaster elements tend to be the center of the story in disaster films. The characters tend to be secondary. Here, I could invert that process and be with the characters and have the disaster element exist more in the distance. That instantly felt more authentic to how humans would experience a crisis like that.”
Esmail read the book during lockdown when the idea that people can easily lose sight of their common humanity in the face of their own danger was all too real.
“But prior to reading the book I had this idea percolating in the back of my head about trying to construct a sort of disaster thriller centered around a cyberattack,” he told Brenna Ehrlich at Rolling Stone. “Because I think cyberattacks — even though they’re out in the public consciousness — there’s something ominous but equally mystifying about them.”
Classic paranoia thrillers like The Parallax View and North by Northwest were other touchstones, the latter providing inspiration for a scene in which Mahershala Ali’s character runs from a crashing plane.
“It’s not very subtle,” Esmail admits to Rolling Stone. “In all honesty, I don’t think there’s a movie made in contemporary times that doesn’t show some influence by Hitchcock. I think he’s essentially invented modern-day film grammar, but clearly, his work was looming large over the film.”
We also learn from Vulture that Esmail cast Ali in part because he thinks of the actor as a modern day Hitchcockian leading man. “The prototype was Cary Grant or Jimmy Stewart in Hitchcock’s films. They are an Everyman. They’re not five steps ahead, like a superhero, but they’re half a step ahead. They’re savvy enough to size up any situation. Mahershala has that.”
The director also talks about the cinematography of Leave the World Behind, in particular the camera movement that seems to move through the architecture, similar to The Shining and to an iconic Hitchcock film.
“That was a huge influence,” he admits, talking about Kubrick’s psychological horror film on the ReelBlend podcast. “I love big camera moves, especially when it’s relaying something the audience doesn’t know. It’s like what you’re saying: It’s almost as if the movie’s a little possessed, and you’re the demon looking down at those people.
“It’s that great shot in Rear Window: Jimmy Stewart’s asleep and the camera’s moving, and then you’re looking across the street seeing the thing he’s not seeing, and then you realize, ‘Wait a minute — who am I? What’s happening? Who’s seeing it?’ It’s very unsettling. Ever since I saw that film as a kid, I’ve always loved the idea of a camera being its own sort of person.”
Esmail’s script exhibits an eerie synchronicity with current events. For instance, he made the movie before conflict escalated in the Middle East, yet there’s a startling scene where Ethan Hawke’s character is pursued by a drone that drops leaflets written in Arabic saying “Death to America” — and later, another character hears about similar messages, this time in Korean.
“Honestly, I tried to follow the guidelines out of the playbook of how coup d’etats actually work, especially when it’s a foreign actor,” he told Rolling Stone. “Propaganda misinformation is an old tactic. I just took that and magnified it and heightened it to this situation. It plays on your own biases and your own beliefs about who our enemies are, and I always love it when you can remove the barrier between the audience and your protagonist.”
Turning on Tech
Another scene features a number of Teslas that turn on their self-driving functions to block the roadways. Esmail says he didn’t seek permission from Elon Musk for that.
“Look, I wrote it in the script. I asked my amazing props guy, Bobby, to bring a bunch of Teslas out on the street. We shot the scene. I edited it in post, I showed it to Netflix, I crossed my fingers. And to this day, no one has said anything to me. So yeah, I’m hoping the movie comes out and no one will say anything.”
What doesn’t get lost in a digital attack are physical media like vinyl, DVDs and VHS (though you’d still need a source of electricity to play them). These become a source of comfort and nostalgia towards the end of the picture. But how did that sit when making the movie for a streaming service?
Esmail wasn’t afraid to poke the hand that feeds. On the one hand, he claims to be a “great proponent” of physical media; on the other, he explains that one of the advantages of streaming services like Netflix “is that you really have access to any movie from across history at your fingertips.”
“So there’s, there’s always a conflict because I’m a proponent of theatrical. I’m a proponent of DVDs and Blu-rays. But I’m also not mad at a streaming service that lets me see all the classics at a moment’s notice.”
Nonetheless he includes a cheeky shot that he doesn’t think “the Netflix folks” have noticed: “In the very end, you see Rose’s thumb hovering over the remote, and it goes past the Netflix button to hit ‘play’ on the DVD player.”
Notes From a Former President
The film’s exec producers are Netflix stablemates Barack and Michelle Obama, who were more involved in production than simply lending the cachet of their name.
“He’s a huge movie lover and a huge fan of the book,” Esmail confides to ReelBlend. “He really was committed to making this into a great movie. And he was giving me notes at the script stage, multiple drafts, including post rough cuts. It’s kind of surreal because I do think he is one of the most brilliant minds on the planet, and to get his insight on the disaster element, characters, the theme. It was the highlight of my career.”
Turns out Barack Obama is a fan of Mr. Robot, to the extent that Esmail got a call from the White House when Obama was President.
“We were in the middle of second season, and it hadn’t aired yet. And we were cutting the episodes. And someone from the White House contacted our office and said he’d love to get rough cuts of the episodes. Imagine that.”
How Erik Messerschmidt Post-Produced His Cinematography for “The Killer”
TL;DR
Cinematographer Erik Messerschmidt details his latest collaboration with David Fincher on “The Killer,” featuring Michael Fassbender as an assassin with sociopathic personality traits and an attention to detail that leaves nothing to chance.
Featuring an avocado-colored LUT, exquisite scene management and meticulous coverage, Fincher’s edict for “The Killer” was always to control the pace.
Multiple Paris interiors were constructed on sound stages in New Orleans in a series of three-walled sets that the actors were able to walk through.
The production shot practically whenever the outcome could be controlled, but lens flares and other digital effects were created during post-production.
A momentous fight sequence appears to have been captured using handheld cameras, but the footage actually had de-stabilization applied in post.
NAB Amplify caught up with cinematographer Erik Messerschmidt just as he was about to fly to the Camerimage film festival in Poland, where Ferrari, his first film with director Michael Mann, was in competition. “An extraordinary experience, once in a lifetime,” was his on-the-spot reaction to the question, “How was it?”
But we wanted to talk to him about The Killer, a Netflix movie that made a brief appearance in selected theaters. Most people would wait for the stream and live with the internet compression artifacts for the treat of a Fincher film, this time about a man who kills for a living. Cue Michael Fassbender as an assassin with sociopathic personality traits and an attention to detail that leaves nothing to chance; some reviews suggested the character was a depiction of Fincher himself.
If you have seen previous films or television shows from the Fincher/Messerschmidt duo, especially 2018’s Mindhunter, you would be in a comfortable place from the get-go of The Killer: An avocado-colored LUT, exquisite scene management, and meticulous coverage. “Is this all you?” the DP was asked.
“It’s a thing that David and I do together. I enjoy the process of camera direction; I view it as sort of my principal job, really. It’s thinking about the structure of the film and of each scene. Every director’s interaction in terms of coverage and camera direction is different. It’s the first thing David and I discuss: structure and pacing. It’s almost an editorial conversation in terms of what we’re going to provide Kirk [Baxter, the editor] and how each scene breaks down in terms of the pace,” he said.
“Then we watch the rehearsal, and I watch what he’s doing with Michael and what Michael’s doing. We look for the moments we need: the wide shots, the single setups, and what we need to address them. It’s quite scene-specific, but the edict on this movie was always to control the pace.”
When watching a Fincher movie, the joy is giving in to that control and mastery. When the killer is preparing for a hit, for instance, the pace differs from when it all goes wrong, and then that comforting LUT and algebraic camera direction deform into something less exact.
“When we’re in the killer’s space, the camera is precise and classic. When his world falls apart and he’s no longer in control, then the camera follows suit,” says Messerschmidt.
But any comfort you may feel, especially in the first 20 minutes of the movie, is a ruse and preparation for the ride to come. Messerschmidt explains their methodology, “We’re not and never are pointing the camera at action. We’re — especially with David — quite concerned about using the frame to provide information for the audience. Those things can exist at the edge of the frame, at the center of the frame, and the relative depth sort of correlates to their importance,” he says.
“We’re quite cautious about the art direction and the composition of each shot with consideration about how we’re spoon-feeding that information to the audience. It’s like, ‘You need to see this now, so we’ll include it in the frame; you need to see this reaction, so it’ll be in a close-up.’ So you need to see what he’s looking at so you’ll be in a point of view,” the DP continues.
“That all comes from a place of blocking, really. The way that David sets it all up is quite holistic, and we now have a shorthand. I can see what he’s doing with blocking, so I can see the POVs and close-ups, so we’re generally in agreement.”
Without spoiling what comes next, the film stays with the killer almost constantly but never empathizes with him. He is just a part of the tension that is Fincher’s most important gift to the audience.
In production terms, The Killer is as complicated as Fincher movies get. Ideas are suggested, and they are constructed or deconstructed differently if they don’t work. In the opening scenes, for instance, the apartment that the killer is watching isn’t real and was constructed thousands of miles away. There was just nothing in Paris that would work for the vision the director had.
Messerschmidt explains how they found the look they were after. “We’d gone to Paris looking for a location to do it all practically. We looked for a Penthouse apartment with a vantage point across the street that could be the killer’s lair, and we didn’t find it. We didn’t because we needed windows large enough to see all of this action clearly. The decision was made to build the apartment across the street; the final scene is an assembly of three different locations. The point of view when he’s looking out of the window at the cafe and all the actions on the ground is all real, and it is a square in Paris. The exterior facades are plates shot in Paris from the same vantage point. So we did a nine-camera setup looking out of this window that captured all that action and those plates. So we had matching light and light reference,” he details.
For the interior of the apartment, a set was built in New Orleans. “That’s where Michael’s action existed, and the window was real, looking out to blue screen. The penthouse apartment where the target is was a build on stage as well, on a different soundstage, and it was a series of three-walled sets built together so the actors could walk through.”
During post-production, the set was placed on top of exterior plates that had already been shot, blending in the façade in front. “The facade was entirely digital, and the only thing that was real was beyond the windows — in fact, there was no glass either in those windows; all the glass was CG,” he says.
“We had to previsualize the action as they were all shot at separate times. The target’s movements had to be worked out because Michael’s points of view were all relevant to the edit.”
The telescopic sights were practical long lens shots, but Messerschmidt had the scope sent to post-production so they could see firsthand how the scope’s optics worked. “You then get that kind of warping around the edges, the drift of the crosshair and all the things that it really does.” This allowed the production team to capture the imagery the killer sees through the scope practically on a sound stage.
It’s no surprise that so much deconstruction went into the movie. Fincher’s style has always been to find a way of getting the shots he wants, and this production is full of post-produced shots for moments when doing it practically would leave too much to chance.
The lens flares employed for some of the street scenes came as a surprise to anyone who has watched Messerschmidt’s work with Fincher. He hadn’t done them before, and the flares looked different but beautiful. “Were they anamorphic flares?” we ventured to ask.
“I’m not a fan of anamorphic; in fact, I’ve never shot an anamorphic film; I’ve always shot spherical,” he replied. “I’ve shot some anamorphic commercials but have also been a bit disappointed, to be honest. It seems a little bit silly to shoot anamorphic with a digital camera. But I do sometimes like the qualities that anamorphic flares produce, but I never get them when I want them and get them when I don’t want them.”
There was some digital flare work in Mank, Messerschmidt said, “but it was very subtle. The Bell and Howell lenses of that era had particular flare characteristics that we wanted to copy. The CG artists got good at it, and I told David, ‘What if we play with that a little bit?’ So we would intentionally put bright things in the frame, practicals, or sun hits on roofs of cars where we would do an elaborate CG flare.”
Working in post-production, “I would mark it up and say that we needed a blue streak here, which should be very aggressive, and the guys would paint it in and make sure it was just right. It was an enjoyable experiment. It was cool to go in there and art direct them,” he said, adding, “I also rarely use diffusion on a lens, but there were moments in the Dominican Republic that I thought it would be interesting to try.”
For scenes the filmmakers wanted to appear very humid, they once again went digital, using a DaVinci Resolve plugin called Scatter. “It’s a bit of post-production cinematography. I would look to do that again. I think it’s all about control; the fear with that decision comes if you’re working with a team you don’t trust to implement it the way you want, and that is not a fear I have with David. I’m pretty involved with that process,” he explains.
“There are certain effects that you have to do optically, but I’m not nostalgic about it like some people. I don’t believe you must be incredibly dogmatic about some things; it’s about the result.”
Veering back to what might be safer ground, and to more practical camera work, we inquired about the momentous fight scene towards the movie’s end and its superb handheld look.
“That was quite an undertaking and a culmination of many people’s work, starting with the stunt coordinator. It was, of course, heavily choreographed and not something that was shot off the cuff. The thesis was that we wanted the audience to be geographically centered in their understanding of the space, so we deliberately didn’t disorientate them; they always knew where we were in the house,” he said.
“Each room in the house has a distinct color palette that we key up in the beginning so you understand that you’re in the kitchen, gaming room, or bathroom. We go through the process of this fight, and we revisit all these spaces in reverse until he arrives back where he dropped the gun.”
However, Messerschmidt says, “there’s very little real handheld in that sequence. Almost all of that is post de-stabilization. It’s nice because we can go in there and art direct the level of shake by saying we can slow down a little bit here, be quicker here. Sometimes I find that aggressive handheld is very hard to judge on the set and keep consistent across the five or six days of shooting that it took.”
For Messerschmidt and Fincher, this philosophy of only delivering the best experience for the audience is like a mantra. “It’s fun to see what we can do and get away with,” Messerschmidt concludes. “To be honest with you, anyone can shoot with a handheld shaky camera pointed at people fighting, and there’s a long history of success with that technique in action movies. There is a playbook for that. We wanted to see if we could do it differently.”
Someone tracking the conflict raging in the Middle East could have seen the following two videos on social media. The first shows a little boy hovering over his father’s dead body, whimpering in Arabic, “Don’t leave me.” The second purports to show a pregnant woman with her stomach slashed open and claims to document the testimony of a paramedic who handled victims’ bodies after Hamas’ attack in Israel on October 7, 2023.
Even though these videos come from different sides of the Israel-Hamas war, what they share far exceeds what separates them, because both videos, though real, have nothing to do with the events they claim to represent. The clip of the boy is from Syria in 2016; the one of the woman is from Mexico in 2018.
Cheap But Effective Fakes
Recent headlines warn of sophisticated, AI-driven deepfakes. But it is low-tech cheap fakes like these that fuel the latest round of disinformation. Cheap fakes are the Swiss army knife in the propagandist’s tool belt. Changing a date, altering a location or even repurposing a clip from a video game and passing it off as battlefield combat require little know-how yet effectively sow confusion.
The good news is that you can avoid being taken in by these ruses — not by examining the evidence closely, which is liable to mislead you, but by waiting until trusted sources verify what you’re looking at. This is often hard to do, however.
In the largest survey of its kind, 3,446 high school students evaluated a video on social media that purported to show election fraud in the 2016 Democratic primary. Students could view the whole video, part of it or leave the footage to search the internet for information about it. Typing a few keywords into their browsers would have led students to articles from Snopes and the BBC debunking the video. Only three students — less than one-tenth of 1% — located the true source of the video, which had, in fact, been shot in Russia.
Your Lying Eyes
Why were students so consistently duped? The problem, we’ve found, is that many people, young and old alike, think they can look at something online and tell what it is. You don’t realize how easily your eyes can be deceived — especially by footage that triggers your emotions.
When an incendiary video dodges your prefrontal cortex and lands in your solar plexus, the first impulse is to share your outrage with others. What’s a better course of action? You might assume that it is to ask whether the clip is true or false. But a different question — rather, a set of related questions — is a better starting place.
Do you really know what you’re looking at?
Can you really tell whether the footage is from atrocities committed by Russian forces in the Donbas just because the headline blares it and you’re sympathetic to the Ukrainian cause?
Is the person who posted the footage an established reporter, someone who risks their status and prestige if it turns out to be fake, or some random person?
Is there a link to a longer video – the shorter the clip, the more wary you should be – or does it claim to speak for itself, even though the headline and caption leave little room for how to connect the dots?
These questions require no advanced knowledge of video forensics. They require you only to be honest with yourself. Your inability to answer these questions should be enough to make you realize that, no, you don’t really know what you’re looking at.
Patience Is a Powerful Tool
Social media reports of “late-breaking news” are not likely to be reporting at all; they are often pushed by rage merchants wrapping an interpretation around a YouTube video accompanied by lightning bolt emojis and strings of exclamation points. Reliable reporters need time to establish what happened. Rage merchants don’t. The con artist and the propagandist feed on the impatient. Your greatest information literacy superpower is learning to wait.
If there are legs to the video, rest assured you’re not the only one viewing it. There are many people, some of whom have mastered advanced techniques of video analysis, who are likely already analyzing it and trying to get to the bottom of it.
You won’t have to wait long to learn what they’ve found.
Editor Shelly Westerman Solves the (Post Workflow) Mysteries for “Only Murders in the Building”
Editing a mystery can be a delicate business. A reaction shot held a few frames too long can be a giveaway; too short, and the eventual payoff could feel too obvious. This is challenging enough in a TV episode or a movie, but even more so in a ten-episode arc.
But that’s the kind of work the editors of “Only Murders in the Building” are responsible for. Season 3 is streaming now on Hulu.
The popular series, starring Steve Martin, Martin Short and Selena Gomez, is about to finish its third season, and editors Shelly Westerman, ACE; Peggy Tachdjian, ACE; and Payton Koch not only have to keep the reveals coming amid the show’s often absurdist humor and moments of pathos and drama, but they also have to attack some major musical numbers for the show-within-the-show “Death Rattle Dazzle,” at the heart of the season’s story arc.
In October, Westerman spoke with writer and film historian Bobbie O’Steen on stage at NAB Show New York’s Insight Theater. The duo discussed the “meticulous art of film editing.” Watch their full conversation (below).
The work of editing the series involves close collaboration among executive producers John Hoffman (the showrunner), Dan Fogelman and Jess Rosenthal; the writers, directors and actors; and the trio of editors, each of whom takes responsibility for particular episodes.
The editing process starts before cameras roll, when they receive that week’s script and virtually attend the table read in New York. Westerman explains, “Once you hear the words spoken, you hear the rhythms, you start to get an idea in your head, and you can begin visualizing an episode.”
This is followed by a concept meeting, featuring all the department heads. “We talk at a high level about the look and tone of the episode, and then we have a tone meeting specifically with the episode director and executive producers, and we go through the script scene-by-scene and talk about what’s happening.
“The director will propose all their questions and editors will chime in with questions, so there are a lot of very helpful discussions that happen early on.”
Each director helms two episodes, which are cross-boarded and shot in New York, usually with six or seven days allotted for each.
“Once they’re shooting one of your episodes,” Westerman explains, “we’ll start to get the dailies. What we see might match everything we’ve talked about to that point, or they might have discovered things on set that made the scenes go a very different way. But at least all the preparation lets us start with a grounding from which to work.”
Editors are given roughly two days to get their editor’s cut together and sent off to the director. “Then on a half-hour show like ‘Only Murders,’” she says, “the directors get about two days to work with the editor, before we need to turn that [cut] over to John and the other executive producers for their feedback.”
DRILLING DOWN TO THE WORKFLOW
Westerman, who a few years ago was adamantly opposed to the idea of remote editing (“I always said you must be in the room for creative collaboration,” she’d frequently asserted), has completely revised her feelings on the subject. She acknowledges that she wouldn’t even have been able to take this job if it weren’t for the ability to work while also spending time in Florida caring for her parents. In fact, all three editors and each of their assistants work remotely.
While all work remotely, they are not actually running the edit on their local computers or keeping any of the media where they are. The Avid workstations and media all sit securely inside the facility Pacific Post, where they are networked together via Avid NEXIS shared storage.
Westerman, who works on a Mac “trashcan” wherever she’s set herself up to work, uses Jump Desktop to access her hardware and the network, as do the other two editors, though they happen to work on Mac Minis.
When dailies are ready, Westerman’s assistant, Jamie Clarke, is the first one notified. He will also have access to camera and sound reports and script notes, and he will QC the material to ensure that it’s all in sync and there are no technical issues.
Then Clarke organizes the scenes within Westerman’s system. He groups anything shot with more than one camera (most scenes in the show are covered by two, and some of the musical numbers by three) into Group Clips, and he loads footage into bins to her specifications (each editor has their preferred method of organizing material).
“I don’t get the scenes in order,” Westerman says, “but I’ll start to build sequences pretty quickly, so that I can see how it’s flowing. By the time they finish shooting the episode, I’ve got a rough sketch of the acts put together. Then, for my two-day editor’s cut, I’m really trying to polish and tighten.”
MAINTAINING A SEASON-LONG MYSTERY
Westerman received a detailed briefing from Hoffman prior to commencement of production for the season. This provided her with a broad overview of all the episodes, “so we had some idea of what was going to happen as we got into the season.”
But that isn’t the only way to proceed, Westerman acknowledges. “Peggy said she didn’t want to know who the killer was,” she recalls. “She felt it helped her with the surprises because she was surprised, as well.”
Regardless, in a story propelled by constant revelations and clues, there needs to be an ongoing overview. Hoffman and the other executive producers, Westerman says, “will sometimes look at a version we present and say, ‘Hey, we need to see this in Episode Three because we’re going to refer to it in Episode Six.’ So then, we go back and fine-tune the episode.
“The moment that Ben Glenroy [Paul Rudd] falls down the elevator shaft and the [three lead characters] run out of the elevator, turn back around to see what happened and Mabel says, ‘Are you fucking kidding me?’ — that scene comes back into play in a later episode where she’s looking at a hanky Ben is holding. I didn’t use that and one of the executive producers said, ‘The hanky’s important. We have to see her looking at it at that point.’”
There is also a moment where Charles (Steve Martin) gets into a fight on the fateful opening night that kicks the season off.
“They shot the fight scene for use in Episode Nine, but then it turned out I needed to use some of it in Five, and Payton needed some of it for Six. But Peggy hadn’t cut Nine yet, so we all wound up pulling from her footage, using bits and pieces from the fight scene that worked for our episodes. Later, we went back to make sure we were all in sync with one another in terms of what we were using from the scene.”
This back-and-forth happens frequently, particularly for the recaps that show important moments from previous episodes. “One of us might do a recap and another one will say, ‘You’ve got to change that. That isn’t in the show anymore.’”
POLISHING PICTURE AND SOUND
Long gone are the days when editors turned in rough cuts with “insert effect here.” The final sound editing and VFX creation will continue after picture is locked, but directors and producers expect the editors to deliver scenes that are complete, and work as is. So much of that work commences while Westerman and the other editors are still sketching out scenes.
“The schedule is so accelerated compared to a feature,” the editor notes, “so as I’m going along and stringing together and polishing scenes, we’re also doing sound work, adding score, adding VFX. We’re doing all of that together so that by the time I get to the end of my editor’s cut, I’m hopefully in pretty good shape with a polished cut to present to the director.”
Editing assistants are generally skilled at basic VFX work, such as wire removal, and the show has a VFX artist on staff from the beginning of production who can step in and handle quite a lot of the work as it comes up.
“There’s a scene where one of the characters is in a basement threatening Charles and Mabel with a blowtorch,” Westerman recalls. “Of course, they couldn’t shoot with a real flame for safety reasons, so the VFX artist handled that.”
Sound Supervisor Matt Waters gets involved early on to build a wide variety of sound effects. As the season progresses, there are more and more sounds that can be re-worked and re-used. Fairly early in the season, the editors already had access to quite a few sounds of the theater where much of Season Three takes place. SFX such as specific doors opening and closing, and hallway background sounds were accessible to the editors and sound editors.
CUTTING MUSICAL NUMBERS
While the musical numbers are meant to be dramatic and advance the plot, they need to be approached differently from regular dialogue scenes, especially as “Death Rattle Dazzle” really gets on its feet and the routines get more elaborate.
Cutting musical sections uses a different set of muscles, Westerman explains. “These are big numbers, and big Broadway people like Sara Bareilles, Michael R. Jackson, Marc Shaiman and Scott Wittman were stepping in to help with the songs, so I’m not going to lie, it was intimidating at times.”
These scenes are generally shot with three cameras, and Westerman not only Group Clips all the angles from a take so she can watch them together, but she also has her assistant build what she calls a “super group” comprising all the takes of a certain setup, as well as all the coverage, so she can observe every possible permutation of picture for each moment of the song.
When there is singing by, say, Steve Martin or Meryl Streep or one of the other performers, the songs were generally pre-recorded by the artist, who would then sing live during the shoot while being fed the playback through an earpiece, so both the playback and the live audio are available on their own clean tracks.
This approach leaves open the possibility of using the prerecorded audio or the live audio, depending on which plays best. In fact, many of the numbers are the result of the music and sound departments cutting extensively, sometimes syllable by syllable, to come up with the very best rendition.
“The performers went in and did recordings of all the songs a while before they were used,” the editor explains. “We get those early on so we can listen to them and get them in our heads and know the songs themselves. Then, once we get the scenes, we start assembling those right away because they take a little bit longer to craft. They’re technically more challenging. I’ll get it laid out first, and then I can go back and find these little moments that help tell the story.”
Once the musical scenes are cut, the music production team and music editor Michah Liberman go in and re-work the sound, sometimes alternating between the prerecorded and the live versions.
“Sometimes, they’re literally cutting syllable by syllable in a very exacting, precise way. Finally, our sound mixer, Lindsey Alvarez, ties it all together.”
“There is a lot of teamwork on the show,” Westerman sums up, “and it’s been rewarding and fun to work on a show this good and be part of that collaboration.”
Translating “The Last of Us” From One Screen to Another
BY MICHAEL MALONE, BROADCASTING+CABLE
HBO’s adaptation of Naughty Dog’s wildly popular post-apocalyptic video game “The Last of Us” is a notable (and successful) example of seeking out IP from nontraditional sources.
(The Last of Us is set in a post-apocalyptic America, 20 years after a fungal infection has turned much of the population into zombies. Neil Druckmann created the show alongside Craig Mazin.)
Working with such detailed source material means you essentially need an army, and Mazin confirms that “The Last of Us” takes “thousands of people” to make.
Mazin was joined on the NAB Show Main Stage by several of them for a panel moderated by THR’s Carolyn Giardina. The conversation also featured cinematographer Ksenia Sereda; editors Timothy Good, ACE and Emily Mendez; VFX supervisor Alex Wang; and sound supervisor Michael J. Benavente.
Mazin said, “There’s no way for a film to be by one person. There’s hundreds of people — in our case, thousands of people.”
Mazin described The Last of Us producers, cast and crew as “a big family.”
Mazin spoke of the “luck” involved in gathering the right producers to work on the show and how listening to them in interviews and chats tells him a lot more than their credits do. “I like talking to people,” he said, “and hearing their passion for things.”
Challenges of Adapting
Season one was shot in Alberta, Canada. The producers discussed the challenges of adapting the popular video game to a series.
Alex Wang, VFX supervisor, described the game’s look as “so beautiful and so immersive. How do we use that as inspiration?”
Cinematographer Ksenia Sereda said the producers aimed for a balance between borrowing from the game and giving viewers something fresh. “We wanted to preserve the most iconic parts,” Sereda said, “but at the same time, we did not want to exactly copy the look.”
She spoke of the “massive” variety of choices for cameras and lenses, and said the ARRI ALEXA Mini gave the shots a realistic feel and helped the viewers get closer to the characters.
Mazin quipped: “I don’t understand any of that. I’m glad you do.”
Editor Timothy Good said he’d never played the video game before. Editor Emily Mendez, on the other hand, was a big fan. The two brought together their different perspectives to give the show a distinctive feel.
Key Moments
The editors spoke of the key moments in season one. Pedro Pascal’s Joel lost his teenage daughter in the pilot, and is reluctant to open himself up to another teen girl as he gets to know Bella Ramsey’s Ellie.
Ellie’s book of puns makes him smile for the first time in eons. “You can see the transformation between the two characters and how they sort of come together,” Good said.
Mendez mentioned Ellie stitching up Joel’s stomach later in the season, and the effort the producers went through to give the scene extra impact. “You’re with her in that moment,” she said.
Michael J. Benavente, sound supervisor, spoke of “a quiet world” in the show with no freeways, no kids on playgrounds, no airplanes. The viewer hears snowfall in one episode. “It really helps the story of the people,” Benavente said of the hushed vibe. “When you hear what they’re hearing, when you feel what they’re feeling.”
Season two will shoot in British Columbia. “This is what I do — I do The Last of Us,” said Mazin with a smile. “I couldn’t be happier.”
Editing “All of Us Strangers:” Shifts Between Real and Imagined
TL;DR
Director Andrew Haigh and editor Jonathan Alberts delve into the making of “All of Us Strangers,” revealing how the film holds a deeply personal significance for both filmmakers.
They explain that the tone of the film was tricky, noting the challenge of blending supernatural elements into its otherwise straightforward drama.
Haigh and Alberts wanted the audience to feel dislocated and to consistently question the story’s reality, and they found music a creative help in achieving this effect.
Loosely inspired by Taichi Yamada’s 1987 novel Strangers, Andrew Haigh’s All of Us Strangers has garnered critical acclaim as a romantic-ghost story with a deeply personal touch. The British writer-director explored the film’s themes during a panel discussion at the New York Film Festival, describing it as an exploration of the desires, fears, and traumas unique to a specific generation of gay men.
“It was the most expensive therapy I’ve ever done. And it did feel like therapy, in many ways. The story is clearly not autobiographical, but it definitely does come from a personal place. I wanted to tell an experience, as I see it, from a queer experience but not just my experience.”
The film is about Adam (Andrew Scott), a melancholy screenwriter living alone, who meets and begins a passionate relationship with the more extroverted Harry (Paul Mescal). At the same time, Adam begins another parallel journey to confront his troubled past and perhaps reconcile his unsettled present.
“A lot of the elements in the story are personal to me,” he revealed. These include filming in Haigh’s actual childhood home, which he last visited 42 years ago.
“But it was always about trying to tell a wider story about what it means to be a parent, what it means to be a child, what it means to be a lover and how we try and negotiate those complicated relationships that kind of come and go through our lives.”
Haigh’s script notably diverges from the original source material, where the character played by Paul Mescal was originally written as female.
“It has a different type of thing going on which works as a traditional ghost story,” he told NYFF programmer and panel moderator Florence Almozini. “It really does fit in with that traditional Japanese kind of ghost story style, which I like. But I knew that wasn’t the film I wanted to make. That wasn’t what was interesting to me about it. I wanted to find a more grounded reality of the story and then take it to somewhere different.”
In the film, Adam is preoccupied with memories of the past and finds himself drawn back to the suburban town where he grew up, and the childhood home where his parents (Claire Foy and Jamie Bell), appear to be living — just as they were on the day they died, 30 years before.
Haigh’s regular collaborator, editor Jonathan Alberts, found the script resonated personally with him too, telling Deadline’s Matt Grober that it felt like it was written with him in mind.
“We shared the experience of growing up in the eighties, growing up gay, kind of growing up with the specter of AIDS happening and trying to deal with all sorts of feelings of grief or trauma and shame and all of these things.”
While All of Us Strangers was tricky, both tonally and as a story rooted deeply in internal experience, another challenge of the project for Alberts was figuring out how to grapple with the way in which the protagonist ends up “slipping between these worlds of the 1980s and contemporary London” in the story.
“We wanted the audience to feel dislocated, but anchored, not mired in confusion, but consistently questioning, is this real? Is this not real?” says the editor. “I feel like you always want to have an audience ask those questions, and you want to keep them active, and to keep putting the puzzle together.
“But when you’re creating a film that is essentially a bit of a puzzle, it’s always a question of, is this puzzle going to fit together? Because you can create a puzzle that doesn’t quite fit together, and people are just like, ‘I don’t know what’s going on.’”
Alberts came to All of Us Strangers after collaborating with Haigh on numerous projects over the last decade, from films like Lean on Pete and 45 Years, to shows like HBO’s Looking.
“We’ve been working for about 10 years together. So when we’re busy working on a television show or film, he’s busy typing in the background, and I’m cutting. That’s when I first hear about the script. Then, typically, he’ll share with me a few months later.”
When they get to the first cut of the film, about a week after shooting, he says the director and he never sit in the same room and watch it together, “because you’ve worked so hard, it’s like you’ve spent a lot of time yourself and your assistants putting it together. It’s an extremely vulnerable time for a director and seeing all the problems or seeing all the things they didn’t quite get.”
Alberts explains that the tone of the film was tricky because it is not a straightforward drama but one that introduces supernatural elements.
“We never wanted to be moving to a genre, we always wanted to keep it in a very subtle space. And it’s a very delicate line. I think music helped to draw that out.”
Through screenings they experimented with a lot of different notes to find what was working and what was not before hiring a composer.
“When we were shooting this film in London I would take the tube and the train every day, and I was listening to this Italian composer Caterina Barbieri, which we ended up using as a temp soundtrack. She’s an amazing composer. We met with her and we thought about her doing a score. But eventually we kind of went in a different direction [hiring London-based French pianist Emilie Levienaise-Farrouch]. But that evolved over several months and many discussions.”
Haigh adds, “It’s obviously quite an unusual film and I was always very scared that the central conceit wouldn’t work. There are a lot of turns in the story that I was worried would not work. I wanted, even in the present day of the story, to feel slightly shifted from reality, even though that is based on an apartment block in London. It was really important to me that the tone just felt [to an audience] like ‘I’m not quite sure when and where this is set’.
“We thought really long and hard about trying to create a tone that made you feel like you were somehow separate from time. And that would allow you to understand the kind of conceit of the story and make it feel real when you suddenly go back and see parents.”
“Poor Things:” Making This Crazy Fantasy a Reality
TL;DR
The Searchlight Pictures release and Venice Festival Golden Lion winner “Poor Things” is an awards season favorite on multiple counts, not least the cinematography of Robbie Ryan.
Ryan explains the various wide-angle and vintage lenses he used to shoot within large-scale “composite” sets built using virtual production techniques.
Production designer James Price built four large composite sets at Origo Studios in Hungary, which deployed painted backdrops and cutouts as well as LED walls.
Portions of the film are also shot in black and white, with the final decision to do so only agreed upon at the last minute.
Director Yorgos Lanthimos says references for the creative team included “Dracula” and the films of Fellini and Fassbinder, as well as Powell and Pressburger — all famed for their surrealist and extreme screen visuals.
In what critics are hailing as his boldest vision yet, auteur Yorgos Lanthimos (The Lobster, The Favourite) delivers Poor Things, a punkish Frankenstein update that metamorphoses into a feminist fairy tale.
In the Searchlight Pictures release and Venice Festival Golden Lion winner, Emma Stone plays a peculiar, childlike woman named Bella who lives with a mysterious scientist and surgeon (played by Willem Dafoe). The movie is set in an alternate version of the 19th century and based on the 1992 novel by Alasdair Gray.
“I read the book around 2009 and immediately fell in love with it,” the director explained during a Q&A session following a screening at NYFF. “I hadn’t read anything like it. And it was mainly the character of Bella Baxter that I was drawn to.
“I just thought she was just this incredible, unique human being. The world of the novel itself, all the characters and the premise of it allowed you to explore the story of this woman who has a second chance in life to experience the world in her own terms.”
Lanthimos says that other references for the whole creative team included Dracula and the films of Fellini and Fassbinder, as well as Powell and Pressburger — all famed for their surrealist and extreme screen visuals.
In the same on-stage discussion, production designer James Price explained how he built four large composite sets at Origo Studios in Hungary, “which become something more like an immersive set, like a Disney theme park,” he said. “Nobody builds sets this big anymore.”
They didn’t use vast canvases of green screen. Instead they deployed painted backdrops and cutouts, “techniques that nobody ever does anymore,” Price said.
Costume designer Holly Waddington, also on the panel, said she drew inspiration for fabric color from anatomical drawings and bodily fluids, “the yucky ones and the beautiful ones and everything in between… pinks and saturated reds and lilac sort of tripe colors. I always tried to relate it to something a bit revolting.”
For The New York Times series “Anatomy of a Scene,” Lanthimos dissects a sequence that takes place in a restaurant in Lisbon, and explains how Emma Stone with choreographer Constanza Macras devised her deliberately awkward dance moves.
Arguably, the real star on the technical side is Irish cinematographer Robbie Ryan, ISC, BSC, working on his second film with Lanthimos after being nominated for an Oscar for The Favourite.
Describing the director himself as “an astral cinematographer,” Ryan says that the desire was always to shoot on 35mm.
“That sensibility is something that kind of lands with the rest of the film where you’ve kind of got a whole sort of universe that is unique,” he says in an interview with Denton Davidson for GoldDerby. “Yorgos wanted to create a world for Bella to be in that nobody else would see. [We see it] only through her eyes.”
The DP’s work started with three months of prep to try out new film stocks and various lenses. “We did one test where we had about 50 lenses that we had to look through, and we had to get through that in one day,” Ryan told John Boone at A.frame. “It was a process of evolving and discovering as far as sorting the language for what we were going to do.”
Shots with an extreme wide-angle 8mm fisheye lens were used to explore Baxter’s lab, a type of shot Ryan also employed on The Favourite. This time the wide angles give the impression of almost looking through a peephole or a magnifying glass.
“This is an extension of the wide-angle language that Yorgos has been developing over other films,” he told Davidson. “We wanted [to recall] the old vintage photography, where you would see a lot of vignette sort of kind of effects, because the big plate cameras that would have been used in early photography had a lens that didn’t cover the full width of the glass plate that would have been used for the camera.”
The extreme wide-angle lenses paired with a 35mm camera allow the viewer to feel like “you can almost step into the world,” he says.
Vintage Petzval lenses, originally ground in 1910 for projectors, were also deployed for period effect.
“They’ve been rehoused, which made it possible to shoot portraits with them as a camera lens,” he told Screen Rant’s Caitlin Tyrrell. “They had this beautiful way of creating a soft fall-off, a shallow focus and a kind of a crazy bokeh. But they evoked a lot of early photography. It makes me feel that we are connected a bit to the old world of photography, almost painterly. I remember the production design team mentioning Hieronymus Bosch quite a bit in prep.”
Lenses from VistaVision, another antiquated filming technology, were adapted for use in a specially constructed “Frankenstein camera,” Ryan told an audience at Camerimage, as Variety’s Will Tizard reports. This achieved the desired period look but was tricky to work with, he said.
He also said that the results at times bordered on “mystical,” citing an incident when the camera’s “crap batteries” began to run down as he was filming Bella awakening from the dead. The film’s slower transport speed resulted in a slightly sped-up Stone sparking to life in a way no one had quite anticipated.
With Lanthimos adamant that he wouldn’t do additional dialogue replacement in post-production, it also meant the VistaVision camera could only be used for scenes where capturing dialogue on set wasn’t an issue.
Augmenting these old-school techniques and kit was the use of a virtual production screen to help create the views from the cruise ship.
Ryan called the 70-meters-long by 20-meters-high wall a “moving painted backdrop” in an interview with James Mottram for British Cinematographer. “For the cruise ship, Yorgos was always very keen to try out an LED backdrop, because then we could have the waves moving and the clouds moving,” he said.
Even though the set was small enough relative to the wall, shooting on wide angle meant they had to mask the ceiling. There were also issues with needing to illuminate the foreground set with a lot of light because he was shooting on negative film.
“That spillage of light is really painful, because it makes the LED wall lose its punch,” he revealed. “So, you’re kind of having to balance out so much all the time… it was a technical head wreck to try to keep the light on the deck, but not on the screen. And the fact that the deck of the ship was only probably four meters away from the LED wall made it really very difficult to stop this light spill, which made my life hell!”
Even the film stock itself was pushed to the extreme. Portions of Poor Things are shot in black and white, while Lanthimos was keen to shoot other sequences using Ektachrome. Because he wanted Ektachrome in 35mm, Kodak had to manufacture it specifically for the film.
“They only ever made it a 16mm Ektachrome, so Kodak cut it to 35mm and we processed it as reversal for reversal,” he explained to A.frame. “That’s something that’s never been done before. It’s actually a lot more versatile a stock than I thought it would be, but when we were filming with it, we were under the impression that if you were to underexpose, it would be irretrievable. So, I was [thinking], ‘Oh my God, if this stock comes back underexposed we’re in trouble.’ So you had to get it right. But the results were beautiful.”
Another challenge for Ryan was learning to shoot a lot of the picture on zoom lenses. Since he also operates the camera he had to perfect zoom control, as he explained to A.frame.
“For me, the wide angles are not difficult. I just put the wide angle on and everybody else — production design and sound — has a nightmare. The challenge for me, camera-wise, was the zoom, because I didn’t want to mess up any of the acting. I got the hang of it, but it was still nerve-racking and it pushed me to my limits.”
The studio-bound film was also unusual for a DP who typically shoots on location. This presented particular challenges around the lighting.
“The great thing was it kind of still felt like we were on a location because they just built the locations in this amazing detail,” he told Denton. “So everything in front of the camera is there. It was the same approach I would do normally, just I had to do a lot more lights and we had to build skies for cities like Paris and Lisbon.”
The wide-angle scope was so extreme that the fantastically detailed Victorian-style sets had to be created to all but completely wrap around the camera — which also made hiding lights and sound gear a challenge.
“They created all these composite sets, where you can walk in the front door and every little thing is shootable,” Variety reports he said at Camerimage. What’s more, Ryan added, is that sets don’t fly away to make space for the camera as it passes — instead, it must move through real rooms, halls and up and down stairs.
Choosing to shoot half an hour of the film in black and white and on B&W film stock, rather than shooting color and converting to B&W in post, was another key creative decision.
Davidson gets Ryan to talk about this in relation to the opening shot of Emma Stone dressed in an elegant blue outfit which then cuts to the black and white footage.
The reasoning, says Ryan, “and I’m probably gonna get in trouble for saying it,” is that if audiences saw a black-and-white image at the beginning of the film they might think the entire film was going to be black and white, and might tune out.
“I think we put a color shot at the start so everybody will think it’s a color film, and then it goes to black and white, [then] goes into color again. That was sort of the theory behind that,” he continued.
“What I love about the use of color and black and white in the film, is that usually when you see a film, flashbacks are in black and white. But in this film, the film is in black and white and the flashbacks are in color.”
Ryan revealed to Movieweb that the decision to shoot black and white came just nine days before principal photography. “Yorgos said he had to go ring the producers at Searchlight, and it was like touch and go whether they would let him do it.”
December 11, 2023
How Martin Scorsese and Thelma Schoonmaker Reworked and Reframed “Killers of the Flower Moon”
TL;DR
In her 22nd collaboration with Martin Scorsese, Thelma Schoonmaker, ACE, talks about the process of changing the film midway into production to focus on the central love story.
The celebrated editor discusses how they test cuts both with each other and audiences.
She and Scorsese are longstanding cineaste curators — and financiers — of the legacy of the great mid-20th-century British filmmaking duo Michael Powell and Emeric Pressburger.
Veteran editor Thelma Schoonmaker, now 83, is a graceful, generous and fascinating interview subject as she discusses Martin Scorsese’s Killers of the Flower Moon.
“The love story is the basic thing that Marty decided to focus on,” she told Matt Feury of The Rough Cut podcast. “The idea about the film changed because Leo DiCaprio decided he would like to play Ernest instead of the role of the FBI man [Jesse Plemons]. That was a dramatic change in the script, as you can imagine, and they were still working on that as we were shooting. Lily Gladstone and DiCaprio were working with Marty to create scenes that would show the evolving love story.”
She describes how the film teases out the complex character of Ernest, as someone who seems both to have genuine affection for his Osage wife, and yet is capable of facilitating murder.
“The audience enter this world and learn and experience things through Ernest, but we’re not really aligned with him because we only get a true sense of who he is, the atrocities and the violence, over time.”
The opening scene, for instance, depicts Robert De Niro’s character sizing up his nephew, much as the audience is.
“The way we worked on the rhythm of that scene, was to make sure that we sometimes paused for a few seconds, more than you normally would. Because you see that De Niro’s trying to make up his mind. What questions should I ask next to find out if this guy’s going to be a tool? As Ernest is. It’s obvious in the film that he doesn’t read, for example. He’s been horribly educated, whereas his uncle is much better educated.”
She and Scorsese tend to screen the movies they work on in multiple different cuts, fine-tuning in reaction to select audiences, as she explained to Craig McLean at Esquire.
“With our movies, we do rough cuts — sometimes as many as 12,” she said. Those cuts-in-progress are screened for people in her and Scorsese’s New York and Hollywood inner circles. “Then we start opening up to people we don’t know. Then we go to bigger audiences. And we learn from what we’re hearing, and then we do another cut.
“Then we screen again, and then we do another… we’re very lucky. A lot of editors aren’t given that kind of time, which I think they should be.”
Schoonmaker explains to Art of The Cut, “The fact that somebody who doesn’t know the movie is in the room with you affects you deeply. You’re very very conscious of people moving, or do they laugh? Or don’t they laugh at the right place? Or the wrong place? How are they feeling afterwards? Of course, we do talk to people at length afterwards to find out how they’re reacting.”
Sometimes there are big changes in direction — as was the case for Killers of the Flower Moon. “We usually do move things around when editing, except for Goodfellas where everything was perfect right from the start,” she told Feury. “That movie was like riding a horse. It knew where it wanted to go. We dropped only one shot. That film was just there.”
Honoring the Osage and Recognizing Powerful Scenes
Killers of the Flower Moon is dedicated to the memory of musician and composer Robbie Robertson, someone who’s had a hand in the music in various ways for many of Scorsese’s films since he recorded The Last Waltz (featuring The Band’s last concert) in 1976.
Schoonmaker says the score’s throbbing bassline was something that Robertson came up with. “This culture, as you see in the last shot, and the dances that they do, are very sacred, you have to be invited to them, they’re not tourist things. So the drums are incredibly important. The Osage actually consider the drum a person as they do the pipe.
“So I think that Robbie being half Mohawk, Marty definitely wanted an indigenous person to do the music, and felt that this would drive the movie all the way through to the end. You know, it also is probably blood running through your veins. The fact that he continuously employed it meant that in his mind, he was giving it to Marty as a way to move the film along.”
In addition to the scoring, Scorsese wanted to emphasize the indigeneity of the characters. He did so in part by including several pivotal scenes in which Osage is spoken but no subtitles are provided.
Schoonmaker tells Steve Hullfish in an episode of The Art of The Cut, “There are many times in the movie where you do hear the Osage purely, which is a very, very good decision which I resisted at first. Not hearing it by itself. You don’t need to know what he’s saying in the wedding ceremony, for example, you know, he’s marrying them, right?”
However, in a late scene in which DiCaprio and Gladstone’s characters are arguing, Scorsese again opted for no subtitles, which Schoonmaker says “was a very brave and correct decision.”
Although Schoonmaker and Scorsese have worked on many projects together over the years, her instincts don’t always mesh with his, at least initially.
Scorsese knows, she says, he “could trust me to do what was right for the movie, that we weren’t going to have ego battles in the editing room about who’s right and who’s wrong.
“So when there ever is a really major disagreement, which is rare, I am always more than happy to show him what he has asked for. And then if I want to show him options, then I show him options. And he’s very happy to look at those. And then we’ll decide which one is best. But it’s never a battle.”
But, she says, “There’s never a problem when something’s that powerful. There’s never a question” of what to do, referring to DiCaprio’s performance in the courtroom scene.
For maximum impact, Scorsese instructed Schoonmaker to “cut away only when we absolutely have to. I want to just hold on Leo for the entire duration of the testimony because he is so brilliant.
“And he is. So we only cut away when the prosecutor points to De Niro and says he is now talking about this man. And that switch pans over to De Niro because Leo has just incriminated him.”
(And by the way, Schoonmaker is adamant that the film should be viewed in one sitting, with no pauses, even at home, to fully appreciate these cuts, regardless of the 206-minute runtime.
Dazed Digital’s Nick Chen reports that Schoonmaker was incensed to hear some theaters screened the film with an intermission: “That’s really horrible. There’s a build. It’s very important. There’s a long build that you have to feel. If you cut it, you’re not going to feel that! Don’t pause it!”)
Powell and Pressburger’s Oeuvre
Her custodianship with Scorsese of the film œuvre of her late husband Michael Powell (The Red Shoes, A Matter of Life and Death, Black Narcissus — with Emeric Pressburger — and Peeping Tom) crops up time and time again. The BFI in London recently held a career retrospective including newly minted versions of films like The Red Shoes, and Schoonmaker is a more than able commentator.
She tells Feury, “Michael Powell and Emeric Pressburger used to do what they called placing little bombs in a movie, little things that you may just barely notice that explode later. That is something that Marty would have noticed in his, you know, devouring of the Powell and Pressburger films.”
“Marty says The Red Shoes is in his DNA,” notes Schoonmaker to Esquire. It’s a film that she first saw aged 12 while living on the Caribbean island of Aruba, in an “American colony” created by Standard Oil.
Returning to the U.S., aged 15, she tuned into a “wonderful TV show called Million Dollar Movie, where they ran one film nine times a week.” She later learned of another avid viewer: “Marty would [try to] watch a Powell and Pressburger movie [all] nine times unless his mother said: ‘If you don’t turn that off, I’m going to start screaming.’
That’s because, with the rise of realism in British cinema — “kitchen sink dramas” such as Saturday Night and Sunday Morning (1960) and This Sporting Life (1963) — the films of Powell and Pressburger fell out of fashion in the UK. They were viewed as conservative, colonial, old-fashioned.
Key in that canon is Powell’s transgressive horror from 1960, Peeping Tom. In 1979 Scorsese arranged for Peeping Tom to be shown at that year’s New York Film Festival, and then paid for its redistribution in U.S. cinemas.
To mark the moment, Scorsese held a dinner in New York in Powell’s honor. He invited along the editor he’d hired to cut his latest movie, Raging Bull, partly on the advice of Powell.
“I was just so struck by Michael,” recalls Schoonmaker, who had last worked with Scorsese on his debut feature, 1967’s Who’s That Knocking at My Door.
“He was so extraordinary. He came back to talk to me — I was editing Raging Bull in a bedroom, and we had film racks in the bathtub.”
That was how Schoonmaker and Powell met. They married in 1984. He died in 1990. “Marty gave me the best job in the world and the best husband in the world!”
ChatGPT was launched on Nov. 30, 2022, ushering in what many have called artificial intelligence’s breakout year. Within days of its release, ChatGPT went viral. Screenshots of conversations snowballed across social media, and the use of ChatGPT skyrocketed to an extent that seems to have surprised even its maker, OpenAI. By January, ChatGPT was seeing 13 million unique visitors each day, setting a record for the fastest-growing user base of a consumer application.
Throughout this breakout year, ChatGPT has revealed the power of a good interface and the perils of hype, and it has sown the seeds of a new set of human behaviors. As a researcher who studies technology and human information behavior, I find that ChatGPT’s influence in society comes as much from how people view and use it as the technology itself.
Generative AI systems like ChatGPT are becoming pervasive. Since ChatGPT’s release, some mention of AI has seemed obligatory in presentations, conversations and articles. Today, OpenAI claims 100 million people use ChatGPT every week.
The success of ChatGPT speaks foremost to the power of a good interface. AI has already been part of countless everyday products for well over a decade, from Spotify and Netflix to Facebook and Google Maps. The first version of GPT, the AI model that powers ChatGPT, dates back to 2018. And even OpenAI’s other products, such as DALL-E, did not make the waves that ChatGPT did immediately upon its release. It was the chat-based interface that set off AI’s breakout year.
There is something uniquely beguiling about chat. Humans are endowed with language, and conversation is a primary way people interact with each other and infer intelligence. A chat-based interface is a natural mode for interaction and a way for people to experience the “intelligence” of an AI system. The phenomenal success of ChatGPT shows again that user interfaces drive widespread adoption of technology, from the Macintosh to web browsers and the iPhone. Design makes the difference.
At the same time, one of the technology’s principal strengths — generating convincing language — makes it well suited for producing false or misleading information. ChatGPT and other generative AI systems make it easier for criminals and propagandists to prey on human vulnerabilities. The potential of the technology to boost fraud and misinformation is one of the key rationales for regulating AI.
ChatGPT is not the first technology to be hyped as “the next big thing,” but it is perhaps unique in simultaneously being hyped as an existential risk. Numerous tech titans and even some AI researchers have warned about the risk of superintelligent AI systems emerging and wiping out humanity, though I believe that these fears are far-fetched.
The media environment favors hype, and the current venture funding climate further fuels AI hype in particular. Playing to people’s hopes and fears is a recipe for anxiety with none of the ingredients for wise decision making.
A slowdown in the hype should give space for norms in human behavior to form, both in terms of etiquette, as in when and where using ChatGPT is socially acceptable, and effectiveness, like when and where ChatGPT is most useful.
ChatGPT and other generative AI systems will settle into people’s workflows, allowing workers to accomplish some tasks faster and with fewer errors. In the same way that people learned “to google” for information, humans will need to learn new practices for working with generative AI tools.
But the outlook for 2024 isn’t completely rosy. It is shaping up to be a historic year for elections around the world, and AI-generated content will almost certainly be used to influence public opinion and stoke division. Meta may have banned the use of generative AI in political advertising, but this isn’t likely to stop ChatGPT and similar tools from being used to create and spread false or misleading content.
As a result, another lesson that everyone — users of ChatGPT or not — will have to learn in the blockbuster technology’s second year is to be vigilant when it comes to digital media of all kinds.
November 28, 2023
The Precision Editing Required for David Fincher’s Assassin in “The Killer”
TL;DR
David Fincher’s go-to editor, Kirk Baxter, ACE, helped achieve a new kind of subjective cinema with Michael Fassbender’s assassin character.
Although “The Killer” proceeds on a fairly linear trajectory, Baxter says this made it challenging to cut because there was nowhere to hide.
The rules for the visuals, the movement, and the soundscape were laid down in the film’s opening sequence set in Paris.
Critics are hailing David Fincher’s The Killer as his most experimental film since Fight Club: “a subjective, cinematic tour de force,” says Bill Desowitz at IndieWire, in which we get inside the mind of Michael Fassbender’s titular assassin after he experiences his first misfire in Paris.
The movie, now streaming on Netflix, is divided into six “chapters,” each with its own look, rhythm, and pace tied to Fassbender’s level of control and uncertainty. According to the film’s editor, Fincher regular Kirk Baxter, ACE, the editorial process necessitated the creation of a visual and aural language to convey subjective and objective points of view for tracking Fassbender.
Baxter (Zodiac, The Social Network) goes into detail about working on each chapter with IndieWire. We learn that the opening sequence set in Paris took the most time for Baxter to assemble because it was stitched together from different locations including interiors shot on a New Orleans stage.
“I love the whole power attack, the stretching of time, the patience of what it takes to do something properly,” Baxter said. “And I love that it’s grounded in the rule of physics and how practical it is that each detail in order to do something correctly deserves the same amount of attention.”
Later, in a chapter set in New Orleans, the Killer exacts revenge on a lawyer. The setup prep is slow as he cunningly enters the lawyer’s office dressed as a maintenance worker.
“It was one of the hardest things to put together,” Baxter tells IndieWire. “It’s a little like a Swiss watch in terms of how exacting it can be in his control. David had like 25 angles in the corridor, but when you put it all together, I love how that scene unfolds by playing both sides of the glass [between the office and corridor]. Typically, he’s gonna say as little as possible and his stillness controls the pace, and when he gets fed up, these little, tiny subtle looks from him are letting you know that’s enough and where this conversation stops.”
The nighttime fight between the assassin and a character called The Brute in the latter’s Florida home is depicted as a contest between two warriors in the dark. Speaking to Dom Lenoir, host of The Editing Podcast, Baxter explains how he and Fincher choreographed this fight as well as talking more broadly about the director’s shooting style.
“David does always provide a lot of coverage [and] that gets misinterpreted as a lot of takes [but] what he’s extremely good at is making sure that I’ve got the pieces to be able to move around as needed, or to keep something exciting. It means I can edit pretty aggressively and use just the best pieces of everything. David knows these rhythms he shoots for an editor. So, if it’s a really long scene, you will find in the wide shot that they’ll often be blocking, for example, somebody coming into the room. You sort of work your way [into the scene].”
Baxter says all that matters to him once in production is the material Fincher has captured. “I will read the scene again so that I understand the blueprint of it. You know what its intention is, but then it can be thrown away because David can evolve beyond what the script was based on, whether a location or how our actors are performing. He’ll recalibrate and readjust.”
Although The Killer proceeds on a fairly linear trajectory (hey, like a bullet…) Baxter says appearances can be deceptive when it comes to cutting.
“I found it to be one of the more challenging movies to make because it’s not juggling a bunch of different character lines or going back and forth from past to present and that sort of thing,” Baxter told IndieWire. “It’s just a straight line, but the exposure of that [means there’s] nowhere to hide. It’s like everything is just under the spotlight and you’re not having dialogue and interaction to kind of dictate your pace. It’s a series of shots and everything has to be manipulated in order to give it propulsion, or how you slow it down.”
He continues this train of thought with Lenoir, “It was a challenging movie to make from my perspective because you are showing an expert on the fringes of society but he’s still a person that operates with precision. You’re trying to illustrate that by showing precision. And it is just a lot of fiddling to make things seem easy.”
He also discusses something you may not notice on a first watch: The Killer doesn’t seem to blink. It doesn’t just happen in this film either; in other Fincher movies, too, Baxter says he has consciously selected shots of actors not blinking.
“I don’t think that it was an effort to remove them through the film,” he says. “It’s just the nature of how his performance was. But there’s been an effort to remove them in previous films when they’re all kind of landing off rhythm. It’s mostly about when you get into the meat of a scene and you’re in close ups and you want something delivered with intention and purpose.”
Audio was crucial to The Killer as well. Rather than be smoothed out in the background with the edge taken off all transitions, Fincher and sound designer Ren Klyce wanted the audio to be driven by point of view. The rules of the film’s soundscape are established in the opening sequence. Given that the protagonist is not predisposed to be chatty, we learn as much from his internal monologue as from his methodological movements.
“We crawl into his ears and sit in the back of his eye sockets instead of how it’s being presented,” Baxter describes to IndieWire. “From the moment when the target turns up, it was David’s idea to try a track that was what he plays in his headphones. And when you have his POV, we turn the track up to four, and when you’re back on him, the track drops down, and you get the perspective of it playing in his ear.”
They devised rules for how to apply his voiceover but realized they couldn’t have voiceover and music at the same time because there would be too much “sonic noise” for the audience.
“So one’s got to occupy one space and one take the other. The logic said to us what’s blaring in his ears and when he’s in a monologue is when we’re looking at him. That was the rule of what was subjective and what was objective,” says Baxter.
“We tried the notion of ‘vertical’ sound cuts,” Fincher explains. “By which I mean, you’re coming out of a very quiet shot and cutting into a street scene and – boom! — you pick up this incredibly loud siren going by. You’re continually aware of the sound.”
This makes for an unusual but effective experience. For instance, there’s a scene in a Parisian park where the sound of a fountain constantly moves around depending on the featured character’s POV.
Was matching that vertical sound cutting hard?
“I guess even when you’re creating chaos, you’re trying to affect it in your own way,” Baxter tells Andy Stout at RedShark News. “You’re always seeking your own version of the perfect way to do this.”
Jennifer Chung, ACE, was one of the assistant editors on the film — part of a 14-strong editing department. She also spoke with RedShark News about the tools they used.
“Obviously we use Premiere, and we heavily use Pix also,” she says. “We do a lot of our communication in post through Pix, especially during production during the dailies grind, where we’re uploading not only the dailies but selects that are coming out so that we can get that to David.”
Adobe After Effects is also used extensively, with the team using Dynamic Links to round-trip content out of Adobe Premiere and back in. Some of the assistants also script, so Python, or even Excel in some cases, was deployed to help automate some of the critical processes.
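The article doesn’t say what those scripts actually did, so the following is only a hypothetical sketch of the kind of dailies-grind automation an assistant editor might write: a short Python utility that sweeps an assumed delivery folder for camera files and writes a CSV manifest to accompany the day’s uploads. The folder layout, file extensions and column names are invented for illustration and are not the production’s actual pipeline.

# Hypothetical dailies-manifest helper; an illustrative sketch only,
# not the workflow described by the film's editorial team.
import csv
from datetime import datetime
from pathlib import Path

MEDIA_EXTENSIONS = {".r3d", ".mov", ".mxf"}  # assumed camera/proxy formats

def build_manifest(dailies_dir: str, manifest_path: str) -> int:
    """Walk dailies_dir and write one CSV row per media file found."""
    rows = []
    for path in sorted(Path(dailies_dir).rglob("*")):
        if path.suffix.lower() not in MEDIA_EXTENSIONS:
            continue
        info = path.stat()
        rows.append({
            "clip": path.stem,
            "relative_path": str(path.relative_to(dailies_dir)),
            "size_gb": round(info.st_size / 1e9, 3),
            "modified": datetime.fromtimestamp(info.st_mtime).isoformat(timespec="seconds"),
        })
    with open(manifest_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["clip", "relative_path", "size_gb", "modified"])
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)

if __name__ == "__main__":
    # Example paths are placeholders.
    count = build_manifest("dailies/day_012", "day_012_manifest.csv")
    print(f"Wrote {count} clips to the manifest")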
The Killer was shot in 8K using the RED V-Raptor and, according to Chung, proved a little tricky initially to grade in HDR.
“We definitely had some kinks we had to figure out early on,” Chung says. “We all needed HDR monitors, but we didn’t have HDR monitors at home, though we had HDR monitors at the office. We also use a lot of Dynamic Links in Premiere, and we were having some color space issues going from Premiere to After Effects back to Premiere, but because we have such a close relationship with Adobe, we were able to figure that out.”
In The Killer, the assassin experiences a gradual loss of control that is in stark contrast with his profession’s demand for precision and self-discipline. The film’s voiceover is a key storytelling tool, not only advancing the plot but also reflecting the killer’s internal conflict. This narrative choice is complemented by the film’s editing, as Baxter elaborates in his discussion with Steve Hullfish, ACE, on the Art of the Cut podcast.
“David had outlined that the movie is about process,” Baxter says, explaining how he and Fincher used this idea to help guide their creative decisions. “When it’s the killer’s process — when he is in the driver’s seat — we’re going to be extremely deliberate about everything. The camera will be steady. The pacing will be steady, considered, and exacting. Continuity will rule the day.”
On the other hand, “when things go out of his plan — when chaos erupts — we’re going to introduce camera shake. I’m going to start to jump time slightly, start to clip action and have the freedom to kind of make it more exciting and out of control.”
Baxter likens it to the thriller-horror construction of “stretch everything until you reach that point,” adding, “Most of the series of chapters, or kills, have this similar construction that is stretching process, like taking your time with it, and then exploding into action. There’s a great use of the voiceover line at the beginning where the killer says, ‘If you don’t have patience, this is not the profession for you.’”
While the Oscar-winning editor says he can’t entirely relate to a psychopath, he does relate to the concept of process. “I relate to the faith in process to see you through,” he says, “that if you follow all of your steps of survival, you will get to the other side with the result you’re after.”
Baxter’s approach to working with the voiceover always began with the visuals, he said. “I would just start with the visuals on their own and then try to fit the voice to it.”
Voiceover “is always a wriggly fish because you can keep changing it,” he says. “It’s rather pliable,” he continues, describing how they would add, drop or even entirely rearrange lines.
“There was a lot of voiceover to kick it off, up until the killer’s first misfire, then it started to streamline back into just a repeating of the mantra but dropping away sections,” he recounts. “So the mantra got shorter and shorter, as he was breaking his own rules, and his own disciplines were starting to erode.”
The sniper scene, occurring near the beginning of the film, helped define how the voiceover would function throughout the remainder of the film. “We’re blasting the music in his ears, The Smiths song up so high that it can’t fit voiceover,” Baxter says. “So it became this rule in the movie, from that scene that then sort of bled out in every direction.”
Baxter’s editing approach was significantly influenced by the film’s music, particularly in key scenes where the soundtrack played a pivotal role in dictating the pace and mood. In the sniper scene, for instance, the loudness of The Smiths’ song was used to such an extent that it left no room for voiceover. This choice led to a rule that was applied throughout the film: voiceover was used where the music wasn’t dominant. Baxter describes this as a balancing act, saying, “When he’s in his POVs, what’s going on inside his head with sound? No voiceover goes on his POVs in the movie. It all goes where the music’s not up at 10.”
The editing was further complicated by the need to adapt to changes in the music track. Baxter recounts how the music selection process was dynamic and challenging, involving various experiments with different genres and artists before settling on The Smiths. He notes, “It was the cherry on top. Always for me, I just loved playing with music: being part of choosing which tracks should go where to have the most black comedy in.”
Baxter also emphasizes the style developed in the film of “punching in and out of the volume of the music — from POV of headphones to blasting full.” This technique became a signature aspect of the movie’s sound design, especially effective in scenes like the sniper scene and when the killer is in the van observing his target, Dolores. The fluctuating music volume played a crucial role in setting the tone and pace, aligning with the visual cuts and the psychological state of the killer.
November 20, 2023
“Saltburn” Is Debauched and Depraved But It Looks Like a Caravaggio Painting. So Let’s Start There.
TL;DR
Emerald Fennell explains how her psychological black comedy “Saltburn” is a satire on the British class system using the vehicle of a grand stately home setting.
Cinematographer Linus Sandgren uses the near-square 1.33:1 aspect ratio and shoots on 35mm to film “disgustingly beautiful moments.”
Fennell says, “We did a lot of work then to make it a physical experience — uncomfortable, sexy, difficult. I thought a lot about the feeling of popping a spot — queasy pleasure.”
Emerald Fennell’s latest cinematic spectacle, Saltburn, savagely peels back the veneer of the British upper class of the mid-2000s, crossing Brideshead Revisited with The Talented Mr. Ripley served with a twist of vampire-infused black comedy.
The film revels “in voyeuristic repulsion and the fetishization of beauty,” writes IndieWire’s Bill Desowitz, told through the point of view of cunning Oxford student Oliver (Barry Keoghan), who becomes infatuated with his aristocratic schoolmate, Felix (Jacob Elordi), following an invitation to stay for the summer with Felix’s eccentric family, the Cattons, at their titular estate.
“I’m setting out to be honest and unsparing, and I’m not frightened of people not liking it,” Fennell explains to Salon’s Jacob Hall. “I mind if people don’t appreciate the craft or they think I haven’t done my homework, or they think I’ve made decisions that aren’t deliberate. That gets my goat, because that’s a different argument. But if you don’t like it, I don’t mind.”
Fennell’s bold visual plans began with shooting in 35mm to capitalize on the rich color and contrast, and using the 1.33 aspect ratio to enhance the story’s voyeurism.
“She wanted to convey the hot summer and foggy night, influenced by the legendary landscape painter Gainsborough, as well as more dramatic lighting inspired by Hitchcock, Nosferatu, and baroque painters Caravaggio and Gentileschi,” we learn from Desowitz’s interview with the film’s cinematographer, Linus Sandgren (La La Land).
The DP landed the job at the suggestion of Saltburn producer Margot Robbie, who had just worked with Sandgren on Babylon and knew first-hand what dark beauty he could achieve shooting in 35mm.
“I had seen Emerald’s debut film, [Promising Young Woman], where she made some very interesting decisions,” Sandgren said. “For example, letting the lady die in a single take, which was horrible to watch. And then when I got the Saltburn script, I thought it was brilliant. She writes very visually and in a descriptive way and I got some very clear images in my head.”
They both agreed that shooting on film was right for the story, as Sandgren explained following a screening at Camerimage, as reported by Will Tizard at Variety. The medium’s reaction to red light in some key scenes inside the family home was particularly well-suited to the growing sense of horror, Sandgren said. So were close-ups of characters feeling extremes of emotions, with sweat, hair and bodily detail helping to build on the descent into obsession.
To strike just the right tone in these scenes, Company3 colorist Matt Wallach says, “We got into using tools in the Resolve, like the Custom Curves and the Color Warper, to subtly bring out, say, the red lights in a party scene or the steely blue moonlit tones in a night exterior while always keeping the skin tones where they should be. With Linus, skin tone always has priority.”
Sandgren shot with the Panavision Panaflex Millennium XL2 camera equipped with Primo prime lenses to get colors and contrast with under-corrected spherical aberration. It all worked out well to propel the journey into darkness, Sandgren said, growing into other scenes of seduction that push boundaries. All of which just enriches “the bloody cocktail of Saltburn,” he says, noting that, after all, “Vampires are sexual beings.”
When the director first spoke with Sandgren about the project, she described it as a vampire movie “where everyone is a vampire.”
Elaborating on this to Emily Murray at Games Radar, Fennell says she liked the vampire metaphor as a vehicle for attacking the class system and our unhealthy fascination with the rich and famous.
“We have exported the British country house so effectively in literature and film, everyone internationally is familiar with… their workings,” she says. “As we are talking about power, class, and sex, this film could have existed at the Kardashians’ compound or the Hamptons, but the thing about British aristocracy is that people know the rules because of the films we have seen before. We all have an entry level familiarity but the things that are restrained about the genre are overt here — as we look at what we do when nobody is watching us.”
This embodied the vampire ethos at night in all its gothic beauty and ugliness. “Emerald’s attracted to something gross happening, but you see it in a perfectly composed image with the light just hitting perfectly,” Sandgren said in an interview with Tomris Laffly at Filmmaker Magazine. “I think the challenge was finding a language for the film with secrets that you don’t want to reveal and having it seem ambiguous.”
Fennell wrote Saltburn during COVID, when people couldn’t even be in the same room together, “let alone touch each other, let alone lick each other,” she said, commenting on some of the film’s explicit scenes. “This is a film really about not being able to touch. Now, especially, we have an extra complicated relationship with bodily fluids.”
As Laffly prompts, this sounds like Fennell wanted to unleash a beast we all have in ourselves that was so oppressed during lockdown.
“That certainly felt like one of the motives,” she admits. “There’s nothing that is more of a rigid structure than the British country house and the aristocracy, nothing more impenetrable. So yes, to unleash the viscerally human into that arena was so much of it.”
To Fennell, so much of film is “frictionless, smooth, so consistent. And I feel like cinema — without being so grandiose and pompous — is designed to be watched in a dark room of strangers, and it can be expressive, it can be to some degree metaphorical. When I look at the filmmakers that I love [like David Lynch or Stanley Kubrick] these are people who are making films that I feel in my body.”
This idea of foregrounding intimacy led to their decision to shoot within the Academy ratio. Again, she and Sandgren referenced classic portrait painters.
“To do that formal framing, if you’re looking at Caravaggio or lighting in a Joshua Reynolds and that kind of blocking, it is so much easier the more square you are. And I like extreme closeups, especially when you’re talking about sex and intimacy and inhuman beauty,” she told Filmmaker. “If you’re 1.33, you can have a full face. It can fill the frame completely.”
Scenes in the film are deliberately uncomfortable to watch. They are what Desowitz calls “disgustingly beautiful moments,” but Fennell emphasizes that they aren’t in any way there for shock value: “A lot of this film is an interrogation of desire,” she tells the Inside Total Film podcast.
“With this type of love, there has to be this element of revulsion, and for us to feel what Oliver is feeling and understand that, you need to physically react to stuff. We did a lot of work then to make it a physical experience — uncomfortable, sexy, difficult, queasy. I thought a lot about the feeling of popping a spot — queasy pleasure.”
Much of the more salacious coverage of Saltburn has concentrated on its final scene where Keoghan dances stark naked through the estate to Sophie Ellis-Bextor’s hit 2001 song, “Murder on the Dancefloor.” “Everything is diabolical, but it’s exhilarating,” Fennell explained to Jazz Tangcay at Variety. “It’s post-coital, euphoric, solitary and it’s mad.”
As for Sandgren’s camera moves, he pointed out that Oliver was always in frame for most of the film. “But this way, we see him full-figured. I think it was clear we wanted to follow him. Following him through that scene felt more natural to watch everything about him, and watch from the outside. It’s about his physicality and how he feels in that moment.”
It’s a tour de force for Keoghan, who, according to the cinematographer, was fearless throughout, but worked especially hard at rehearsing and shooting the choreographed dance sequence.
In capturing it, Fennell used 11 takes. “They were all very beautiful,” she said. “It’s quite a complicated and technical camera. A lot of the time, he was immensely patient because there was a lot of naked dancing. Take #7 was technically perfect. You could hear everyone’s overjoyed response, but I had to say ‘sorry’ because it was missing whatever it was that gave Oliver that slightly human messiness. So, we had to do it a further four times.”
“[I]t was incredibly difficult to do because obviously it’s a oner, and we had to light every room completely from outside without seeing any of the kit,” Fennell tells Salon. “We had to set up all of the sound so we could switch to every room because of the lag, without, again, seeing any of the kit.”
Fennell likens the actor to Robert Mitchum, as she explains to Filmmaker Magazine. “I think he’s just exceptional, not just now but for all time — someone like Robert Mitchum is a good comparison. There are actors who have a thing that nobody else has had before, and I think Barry has that.”
November 20, 2023
Empire State of Mind: Creativity and Social Collaboration at NAB Show New York
TL;DR
The winner of the “Empire State of Mind” photo contest, Dushawn (Dusan) Jovic, was announced during the “Empire State of Mind: Photo Contest Finale” at NAB Show New York.
With a cash prize of $4,000, this first-of-its-kind contest included a chance to direct a photoshoot on the streets of New York City with social media influencer and expert strategist Avori Henderson.
The session, featuring Bamboo founder and CEO Nick Urbom, explored new storytelling techniques blending photos, videos and narratives, and provided insights into leveraging social media tools and content monetization strategies.
Henderson and Dusan discussed strategies for discoverability, which remains a key challenge for content creators in a crowded social media landscape, including collaborations, hashtags and spec work to attract brands.
Watch the “Empire State of Mind: Photo Contest Finale” at NAB New York 2023.
At NAB Show New York’s Photo+Video Lab, the spotlight was on creativity, innovation and social collaboration with the announcement of the winner and full contest review of “Empire State of Mind presented by Bamboo.” This first-of-its-kind contest was aimed at finding the next great photographer with a cash prize of $4,000 and a chance to direct a photoshoot on the streets of New York City with social media influencer and expert strategist Avori Henderson.
The winner, Dushawn (Dusan) Jovic, was announced during a session entitled “Empire State of Mind: Photo Contest Finale.” Led by Bamboo founder and CEO Nick Urbom and featuring Jovic and Henderson, the session demonstrated new storytelling techniques blending photography and other visual mediums. It also covered social media tools, providing concrete practices for leveraging photos and videos for social media channels, as well as strategies for content monetization and winning methods for crowd-sourcing photo/video projects.
Bamboo is a mobile app and web-based social platform designed for creators to post collaboratively to monetize their efforts. Urbom, the visionary behind Bamboo, has developed and produced a number of cutting-edge experiences for creators to advance their careers, including tech platforms, professional conferences and awards shows. He was the co-founder, CEO and chairman of Infinity Festival Hollywood, and has co-founded three world-renowned trade organizations: the International 3D Society, the Advanced Imaging Society, and the VR Society.
“This is a $100 billion economy with 200 million people involved and there’s a lot of people looking for tools for how they’re going to grow and advance their careers,” Urbom said of the burgeoning creator economy.
The Bamboo platform was founded as the result of countless conversations with creators about the tools and platforms they utilized, as well as their own personal goals, Urbom said. “We were hearing a lot of feedback from people saying that, ‘hey, listen, we can do stuff for free online, we can do stuff with rev-shares, but it would be really nice if we could actually just run our own business, and if we could create content however we wanted to put it out there.’”
“From what I have seen, only Facebook Groups allows you to take a piece of content and put it in front of people who are interested in that same thing,” says Henderson, who was one of the 12 participants in the 2022 reboot of competition The Mole on Netflix. “Bamboo has built a space where you can create these little categories within your following. The most profitable thing you can do on social media is build a community and that makes you money.”
Collaboration, she notes, “is extremely important on social media, it’s one of the best and fastest ways to grow.”
Originally from Serbia, Jovic currently lives in New York City, bringing a distinctive cinematic style to his craft honed over years of working as a videographer and photographer with creators and brands including Saint Laurent.
“His unique color grading style is described as a candid and lifestyle cinematic vibe, with a hint of subtle sexiness,” says Henderson, “and what’s wrong with that?”
Jovic acknowledges that collaborations can sometimes be fraught but remains a “big fan” of the synergies they can provide. “As a creative, when you hear the word collab, you kind of start twitching,” he says.
“But I feel it has to be beneficial for everybody. When you start, you want to look at collabs as building your portfolio. When you’re already established, you want to look at collabs as something that has to be beneficial for everybody. It’s always good to meet new people, to create, to establish new relationships, to help other people, because if you’re already established, not everything has to be ‘Okay. How can I get paid?’
“It doesn’t matter how big or small an influencer or a brand you are,” he continues, “there’s always something if you’re good with creating, if you’re good with expressing yourself, something good is always going to come out of it.”
These “collabs,” Urbom points out, are called “partnerships” and “deals” in the business world, and “whatever you call it, they’re massively on the uptrend in terms of how brands are looking to partner and get their voice out and get their marketing message out.”
Discoverability in a crowded social media landscape remains a major priority for creators. Hashtags, Henderson and Jovic agreed, are an invaluable resource for creators who are just getting their start. “Hashtags [are] a big, big, big factor,” says Jovic. “Yep, thanks to you, hashtag, I pretty much started my career. And that’s how people usually find me.”
Spec work designed to attract the brands a creator wants to work with is another winning strategy. “When I started doing spec commercials for brands and products and actually posting stuff that is relevant to other brands, that’s where my business started growing,” Jovic shares.
“Once you start thinking like your ideal audience is when you start attracting your ideal audience,” Henderson advises. “So really think about your target market where you want to hone in on and then make every piece of content catered towards that.”
Check out the image gallery below to see Jovic’s work on the photoshoot with Avori Henderson, which took place on Little Island, a new public park located in the Hudson River, during NAB Show New York:
Ah, Youth: How Oliver Curtis Captures That Exuberance for “The Buccaneers”
Kristine Frøseth in “The Buccaneers,” now streaming on Apple TV+
TL;DR
Apple TV+ delivers a new period drama with a modern spin, about a culture clash between heritage England and Manhattan energy.
Cinematographer Oliver Curtis translates this into no-cut “oners,” swirling camera moves, a contrasted lighting pattern and the use of a large-format sensor combined with portrait lenses.
“It’s all about forward movement in people’s lives. It’s a playful show, full of light and color. The cinematography had to reflect that.”
With director Susanna White’s “The Buccaneers,” an adaptation of Edith Wharton’s unfinished final novel set in the 1870s, Apple TV+ adds a period drama with a modern spin to its lineup.
The story turns on the fallout of intercontinental marriages of convenience between five wealthy American heiresses and Englishmen long on family trees but short on cash. The women travel from New York to England, where they vie to pair off with aristocratic, eligible young men.
For cinematographer Oliver Curtis (“Stay Close,” “Vanity Fair,” for which he was nominated for a BAFTA), who worked on the first two episodes, the contrast was a natural setup.
“The theme of the clash of cultures from these vivacious, energized, young American women coming over to musty old England to meet their potential suitors has got a natural kind of transformative quality. You’ve got the color, light, and energy of their New York life, and then the dour, desaturated world of old England,” he told Motion Pictures. “It’s all about forward movement in people’s lives. It’s a playful show, full of light and color. The cinematography had to reflect that.”
Imogen Waterhouse and Aubri Ibrag in “The Buccaneers,” now streaming on Apple TV+.
Opening With a Oner
From the opening moments of the show, the expansiveness of this world and its characters is established via a long, meandering one-shot throughout an opulent New York home.
“Susanna and I designed … an opening statement of energy, of movement, of exuberance,” Curtis told No Film School. “So it was a real marker, if you will, for what you are getting yourself into with this show. And also the fact that it’s driven by the movement of our lead character played by Kristine Frøseth.”
Not only did the take need to incorporate different spaces with the cooperation of dozens of actors and supporting artists, but Curtis also had to consider what tools to use with his Steadicam operator, Alex Brambilla.
“Because we start close and wide on the flowers and as we sweep in, you get more compression as it gets busier with people inside. So we probably went onto a slightly longer focal length there. And then when we got up to the landing after Kristine meets Christina [Hendricks] there, I think we widened out a little bit more so that when we do the hidden edit transition, we were on a slightly wider focal length, which would allow us to get separation there.”
Kristine Frøseth, Alisha Boe, Josie Totah, Aubri Ibrag and Imogen Waterhouse in “The Buccaneers,” now streaming on Apple TV+.
They avoided any reflective surfaces with the coordination of the camera ops and cast. Eagle-eyed viewers might catch the one hidden cut in the sequence.
“There has to be a hidden cut because the first half of it is on location and the second half is on a build,” Curtis said. “So where we go into the rooms, we built that because we couldn’t find a building that gave us those two spaces. Plus we needed green screen beyond the windows for the street, which was just outside Glasgow City Chambers doubling for Madison Avenue.”
Camera Techniques Express Characters
The story theme of a clash of cultures gave the DP a clue that there was going to be an evolution of the show’s look. “You’re going to start with the modern American Vision and move to the old world vision. So that was an exciting prospect and thinking about how we were going to evolve that,” he explained to Patrick O’Sullivan.
The other aspect of the show he had to consider was marrying the grand interiors and big ballroom and dinner scenes with close-ups of intimacy and expression. This drew him to the larger-format sensor of the Alexa LF combined with the portraiture qualities of Arri DNA glass.
“It’s got all of the tropes of a period drama that you’d expect, but it’s also surprising and different in a lot of ways,” Curtis added.
Dynamic camera techniques, including tracking and Steadicam shots, reflect the characters’ infectious spirit.
Aubri Ibrag in “The Buccaneers,” now streaming on Apple TV+.
“When you’ve got an ensemble cast and the blocking is fairly fluid and not too static, the camera has to adjust and configure itself around their movements,” Curtis told IBC365. “Also shooting ‘B’ camera most of the time gave the editor coverage to build pace and find the action within the scene.”
Lighting, Then and Now
“Something I hadn’t really taken on board previously is that the clothing from that period was much more reflective than most modern fabric. The bustles and corsets are textured and reflect the light so you get a lot of animation in the costume and movement.”
In the 1870s, electricity was available in the homes of wealthy New York society, while British aristocracy still had gas, oil lamps and candles. Curtis leapt on this as a storytelling device.
“The New York interiors are flooded with light, they are bright and open and accessible but when our heroes arrive in London the light hardly penetrates indoors. We keep the lighting levels low key there to build that contrast. Gradually as the women infiltrate high society the light starts to flood in.”
Glasgow City Chambers was used for the interiors of London’s Grosvenor House, scene of a grand debutantes’ ball. The building featured a magnificent white staircase, which director Susanna White thought ideal for staging a parade of white-gowned debutantes.
It was a very challenging space to work in. A giant skylight overhead meant the DP had to contend with all the vagaries of the Glasgow weather, and the staircase itself descended around an atrium, making it tricky to position and move a camera.
“We managed to work our way down the building in stages,” he told IBC. “Where there were doorways leading onto council offices I asked [production designer Amy Maguire] to build window plugs (where designers create a window) so we could bring daylight into the belly of the building where otherwise it would be gloomy and dark. This created interesting pools of light and contrast where we could stage different beats of the story. It was an unusual piece of staging for something that could otherwise have been a conventional ballroom scene.”
He used helium balloons to help light spaces in period houses partly to protect the delicate cornicing from rigging. They came in useful during the debutante’s ball scene, too, where the balloons were towed down the staircase as the camera team worked their way into the bowels of the building.
The British cinematographer expounded on his process with O’Sullivan, recalling that at one point in his career he was mostly shooting commercials.
“And as marvellous as that was in itself, travelling, seeing the world, earning good money and making some interesting work, you know, it can get very stultifying,” he said. “You kind of find yourself yearning to be able to hold a shot longer than two seconds and work with actors. I think it is great to have a good mixture [of work] so that you stay fresh and challenged.”
Kristine Frøseth in “The Buccaneers,” now streaming on Apple TV+.
He started out his career shooting on film and says his goal remains to shoot in camera as much as possible. “You also have to be cognizant of the post processes and the ability you have in the grade to work with the colorist. Colorists are artists, too, and they can bring an awful lot to a show and surprise you with some of the solutions and make transitions which you thought wouldn’t work.
“It’s really important to have that in your back pocket. And I suppose my dialogue with the DIT and the VFX onset is one of reassurance that I just, I can say, ‘you know, look, I haven’t got the time, or the resources perhaps to deal with a certain problem, but do you think that will be okay, in terms of exposure, in terms of separation?’ These are experts around you doing their job for a reason, and you’d be foolish not to take on board their input.”
The foundation of his craft remains lighting and shooting it the way that you want it to be done, “if you walked away from it that day and never saw the image again, which is often the case on commercials. Because you can’t follow commercials through post production as much as you can drama. You have to trust that the image is there. So yeah, I try to walk away from set feeling that yes, I have got the essence of that, and it looked the way I wanted to look, and I’m not going to have to do too much in post.”
Shot by director of photography Zac Nicholson, BSC, “The Pursuit of Love” is a romantic comedy-drama set between the two World Wars.
November 19, 2023
Posted November 18, 2023
How AI Reunited The Beatles for “Now and Then”
TL;DR
Advertised as the last Beatles song, “Now and Then” was built from a recording made by John Lennon shortly before his murder in 1980, using the same AI technology that director Peter Jackson used in his documentary “Get Back.”
The AI tool originated in forensic investigation work by the New Zealand police and was further developed by VFX studio Weta.
This could be the first in a long stream of work that’s salvaged or saved using artificial intelligence.
The Beatles, guided by producer George Martin, were famously pioneers of new technology in making seminal studio albums like Sgt. Pepper’s Lonely Hearts Club Band and The White Album; so using an AI tool to complete their final song should be seen as a natural evolution.
As John Lennon’s son Sean Lennon says in a making-of video, “My dad would’ve loved that because he was never shy to experiment with recording technology.”
“Now and Then” was built from a recording made by John Lennon shortly before his murder in 1980, using the same AI technology that director Peter Jackson used in his documentary Get Back to clean up and separate voices in archival recordings.
Co-produced by Paul McCartney and Giles Martin, son of Beatles producer George Martin, the track features elements from all of the Fab Four, including a Lennon vocal track that was first recorded as a demo tape in the 1970s.
The track was included on a cassette labelled “For Paul” that Lennon’s widow, Yoko Ono, gave the three surviving Beatles in the 1990s as they were working on a retrospective project.
At the time the band members tried to complete Lennon’s demo but considered it unsuitable for release. It wasn’t until July 2022, Jackson told David Sanderson at the Sunday Times, that McCartney contacted him for his help in producing a new version.
From Peter Jackson’s “Now and Then” video for The Beatles
The audio software, called MAL (machine audio learning), allowed Lennon’s vocals to be separated from the demo. The track was then rebuilt with new performances from McCartney and Ringo Starr, along with Harrison’s guitar parts from the band’s shelved 1990s recording sessions.
As is clear from the making-of doc, the chief difficulty with using the original cassette recording was that Lennon’s vocal was obscured in places by the piano and the sound of a TV playing in the background.
Weta’s AI, developed for Get Back, managed to cleanly isolate the vocals from that background, allowing the mix to finally be made.
As Jackson explained to Rob LeDonne at Esquire, the tech originated from forensic investigation work developed by the New Zealand police.
“When it’s noisy and they want to hear a conversation between two crooks or something, they can isolate their voices. I thought that’d be incredible to use, but it’s not software that’s available to the public.
“So, we contacted the cops and we asked, ‘Do you mind if we brought some tape to the police station and ran it through your confidential audio software?’ So, we did a 10- or 15-minute test and the results were really bad. I mean, they did the best they could. But you realize that for law enforcement, the quality and fidelity of the audio doesn’t need to be good, so it was far, far short of what we could use.”
From Peter Jackson’s “Now and Then” video for The Beatles
Weta took the theory of it and made a tool capable of producing high quality audio. “To hear some of those early songs in a fully dynamic way… You realize what you’ve been hearing is quite a limited range of audio,” Jackson said. “You don’t realize how crude the mixing on some of the early songs were, how muddy they were.”
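MAL itself is proprietary and has never been released, but the general idea of machine-learning stem separation can be tried with open-source tools. Below is a minimal sketch, for illustration only, that assumes the freely available Demucs separator is installed (pip install demucs, plus ffmpeg) and uses a hypothetical file name; it is not the software Jackson’s team used.

```python
# Illustrative sketch of two-stem source separation with the open-source
# Demucs tool. MAL, the software described in the article, is proprietary;
# this simply shows the same general idea. "demo_tape.wav" is hypothetical.
import subprocess
from pathlib import Path

src = Path("demo_tape.wav")

# Ask Demucs for a two-stem split: vocals versus everything else
# (piano, TV noise, tape hiss). Output typically lands under
# ./separated/<model>/<track>/ as vocals.wav and no_vocals.wav.
subprocess.run(
    ["demucs", "--two-stems", "vocals", "-o", "separated", str(src)],
    check=True,
)

# List whatever stems were produced.
for stem in sorted(Path("separated").rglob("*.wav")):
    print(stem)
```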
In the making-of video below, McCartney expresses his doubts about making full songs out of Lennon’s demos, out of respect for the late songwriter’s unfinished work.
“Is it something we shouldn’t do?” McCartney says. “Every time I thought like that I thought, wait a minute, let’s say I had a chance to ask John, ‘Hey John, would you like us to finish this last song of yours?’ I’m telling you, I know the answer would have been, ‘Yeah!’”
Martin says human creativity is still at the heart of the song. Even if AI was involved, it wasn’t used to create synthetic Lennon vocals. “It’s key for us to make sure that John’s performance is John’s performance, not a machine learning version of it,” he told David Salazar at Fast Company. “We did manage to improve on the frequency response of the cassette recording…but it’s critical that we are true to the spirit of the recording, otherwise it wouldn’t be John.”
Jackson also directed the video for the track, which contains a few precious seconds of the first film ever shot of The Beatles on stage in Hamburg in the early 1960s.
“A Beatles music video must have great Beatles footage at its core,” he said as part of a lengthy statement about the project on The Beatles website. “There’s no way actors or CGI Beatles should be used. Every shot of The Beatles needed to be genuine. The 8mm film is owned by Pete Best, the band’s original drummer. Clare Olssen, who produced the video, contacted Best to get a few seconds of his film.”
From Peter Jackson’s “Now and Then” video for The Beatles
Angela Watercutter at Wired suggests that “‘Now and Then’ signals, if anything, not just the last Beatles song but the first in what could be a long stream of work that’s salvaged or saved using artificial intelligence.”
Indeed, Jackson has hinted at the possibility, as The Guardian’s Ben Beaumont-Thomas reports, of more Beatles music to come culled from archival footage he went through when editing Get Back, the eight-hour docuseries about The Beatles.
“We can take a performance from Get Back, separate John and George, and then have Paul and Ringo add a chorus or harmonies. You might end up with a decent song but I haven’t had conversations with Paul about that,” he said.
With 60+ hours of restored footage, the three-part Disney+ docuseries provides an intimate counter-narrative of the final days of the Beatles.
November 13, 2023
“Special Ops: Lioness” Director/Cinematographer Paul Cameron Aims for Action and Emotion
Zoe Saldana as Joe in Season 1 of “Special Ops: Lioness.” Cr: Lynsey Addario/Paramount+
TL;DR
“Special Ops: Lioness” director and DP Paul Cameron talks about working with the show’s star-studded cast and stepping into the director’s chair.
Cameron speaks about finding freedom in restriction and how a cinematography background feeds his approach as a director.
“Sometimes you need to be a bit bold and break the ‘Five Cs of Cinematography’ [camera angles, continuity, cutting, close-ups, and composition] and deconstruct them,” he says.
Show creator Taylor Sheridan is very, very specific about scripts: “If there’s time to do additional lines, or additional shots, or coverage, or anything of that manner, that’s fine, but you’ve got to shoot the script, and you’ve got to treat it like the Bible.”
Cinematographer Paul Cameron, ASC is best known for his collaborations with Tony Scott (Man on Fire, Déjà Vu), and he has carried lessons from the late director’s approach into his own directing work.
In Paramount+ series Special Ops: Lioness, for instance, Cameron takes an unconventional approach to coverage.
In Cameron’s hands, even a standard dialogue scene between two actors has extra dynamism and energy that come simply from looking for unorthodox angles or alternating focal lengths in a manner that might seem counterintuitive.
“The idea of matching singles or overs in a conventional cutting pattern has never really become part of my vocabulary,” Cameron told IndieWire‘s Jim Hemphill. “It’s more about what looks good on each side — a 65mm lens on Nicole Kidman’s side might be better with a 50mm on the other side with Zoe Saldana, or one side might be more emotional at a steeper angle on an 85mm,” he said.
“Sometimes you need to be a bit bold and break the ‘Five Cs of Cinematography’ [camera angles, continuity, cutting, close-ups, and composition] and deconstruct them.”
Zoe Saldana as Joe and Nicole Kidman as Kaitlyn Meade in Season 1 of “Special Ops: Lioness.” Cr: Lynsey Addario/Paramount+
Speaking to Matt Hurwitz at the Motion Picture Association’s The Credits, he added, “With Tony, I learned to just be fearless with cameras and put them in places I think are emotionally appropriate and not necessarily coverage-oriented. Looking for a shot, say, with a steep angle, a little too close, to make it just the right level of uncomfortable if the scene calls for that. It’s a matter of what makes it feel right, as opposed to matching focal lengths on lenses and distances, which many shows do.”
Taylor Sheridan created and wrote the military thriller that also stars Morgan Freeman, Laysla De Oliveira, Michael Kelly and Jennifer Ehle.
The genesis of the story was a real unit that the CIA set up in Afghanistan for handling female prisoners. Sheridan’s idea: “What if we take young Special Ops women and put them in situations with high terrorist targets?” Namely, befriending the daughter or sister of a target, female to female.
“This way, they could either track and/or take action against high level terrorists. That, to me, was a pretty extreme and interesting idea. It may be slightly different than what Taylor does with a lot of his other shows, but it’s so female-oriented,” Cameron told Owen Danoff at Screen Rant.
He served as the DP on the first two episodes and directed Episodes 5 and 6, and in collaboration with Sheridan and pilot director John Hillcoat established the kinetic visual language for the series.
“Sheridan is very, very specific about scripts,” he says. “If there’s time to do additional lines, or additional shots, or coverage, or anything of that manner, that’s fine, but you’ve got to shoot the script, and you’ve got to treat it like the Bible.”
The challenge is that high-caliber talents like Kidman, Freeman and Saldana are used to improvising their lines. So how did Cameron handle that?
“It’s just always that situation, like, ‘Listen, we’re going to shoot the lines the way they’re written and then, if there’s an idea, we can either address it together or let’s get Taylor on the phone, and we’ll see if it’s something we want to address or extend a little time to shoot as well,’” he told Danoff. “Again, it seems very constrained, but it’s kind of freeing in the sense that you really have that voice of the writer and that showrunner, and that’s what you’re doing.”
Cameron was also heavily involved with Westworld, helping create the initial look of the show and directing episodes through to the series’ final season.
Working with Jonathan Nolan on Westworld, Cameron saw a director who “had linear beliefs of story and stayed with it, and doing that within the work of television,” he says in part two of his interview with Hurwitz. “The reason I started directing there was because I could see somebody setting the bar as high as I’ve ever seen.”
He also learned how to handle a massive amount of scenes in a limited time window. “We might lose a day for some reason and need to make it up, and even with all my experience, I was, like, ‘Oh, my God — how are we gonna do all this stuff?’ And, inevitably, we did it. And that gave me great confidence when I went to direct on Westworld.”
Zoe Saldana as Joe in Season 1 of “Special Ops: Lioness.” Cr: Lynsey Addario/Paramount+
For Special Ops: Lioness, he had to hand his director of photography hat to cinematographers like Niels Albert, John Conroy and Nichole Hirsch Whitaker, something that can be difficult for someone who’s been in their shoes for so many decades.
Most important, he says, is to make sure to include them in prep as much as possible, evaluating scenes and locations, “And to really be open to big decisions,” he says. “What is this scene about? What are the storytelling aspects, and how are we going to manifest it in this location?”
While Cameron and Hillcoat originally considered using a large-format camera like the Sony Venice or the ARRI Alexa LF, the two settled on the popular Alexa Mini LF for most of the production.
Hillcoat didn’t want to see any lens built after 1980, so Cameron gathered an eclectic set for the shoot, including Canon K35s and uncoated Zeiss Super Speed lenses. “They all react so differently,” he told Hurwitz. “The K35s have a great softness on large format, falling off on the edges really nicely. The uncoated lenses have different qualities of halation [spreading of light beyond the source] and blooming and flaring. So if there’s something bright, the image just blooms a little, or the top halates a little bit.”
Locations
Baltimore stood in for countless locations in nearby Washington, DC. The production also shot in Morocco and Mallorca, Spain. The ISIS compound seen in the first episode was shot at a location in Marrakesh. The first meet between Cruz and her target Aaliyah (Stephanie Nur), Amrohi’s daughter, was also filmed in Marrakesh, in the city’s new upscale, Rodeo Drive-like shopping area, Q Street, subbing in for Kuwait City. The show’s wedding sequences were filmed at a beautiful house on the ocean in Mallorca, and additional sets were built there, including the White House Cabinet Room seen in several episodes. Beach scenes representing The Hamptons were shot at Rehoboth Beach, Delaware, 120 miles from Baltimore.
“The challenge on this show was a lot of it takes place in DC, and we were situated in Baltimore, which is not the easiest place to film,” Cameron told Tom Chang at Bleeding Cool.
“We had to make a lot of Baltimore locations work for DC and get the establishing and aerial shots. It came together, but it’s always a challenge when you’re not in the exact place. I enjoyed going over to Morocco, I had some things directed there, and I had several scenes shot for John on episodes one and two there. It ended up being the better part of seven months. That was a challenge unto itself.”
“Bring the audience to a place they are not used to being, close to this assassin,” explains DP Erik Messerschmidt
November 13, 2023
“A Murder at the End of the World” — This Show Has Everything: True Crime, Tech Paranoia and Truly Gorgeous Visuals
“A Murder at the End of the World:” Emma Corrin as Darby Hart. Cr: Chris Saunders/FX
TL;DR
Although set in a glossy, hi-tech world, the creators of FX mystery series “A Murder at the End of the World” embraced imperfection and avoided trying to fix everything in post.
Cinematographer Charlotte Bruus Christensen, ASC talks to NAB Amplify about working with series creators Brit Marling and Zal Batmanglij.
She shot with custom-made detuned lenses and devised LUTs for each of the three principal locations in New Jersey, Iceland and Utah.
At first glance, a murder mystery set at a remote luxury retreat for some of the world’s most influential people recalls shows like The White Lotus and Glass Onion: A Knives Out Mystery, but new seven-part FX series A Murder at the End of the World has a different spin.
“With its time-jumping structure, uniquely eerie tone and warnings about artificial intelligence and climate change, it is also unmistakably the work of the idiosyncratic creators behind Netflix series The OA, Sound of My Voice and The East,” Esther Zuckerman writes in The New York Times.
Those creators are Brit Marling and Zal Batmanglij — a creative team who’ve been together since their first short film in 2007.
Emma Corrin and Harris Dickinson in “A Murder at the End of the World.” Cr: Christopher Saunders/FX
Emma Corrin in “A Murder at the End of the World.” Cr: Christopher Saunders/FX
Harris Dickinson and Emma Corrin in “A Murder at the End of the World.” Cr: Christopher Saunders/FX
Their new show — marking the first time Marling, a writer and actor, has stepped behind the camera as director — is an Agatha Christie-inflected whodunit, featuring a Gen-Z amateur detective played by Emma Corrin (Diana Spencer in The Crown).
“I knew that Brit was going to be a natural director; I just didn’t understand how much I would enjoy the experience of watching Brit direct,” Batmanglij tells The Hollywood Reporter. “Certain actors, when they get into the directing chair, just have a sensitivity. I saw Emma [Corrin] and Harris [Dickinson] bloom in certain ways when Brit was working with them, and that inspired me to want to go take acting classes.”
Marling explains, “When you’ve spent a lot of years acting, you’re really aware of what actors need to create their best work.”
Marling also co-stars as the wife of Clive Owen’s tech billionaire, who invites a motley crew of guests including an environmental activist, a roboticist and an astronaut to his Icelandic retreat, where one or more of them wind up dead.
“It was really eerie, actually, to see the number of things that, when we had set out to write it four years ago, were science fiction,” Marling told Zuckerman. “When we talked about any of this stuff with people, we had to explain what is a deepfake, what is an AI assistant, what’s a large language model — how does that work? And then by the time we were editing it, to see everything come to pass.”
Marling tells Vulture, “Directing feels like you’re taking the world-building part to its ultimate conclusion.”
To film their story, Marling and Batmanglij sought out acclaimed cinematographer Charlotte Bruus Christensen, ASC, who shot horror hit A Quiet Place; All the Old Knives, directed by Janus Metz Pedersen; Aaron Sorkin’s directorial debut, Molly’s Game; and Denzel Washington’s film Fences.
“At heart this is a coming-of-age story about a child of the internet who knows more how to live her life in cyberland than in the real world,” Christensen tells NAB Amplify. “The script had this larger than life quality as if the world of the internet can’t be contained or quite grasped.”
She continues, “As a teenager I remember thinking the stars were so beautiful but there was an unfathomable distance between them and me. That is how I think we all felt about the cyberworld in this story. You can’t put it into a cage.”
The Danish DP has enjoyed a long-standing relationship with director Thomas Vinterberg, which began when her own short films caught his attention. This led to Christensen’s first feature film, Submarino, which earned her a Danish Film Academy award for best cinematography. She also shot The Hunt and Far from the Madding Crowd for Vinterberg.
“From what I know, Zal loved The Girl on the Train (shot by Christensen for director Tate Taylor) but it was one of those processes where our agents got in touch. I was in New York having just shot Sharper (dir. Benjamin Caron) so we had our first meeting there and it was like first love. No one who meets Brit can fail to fall in love with her.”
Having shot a number of features back-to-back, Christensen wasn’t particularly looking for a TV project. Instead, it was the co-creators’ passion and the story itself that convinced her to take the job.
“A Murder at the End of the World:” Emma Corrin as Darby Hart, Harris Dickinson as Bill. CR: Chris Saunders/FX
“A Murder at the End of the World:” Harris Dickinson as Bill, Emma Corrin as Darby Hart. CR: Eric Liebowitz/FX
From “A Murder at the End of the World:” Harris Dickinson as Bill. CR: Chris Saunders/FX
From “A Murder at the End of the World:” Emma Corrin as Darby Hart. CR: Chris Saunders/FX
From “Murder at the End of the World:” (l-r) Jermaine Fowler as Martin, Emma Corrin as Darby Hart, Ryan J. Haddad as Oliver, Pegah Ferydoni as Ziba, Joan Chen as Lu Mei. CR: Chris Saunders/FX
From “A Murder at the End of the World:” Emma Corrin as Darby Hart. CR: Lilja Jons/FX
“A Murder at the End of the World:” Emma Corrin as Darby Hart, Alice Braga as Sian. CR: Lilja Jons/FX
“A Murder at the End of the World:” Emma Corrin as Darby Hart, Alice Braga as Sian. CR: Chris Saunders/FX
“A Murder at the End of the World:” Ryan J. Haddad as Oliver, Alice Braga as Sian, Javed Khan as Rohan. CR: Eric Liebowitz/FX
“A Murder at the End of the World:” Emma Corrin as Darby Hart CR: Lilja Jons/FX
“A Murder at the End of the World:” Emma Corrin as Darby Hart. CR: Lilja Jons/FX
“A Murder at the End of the World:” Emma Corrin as Darby Hart. CR: Chris Saunders/FX
The central character’s crime-solving cyber skills might recall The Girl with the Dragon Tattoo by Swedish author Stieg Larsson. Christensen says their chief cinematic references were the films of the great Polish auteur Krzysztof Kieślowski, and in particular the Three Colors trilogy (1993-1994: Blue, White and Red).
“We learned a lot from Kieślowski movies and wanted to emulate that tone, something very raw yet cinematic and truthful,” she says. “It’s the way that he took simple ideas and then photographed that idea.
“In these days when you can move the camera so much, even virtually, you can have it fly through a keyhole, under a bed, through a wall; we wanted something that felt raw and which retained those happy accidents, those glitches or scratches that are evidence of something real. We wanted an analog style.”
She elaborates, “Our question to ourselves was how do we make it feel minimalist? For us, perfection was imperfection. We didn’t want to be afraid of imperfections but to embrace all the things that can go wrong and not try to fix everything in post. You really have to work hard to protect that because the instinct from your colleagues in post-production is to fix things.”
Emma Corrin in “A Murder at the End of the World.” Cr: Christopher Saunders/FX
Harris Dickinson and Emma Corrin in “A Murder at the End of the World.” Cr: Christopher Saunders/FX
To photograph the series she selected the ARRI Alexa, equipped with a set of spherical lenses from Panavision that she had previously used on the three-part FX miniseries Black Narcissus, a show Christensen also directed.
“The image needed to be messed up a little and these lenses added that less-than-perfect quality,” she said, explaining that Panavision’s VP of optical engineering, Dan Sasaki, “detuned the lenses to achieve a softness and vignetting to break up the digital sharpness and cleanness and push the lenses to capture a less perfect image.”
She devised LUTs for each of the three principal locations in New Jersey, Iceland and Utah. “The color contrast was important to creating an energy between scenes as we move from white ‘desert’ to ‘red desert,’” she says. “Among our first creative conversations was about how to delineate between the real world and the cyberworld.”
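The article doesn’t go into how those LUTs were constructed, but the mechanics of a look-up table are simple to sketch. The toy example below, which is not Christensen’s actual grade, builds a per-channel 1D LUT in NumPy that lifts the blacks and cools the balance, the kind of broad color shift a location LUT might carry before the colorist refines it.

```python
# Minimal sketch of applying a per-channel 1D LUT to an image with NumPy.
# An illustrative "cool, low-contrast" look only, not the show's LUTs.
import numpy as np

def make_1d_lut(gain: float, lift: float) -> np.ndarray:
    """Build a 256-entry LUT: out = lift + gain * (in/255), clipped to 0..1."""
    x = np.arange(256) / 255.0
    y = np.clip(lift + gain * x, 0.0, 1.0)
    return (y * 255).astype(np.uint8)

# Hypothetical "Iceland" look: slightly lifted blacks, cooler balance
# (red pulled down a touch, blue left at full gain with extra lift).
lut_r = make_1d_lut(gain=0.90, lift=0.03)
lut_g = make_1d_lut(gain=0.95, lift=0.03)
lut_b = make_1d_lut(gain=1.00, lift=0.05)

def apply_look(image: np.ndarray) -> np.ndarray:
    """image: HxWx3 uint8 RGB array; returns a graded copy via LUT indexing."""
    out = image.copy()
    out[..., 0] = lut_r[image[..., 0]]
    out[..., 1] = lut_g[image[..., 1]]
    out[..., 2] = lut_b[image[..., 2]]
    return out

# Example: grade a random test frame.
frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
graded = apply_look(frame)
```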
She approached the show “like a seven-hour movie, as one story and one journey in terms of lighting,” operating the A camera with occasional second unit work for pick-ups.
While Batmanglij and Marling swapped directorial duties on the episodes, Christensen lensed all seven over the 100-day production period.
“I love prep and being in control of what we’re doing, but here I learned how to prep while shooting,” she explains. “If Zal was directing for three days, then Brit would be prepping her next block of two to three days, and vice versa, but I was busy shooting all the time.
“So when either director came to me with a new idea that they’d thought about I had to be quick to re-evaluate. So, I learned to go and chat with the director who wasn’t directing that day in my lunch break to tap into their thoughts and to prep for the next block while shooting.”
https://youtu.be/Xf7fEXANx2c
The billionaire’s Icelandic retreat recalls the opulence of the Roy family in Succession or the forest mansion in the sci-fi feature Ex Machina. It was built on soundstages in New Jersey and presented the biggest production challenge to the DP. The budget wouldn’t allow for the build of the entire set so they built half, dressed it for half the show, and then flipped it around, dressing the other half of the hotel weeks later.
“It’s a circular hotel but we only had space to build half of it at a time so we’d shoot the one half then, with the other half of the set dressed, we’d shoot the same scenes but in the other mirrored half. We also had to connect those scenes to Iceland. We had snow on the stage to link to snow in Iceland.”
https://youtu.be/RVPx5fcLRdI
Working within an LED volume might have solved the need to dress and redress sets at scale, but it would not have delivered the analog aesthetic they desired.
While the co-creators and directors naturally sing from the same sheet, Christensen says that they were different in the way they executed things.
Of Marling, who was making her directorial debut, she says: “Brit is very organized, with thorough prep and previz. She needed that security, while Zal allowed for a more spontaneous approach. It’s not quite improvisation, but he wasn’t scared of seeing what happens on the day and reacting to that.”
Although she says that the shoot during winter and under COVID conditions in Iceland was particularly tough, Christensen wouldn’t hesitate to work with the duo again.
“Their passion for the story and the camaraderie they bring to set is something to be valued. It was a full on experience but I have to underline that Brit and Zal were an amazing team — which, trust me, does not always happen.”
“Bring the audience to a place they are not used to being, close to this assassin,” explains DP Erik Messerschmidt
November 12, 2023
“The Holdovers:” Alexander Payne and Kevin Tent on the Director-Editor Collaboration (and They Should Know)
Paul Giamatti and Dominic Sessa in director Alexander Payne’s “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
TL;DR
Director-screenwriter-producer Alexander Payne and editor Kevin Tent, ACE reunite for their eighth feature film, “The Holdovers.”
A period film with Payne’s characteristic tragicomic elements starring regular collaborator Paul Giamatti, the comedy-drama is generating awards buzz.
The film marks one of the few occasions where Payne has not worked from his own script, although Tent says this made no difference to his craft approach.
The film’s 1970 setting is evoked with needle drops of classic tracks by The Allman Brothers Band, The Temptations and The Swingle Singers, among others.
Longtime friends and collaborators, director-screenwriter-producer Alexander Payne and editor Kevin Tent, ACE reunite for their eighth feature film, comedy-drama The Holdovers, which has been generating awards buzz.
Set in 1970, The Holdovers tells the tale of Paul Hunham (Paul Giamatti), a curmudgeonly instructor at a New England prep school who remains on campus during Christmas break to babysit a handful of students with nowhere to go. He soon forms an unlikely bond with a brainy but damaged troublemaker, and also with the school’s head cook, a woman who just lost a son in the Vietnam War.
Since their first project together, Citizen Ruth in 1996, the duo has made Election, About Schmidt, Sideways, The Descendants (for which Tent was nominated for an Academy Award), Nebraska and Downsizing. Payne was Oscar nominated for adapting the screenplays for Election, Sideways and The Descendants (winning twice) and nominated as best director for Sideways, The Descendants and Nebraska.
Director of photography Eigil Bryld, actor Dominic Sessa and director Alexander Payne on the set of their film “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Director Alexander Payne and actor Dan Aid on the set of their film “The Holdovers,” a Focus Features release. Credit: Seacia Pavao
Director Alexander Payne and actors Paul Giamatti and Da’Vine Joy Randolph on the set of their film “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
A Character-Driven Period Drama
In keeping with these stories, The Holdovers is character-driven so don’t expect car chases, gunfights or explosions. “It is about people and the pain they carry in their lives and how opening oneself up to others around you can help relieve that pain and sometimes maybe even help you to grow,” describes The Rough Cut host Matt Feury, who talked with both Payne and Tent for the Avid-sponsored podcast.
Payne conceived the basic framework for the movie about a dozen years ago after watching a restoration of the 1935 French comedy Merlusse. About five years ago, he received a TV pilot out of the blue, which prompted him to call the writer, David Hemingson.
“I said, ‘Hey, you’ve written a great pilot. I don’t want to do it, but would you consider writing a story for me?’ That’s how it happened.”
The Holdovers is among the few occasions where Payne has not worked from his own script, although Tent says this made no difference to his craft approach.
“On The Descendants we really toned back the comedy because it felt a little forced, but here the tone kind of came prepackaged into the cutting room. Nothing ever seemed forced.”
Dominic Sessa, Paul Giamatti and Da’Vine Joy Randolph in director Alexander Payne’s “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Paul Giamatti and Dominic Sessa in director Alexander Payne’s “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Dominic Sessa and Paul Giamatti in director Alexander Payne’s “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Paul Giamatti in director Alexander Payne’s “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Da’Vine Joy Randolph, Dominic Sessa and Paul Giamatti in director Alexander Payne’s “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Dominic Sessa and Da’Vine Joy Randolph in director Alexander Payne’s “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Editing (Just Enough)
The Holdovers largely focuses on two or three main characters, which means that for an editor there aren’t a lot of places to hide when the director has shot long takes of dialogue and reaction.
“Sometimes it is tricky,” Tent agrees, “because Alexander gets amazing performances, but I think it is because he lets them take their time and find the lines properly.
“We try not to cut too much. It is a challenge to keep things moving, picking up the pace, but keeping the performances solid. We had some challenging scenes because we had a couple of fairly long talking scenes, and we were trying to condense them as the film was evolving.”
He adds, “We tightened in a lot of the scenes to get to where the boys were leaving sooner. And we’re always doing that internally within scenes, dropping lines, that kind of stuff.
“But I think the screenplay really is so amazing. Just the reveal of Paul, as you dig deeper into Paul, you find out so late in the movie that he basically ran away from home, and then you find out that his dad beat him. Normally, people try to set all those things up right in the beginning, and I really appreciated the way things were slowly revealed here.”
Additionally, Tent tells A.Frame, “We’re pretty careful about not getting anything too sentimental or too sappy.”
How do they achieve that? Tent says, “It’s really a discipline in how we’re cutting the performances, I would say. So, if it doesn’t seem like it’s ringing true, then we would probably cut it out.”
The film’s 1970 setting is evoked with needle drops of classic tracks by The Allman Brothers Band, The Temptations and The Swingle Singers, among others.
“Mindy Elliot, our associate editor and assistant editor, started putting music in and then we work with music editor Richard Ford, who helped us with both score and needle drops,” Tent says.
“With needle drops you can’t get too committed to anything because it costs so much money and it’s just such a back and forth with [licensing]. But in the beginning on this movie, I couldn’t really hear the music in it. Mindy suggested putting in one of the Swingle Singers’ Christmas songs and that became something dramatic that we use a lot, which was great.”
Signature Dissolves
Tent also talked about the use of dissolves, a signature Payne-Tent storytelling technique.
“We use a lot of them in The Holdovers, but we’ve always used them. It’s been part of our film language all the way back to Citizen Ruth. There’s a couple of really interesting ones in The Holdovers that I think people actually thought were mistakes at first, and we’re like, ‘No, we did that on purpose.’”
Tent explains to A.Frame, “[W]e’re doing the same things that we’ve always done. But people think the dissolves now are because we’re trying to make it a ’70s film, but not really. We always used them.”
But unlike Sideways, he notes, “there was not a lot of predesigned dissolves.” Instead, “they were all made up in post. But they’re very effective, and I think they smooth out cuts and stuff like that.”
Dominic Sessa, Da’Vine Joy Randolph and Paul Giamatti in director Alexander Payne’s “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Da’Vine Joy Randolph in “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Da’Vine Joy Randolph and director Alexander Payne on the set of their film “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Dominic Sessa in director Alexander Payne’s “The Holdovers,” a Focus Features release. Cr: Seacia Pavao
Seventies (Re)Immersion
With Jami Philbrick at Moviefone, Payne elaborates on the 1970s setting: “I don’t remember exactly the moment, but connecting the dots, I thought it would be neat for the movie, to just give it something special. Nebraska’s in black and white, which just gives it something a little special formally. I just thought, ‘Well, wouldn’t it give this movie something special if we make it look and sound like a movie made in 1970.’
“But what it did, especially as my first period film, was give us the idea that we’re pretending that we’re working in 1970 making a low-budget contemporary film at that point. I think that helped our sense of aesthetic, that the sets and the costumes look as lived in, grimy and old as they would’ve been had we been making just a low budget contemporary movie back then.”
He adds, “I always put a lot of thought into the movies in terms of what car the protagonist drives. It’s always an important thing to think about. It tells you as much about the character as their apartment does. The good ones, I think, were Paul Giamatti’s red Saab in Sideways. Then the best one is Matthew Broderick’s Ford Festiva, a little teeny tiny pathetic Ford Festiva in Election.”
Seventies movies were formative for the 62-year-old filmmaker, as he recounts to Jake Coyle reporting for AP News. Payne screened several classics for crew and cast, including The Graduate, The Last Detail, Paper Moon, Harold and Maude and Klute.
“We weren’t trying to consciously emulate the look and feel of any single one of those films but we all wanted to splash around in the films of our contemporaries, had we been making a movie then.
“My birthday parties, we’d go see Chinatown or One Flew Over the Cuckoo’s Nest. But that’s the period when I was a teenager and a sense of taste was being imprinted on me.
“And what I was told was a commercial American feature film. Now they’re considered art films or whatever, the last golden age. Well, you never know when you’re living in a golden age.”
If FX’s The Bear reminds you of a Martin Scorsese film, you won’t be surprised that editor Joanna Naugle is a devotee who used his movies as references for the show.
November 5, 2023
“Fair Play:” How to Throw Your Audience Off Balance
Fair Play. (L to R) Alden Ehrenreich as Luke and Phoebe Dynevor as Emily in “Fair Play.” Cr. Courtesy of Netflix
TL;DR
“I am someone who writes my fears, and I was afraid that my career would cost me my relationship. So I wanted to write a movie about that,” says writer-director Chloe Domont about her hit Netflix thriller, Fair Play.
The film addresses everyday questions around modern masculinity, mining a specific type of male dread that manifests itself in an obsession with being “alpha,” fueled by a thriving podcast and YouTube industry.
Domont hopes that her film raises questions about how the link between female empowerment and male fragility might be dismantled.
Chloe Domont’s thriller Fair Play provoked a Sundance bidding war that Netflix won for $20m and put the writer and director in the spotlight.
It’s the sort of triumph she is still wary of, in terms of how it impacts her own relationships, and one built on strength through adversity in what she calls the “toxic link” between female empowerment and male fragility.
“What I really want to explore with this film is why is it that a woman being big makes a man feel small?” she told Maggie McGrath in a video conversation. “Like why are those two things so closely linked? And I think it’s a systemic societal problem. I think that it’s the way society raises boys to believe that masculinity is an identity and that they have to fit in the box, that success is a zero-sum game. And it’s not.”
According to Moviemaker, the film mines a specific type of male dread that manifests itself in an obsession with being “alpha,” fueled by a thriving podcast and YouTube industry.
Fair Play tells the story of two young employees at a cutthroat hedge fund, desperate for promotions. They’re secretly engaged, because company policy prohibits interoffice relationships. But things get nasty when Emily (Bridgerton’s Phoebe Dynevor) begins to far outperform Luke (Alden Ehrenreich, Solo: A Star Wars Story).
Fair Play. (L to R) Alden Ehrenreich as Luke and Phoebe Dynevor as Emily in “Fair Play.” Cr. Courtesy of Netflix
Domont earned a BFA in Film & Television from New York University’s Tisch School of the Arts and, by 2017, was landing steady TV directing jobs on high-profile projects like HBO’s Ballers, CBS’s Clarice and Showtime’s Billions.
The idea for Fair Play was “burning inside of me” she told McGrath as a result of her own experiences. “I had this feeling as my career was starting to take off, where my successes didn’t feel like a win, [but] like a loss because of the kinds of men I was dating. These were men who adored me for my strengths, or my ambition, but at the same time, they still couldn’t help but feel threatened by the very same things that they adored me for because of the way that they were raised.
“It just made me realize how much hold these ingrained power dynamics still have over us. So I wanted to put that on screen and be as ruthless with it as … the nature of the subject matter itself.”
Domont’s favorite scene sums this feeling up and was the first thing that came to mind when she started writing the script. This is the scene when Emily gets a promotion but her first reaction isn’t excitement; it’s fear.
Fair Play. Phoebe Dynevor as Emily in “Fair Play.” Cr. Sergej Radovic / Courtesy of Netflix
“That walk home and the dread and the silence of when she gets in there to tell him about it. The way we shot it, too, is I wanted her to feel very small in the frame. So there’s a couple shots that are over his shoulder and he’s very dominant in the frame. She’s very small in the frame and looking away and afraid to even look at him. And I just felt like that encompassed what I was trying to explore.”
She elaborates on this with McGrath, saying that while there may be some progress in American corporate culture as a result of #metoo, she also feels there’s a slow erosion of women’s careers.
“It might not be blunt force trauma, but it’s a death of 1,000 paper cuts. This kind of bad behavior was ignored, and then normalized. And then the scary part is usually after that, it’s escalated. So that’s why those little tiny breadcrumbs you are constantly leaving are so important. It’s like a snowball that constantly builds.”
Domont hopes that her film raises questions about how the link between female empowerment and male fragility might be dismantled.
“How can we demystify the role that men are raised into thinking they’re supposed to fill? How can women embrace their successes without fearing that it’ll hurt them on some level? And how can we love and trust one another, in a world that’s so dependent on the very power dynamics that get in the way of that love, and trust and respect?”
“Fair Play,” behind the scenes: Phoebe Dynevor as Emily. Cr. Sergej Radovic / Courtesy of Netflix
Much of the tension in the on screen relationship is attributed to Domont’s work with editor Franklin Peterson, whose credits include episodes of Homecoming, Calls and Gaslit.
“Even talking to Menno [Mans], my DP, we were constantly reminding each other, ‘Pressure cooker, pressure cooker,’” Domont told Peter Tonguette of Cinemontage. “We wanted to build up this ballooning tension — this balloon that just keeps getting bigger and bigger, and you know it’s going to burst at some point, you just don’t know what or when. The idea was building up this tension of this couple who can’t escape each other, really.”
Peterson explained that his first thought was to start off the film like a straight drama: “You sell the characters as if they are a couple you are going to just really root for, and then you pull the rug out slowly from underneath them. [But] Chloe said, ‘No, I don’t want to do that. I want, from the very first shot, to keep you unsettled.’”
He also explained that test screenings were “wildly” helpful. “It’s an R-rated movie for people who want to see an R-rated movie about a toxic relationship, or are willing to go on this ride with us. Once you enter that realm, you’re now asking, ‘How do we make this the best version of that movie for that group of people?’”
Fair Play. (L to R) Alden Ehrenreich as Luke and Phoebe Dynevor as Emily in “Fair Play.” Cr. Courtesy of Netflix
One of the most difficult scenes Peterson cut was the final scene between Campbell and Emily. It carries a delicate balance between what the characters say and what they mean, and that balance has to coexist with an overall tension that keeps the audience unsure of what will happen.
As he explained to Filmmaker, “The coverage isn’t complex but to modulate the performances, guide the pace, and accommodate new lines meant we went through dozens upon dozens of versions. We would test the movie with a version of the scene we thought worked, only to realize that while solving one issue, we created another. It’s an example of how the hardest editing work will never show on the screen.”
One thing that never changed was the story’s ending. Domont knew what she wanted to say, and was never tempted to let her characters off with a pat resolution.
Fair Play. (L to R) Alden Ehrenreich as Luke and Phoebe Dynevor as Emily in “Fair Play.” Cr. Sergej Radovic / Courtesy of Netflix
“I don’t write one word until I know what the ending is,” she told Moviemaker. “That ending is where the story and the genre come together, in one final punch.”
“It’s working within the thriller genre, which uses violence as a means to solve conflict,” says Domont. “So that was important.”
To watch Fair Play, you would think it was shot entirely in Manhattan, where the story takes place, taking over the city’s many real hedge-fund offices and overpriced apartments, restaurants and bars.
In fact, the production was based in Belgrade, Serbia — where Fair Play executive producers Rian Johnson and Ram Bergman had recently made much of Glass Onion: A Knives Out Mystery.
Domont told Moviemaker, “Ram was advocating for us building there, because he was like, ‘This is the way to build a set the way you want. This is a way to put the most amount of money on the screen. And the crews are excellent.’ So that was what we did. We built all the sets in Serbia. And then we shot all the exteriors in New York, because the movie does not work if you don’t shoot the exteriors in New York.”
She took full advantage of being able to have sets designed to her specifications.
“I intended for it to be kind of a claustrophobic film, in the sense that the characters are trapped between their home life and the workspace, and they go from one enclosed space to another and they can never escape each other,” she explains. “And because we’re in these same spaces for so long, I wanted to build them. And it was very important for me to build them. I had a very specific idea for how those spaces should be and feel, to feel claustrophobic in different ways.”
“Fair Play,” behind the scenes, L to R: Alden Ehrenreich as Luke, Phoebe Dynevor as Emily, Brandon Bassir as Dax. Cr. Sergej Radovic / Courtesy of Netflix
Per advises that good principles of filmmaking, like pacing, the rule of thirds, and color psychology, all still apply to TikTok videos.
He also says that he loves gear, but it isn’t strictly necessary to make good content. The most important element is a good story. (The second most important element is being able to hear that story, so investing in a microphone is a good idea.)
“A lot of people waste time on their hooks. You have about 60 to 90 seconds, maybe two minutes, if you’re making a TikTok,” Adrian Per told the audience at B&H BILD Expo. (Watch his full talk, below.)
Specifically, he warns, “A lot of people waste time explaining who they are. Or if they’re selling a product. They’re telling you about the product immediately. … that’s not how you sell that. And that’s not how you sell yourself.”
Instead, “you want to get into the premise of your video immediately,” Per advises. For his format, that means telling the camera: “Today, we’re going to learn about sound design.” To save time but add detail, he will add a written description. For example, “This is how you do sound design for free. This is how you do sound design on a budget. This is how you do sound design for under $200.”
“Bam!” Per says. “If you’re interested, you’re gonna stay for a few more seconds.”
Regarding the “intro” and “follow and like” trends, Per says, “I promise you, nobody cares who you are at the beginning of the video. But maybe towards the end, they will.”
STRUCTURE AND PACING
Even in short-form video, “you still want to keep the basic principles of storytelling,” Per explains.
No matter your kind of content, Per says, “There are little tidbits and moments within your story or within your day that you can create tension. That’s what keeps us entertained. That’s what keeps us watching. We want to see you solve something.”
“It’s just really condensed,” Per says. “So it’s still a three-part story. It’s still a beginning, middle and end.”
“Don’t rush. You know, you still want to deliver a story,” he says. “You still want it to breathe.”
After all, “if people like your content, they’ll follow, and they’ll see more parts” if you run out of time to capture all your thoughts in one TikTok or Reel.
Remember, Per says, “You’re delivering a story, so you want people to hear the story.”
“Audio is really important,” he says. “You can watch a shitty quality video with good audio, and you’ll be in tune. But you can’t watch an 8k or a 4k video with shitty audio, there’s wind blowing in the background, you’re not going to watch it. And that’s just a fact.”
But “As long as it’s clear, you can use anything. I personally use these wireless lav microphones,” Per says. He adds, “I use my phone as my recorder a lot.”
Also, Per says, “Picking out your music is really, really important. I pick out my music first. Like I’ll go through my songs on my Spotify playlist and say, ‘Alright, I’m going to do it to this pacing.’”
He suggests, “Fitting your dialogue or your voiceover in the pockets, and within the tempo of your music, is important.” He explains, “It’s like a psychological secret that you can use as a tool to get people more tuned into your content.”
Also, Per says, “I want to hear what’s going on. If you’re at the beach, and I don’t hear the waves it feels weird, right? If I’m at a park and I don’t hear birds, it feels empty. These are things that weren’t really noticed. But they’re felt.”
To achieve that, he says, “I meticulously go through all of my sound design. You know, I have a whole folder of things that I’ve recorded on my phone, whether it’s waves, whether it’s me driving in my car, the sound of me chopping vegetables, I throw it in there, and I match it with my footage. And that’s just the free way to do it. I know there’s like subscriptions out there for stock sounds but they’re really expensive. I’ve downloaded sounds from YouTube.”
Adrian Per. Image courtesy of the creator
SHOTS AND LENSES
“A lot of these things, they’re felt,” Per explains. “They’re not really noticed or seen. But when a story is done right, you’ll notice them because it feels… different.”
When creating short-form content, Per says, “I still keep in mind the rule of thirds. I still keep in mind the science behind my lens choice.”
Understanding lenses and focal lengths “will help a lot,” he adds. “If you can afford it, if you have access to multiple focal lengths, they can help.”
“When I’m trying to deliver something that’s personal, or deliver something that I want you to really pay attention to, I’ll use a 35 to 50 millimeter because it blurs out the background,” Per says.
Conversely, “with a wide lens, it distorts things. It’s anxiety inducing. It can feel scary,” he says.
Also, “a formula I like to keep in mind: I’ll go from wide, to extreme close-up, to medium, to extreme close-up, to close-up, to wide. I like to give it variation; that’s something that will help your story and will keep your audience’s attention,” Per shares. After all, “I don’t want it to feel boring, right?”
Nonetheless, Per says, “I film over 50% of my content on my iPhone. There’s a lot of pickup scenes in my videos that you would never know that I shot on an iPhone. I just put on the cinematic mode and plug it in there. I color grade it to match my cinema camera. And nobody will ever know.”
After all, he says, today, “everybody has a good quality camera on them. Whether it’s an Android, green bubbles, or an iPhone. I’m just kidding, Android quality is actually great, their cameras are actually better. I just don’t want to inconvenience my friends in the group chats.”
Ultimately, “You don’t need flashy editing, no tricks. You don’t need to [have] After Effects. Stories are told with just regular cuts in movies. If you know how to do it, that’s great. And if it serves your story, even better.”
COLOR GRADING
Color grading is another nice-to-have for creators, as far as Per is concerned.
“Depending on the emotion, I will color grade to help that story, but it’s also not necessary, if you don’t know how to color grade,” Per says.
“If you’re talking about something somber or sad, desaturating your color or making it cooler will help tell that story,” he explains. On the other hand, “If it’s a hot, bright summer day … or if you’re making something happy, exciting, maybe you want to add some more saturation, maybe you want it to pop.”
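As a rough illustration of that idea (not Per’s own presets or workflow), the sketch below uses the Pillow imaging library to pull the saturation down and cool the balance of a single exported frame; the file name is hypothetical.

```python
# Minimal sketch of a "somber" grade using Pillow: lower the saturation
# and shift the balance cooler. Illustrative only, not Per's presets.
# "frame.jpg" stands in for a frame exported from an edit.
from PIL import Image, ImageEnhance

img = Image.open("frame.jpg").convert("RGB")

# Desaturate: 1.0 keeps the original color, 0.0 is grayscale.
muted = ImageEnhance.Color(img).enhance(0.6)

# Cool the image slightly: pull red down, push blue up.
r, g, b = muted.split()
r = r.point(lambda v: int(v * 0.92))
b = b.point(lambda v: min(255, int(v * 1.08)))
cool = Image.merge("RGB", (r, g, b))

cool.save("frame_somber.jpg")
```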
In total, “Color grading [takes] probably 30 to 45 minutes” for Per, who notes, “I’ve made my presets for that for myself already.”
“On average, it’ll probably take six hours,” Per says, “for one piece of content, dammit. Sounds like a long time.”
“When it comes to my Sunday short films, it takes me about 15 to 30 minutes to write it. I don’t know. Yeah, I mean, it’s 90 seconds, right? So I try not to think too hard. And I’m confident in how I speak and how I deliver. So when it comes to writing, I kind of just talk things out with the music.
“And that takes about 15 to 30 minutes or so. Filming it, I take anywhere from 90 minutes to four hours sometimes, for a 90-second video, which sounds pretty insane. But I don’t go anywhere past four hours or so. I feel like that’s just overshooting, and if I am taking over four hours it’s just because I didn’t plan it as well as I should have. For the most part, it’s under two hours. And with editing, it takes me about an hour, sometimes 30 to 45 minutes, just because I look at my script.”
But Per says, “I try to deliver quality and value without sacrificing my day.”
Also, he advises, “the more times you do something over and over, the better you will get. So when you spend a bunch of time on one piece of content and don’t put it out, trying to perfect it, I think that time spent on perfecting that one thing by putting hours into it hurts you in the long run.”
Casey Neistat is most famous as a YouTuber, but that wasn’t his goal… his career “wasn’t an option” when he started creating videos.
October 30, 2023
AI Will Be the Most Impactful Technology in 2024, Say Global CTOs (and Pretty Much Everyone Else)
TL;DR
An IEEE study, “The Impact of Technology in 2024 and Beyond,” identified AI as the technology most likely to have the greatest impact on a broad range of industries, from film to medicine.
Prompt engineering and the ability to verify AI’s deliverables are the skills needed to generate meaningful outcomes with generative AI.
The study also finds that 5G rollout is still a work in progress and hasn’t kept pace with expectations.
Extended reality, cloud computing, 5G and electric vehicles are also among the top five most important technologies in 2024, according to the IEEE, but let’s guess what comes in at the top.
The survey of global technology leaders from the US, UK, China, India and Brazil found AI to be the most impactful technology next year, encompassing predictive and generative AI, machine learning (ML) and natural language processing (NLP).
Extended reality (XR), including the metaverse, augmented reality (AR), virtual reality (VR) and mixed reality (MR), came second, just ahead of cloud computing in third.
AI, though, was voted into the top spot by more than 65% of respondents, a group that included 350 CTOs.
In 2024, AI applications and algorithms that can optimize data, perform complex tasks, and make decisions with human-like accuracy will be used in diverse ways, the study finds.
According to the survey, among the top uses of AI in the coming year will be real-time cybersecurity, increasing supply chain efficiency, aiding and accelerating software development, automating customer service, and speeding up the screening of job applicants.
However, integrating AI into existing work isn’t as straightforward as flipping a switch. In the study, nearly half of respondents said they see difficulty integrating AI into existing workflows as one of the top three concerns when it comes to using generative AI in 2024.
“New use cases of generative AI and their integration into the general architecture may turn [out] to be serious challenges,” said IEEE life senior member Raul Colcher. “Good business analysts and system integrators will be essential.”
Artificial intelligence in its many forms will be the most important area of technology in 2024. Cr: IEEE
In 2024, we can expect to see more sophisticated AI applications and algorithms that can optimize data, perform complex tasks and make decisions with human-like accuracy. Cr: IEEE
Providing oversight of AI model input data and accuracy of the outputs, managing the integration of AI with existing functions and undergoing training for these skills are some of the many ways humans will work with AI in the future. Cr: IEEE
The top three cybersecurity concerns in 2024 remain the same as last year: data center vulnerability, cloud vulnerability, and security issues related to the mobile and hybrid workforce/employees using their own devices.
Additional data from the survey illuminates the challenge. Respondents were asked to list the top skills they were looking for in candidates for AI-related roles.
“Prompt engineering, creative thinking and the ability to verify AI’s deliverables — these three skills are what you need to generate meaningful outcomes with the aid of generative AI,” said IEEE senior member Yu Yuan.
IEEE member Todd Richmond added, “We need to collectively figure out what are ‘human endeavors’ and what are we willing to cede to an algorithm, e.g., making music, films, practicing medicine, etc.”
The benefits of AI are clear to many, but so are the potential risks. Among them is the risk of overreliance on generative AI for facts. As the IEEE puts it, the problem is that those facts aren’t always accurate. And with all forms of AI, it can be difficult to find out how, exactly, the software arrived at its conclusion.
In the survey, 59% of respondents cited an “overreliance on AI and potential inaccuracies” as a top concern of AI use in their organizations. Part of the problem is that the training data itself can be inaccurate.
“Verifying training data is difficult because the provenance is not available and volume of the training data is enormous,” said IEEE life fellow Paul Nikolic.
In 2024 and beyond, expect intense efforts to ensure that AI results are more accurate, and the data used to train AI models is clean.
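As a loose illustration of the “verify AI’s deliverables” skill IEEE’s Yu Yuan describes above, here is a minimal generate-then-check loop in Python. Everything in it is hypothetical: generate() is a stand-in for whatever model API you use, and the keyword check is a placeholder for real verification against trusted sources or a human reviewer.

```python
# Hypothetical sketch of a "generate, then verify" loop; not a real fact-checking system.
def generate(prompt: str) -> str:
    # Stand-in for a call to your model of choice.
    return "Draft answer mentioning revenue and Q3 guidance."

def verified_answer(prompt: str, required_terms: list[str], max_tries: int = 3) -> str | None:
    """Retry the prompt until the draft mentions every required term, else give up."""
    for _ in range(max_tries):
        draft = generate(prompt)
        if all(term.lower() in draft.lower() for term in required_terms):
            return draft                                  # passes the (very crude) check
        prompt += "\nBe sure to cover: " + ", ".join(required_terms)  # refine the prompt
    return None                                           # escalate to a human reviewer

print(verified_answer("Summarize the earnings call.", ["revenue", "guidance"]))
```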
Are we the last generation to make decisions about AI? Gartner offers 10 predictions for technology in 2024 (and beyond).
October 30, 2023
Posted October 30, 2023
From Pixels to Profits: How Virtual Influencers are Rewriting the Rules of Fame, Commerce and Authenticity, Part 2
In Japan, the first TV commercial to feature an AI model sparked a nationwide dialogue on the future of advertising and entertainment.
TL;DR
Disrupting industries from e-commerce to marketing to pop music, virtual influencers continue to rake in followers and profits.
In Japan, the first TV commercial to feature an AI model sparks a nationwide dialogue on the future of advertising and entertainment.
In China, virtual influencers are becoming a strategic asset in the e-commerce landscape, especially in livestreaming, offering a cost-effective alternative to human hosts.
Ethical and cultural complexities continue to surround the rise of virtual influencers, with concerns about transparency and the ethical use of digital likenesses.
Regulatory action is emerging, as seen in India’s new guidelines for social media influencers, including virtual ones, to disclose promotional content.
As virtual influencers continue to multiply in the marketing space and prove their economic might in the music industry and beyond, the question looms: What’s next for these digital disruptors?
Part 1 explored the rise of virtual influencers in South Korea, their impact on the music industry, and some of the ethical concerns surrounding their human-like personas. In this second installment, attention is directed to Japan, where a groundbreaking TV commercial featuring an AI model has ignited a nationwide dialogue on the future of both advertising and entertainment. From the innovative utilization of AI in Japanese advertising to the strategic incorporation of AI-generated personalities in China’s e-commerce landscape, the far-reaching impact and ethical complexities of virtual influencers cannot be denied.
Japan’s AI Revolution Hits the Airwaves
Japan has aired its first-ever TV commercial featuring an AI model, sparking a nationwide conversation about the future of advertising and entertainment. The commercial, produced for Japanese tea brand Ito En’s Oi Ocha Catechin Green Tea, carries a forward-thinking message, “The time to change the future is now!” and features a woman who starts drinking the beverage so she can live a healthy life in the future. The woman, an AI model, ages over the course of the ad as she is depicted skipping around, drinking the tea and smiling.
https://youtu.be/DEoG1NCdmdY
“We weren’t intending to create a commercial personality with AI,” an Ito En company spokesperson explained to Shiho Fujibuchi at Japanese media outlet Mainichi. But the company determined that using AI would be the “best way” to age a character 30 years into the future.
The artificial character was created by Tokyo-based developer AI model Co., which trained an AI model on a large number of faces, after which designers and other artists made changes.
On social media, some users made comments pointing out the lifelike features of the spokesmodel, Bryan Ke reports at NextShark, but others were more critical. “Yeah, the technology is impressive and all, but realizing ‘This person doesn’t really exist’ makes me feel sort of empty inside,” one user wrote.
While the commercial has been largely well-received, it has also raised questions about the ethical implications of using AI in media. Some are wondering if this could be a “scandal-free future,” given that AI models don’t come with the controversies that sometimes plague human celebrities, Casey Baseel observes at Japan Today.
“With AI models, there’s no risk of them getting involved in scandals,” one comment on the video read.
“If you’re upset about companies using AI models, you should complain to celebrities who cause scandals,” read another.
“Japanese celebrity endorsement marketing is almost entirely focused on the spokesperson’s image,” Baseel writes. “So when that image becomes sufficiently cracked, it has the potential to take the entire promotional strategy down with it, and it wouldn’t be a shock if the advantage of being able to opt out of all those risks is part of why Ito En is going with an AI model this time.”
In China, the world’s largest e-commerce market, virtual influencers aren’t just a trend; they’re a business strategy. These AI-generated personalities are becoming increasingly prevalent, especially in the realm of livestreaming. But what does this mean for human influencers, and what’s next on the horizon?
“Today, livestreaming is the dominant marketing channel for traditional and digital brands in China,” Zeyi Yang notes in the MIT Technology Review. Human influencers can broker massive deals in just a few hours, selling more than a billion dollars’ worth of goods in one night. However, the cost of training and retaining these human hosts is significant, making AI-generated streamers a cost-effective alternative.
“Since 2022, a swarm of Chinese startups and major tech companies have been offering the service of creating deepfake avatars for e-commerce livestreaming,” says Yang. “With just a few minutes of sample video and $1,000 in costs, brands can clone a human streamer to work 24/7.”
According to Huang Wei, the director of virtual influencer livestreaming business at the Chinese AI company Xiaoice, these AI-generated streamers won’t outshine star e-commerce influencers but are good enough to replace mid-tier ones. “It’s harder to get a job as an e-commerce livestream host this year, and the average salary for livestream hosts in China went down 20% compared to 2022,” Yang reports. However, the potential for these AI streamers to complement human work during off-hours makes them a valuable asset.
“Now, all the human workers have to do is input basic information such as the name and price of the product being sold, proofread the generated script, and watch the digital influencer go live,” says Yang. “A more advanced version of the technology can spot live comments and find matching answers in its database to answer in real time, so it looks as if the AI streamer is actively communicating with the audience. It can even adjust its marketing strategy based on the number of viewers.”
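To make the comment-matching idea concrete, here is a toy sketch of the behavior Yang describes. The product names, canned answers and the keyword rule are all invented for illustration; a production system like Xiaoice’s would rely on far more sophisticated retrieval and language models.

```python
# Toy illustration of matching live comments to canned answers; not any vendor's actual system.
ANSWER_DB = {
    "price": "It's 99 yuan tonight only; the link is pinned at the top of the stream!",
    "shipping": "Orders placed before midnight ship within 48 hours.",
    "size": "It runs true to size; check the chart on the product page.",
}

def reply_to(comment: str) -> str | None:
    """Return the first canned answer whose keyword appears in the live comment."""
    text = comment.lower()
    for keyword, answer in ANSWER_DB.items():
        if keyword in text:
            return answer
    return None  # no match: fall back to the generic script or a human operator

print(reply_to("How much is the price??"))
```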
Silicon Intelligence, a Nanjing-based startup, plans to add “emotional intelligence” to its AI streamers. “If there are abusive comments, it will be sad; if the products are selling well, it will be happy,” says Sima Huapeng, the company’s founder and CEO. This emotional layer could add a new dimension to the shopping experience, making it more interactive and engaging for consumers.
As virtual influencers continue to rise in prominence, they bring with them a host of ethical and cultural questions that societies around the world are grappling with. From regulatory responses to public sentiment, the landscape is as complex as it is fascinating.
One of the most pressing concerns is transparency. “Many virtual influencers already present as human-like, and it may become increasingly difficult to distinguish between them and real people,” Mai Nguyen notes at The Conversation. This is particularly problematic in advertising, where the line between reality and virtuality can blur, leading to ethical dilemmas.
India has been proactive in addressing this issue. “In January, its Department of Consumer Affairs made it mandatory for social media influencers, including virtual influencers, to disclose promotional content in accordance with the Consumer Protection Act, 2019,” Nguyen reports. Similarly, TikTok has updated its community guidelines to require clear disclosure for synthetic or manipulated media.
Nguyen points out that the relationship between virtual and human influencers seems “more poised for coexistence than a total replacement.” This sentiment varies from country to country, influenced by cultural norms and public opinion. For instance, in South Korea, virtual influencers are seen as a less problematic alternative to human celebrities, while in other countries, the rise of virtual influencers has been met with skepticism.
“For now, virtual influencers can’t connect with people the way a real person can,” Nguyen says, highlighting the limitations of current technology. However, as AI continues to advance, incorporating emotional intelligence into these virtual personalities could change this dynamic, making them more relatable and engaging.
Nguyen also delves into the potential for exploitation, especially when it comes to the use of a person’s digital likeness. “People may unwittingly or desperately sell off their digital likeness without consent or adequate compensation,” she warns, raising questions about consent and ethical use of digital identities.
Virtual influencers are more than digital novelties; they’re a disruptive force across industries — from marketing and music to e-commerce. In marketing, they offer a cost-effective, scandal-free alternative, especially in tech-savvy countries like South Korea. In the music realm, they’re landing record deals and challenging traditional notions of talent. Meanwhile, in China’s booming e-commerce landscape, they’re becoming a strategic asset, promising to make the shopping experience more interactive through emotional intelligence.
However, their rise is not without ethical and cultural complexities. As they become more human-like, issues around transparency and ethical use of digital likenesses are emerging, prompting varying degrees of regulatory action across countries.
The future of virtual influencers is a tapestry of technological innovation, ethical considerations, and cultural nuances. As AI continues to advance, these digital personalities are poised to become more sophisticated and emotionally intelligent. But as they evolve, so too will the ethical and cultural questions they raise, making the landscape ever more complex and intriguing.
As digital screens become more embedded in our daily lives, the line between the virtual and the tangible is becoming increasingly blurred.
October 29, 2023
“Saw X” Cinematographer Nick Matthews Seeks Beauty in Brutality
“Saw X presented the dual challenge of needing to uphold the franchise’s established familiarity while also venturing to introduce the story in a new, thrilling way,” says “franchise stalwart” director Kevin Greutert.
He adds, “I also wanted to overturn and have fun with the tropes of the Saw series while taking care not to disappoint those who have long loved these movies.”
DP Nick Matthews brings the fun and a fresh perspective to the Mexico City installment, while honoring what Greutert calls the “sacred iconography” surrounding John Kramer and the Saw world.
“I think, ‘How do I take this character and craft something that walks you into that kind of space and into that sort of a world? How do I create shape and darkness within a space?’ For me, it’s about thinking in terms of deep background, midground and foreground, and then letting things fall off in a lot of places.”
“Saw X.” Photo Credit: Alexandro Bolaños Escamilla, Cr: Lionsgate
John Kramer’s Giallo Inspiration
Saw is “very much rooted in Italy’s giallo filmmaking,” Matthews explains. To achieve this look in Saw X, Matthews opted for a Sony Venice, which he rated at ISO 2,000, paired with Cooke Panchro/i Classic lenses and Pearlescent 1 filters, according to Jenkins.
Additionally, he sought to create the giallo look with practical lighting only. He told AC, “We start with whitish-blue fluorescents and golden tungsten lamps. Then we tumble into this dirty palette of sodium-vapor orange, arsenical greens, red emergency lighting, and ochre yellows, all built into industrial fixtures.”
For Saw X, Matthews explains, “I wanted to hark back to the dirtiness, the grittiness, the grime, the pervasive darkness, these poppy giallo colors of the early films, but I didn’t want to do it in a way that felt very heavily done in the DI; I wanted to do it with lighting. I wanted to create three-dimensional color spaces where you have these primary, secondary and tertiary colors populating the film.”
Matthews told No Film School that shooting in Mexico brought “a certain tonal palette to the movie.” But as DP, he says, “I’m trying to design a world that you can walk into and put a camera in and it’ll photograph well.”
Speaking about the colors, Matthews said, “I took the color palettes we used in Mexico and dramatized them for both the abductions because every character who ends up in these traps is abducted in these very giallo-like sequences. Anything that was going to become a tertiary color in the trap sequences I would use as the primary color first — a deep red, or rusty yellow, a sickening green, or a sodium-vapor color.”
“Saw X.” Photo by Alexandro Bolaños Escamilla, Cr: Lionsgate
Practical Mood Lighting
In addition to setting the mood, lighting was crucial to differentiating between the different challenges, known colloquially as “traps,” that John Kramer poses to those who have wronged him.
“By the time we got into the traps, we were shooting, easily, around 80 shots a day. So, we were lighting fairly 360,” Matthews told Jenkins. Because of space and time constraints, “Everything was LED and dressed into practical housings so they looked like they were part of the set,” he says. “However, I’d typically bring a few small instruments out onto the floor like Arri SkyPanel S60-Cs or Asteras to wrap the light further or, in the case of the brain-surgery trap, to add an ochre yellow to the fill side of the character, which dirties up the image and the light.”
Shawnee Smith as Amanda Young in “Saw X.” Photo Credit: Alexandro Bolaños Escamilla
“Saw X.” Photo Credit: Alexandro Bolaños Escamilla, Cr: Lionsgate
Shawnee Smith as Amanda Young in “Saw X.” Photo Credit: Alexandro Bolaños Escamilla
Shawnee Smith as Amanda Young in “Saw X.” Photo Credit: Ivan Meza
Another example, which Matthews shared with Screen Rant’s Grant Hermanns, was “When the emergency lights go off, and we sort of have this big reveal towards the end, just like, I really wanted the whole room to be bathed in red. And so I remember when I asked the production designer for 50 emergency-like spinners, and he’s like, ‘What the fuck, like, 50.’ And I’m like, ‘Yeah, I need like, one in every corner.’”
Because of the film’s complexity, Matthews told No Film School, “This movie had more interdepartmental conversations than any other project I’d worked on.”
He explains: “Once you get into a trap, you’re dealing with special effects, prosthetics, VFX, production, design, lighting, ultimately, and cinematography. And all of those elements have to come together, costumes as well.”
Paulette Hernandez as Valentina in “Saw X.” Photo Credit: Alexandro Bolaños Escamilla
In addition to complexity, timing was of the essence so that they did not have to reset any of the traps. Matthews recalls to Screen Rant, “All the brain trap stutter-frame stuff was shot on my Blackmagic [6k] just with me, like, you know, walking around while they’re shooting other pieces. And I’m doing it because we just have to move like lightning.”
To achieve the stutter frame look, Matthews employed a “lens whacking” technique, in which “you just disconnect the lens from the sensor plane, and you’re just kind of playing, which is a way to simulate getting those film rollouts that they got in the early days of shooting film, when they had rollout and then they would use the rollout in the film. And so it’s like playing with light leaks and stuff like that.”
Virtual influencers are disrupting industries from e-commerce and marketing to pop music, raking in followers and profits.
In South Korea, these AI-generated personalities are embraced as scandal-free alternatives to human celebrities, becoming deeply integrated into both pop culture and the business landscape.
In the music industry, AI-generated pop stars are landing record deals and creating a divide among artists over the role of AI in music.
Ethical concerns arise as these AI personas become more human-like, prompting calls for global regulatory action.
As digital screens become more embedded in our daily lives, the line between the virtual and the tangible is becoming increasingly blurred. Meet virtual influencers: AI-generated personalities that are racking up followers, signing record deals, and even out-earning their human counterparts. From dominating Instagram in South Korea to revolutionizing e-commerce in China, these digital darlings are not just a technological marvel but a cultural phenomenon, raising questions about authenticity, ethics, and the future of influence itself.
The Rise of Virtual Influencers
In the fast-paced world of social media marketing, innovation is the name of the game. Brands are always on the hunt for fresh ways to engage audiences, and virtual influencers are emerging as the new frontrunners in this digital race. Far from being mere eye candy, these AI-generated personalities are proving to be formidable players in the marketing arena.
What sets virtual influencers apart from their IRL counterpoints is their meticulously crafted personas. Free from the vulnerabilities and controversies that often plague human influencers, these digital avatars offer an idealized image that’s hard to resist. Writing on Medium, influencer marketing specialist James Sterling notes how they are “a cost-effective and flexible way to produce content and promote brands. They don’t age, get sick, or have scandals.”
When it comes to ROI, virtual influencers are a marketer’s dream come true. Operating around the clock without the need for breaks, benefits, or bonuses, they offer unparalleled economic advantages. As Sterling notes, “Brands can clone a human streamer to work 24/7.”
For audiences, virtual influencers “are a source of entertainment, inspiration, and connection,” says Sterling. “They provide an escape from reality and a glimpse into a fantasy world.”
Examples include Lil Miquela, a 19-year-old Brazilian-American model and musician, and Shudu Gram, a South African supermodel. Contrary to what one might expect, these artificial entities are remarkably skilled at forming genuine connections with their audience. Their interactions are so convincingly human-like that they often blur the lines between reality and simulation.
One of the most compelling features of virtual influencers is their adaptability. “Their personas can be fine-tuned to resonate with specific demographic groups,” Sterling explains, making them invaluable tools for targeted marketing campaigns.
While the ascent of virtual influencers is undoubtedly exciting for marketers, it’s not without its ethical pitfalls. As Sterling warns, the technology behind these digital personalities has the potential for “problematic use in revenge porn, identity scams, and political misinformation.”
In South Korea, a country celebrated for its technological advancements and cultural exports, virtual influencers are becoming more than just a trend — they’re a societal phenomenon. But what makes South Korea such fertile ground for these AI-generated personalities? The Economist provides some compelling insights.
In the wake of recent scandals involving South Korean television presenters, the country is finding a less problematic solution in virtual humans. As The Economist points out, these digital figures are “increasingly common globally” but have found a unique resonance in South Korea due to their scandal-free nature. “Within the past year one [TV presenter] was fired for swearing on live television and another for slandering a dead comedian,” the article notes, highlighting the pitfalls of human celebrities.
The Economist introduces us to Rozy, South Korea’s first virtual influencer, who has become a sensation since her Instagram debut in 2020. Created by Seoul-based firm Locus-X, Rozy is a “beautiful 22-year-old who works as a singer, model, and a sustainability champion.” She’s not just popular; she’s profitable. “Rozy… is estimated to have made more than 2.5 billion won ($1.8 million) last year.”
Virtual influencers in South Korea are not just cultural phenomena; they’re also economic assets, and these virtual figures have become deeply integrated in both pop culture and the business landscape. Rozy, for instance, is “best-known for an advert for an insurance company in which she danced across the rooftops of Seoul.”
While The Economist doesn’t delve into this, it’s worth noting that South Korea’s technological infrastructure likely plays a role in the rapid adoption of virtual influencers. The country’s tech-savvy population is more open to new forms of digital interaction, providing a ripe market for these AI-generated personalities.
The music industry, a realm traditionally dominated by human talent and creativity, is undergoing a seismic shift with the advent of virtual influencers. These AI-generated pop stars are not just novelties; they’re landing record deals and even sparking debates among established artists.
BBC technology writers Shiona McCallum and Liv McMahon note that the music industry is no stranger to virtual characters as pop stars, ranging from Gorillaz and Hatsune Miku to Polar and Alvin and The Chipmunks. “Like many of today’s human artists, they’ve won Grammy Awards, held concerts as holograms, and can even be ‘cancelled’ over controversial comments,” they write.
Noonoouri, a digital character created by German designer Joerg Zuber, is merely one of the latest virtual influencers to land a record deal. “Created using motion capture and advanced graphics, she’s been signed to Warner Music as its first avatar artist, rubbing shoulders (virtually) with Ed Sheeran, Dua Lipa, Cardi B and Ashnikko at one of the industry’s biggest labels.”
While some artists like Grimes and David Guetta are embracing AI technology to “experiment with their music production,” others like Sting and Sheeran have voiced criticisms. This divide underscores the complex relationship between technology and artistry in the music industry. Irish musician and singer-songwriter Hozier even told the BBC’s Newsnight that he would “consider striking over the threat posed to the industry by AI.”
Jamie Njoku-Goodwin, chief executive of industry association UK Music, tells McCallum and McMahon there is a lot of excitement in the industry about the opportunities AI could provide for artists and producers, but the need for regulation is paramount so it “can enable human creativity, not erode it,” he says.
“It’s about knowing what content and what data AI is being trained on, [and] about ensuring there’s adequate labelling so we know whether or not a piece of music is AI-generated.”
Musicologist Dr. Shara Rambarran, who has written extensively about virtual performers, also has a positive outlook, commenting that while the recent trend of digital pop stars is unlikely to slow down in the future, they don’t present a threat to human artists.
“It’s not a new concept at all, they’ve always existed in some shape or form,” she says. “But will it overtake everything in the music industry? I don’t think so. I think there’s going to be room for everybody. There’s always going to be another innovative creation we’re going to be talking about and if this doesn’t work out, something’s going to replace it.”
Disrupting industries from e-commerce to marketing to pop music, virtual influencers continue to rake in followers and profits.
October 29, 2023
How the Team From “The Killer” Sustains Its Style and Structure
“David Fincher told me this is a film about someone’s process, and the camera must be an objective ghost in the room,” says Erik Messerschmidt, ASC, about The Killer, his latest collaboration with the director.
After all, “I think it’s always interesting to watch somebody use their tools with great precision,” Fincher tells The Guardian.
“This is someone who never allows anyone to be close to them, but suddenly you are there – what does that feel like?” the cinematographer elaborated to Emily Murray at GamesRadar.
“I think sometimes we are looking for the bigger picture, the themes, but with this film you can take it as being about all sorts and just go for it – capitalism, nihilism, humanity, etc.
“But, for me, it was all about how you bring the audience to a place they are not used to being: close to this assassin.”
Based on the French graphic novel series of the same name, the film stars Michael Fassbender as the nameless assassin who goes on an international manhunt, even as he continually insists to himself that it isn’t personal.
The Killer “is an eminently re-watchable revenge movie, morbid and sardonic and wickedly funny, the latter of which hasn’t been highlighted nearly enough in early press. Think John Wick, if Keanu Reeves was a sociopath with a penchant for bucket hats, Amazon and inadvertently xenophobic quips about Germans. Oh, and if he loved The Smiths,” sums up GQ’s Jack King.
A key text for Fincher and Messerschmidt was the 1967 crime thriller Le Samouraï from director Jean-Pierre Melville.
Fincher also encouraged Messerschmidt to read the source material for The Killer, a French graphic novel by Alexis Nolent.
“I read it in French and I don’t speak French. But it was interesting, because I learned with graphic novels you don’t necessarily need the dialogue, and our film has so little dialogue too. It made me think a lot about composition and framing. We weren’t making Sin City, so it didn’t have to look like a comic, but we did use similar techniques when looking at where to put the camera, how close, etc.”
The lack of dialogue was something he was particularly drawn to, being excited at the prospect that the camera would really have to do the talking.
“All of my initial conversations with David weren’t about style, but instead pace and scene structure. We have a more nuts-and-bolts approach discussing how we are going to tell the story with the camera, then the style is born from that. We were always talking about point of view — when do we see what Fassbender’s killer sees and when do we watch him instead? How does that affect the interpretation of the scene?”
After working on several commercials and television shows, Messerschmidt ended up on the set of director Fincher’s Gone Girl, working as a gaffer.
The duo bonded, with Fincher then recruiting him as DoP on TV series Mindhunter and biographical drama Mank, for which Messerschmidt won the Academy Award for Best Cinematography.
The Killer was shot in Paris, the Dominican Republic and then Chicago and New Orleans. For Messerschmidt, the location scouting was a process of kind of natural discovery.
“We looked at the locations that Fincher had already seen and started to talk about the structure of the movie and it kind of evolved from that,” he told Deadline.
Editor Kirk Baxter told the same Deadline interview about his process: “It was a tricky movie to edit because it’s a straight line in terms of the story. I think people often talk about editorial when you’ve got timelines that are going back and forth, or six different characters that are weaving in and out but we’re following one person all the way through. And surprisingly, it made it actually harder for me, because it’s sort of a highly polished straight line.
“You’re working on the idea of a perfectionist, and he needs to be shown with precision and so it sort of just translates straight back to trying to heighten his world and to make things appear simple.”
“I think for this movie, because it is so much about precision and process, it has to cut so beautifully. It has to cut like butter,” Messerschmidt tells The Film Stage’s Nick Newman.
Unusually for a cinematographer, he says he doesn’t enjoy lighting. “I’m almost reluctant,” he admitted at the Mallorca International Film Festival. “I find that the thing that interests me most about cinematography is how and where we put the camera and how we use it to tell the story.
“The resulting imagery that we create, for me, has to come from that place. But I don’t, I don’t think of myself as a photographer. For me, it’s all about communication with the director about how we’re telling the story and how we’re cutting the scene apart into pieces that can then later be assembled.”
“[I]t’s a conversation about balance and color temperature and all the technical stuff. It was much more a conversation about camera direction and scene structure than it was about lighting,” Messerschmidt explains to The Film Stage.
“The film, as a whole, I think is really a conversation about subjectivity and point of view. We wanted to use the technique of putting the audience inside the Killer’s head and then stepping back and being objective, jumping back-and-forth with that. You see it in the movie with sound; you see it with picture,” Messerschmidt says.
Fincher himself has something of a reputation for being very exacting. He famously asks his actors to perform numerous takes to ensure perfection. Props, such as John Doe’s notebooks in Se7en, are meticulously crafted by hand.
What does Messerschmidt think? “I love it. On a David Fincher film set the decisions are immediate. Like, this shirt or these shoes, this color or that color. I think for David and I now [we have] a shorthand. We walk into a location, we end up standing in the same corner, and we look at each other and know intuitively what we are going to do.”
He adds to GamesRadar, “His reputation is that he’s a very controlling, detailed person and I think that’s terribly unfair. He is the most collaborative person I know and very interested in surrounding himself with people who bring something to the conversation.”
Fincher himself has told different interviewers contrasting things. He told GQ there was no resemblance (except a superficial one), while he also admitted to The Guardian that “There are certain parallels” between himself and the character of The Killer.
Messerschmidt admits that the director can also be “intimidating,” especially on set. “I told myself from the beginning to speak up if you disagree, but you have to pick your moments. I am more comfortable now in speaking my mind, but I also believe it’s a cinematographer’s job to conform to what the director responds to – you are trying to execute their vision.
“Luckily, we have enough shared sensibilities, which makes that easy – we walk onto a location, stand in the same place, look at each other, and nod. You don’t get that too often.”
The Killer deploys a soundtrack made of songs from 1980s British band The Smiths, which is the music the protagonist listens to on his headphones while getting down to work.
Fincher told GQ: “I love the idea of a guy who has a mixtape to go and kill people. But if we have all of these disparate musical influences, are we missing an opportunity to see into who this guy is? So The Smiths became a kind of stained glass window into who this guy was.”
Baxter tells Deadline, “It was David’s idea from the start for the audience to sort of live in the back of the killer’s skull, and we see what he sees, and we hear what he hears. So when it’s his POV, the music that he’s playing, takes over all the sonic space.”
Sound designer Ren Klyce elaborates, “When you see the film you will hear this voice of Michael Fassbender, his interior monologue and in fact, in the film itself, when he’s on camera, he barely speaks. He’s always speaking in his mind. And so that’s a very interesting set of circumstances because on one level you get to know him, but on another, you don’t really know him because it might be his thoughts that are to be trusted or not to be trusted.”
Fincher has been a staunch user of RED’s camera systems over the years, and this continues with The Killer, which marked his first use of their newest unit. “The RED V-RAPTOR [8K W and XL] addressed some color issues we had experienced in the past,” Messerschmidt reports to ICG, “plus it was small enough to go anywhere and was a good match with the KOMODO, which we also used. We also changed up by going 2.35, as scope seemed more appropriate given our location work and many of the shots featuring the killer and his prey together in the frame.”
A-Camera 1st AC Alex Scott was afforded several weeks of prep at Keslow Camera, readying ten cameras for Paris and shipping twelve cameras for plate views down to the Dominican Republic. “There was only limited second-unit work,” Scott also tells ICG. “For driving shots in the DR and a splinter unit for New Orleans [DP’d by Tucker Korte]. The plate cameras had full-frame Sigma Zooms. They determined the necessary angles, and then we’d measure things out with the corresponding vehicle and a stand-in on stage in prep so the plate camera could match our notes.”
The globetrotting scope of the film tracks with other contract killer stories.
As Messerschmidt notes: “The movie is told in chapters, with the character in a different locale each time, progressing from Paris to the DR, Florida, Chicago and New York City. Perhaps sixty or seventy percent of the interiors were shot on stage in New Orleans.” New Orleans also stood in for Florida, with St. Charles, Illinois filling in for N.Y.C. scenes. Shooting finished in L.A., where additional shooting was done months later.
“The tone and visual aesthetic was established and maintained on the set,” adds Messerschmidt. “We’ve had the same post supervisor and same colorist [Eric Weidt] for some eight years, and have developed a very streamlined color-management workflow on set: a single show LUT, no CDL’s, no LiveGrade. We monitored in HDR on-set with Sony 17-inch monitors and had HD dailies – editorial had HDR as well – in DCI-P3 and Dolby PQ Gamma.
“We had some abstract conversations about what these various parts of different countries looked like and felt like,” he elaborates. “David was emphatic that the audience experience each one as a discrete and different environment.
“To me, Paris always feels cool and blue, especially at night and even in summer. That cool shadow, yellow highlight look was a big part of the night work there, and it developed from stills we took while scouting.”
Messerschmidt explains to The Film Stage: “Paris, to me, always kind of feels like it has this split-tone quality to it: at night it has this coolness that’s contrasted against the bright, sodium-vapor streetlights that central Paris is famous for.”
In contrast, for the Dominican Republic, he says, “I started thinking about what humid looks like and what Santo Domingo looks like. It’s… a cornucopia of various colors.”
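As a rough illustration of the “single show LUT, no CDLs” workflow Messerschmidt mentions above, here is a minimal sketch of applying one fixed color transform to every frame. The identity LUT and the nearest-neighbor lookup are simplifications invented for this example; real pipelines load a graded .cube file and interpolate.

```python
# Simplified sketch of applying a single show LUT to footage; not the production's actual pipeline.
import numpy as np

N = 17                                                   # 17x17x17 is a common LUT resolution
grid = np.linspace(0.0, 1.0, N)
# Identity LUT as a placeholder; a real show LUT would be loaded from a .cube file.
show_lut = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)

def apply_show_lut(frame: np.ndarray) -> np.ndarray:
    """frame: float RGB in [0, 1]. Nearest-neighbor lookup into the 3D LUT."""
    idx = np.clip(np.rint(frame * (N - 1)).astype(int), 0, N - 1)
    return show_lut[idx[..., 0], idx[..., 1], idx[..., 2]]

# Every shot gets the same transform, with no per-shot CDL tweaks.
graded = apply_show_lut(np.random.rand(4, 4, 3))
```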
Cinematographer Charlotte Bruus Christensen approached the FX series “like a seven-hour movie.”
October 27, 2023
Viva “Cassandro”! Cinematographer Matias Penachino Steps Into the Ring
Gael García Bernal in “Cassandro.” Cr: Amazon Content Services
To plan a biopic with a pre-existing, ready-made documentary (“Cassandro, the Exotico!”) to draw ideas and inspiration from seems magnifico. But for “Cassandro,” cinematographer Matias Penachino and his director and co-writer, Academy Award-winner Roger Ross Williams, shied away from using it or, in Penachino’s case, even viewing it.
Saúl Armendáriz (played by Gael García Bernal) is a gay amateur wrestler from El Paso who rises to international stardom after he creates the character Cassandro, the “Liberace of Lucha Libre.” In the process, he upends the macho wrestling world and his own life. The movie is based on a true story.
Gael García Bernal in “Cassandro.” Photo: Alejandro Lopez Pineda Cr: Amazon Content Services
“When I told my friends that I was going to make this film, they all knew about Cassandro because of this documentary. They were happy for me as the documentary was so good that a movie about the same person, they thought, would be as good. I didn’t want to watch it.
“I just didn’t want so many references. When you’re so involved in making a biopic, you don’t want to be so engaged in the actual character that you don’t make something special,” Penachino says.
Penachino instead sought inspiration and cinematic references from a different visual genre altogether — Mexican street photography.
Photographic Influence
“I wanted to change the idea of looking back at films as a reference to making a film. For this film, I preferred basing the cinematography in still photography. In particular, documentary photography instead of cinema.
“I went to Roger with a lot of ideas for street photography and their unique take on the world. They can tell you a story with just one frame. With Roger being a documentary maker and this being a biopic, it was clear that the film should resemble a portrait.”
So as Penachino shifted his influence from movies to the street photography of Mexico, he widened his approach to photographers from Europe and North America, including photographers from the UK like Peter Dewhurst and Craig Atkinson.
“I came to Roger with books and other references, and he said yes. We then based the aesthetics of the movie on still photography. Also, the rules for the language of the film were based on still photography, like the zero-degree angle with the camera almost always at the character’s height,” he says.
This spectator or voyeur approach wasn’t handheld, as you might expect, because Matias didn’t want a documentary feel.
“With the director known for being a documentary maker, people would expect that style. In this way, we turned it around by minimizing the camera movement. The camera is static almost constantly; it does need to move when the characters move. But if they’re not moving, neither is the camera; it just inter-cuts without moving or just pans for coverage.”
Gael García Bernal and Roger Ross Williams in Cassandro. Photo: Alejandro Lopez Pineda Cr: Amazon Studios
Cassandro was director Roger Ross Williams’ first narrative movie, and Matias appreciated how he embraced the differences.
“I think the success of our partnership really came down to the meticulous work we put into pre-production, the exquisite production design, and a production process that was very mindful of the world we were portraying,” Penachino tells British Cinematographer.
“Going from a small crew of about three, including a camera person and sound, to working with around 70 people is quite an experience. You now have seven people behind the camera and the rest on set. But I wanted to make it comfortable for him by not moving the camera so much and letting things happen. So, in a way, it’s shot like a documentary with tools that would help that but as fiction.”
Gael García Bernal in “Cassandro.” Cr: Amazon Content Services
Roberta Colindrez and Gael García Bernal in “Cassandro,” photo by Alejandro Lopez Pineda, Cr: Amazon Content Services
Roberta Colindrez and Gael García Bernal in “Cassandro.” Photo Alejandro Lopez Pineda, Cr: Amazon Content Services
Roberta Colindrez in “Cassandro.” Cr: Amazon Content Services
Benito Antonio Martínez Ocasio and Gael García Bernal in “Cassandro.” Cr: Amazon Content Services
Raúl Castillo in “Cassandro.” Cr: Amazon Content Services
That’s where the comparison stops, as the shoot included an ARRI Mini LF with Panavision H Series lenses with an aspect ratio of 1.44:1. “That’s not far from 4:3, which again reminds us of a portrait by being more like a medium format image. The lenses are super soft with a tremendous round bokeh with excellent resolution; they were amazing, really amazing.”
Boxing movies of late have tried to reinvent the way fight scenes are shot, “Creed III” being an example. “Cassandro” wasn’t going to compete on that level.
“We had a couple of different types of shots, but it was clear from the beginning that this wasn’t a sports movie. The film is about a guy who comes out of the closet and reinvents himself in an ultra-macho country.
“The camerawork was designed to tell the story more than present fighting in any different way. We weren’t looking for that.”
“We strategically moved the camera only when absolutely necessary, emphasising a deliberate stillness that further underscored our choice to portray the character from an outsider’s perspective. When it came to capturing the intensity of the wrestling matches, we collaborated with Alberto Ojeda, the film’s Steadicam and camera operator,” Penachino tells British Cinematographer.
“Together, we planned camera movements and angles, investing time in rehearsals and learning directly from the wrestlers, with a special emphasis on Gael’s amazing performance. The primary objective was to stay intimately close to the action to keep the main character in focus while being faithful to the visual language of lucha libre.”
Virtual celebrity Lil Miquela is an online persona who does not exist in the real world but has become one of the world’s most popular virtual influencers on Instagram, with three million followers.
The future of influence is here: a digital avatar that captivates millions of adoring fans while offering unparalleled customization and round-the-clock availability.
Virtual influencers are transforming the way content is created, consumed and marketed online. They represent an electrifying dance between cutting-edge technology and our desire for connection. But, at the same time, they are yet another product being peddled by marketers that want our money.
Upon closer inspection, we can see the risks that emerge with these blurred realities.
What Are Virtual Influencers?
While virtual influencers aren’t a particularly new concept — virtual Japanese popstar Kyoko Date has been around since 1996 — recent advances in technology have thrust them into the spotlight.
Also called digital influencers or AI influencers, these digital personalities have a social media presence and interact with the world from a first-person perspective.
They’re created by 3D artists using CGI (computer-generated imagery), motion-capture technology and AI tools. Creators can make them look and act exactly how they want, and their personas are thoughtfully developed to align with a target audience.
There are three main types of virtual influencers: non-humans, animated humans and life-like CGI humans. Each one provides an innovative way to connect with audiences.
Why Do Virtual Influencers Exist?
Advancements in AI, the rise of social media and visions of the metaverse (in which the real and virtual worlds are blended into a massive immersive digital experience) are synergistically fueling the growth of virtual influencers.
Their popularity has prompted marketing agencies to embrace them as a cost-effective promotional strategy.
While real influencers with millions of followers may demand hundreds of thousands of dollars per post, one 2020 estimate suggested virtual influencer Lil Miquela charged a more reasonable £6,550 (currently about US$7,965).
Virtual influencers have clear benefits when it comes to online engagement and marketing. They don’t age, they’re free from (real) scandals and they can be programmed to speak any language. It’s no surprise a number of companies and celebrities have caught onto the trend.
In 2019, supermodel Bella Hadid posed with Lil Miquela in ads for Calvin Klein in what one columnist dubbed a “terrifying glimpse of the future.”
Since then, virtual influencers have become even more popular.
In 2021, Prada introduced a CGI ambassador for its perfume Candy. More recently, Lil Miquela has popped up in a number of high-profile brand campaigns and celebrity interviews. Even rapper Timbaland has said he is considering a collaboration.
The Transparency Issue
Virtual influencers have a unique cultural dimension. They exist in a murky space between our world and the virtual which we’ve never quite explored. How might they impact us?
One major concern is transparency. Many virtual influencers already present as human-like, and it may become increasingly difficult to distinguish between them and real people. This is particularly problematic in an advertising context.
Virtual influencers often feature alongside real celebrities.
As the market for virtual influencers grows, we’ll need clear guidelines on how this content is used and disclosed.
India has taken the lead on this. In January, its Department of Consumer Affairs made it mandatory for social media influencers, including virtual influencers, to disclose promotional content in accordance with the Consumer Protection Act, 2019.
Similarly, TikTok has updated its community guidelines to say: “Synthetic or manipulated media that shows realistic scenes must be clearly disclosed. This can be done through the use of a sticker or caption, such as ‘synthetic,’ ‘fake,’ ‘not real,’ or ‘altered.’”
A Messi Way To Make Money
The emergence of virtual replicas of real people (including deepfakes) has led to new discussions about how a person’s likeness may be used, with or without their consent.
On one hand, celebrity deepfake porn is on the rise. On the other, celebrities are including “simulation rights” in their contracts so their likeness may be used in the future. Take global football star Lionel Messi, who allowed PepsiCo to use a digital version of him to promote Lay’s potato chips.
While this might introduce opportunities for talent expansion, it also raises exploitation risks. People may unwittingly or desperately sell off their digital likeness without consent or adequate compensation.
Will the Virtual Replace the Human?
For now, the relationship between virtual and human influencers seems more poised for coexistence than a total replacement. For now, virtual influencers can’t connect with people the way a real person can (although it’s hard to say how this might change in the future).
As for human content creators, virtual influencers are both inspiration and competition. They’re transforming what it means to be creative and influential online. Whether they like it or not, human creators will need to work with them — or at least alongside them — in whatever ways they can.
A panel of experts at the 2023 NAB Show shared their advice on how to grab viewers’ attention in three seconds or less, or risk losing them.
October 30, 2023
Posted October 25, 2023
Mamma Mia, Here I Go Again (But With Volumetric Capture)
From “ABBA Voyage,” Cr: ABBA, ILM
TL;DR
ABBA Voyage has cracked the code for holographic concert experiences, making more than $150 million in ticket sales in the tour’s first 15 months.
The Swedish supergroup pulls off the nightly performances with the aid of ABBAtars, custom holograms created using advanced motion capture technology and the work of hundreds of artists.
The concert uses a pre-recorded performance of ABBA projected onto a transparent screen. A 10-piece band accompanying the virtual avatars performs in real time, albeit remotely.
You’d have to be living under a rock not to know that ABBA is back — and have a heart of stone not to feel nostalgia for the innocence of their pop.
ABBA’s current single-city tour, Voyage, launched in May 2022 at a custom-built, 3,000-seat arena in London on the site of the London 2012 Olympics.
“ABBA Voyage is one of the most expensive productions in music history, with a price tag of £140 million (about $175 million) before the first show opened in May 2022,” writes Bloomberg’s Lucas Shaw. However, he notes that after more than a year of daily performances, “[t]hat investment is starting to look like one of the savvier bets in modern music history.” In its initial 15 month run, Voyage “generated more than $150 million in sales and sold more than 1.5 million tickets.” That translates to about $2 million a week, with an average ticket price of about £85 ($105), Shaw reports.
How, exactly, is the septuagenarian Swedish supergroup pulling this off? ABBAtars.
“As major legacy acts sell off their catalogs and look to retirement, holographic shows could provide a business model to ensure their music and performances live on forever,” observes David Vendrell for TheFutureParty.
Making the ABBAtars
In 2021, the band gave fans more details about how their virtual alter-egos were created.
Cr: ILM/ABBA
The “ABBAtars” were created by more than 100 digital artists and technicians from Industrial Light & Magic (ILM). Four of ILM’s five global studios are dedicated to the project, with anywhere from 500 to 1,000 artists working on it.
The foursome was filmed using motion capture as they performed a 22-song set over the course of five weeks. ILM then “de-aged” Benny, Björn, Agnetha, and Anni-Frid, taking them back to 1979.
Cr: ILM/ABBA
“They got on a stage in front of 160 cameras and almost as many genius [digital] artists, and performed every song in this show to perfection, capturing every mannerism, every emotion, the soul of their beings — so that becomes the great magic of this endeavor. It is not four people pretending to be ABBA: It is actually them,” producer Ludvig Andersson explained in a video posted on YouTube.
In a video posted by The Guardian, Ben Morris, ILM Creative Director, said, “We create ABBA in their prime. We are creating them as digital characters and will be using performance capture techniques to animate them, perform them, and make them look perfectly real.”
Cr: ILM/ABBA
Well, no, this isn’t the real thing, but there’s good reason for the digital reworking.
The global appetite among ABBA fans old and new to see them perform live would be overwhelming — a tour the septuagenarian multi-millionaires would see as too exhausting.
This way they can be Björn again and again and again.
Benny Andersson came up with the term ABBAtars a few years ago, with the original concept of creating holograms.
The actual digital concert experience, referred to as “Voyage,” uses a pre-recorded performance of ABBA in their mocap suits which, using an updated version of a Victorian theatre trick called “Pepper’s ghost,” projects them onto a transparent screen. A 10-piece band accompanying the virtual avatars, however, performs in real time, albeit remotely.
In a statement released by the band, they explain that “the main inspiration to record again comes from our involvement in creating the strangest and most spectacular concert you could ever dream of. We’re going to be able to sit back in an audience and watch our digital selves perform our songs.”
Cr: ILM/ABBA
“We simply call it ‘Voyage’ and we’re truly sailing in uncharted waters. With the help of our younger selves, we travel into the future. It’s not easy to explain but then it hasn’t been done before.”
Over at RedShark News, Andy Stout delves into the technique used to create the ABBAtars:
“At its most basic form this technique uses an angled glass screen to capture the reflection of a brightly lit image off-stage somewhere,” Stout explains. “Pioneered by John Henry Pepper following work by Henry Dircks in the 1860s, it would allow Pepper to stage elaborate shows where actors interacted with a ‘ghost,’ simply another actor performing off-stage and lit in such a way as to make their reflection appear and disappear.”
The effect is widely used in theme parks, with Disneyland’s Haunted Mansion being the most famous example. The effect has also been used for a slew of virtual tours featuring Roy Orbison, Buddy Holly, Amy Winehouse, Ronnie James Dio and Frank Zappa, but ABBA’s use of the technique is more complex than these, Stout explains:
“The ABBA staging looks to be a bit more ambitious than that (you don’t employ 1,000 of ILM’s finest unless you’re serious about bridging the Uncanny Valley) which is why, instead of touring, it is taking place in a purpose-built 3,000 seat arena in London’s Olympic Park. That lets the team thoroughly control the projections, the lighting, the effects, and match all that with the physical performance of the musicians and dancers that will be on stage with Agnetha, Björn, Benny, and Anni-Frid.”
Futurist Bernard Marr looks ahead to the implications presented by the new technology. “While we know AI will be used to recreate a young-looking ABBA, we can speculate that the next step could potentially go even further by recreating something of their personalities and behavior,” he writes. “It isn’t a huge leap to imagine they could use language processing and voice recognition to respond to song requests from the audience and, perhaps one day, even hold a conversation.”
ABBA’s virtual concert series will allow thousands of fans to enjoy the experience together, says Marr, who also notes that, at their age, the actual band members might find it tiring to perform eight shows a week.
“In turn, this makes it more accessible for the fans, who might find it more difficult to attend a live arena concert that only has one date,” Marr explains. “Likewise, with Ariana Grande’s Fortnite performance, fans could, in theory, access the spectacle from anywhere in the world. In addition, in the avatar-driven environment, fans could enjoy it alongside digital representations of their friends, heightening the sense of the experience.”
Many virtual production setups rely on motion tracking to locate the camera, even when motion capture isn’t being used for animation.
October 24, 2023
Pretty/Scary: Cinematographer Aaron Morton on “No One Will Save You”
TL;DR
Hulu’s sci-fi horror hit “No One Will Save You” is virtually dialogue-free, but the filmmakers aimed not to draw attention to that device while still being aware of how much extra weight the visuals needed to carry.
The most challenging aspect of the shoot for cinematographer Aaron Morton was lighting the aliens with moving lights more commonly found at rock concerts.
The importance of the color red in particular played into the production’s choice of the Sony VENICE camera.
Home invasion movie No One Will Save You can be added to the low budget horror film renaissance (think Huesera: The Bone Woman, Barbarian, M3GAN, Talk To Me) and was the most streamed film across all platforms in the US when it was released last month.
Made for $22.8 million, the Hulu original is an almost wordless thriller in which Brynn (played by Booksmart’s Kaitlyn Dever), a young woman living alone as a seamstress in the countryside, fights back against alien invaders.
“He didn’t tell me about the lack of dialogue before sending me the script,” says cinematographer Aaron Morton (Black Mirror: Bandersnatch; The Lord of the Rings: The Rings of Power) of receiving the project from writer-director Brian Duffield. “Reading it for the first time, it dawns on you.”
He adds, “It sounds counter-intuitive given the lack of dialogue, but the way the script conveyed the tension and terror that the characters feel did so much work to help us understand the approach to this film.
“One phrase we had throughout prep was that horror can be beautiful. We tried to make a beautiful film that was scary.”
“No One Will Save You” Director Brian Duffield
Duffield, who wrote 2020 monster adventure Love and Monsters and 2020 sci-fi horror Underwater, is drawn to ideas that smash two things together that don’t necessarily go together.
“We talked a lot about if Todd Haynes was making Far From Heaven and if aliens invaded in the middle of it what that would feel like,” Duffield commented on the set of No One Will Save You in the video featurette above.
Kaitlyn Dever as Brynn Adams in “No One Will Save You,” directed by Brian Duffield. Cr: 20th Century Studios
It’s the kind of thing he brings to a lot of his scripts, says Morton, a New Zealander who previously lensed a film for the director about teenagers spontaneously exploding.
“Spontaneous (2020) was really a lovely love story between two kids who happened to be in a situation where their friends are literally exploding next to them. Brian loves smashing two disparate situations together and seeing what comes of it.”
He continues, “What is clever about No One Will Save You is that while we are learning about what’s happening to the world in terms of the alien invasion we’re also being drip fed information about Brynn’s character and what has happened to her in her life.”
“No One Will Save You” Behind the Scenes
While the script has a somewhat conventional narrative drive from set piece to set piece, what’s missing compared to a more traditional film is coverage — the over-the-shoulder, reverse-shot grammar of two people having a conversation.
“We’re still using filmmaking conventions but making sure we have an awareness all the time that we were ticking the boxes for the audience in terms of what they were learning about story and character,” Morton explains.
“We knew we didn’t want to treat the lack of dialogue as a gimmick. It was just a by-product of the situation that Brynn found herself in. Our aim was not to draw attention to that device in the movie while being very aware of how much extra weight the images needed to carry in a ‘show, don’t tell’ sort of way. The nature of the film relies on the pictures doing a little bit of extra work.”
“No One Will Save You” Clip | Telephone
The most challenging aspect of the shoot for Morton was lighting the aliens. “The light is part of how they control humans,” he says. “We definitely wanted to be reminded of Close Encounters which did inspire a lot of what we were doing in our movie.”
This includes leaning into the classic tractor beam trope of being sucked into the mothership. They were lighting some reasonably large areas of swamp and night exteriors of Brynn’s house, often using cranes with moving lights. These were powerful Proteus Maximus fixtures from Elation Lighting, more commonly found at rock concerts.
“The moving alien lighting is built into the exterior night lighting. For instance, when Brynn is walking up the road in a forest, the forest is lit and suddenly a beam is on her and she gets sucked up into the spaceship. We had the camera on a crane, and we had ambient ‘forest’ lighting on a crane so I could move that ambient lighting with her as she walked. We had another crane with the big alien light that stops her in her tracks. So it was this ballet of things happening outside the frame.”
He continues, “I love using moving lights (combined with the right sensor) because you can be so accurate in changing the color temp by a few degrees, even putting in Gobos to change the beam size, using all the things moving lights are great for in live events and putting them into the cinema world.”
“No One Will Save You” Clip | Power Surge
The lighting design also gave the filmmakers another way to show the audience things that Brynn does not see. “She could leave a room and just as she turns away we play a light across a window just to remind people that the aliens are right there though she’s not aware of it.”
Since the color red plays a particularly important role in depicting the alien presence, Morton tested various cameras before selecting the Sony VENICE.
“You can quickly over-saturate red, green and blue colors with certain cameras, so it’s a big piece of the puzzle to figure out early. We felt the color science of the VENICE was incredible in terms of capturing that red.”
“No One Will Save You” Clip | Nails
They shot anamorphic using a set of Caldwell Chameleon Primes. “What I also like about the VENICE is that you can chop and change the sensor size. The Chameleons cover the large format size very well but not perfectly so it meant I could tailor which part of the sensor we were using depending on which lens we were on.”
Elaborating on this he says, “You can very easily go from a Super 35 size 4K sensor on the VENICE to 6K using the full horizontal width of the sensor so sometimes, if I needed a bit more width in the room in a certain situation and what we were framing was forgiving enough, I could use a wide lens in 6K mode and not worry about the distortion we were getting because we’re using every inch of the image circle.”
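To illustrate why the full-width 6K mode buys extra field of view with the same lens, here is a rough, purely illustrative calculation; the sensor widths are approximations of a Super 35 scan versus the VENICE’s full width, and the 40mm focal length is an arbitrary example rather than anything used on the production.

import math

# Approximate horizontal angle of view for one focal length on two sensor widths.
# Sensor widths are rounded, illustrative figures, not exact camera mode specs.
def horizontal_aov(sensor_width_mm: float, focal_length_mm: float) -> float:
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

for width_mm in (24.9, 36.0):   # roughly Super 35 scan vs. full sensor width
    print(f"{width_mm} mm wide sensor, 40 mm lens: {horizontal_aov(width_mm, 40):.1f} degrees")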
The production shot in New Orleans but is not location specific. “It’s Generica,” says Morton, who commends the Louisiana crew.
“We ran three camera bodies all the time with two full teams so we could be shooting with two and prepping another camera then leapfrogging one to the next.”
“No One Will Save You” Clip | Brynn in the Basement
Morton has somewhat of a horror film pedigree having photographed 2013’s Evil Dead remake, the “Metalhead” episode of Black Mirror (arguably the series’ bleakest story) and The First Omen, directed by Arkasha Stevenson for 20th Century Studios starring Bill Nighy, which is scheduled for imminent release.
He was in the middle of shooting a feature for Universal in Dublin when the strikes hit and still has three weeks of work on that project to complete, for which he is using a package of Alexa 35 and Cooke S4 sphericals.
“I love going job to job, approaching each with fresh eyes and being given a clean slate. Whilst I welcome giving my opinion on what approach would work, I’d much rather have a two-way back and forth with a director about what they think so we can get the best story to screen.”
“No One Will Save You” Director Brian Duffield
Reaction
With a big full-orchestra score from Joseph Trapanese, the film finds “Duffield admirably trying to turn his Hulu budget into Amblin production value,” wrote Benjamin Lee at The Guardian. “It’s got the feel of a real movie, the highest compliment one can give right now to a film designed for streaming.”
Forbes critic Mark Hughes said, “The film is expertly paced and gets endless mileage from its premise. It’s one part Signs, one part Body Snatchers, and a whole lot of Cronenberg, all of which I love independently and am thrilled to see combined here.”
He adds, “the largely dialogue-free approach might sound gimmicky or distracting, but it’s neither and highlights how well written, directed, and performed the whole thing is.”
Others were more negative. Here’s Sam Adams at Slate: “[It] could have been a spectacularly scary short film. Instead, it’s a movie that starts off with an incredibly strong premise and sense of itself, and then squanders nearly all of it in a scattershot middle and confounding conclusion.”
“It’s the idea of keeping the audience off balance,” says Khan. “Oh, is this a comedic scene or is this person actually going to get killed?”
October 24, 2023
Nahnatchka Khan’s “Totally Killer” Has Everything: Horror, Comedy, Time Travel, and True Crime
Kiernan Shipka in “Totally Killer,” courtesy of Amazon Studios
TL;DR
Nahnatchka Khan’s “Totally Killer” combines comedy, horror, time travel and true crime tropes to create a fantastic mashup movie, now streaming on Amazon Prime.
It manages to combine genres without muddying the mood of scenes by relying on a strong cast and distinct tonal shifts.
The film references classic horror movies like “Halloween” and “Friday the 13th,” while leaning into Khan’s comedic prowess.
“Totally Killer” relies heavily on practical set pieces and choreography to create convincing scares and to ground the emotion of key scenes.
Nahnatchka Khan’s “Totally Killer” is much more than your standard slasher flick.
“To the people who work in comedy, it feels like a tightrope walk whenever you’re doing it, you know, because you feel the life and death of it all, but nobody else does,” Khan tells KCRW’s Elvis Mitchell. “And so to actually be able to manifest that into a true movie, and story was really satisfying.”
And don’t forget time travel! Khan’s latest feature is actually “a classic slasher movie fused with Back to the Future,” writes Andrew Webster for The Verge. (He also compares the movie to “Yellowjackets” because of the shifting timeline and two sets of actors.)
But wait, there’s more! “On top of sending up beloved genres like slashers, ’80s teen comedies, and time-travel flicks, ‘Totally Killer’ also tackles a modern-day obsession: true crime,” observes Mashable’s Belen Edwards.
How exactly does this work?
“I like the idea of mashups,” Khan told Collider in a video interview. (Watch the full conversation, below.)
Ehrhart summarizes: “Amazon’s glossiest, wildest new horror-comedy finds Jamie (Kiernan Shipka) reeling after her mother (Julie Bowen) is murdered by a masked menace known as the ‘Sweet Sixteen Killer.’ Thirty-five years earlier, the same murderer went on a killing spree in her town, leaving her mom the sole survivor of a group of teen friends.
“Luckily, her best friend Amelia (Kelcey Mawema) just finished building a time machine for the science fair, which allows Jamie to travel back to when the first murders occurred and team up with her teenage mom (Olivia Holt) in an effort to save everyone.”
MOVIE REFERENCES AND COMPARISONS
Even as a genre-bender, “Totally Killer” is grounded by references and allusions to classic horror movies.
Although known for her comedy work, Khan is a horror fan, telling Rolling Stone she is a fan of “The Conjuring” universe, as well as the original “Halloween” and “Friday the 13th.”
Editor Jeremy Cohen tells 1428 Elm, “I rewatched ‘Scream’ and I watched ‘Halloween’ again and again” in preparation. The end result includes some “pretty explicit references to ‘Halloween,’” he says, adding, “Even the opening shot, with everyone trick or treating and crossing the street. There’s also a pull-out from the house, which is somewhat of an homage to Halloween. There’s even a part where the killer stabs someone and does a Michael Myers head nod.”
In addition to the visuals, Cohen says he “used a lot of temp score from ‘Halloween’ when putting this together and John Carpenter throughout. We also reference some of the music from ‘X,’ which has that sing-songy soundtrack by Chelsea Wolfe.”
Another nod to the horror genre is the Sweet 16 Killer’s mask.
The Sweet Sixteen Killers in “Totally Killer,” courtesy of Amazon Studios
“I think that mask is really crucial for a good slasher movie, especially for this one because I wanted it to feel original, but it had to feel of the time, like something that could exist back then.
“But also, I didn’t want the whole movie to feel retro and nostalgic because Jamie is from this present, she’s from the future, so you have this sort of Gen Z energy going into this John Hughes world,” Khan tells Taylor.
To craft the mask “of a handsome man being scary,” they worked with Tony Gardner and Alterian Inc., Khan says. To get the right vibe — Khan wanted “just the right amount of camp” — they leaned into Kiefer Sutherland’s look in “Lost Boys,” as well as 1980s heartthrobs and Dolph Lundgren.
THR’s Brian Davids says the end result is a “Gary Busey meets Zack Morris concoction,” which Khan agrees is an “amazing description.”
DISTINCT GENRES AND TIMELINES
Cohen explains that despite the fact that the movie is a mashup, Khan wanted scenes to fall distinctly into certain categories.
He says, “The comedy parts are the comedy parts, and the horror parts are the horror parts. There aren’t a ton of laughs when the killer is chasing someone with a knife. We worked on the pace of it, so when you’re watching a scene, you don’t know if a joke will pop out next or the killer.
“It involved figuring out ways to play with the cutting patterns and tension so you don’t know what’s going to come next. When things arrive, they come at unexpected moments, whether it’s a laugh or a death.”
Kiernan Shipka in “Totally Killer,” courtesy of Amazon Studios
Khan told The Wrap, “To get in this space and do a mashup and then check your swing a little bit on the horror part, I feel like that would’ve been a disservice. That’s the fun of doing something like this. It’s the idea of keeping the audience off balance with the comedy so you don’t know like, Oh, is this a comedic scene or is this person actually going to get killed?”
Defining Periods
“We wanted transitions to help you know where you were” in time, Cohen explained. “There’s a bit in there where we cut from the fresh 80s Billy the Beaver to a dilapidated beaver. There aren’t that many scenes in 2023, and they’re straightforward. With the present timeline, we kept it grounded more in comedy and the family life. When you go back to the 80s, it gets a little bit bigger in terms of the volleyball game, the 80s mom who hasn’t tried the cocaine, and all that kind of stuff. Our DP and production designers did a really cool job distinguishing visually between the two as well.”
The comedy scenes set in the 1980s were supposed to feel “inviting… to the audience,” DP Judd Overton told Frame & Reference (listen to the full conversation below). He added that this time frame was supposed to have a “John Hughes” vibe.
With that in mind, Overton says, they determined “having all that anamorphic wonkiness going on wasn’t quite the right way to go for what’s basically an ensemble comedy with a lot of body bits.”
He explains, “It was just too strong a look. So what we ended up opting for is the Geckos… rehoused vintage lenses, kind of like a K 30 size, sort of, you know, 70s vintage glass, but they’re very clean, and they behave nicely. You know, nice fall off, and you get the occasional sort of rainbow flares and things like that. So they’re vintage without being too — you know, they don’t put up too much of a patina between that and the audience.”
However, they did pull out the Orion 21mm for the time machine scenes, in which some disorientation felt more than appropriate, he told Kenny McMillan.
Anna Diaz as Heather Hernandez, Olivia Holt as Teen Pam, Liana Liberato as Tiffany Clark, Stephi Chin-Salvo as Marisa Song, Kiernan Shipka as Jamie Hughes in “Totally Killer”
THE FINAL GIRL TROPE
Khan notes that “the idea of the final girl is something that exists in the genre” of slasher films, and she took the opportunity to give “these women more agency” and update the trope.
As the daughter of lone survivor Pam (Julie Bowen), protagonist Jamie (Kiernan Shipka) has inherited both trauma and a set of survival tools, Khan explains to Rolling Stone. And then the inciting incident of Pam’s murder creates “an interesting handoff for the final girl idea.”
Kiernan Shipka and Olivia Holt in “Totally Killer,” courtesy of Amazon Studios
Khan says, “Something that was appealing to me was the idea that even though Jamie is being hunted, and there is a vicious killer on the loose, she’s actually kind of hunting him. She’s propelling the story in a way that feels new to me because she will not stop until she stops him. That unrelenting drive of this young woman who’s at the center of this movie just feels like a new kind of shade on that idea of a final girl.”
“I think what was compelling about this is … there’s a killer that’s hunting, but the main character is the one that is driving everything, like she is hunting this killer in her own way,” Khan explains to KCRW.
SET PIECES AND STUNTS
Choreography is also key to an effective horror film. “Totally Killer” stunt coordinator Simon Burnett helped Khan “shoot as much practically as we could,” she told Collider. His role required him to manage both a dodgeball fight with “the chaos of war” and encounters with a deadly serial killer on a waterbed and a Gravitron carnival ride.
“The first kill or fight sequence was really fun to put together,” Cohen tells 1428 Elm. “It was really fun to get to use the footage of her doing these stunts and just to work on selling the intensity and brutality of that scene. There’s a lot of little editing tricks, like speed ups and cutting a frame here and there. It’s interesting because some small little frame difference can make a difference between whether or not a hit or stab sells or if people think it looks weird.”
The waterbed was designed by Liz Kay and the scene was shot in a real home, so Khan says they used five cameras to capture lots of angles and avoid a reshoot.
And that Gravitron was also real and purchased with very little time to spare for the crucial final sequence. “They were down for all of it,” Khan says of the team at Blumhouse, which she attributes to a genuine enthusiasm for making movies.
To simulate the Gravitron’s movement (a no-go with cameras inside), Khan explains, “DP Judd Overton and his team had a lighting rig going outside because there’s like small cracks in between the panels so you can see the lights moving. And that suggests movement. And then we have the practical effects guys in there with blowers…so [you get] that effect of being stuck to the wall.”
The Gravitron “is as close to outer space as I’ve shot,” Overton joked to Frame & Reference.
Shooting the end scene, he says, “was a real challenge. It was a lot of fun.”
While the crew was limited in the modifications that they could safely make, Overton says they created a practical period look with “some LED strips, but they had a casing on them that basically looks like an old neon.” That was important because they “created a chase that really felt that we could build it up, so that as the film progresses, and as the Gravitron speeds up, you know, you see it in the lighting.”
“Even though Gravitron wasn’t moving… you feel it,” Overton says of the VFX’s efficacy.
2023 NAB Show New York to Feature 55 New Exhibitors and Many, Many New Products and Services
In 2023, NAB Show New York will feature nearly 275 exhibitors, including 55 first-timers on the show floor. The number of new exhibitors represents a 65% year-over-year increase.
The exhibit hall will also showcase a number of new products and services.
Apantac 4K 60 KVM over IP switching and extension
Black Box Emerald DESKVUE
Dalet InStream
DiMetis Boss Media Exchange
Farmerswife 7.0
FUJINON Duvo HZK25-1000mm F2.8-F5.0 PL Mount Cinema Box Lens
FUJIFILM GFX100 II Digital Camera
GB Labs Unify Hub
Huizhou City LATU Photographic Equipment Co. GVM Pro Lights Series
“The strong showing of first-time exhibitors is a testament to NAB Show New York’s commitment to connecting the most relevant products, practices and people propelling the broadcast, media and entertainment industry forward,” said NAB Global Connections and Events Executive Vice President and Managing Director Chris Brown.
Other major brands featured at the Show will include: Avid, B&H, Canon, Cisco, Blackmagic Design, Evertz, Fujifilm, Grass Valley, Harmonic, Imagine Communications, Lawo, LiveU, Maxon, Panasonic Connect, Ross Video and Telestream.
Get in on key trends and powerful intel at NAB Show New York’s Insight Theater! This is where you can catch up and connect the dots. Between process and products. Between the ways we now create and consume content. This is an intimate space to glean coveted insight and interact with the people and products transforming the industry.
Motion capture from “Guardians of the Galaxy Vol. 3,” courtesy of Framestore
TL;DR
A look at how motion capture data is edited, how that data is connected to a virtual character, and how mocap can be used within a virtual production.
Recording the finest details of motion has a downside: where motion capture data must be recorded and potentially modified, it quickly becomes clear that the unprocessed data is difficult to edit, but there are solutions.
For performers, one of the creative advantages of virtual production is seeing the virtual environment in which they are performing. Using motion capture techniques extends this into capturing the motion of performers to drive CGI characters. New technologies are rapidly transforming the creative freedom this brings.
The BroadcastBridge brings us up to speed on this as part of a wider 12-part article series on virtual production. It looks at editing motion capture data, connecting that data to a virtual character, and using mocap within a virtual production.
In the tutorial, Phil Rhodes explains how in conventional animation, the motion of an object between two positions is usually described using only two positions — or waypoints — which are separated in time. Changing the speed of the object’s motion simply means reducing the time it takes to move between the two points.
However, motion capture data records a large number of waypoints representing the exact position of an object at discrete intervals.
“It’s often recommended that motion data should be captured at least twice as frequently as the frame rate of the final project, so that a 24fps cinema project should capture at least 48 times per second,” the BroadcastBridge advises.
“That’s well within the capabilities of most systems, but it does complicate the process of editing motion data. It’s impractical to manually alter dozens of recorded positions per second and achieve a result that looks realistic.”
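To make that distinction concrete, here is a minimal, purely illustrative sketch (not taken from the article or any particular mocap package): keyframe animation stores just two waypoints and interpolates between them, while motion capture stores an explicit sample at every capture interval, which is what makes hand-editing so laborious.

# Keyframe animation: only two stored waypoints; speed is just the time between them.
def keyframe_position(p_start, p_end, t_start, t_end, t):
    alpha = (t - t_start) / (t_end - t_start)
    return tuple(a + alpha * (b - a) for a, b in zip(p_start, p_end))

# Motion capture: an explicit sample at every capture interval. Following the
# rule of thumb quoted above, a 24 fps project captures at 48 samples per second.
CAPTURE_RATE = 48
mocap_track = [(i / CAPTURE_RATE, (0.0, 0.01 * i, 0.0)) for i in range(CAPTURE_RATE * 2)]

print(keyframe_position((0, 0, 0), (0, 1, 0), 0.0, 2.0, 1.0))   # midpoint of the move
print(len(mocap_track), "explicit samples for the same two-second move")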
Editing Mocap Data
Tools have been developed to facilitate motion capture data editing. Some of them rely on modifying groups of recorded positions using various proportional editing tools; a sort of warping. Others try to reduce the number of recorded positions, often by finding sequences of them which can be closely approximated with a mathematical curve.
This can make motion capture data more editable, but too aggressive a reduction of points can also rob it of the realism of a live performance, risking a more mechanical, artificial look which is exactly what motion capture is intended to avoid.
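As a rough sketch of the point-reduction idea, the snippet below runs a simplified Ramer-Douglas-Peucker pass over one channel of (time, value) samples; the tolerance is a hypothetical value, and production tools use far more sophisticated curve fitting. Set the tolerance too high and you get exactly the mechanical look described above.

def simplify(samples, tolerance):
    """samples: list of (time, value) pairs; returns a reduced list of the same form."""
    if len(samples) < 3:
        return list(samples)
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    worst_i, worst_err = 0, 0.0
    for i in range(1, len(samples) - 1):          # find the sample furthest from the chord
        t, v = samples[i]
        predicted = v0 + (v1 - v0) * (t - t0) / (t1 - t0)
        err = abs(v - predicted)
        if err > worst_err:
            worst_i, worst_err = i, err
    if worst_err <= tolerance:
        return [samples[0], samples[-1]]          # the straight line is close enough
    left = simplify(samples[:worst_i + 1], tolerance)
    right = simplify(samples[worst_i:], tolerance)
    return left[:-1] + right                      # avoid duplicating the split point

dense = [(i / 48.0, (i / 48.0) ** 2) for i in range(97)]   # two seconds of 48 Hz samples
print(len(dense), "->", len(simplify(dense, 0.002)), "samples after reduction")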
Often, motion capture used where a performer is working live alongside a virtual production stage won’t be recorded, so there won’t be any need or opportunity to edit it. Other problems, such as intermittent failures to recognise tracking markers, might cause glitches in positioning that would usually be edited out.
“Working live, a retake might be necessary, although well-configured systems are surprisingly resistant to — for instance — markers being obscured by parts of the performer’s body.”
Rigging and Scale
Connecting motion capture data to a virtual character requires that the character model be designed and rigged for animation. Where the character is substantially humanoid, this may not present too many conceptual problems, although the varying proportions of different people can still sometimes cause awkwardness when there’s a mismatch between the physique of the performer and the virtual character concerned.
“Very often, the character will be one which looks something other than human. It may be of a substantially different shape, scale or even configuration of limbs to the human performer whose movements will drive the virtual character,” writes Rhodes.
Various software offers different solutions to these considerations, allowing the performer’s motions to be scaled, remapped and generally altered to suit the animated character, although this has limits.
“Although motion capture technicians will typically strive to avoid imposing requirements on the performer, the performer might need to spend time working out how to perform in a manner which suits the virtual character. This approach can make a wide variety of virtual characters possible.”
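A toy sketch of the scaling and remapping idea, under heavy simplifying assumptions: one joint’s motion, stored relative to its parent, is rescaled by the ratio of the virtual character’s bone length to the performer’s. Real retargeting tools also handle rotations, constraints and mismatched joint counts; every name and number here is illustrative.

def retarget_joint(relative_positions, performer_bone_len, character_bone_len):
    """relative_positions: per-frame (x, y, z) offsets of a joint from its parent, in metres."""
    scale = character_bone_len / performer_bone_len
    return [(x * scale, y * scale, z * scale) for (x, y, z) in relative_positions]

performer_forearm = [(0.0, -0.26, 0.05), (0.0, -0.25, 0.08)]   # two captured frames
creature_forearm = retarget_joint(performer_forearm,
                                  performer_bone_len=0.26,     # human forearm
                                  character_bone_len=0.62)     # much longer creature limb
print(creature_forearm)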
The article describes how mocap systems require at least some calibration, which might be as simple as moving around the capture volume with a specially designed test target. Some of the most common systems, using spherical reflective markers, may require some calibration for each performer, especially if the performer removes or disturbs the markers.
Virtual Production and Mocap
Many virtual production setups rely on motion tracking to locate the camera, even when motion capture is not being used to animate a virtual character.
As such, almost any virtual production stage might rely on at least some calibration work, though there is often some variability in how often this is done; performance capture spaces might do so twice daily, requiring a few minutes each time.
“As with many of the technologies associated with virtual production, motion capture, where it’s used, is likely to be the responsibility of a team provided by the studio itself,” the BroadcastBridge reports. “Most of the work required of the production will be associated with the design of the virtual character which will be controlled with motion capture.”
The report concludes, “The technical work of connecting that character’s motion to the capture system is an item of preparation to be carefully planned and tested before the day. With those requirements fulfilled, using an actor’s performance to control a virtual character can provide an unprecedented degree of immediacy.”
While it certainly adds another layer of technology to the already very technology dependent environment of virtual production, it creates a level of interactivity which was never possible with post production VFX.
Martin Scorsese: “Killers of the Flower Moon” and American Mythology
Making of “Killers of the Flower Moon”
TL;DR
During an ever-fascinating tour through his life and career in an interview with Edgar Wright at the BFI London Film Festival, 81-year-old director Martin Scorsese decries the demand for “content.”
As filmmaking technology becomes easier to access, he hopes filmmakers can use that accessibility to tell stories that matter.
He explains how he had to rework “Killers of the Flower Moon” to concentrate on a central love story, rather than the FBI.
Martin Scorsese continues to fear for the future of cinema, deriding the demand for “content.”
He likens “content” to having TV on in the background, or the noise of a radio while you go about your daily business.
“And now, of course, I keep TCM on as much as possible.”
For Scorsese, naturally, movies are an art form and going to the cinema is almost spiritual. “The experience of seeing a film with a lot of people is really still the key, but I’m not sure that can be easily achieved at this point,” he said.
“I’m afraid that the spectaculars, or what they call franchise films, will be taking over the theaters. I always ask for the theater owners to maybe create a space where younger people [could see] a new film, which is not a franchise film, sharing it with everybody around them. So that they want to go to the theater, and it doesn’t get you to the point where [they] could see it at home.”
TRAILER FOR “KILLERS OF THE FLOWER MOON”
Old Fashioned… Innovation
Scorsese has made his thoughts felt on this before, and most people will be sympathetic to his cause. Naturally, he continues to make movies the old-fashioned way: on film, on location, with big themes and subjects told at length. His latest, Killers of the Flower Moon, clocks in at nearly three-and-a-half hours.
The celebrated director is not immune to digital technology, though. He used cutting-edge techniques, of course, to de-age actors for The Irishman. He understands that the use of digital video cameras enables younger filmmakers today to make movies on their own terms, just as he did with breakthrough drama Mean Streets half a century ago.
“If I was able to shoot digital, or even just video, I would have shot Mean Streets, and that way, I wouldn’t have to pay for the lights… which means we don’t need the studio.”
Scorsese’s inspiration for Mean Streets (1973) was John Cassavetes’ verité work Shadows (1959), and he thinks similar freedom is afforded to filmmakers using digital technology.
“It’s so much freedom that I think you have to rethink what you want to say and how you want to say it and use that technology,” he warns. “Ideally, what I hope is that — and I hesitate to use the word — but serious film could still be made with this new technology so that it can be enjoyed by an audience on a big screen.”
Percorso dei Ricordi (A Path of Memories)
Wright did well to steer Scorsese, always a vivacious and passionate communicator, through a tour of his entire career, playing clips from movies including Mean Streets, Taxi Driver, Goodfellas, The King of Comedy and more in a roughly 90-minute presentation on stage.
Before all this, the 81-year-old filmmaker revealed that he had a “natural enthusiasm to want to share an experience” with other people. “We couldn’t afford to go to theater so it was in the movie theaters at the time. I wanted somebody to share and enjoy it with — all together. At a certain point, I began to get very, very excited by sharing as much as possible my experience with younger filmmakers. And then from their films, I get reinspired. It opens up a whole world,” he said.
“I always thought of myself more as a teacher than as a filmmaker.”
Leonardo DiCaprio as Ernest Burkhart and Lily Gladstone as Mollie Burkhart in “Killers of the Flower Moon.” Cr: Apple TV+
Lily Gladstone as Mollie Burkhart and Leonardo DiCaprio as Ernest Burkhart in “Killers of the Flower Moon.” Cr: Apple TV+
Robert De Niro as William Hale and Leonardo DiCaprio as Ernest Burkhart in “Killers of the Flower Moon.” Cr: Apple TV+
We learn that before making films he made what he calls storyboards, or frames, of stories, shot by shot. Sadly, “the earliest ones I destroyed. I made my own versions in color, painting watercolors, but then I threw all that away,” including, it seems, a “Roman epic, which I never finished,” for which he had planned cameras booming down on a Roman legion entering the gates of Rome.
He explained that during his time at NYU making short films, his technique for making movies came first, but the stories and themes he really wanted to explore came later. At college in 1964 he was inspired by seeing Bernardo Bertolucci’s film Before the Revolution. Bertolucci was only 22 at the time, and it was already his second film.
“I wanted to be able to express myself that way. It had such a joy of not only filmmaking, but of life, and it had such depth of culture, but I don’t come from that culture. I don’t look at politics. Ultimately, I had to find a way of expressing myself from my own culture. What kept me going was the ambition and the determination to reach that level that I had seen in Before the Revolution.”
He shared that the character of Johnny Boy played by Robert De Niro in Mean Streets is based on someone who is still alive and that the playfulness of Johnny Boy and Harvey Keitel’s character in the film stemmed from “Abbott and Costello or Bing Crosby and Bob Hope” in the Road movies.
Later, he notes that the notorious scene in Goodfellas — in which Joe Pesci’s character says, “You think I’m funny. Funny how?” — “actually happened to Pesci… he took a chance with this guy and he got out of it but it was terrifying. And so that kind of banter, so to speak, structured, joking, enjoying themselves puffing up… that sort of thing was something that we did intentionally.”
For De Niro’s famous line reading of “You talkin’ to me?” in Taxi Driver, Scorsese says, “Bob improvised. I asked him to talk to the mirror. There’s a shot of him controlling the gun, which kind of comes from Shane [a film he knew was special when he watched it at age 12]. But primarily I was at his feet. And I was just saying do it again. And he would do it again. And then he just got into a rhythm, you know.”
He credits editor Tom Rolf with making the scene “so beautifully constructed” (editors Marcia Lucas and Melvin Shapiro also cut the picture), and has warm words for Thelma Schoonmaker, with whom he worked on his first film, Who’s That Knocking At My Door, and then on every picture since Raging Bull.
“What’s great about Thelma is that she has no film theory — she brings just the passion, the philosophy of it, and really has no preconceived ideas about which filmmakers are more important. So we could just look at the footage like we used to work in documentaries back in the 60s.”
Robert De Niro as William Hale and Jesse Plemons as Tom White in “Killers of the Flower Moon.” Cr: Apple TV+
His modern classics like Raging Bull, The King of Comedy and The Last Temptation of Christ all came from personal sacrifice and a tenacity on Scorsese’s part to “will them into existence,” said Wright.
“I agree, but part of it was I still have the desire to utilize film to tell stories — but what the hell story did I want to tell? The kind of stuff that was being made in the studio system just wasn’t working for me. They tried, they really tried to give me different ideas. I said, ‘No,’” he recounted.
“Ultimately, it’s a battle and not giving up. And maybe the films weren’t as good as the ones [I made in] the ‘70s but it didn’t matter. I had to get certain things made the way I wanted to make them.”
For the main characters in The King of Comedy, played by De Niro and Jerry Lewis, the director says, “I think I was having trouble coming to terms with myself. How much of me is Rupert, how much of me is Jerry, how much of me wants to be Jerry. But Jerry is a mess too. I mean, all this going on. And it may be a little too close to home. It was extremely uncomfortable. I had difficulty shooting it because of that, and in the editing too.”
He says Schoonmaker and her husband, the late great British filmmaker Michael Powell, helped him get over the finish line.
Scorsese’s First “Western”
And so onto Killers of the Flower Moon, the nearest Scorsese has come to making a Western.
“I grew up watching westerns. And I loved them because, you know, I couldn’t go anywhere. I couldn’t go near animals, I couldn’t run, because of the asthma. Whenever they tried to take me to a park or whatever, I started getting an asthma attack or an allergic reaction to all the nature around me. So for me to see it on the screen — beautiful palomino horses — this was heaven for me.”
From “Killers of the Flower Moon,” Cr: Apple TV+
As other film historians have argued, he identifies the end of the classic genre with Sam Peckinpah’s The Wild Bunch (1969), “that ended part of the history of America too.”
Now he is using the form to retell a true story which deconstructs foundational American myths, but his approach changed radically during its writing with screenwriter Eric Roth. Instead of focusing on the FBI agent, who was to be played by Leonardo DiCaprio, the story shifted to the love affair between a white settler and an Osage Nation woman.
“We had to really rethink the picture, but I was glad that I had that time during COVID to rethink it. Eric said to me, ‘Where’s the heart of the movie?’ And I immediately said, ‘Well, it’s Mollie and Ernest because they’re in love.’ And I found that out from hanging out with the Osage in Oklahoma, that it isn’t as simple as people coming in and shooting and poisoning. It’s the betrayal of trust. They said, ‘You have to understand Mollie, that despite everything, they were in love.’”
If the heart of the picture was going to be Mollie and Ernest, then DiCaprio decided he should play Ernest. “We had to take the script and rip it inside out. Which is what we did.”
Lily Gladstone and director Martin Scorsese on the set of “Killers of the Flower Moon.” Cr: Apple TV+
The story may appear to be unearthed from the archive but Scorsese points out that it has been present in numerous magazines “like Penny Dreadfuls,” and also articles in Harper’s Magazine, as well as in Hollywood.
“Even in musical numbers, in movies in the ‘50s, there was always a number with the Native Americans dancing around with oil shooting up behind them. But the thing about it was that by 1958 nobody remembered it, except when they made The FBI Story,” the 1959 film directed by Mervyn LeRoy and starring Jimmy Stewart, which is “basically a greatest hits of the FBI,” he said.
“I thought of actually recreating the shooting of that film towards the end of my film, but, instead, it went another way.”
From director Martin Scorsese’s “Killers of the Flower Moon.” Cr: Apple TV+
Evan Shapiro
From a keynote session on What’s Next for M&E with Media’s official Unofficial Cartographer Evan Shapiro to a conversation presented by the American Cinema Editors with the editor from Only Murders in the Building, the Insight Theater sets the stage for the trends and technologies you need to know.
Media cartographer and industry observer Evan Shapiro is set to deliver the keynote address at NAB Show New York. Known as media’s official unofficial cartographer for his visual charting of the industry’s continual evolution, Shapiro’s speech will center on “What’s Next” for Media & Entertainment. He’ll use this keynote to lay out what to expect in the next era of media, whether we’re ready for it or not.
Attendees can look forward to new research and insights, as well as Shapiro’s honest assessment of how the M&E industry can grapple with its next era. Preceded by remarks from NAB President and CEO Curtis LeGeyt, this keynote session is scheduled for Wednesday, October 25, at 10:30 a.m. on the Insight Theater stage.
A trio of industry pros discuss current technology trends pushing the boundaries of storytelling as a prelude to NAB Show New York.
October 15, 2023
Live Events Have Become a Whole Cinematic Thing
From Beyoncé’s “Renaissance: A Film by Beyoncé”
TL;DR
It’s not enough to just go to a concert any more. People want to experience the event before they go and after they’ve been. The business of live is changing too.
For artists or brands like Beyoncé, Harry Styles or Taylor Swift, as well as entertainment companies, business has become much broader than selling out a tour or a movie or merch.
Audiences seek compelling, profound experiences that allow them to have agency and authenticity.
Getting tickets to live events is more difficult than ever because the global market for live events is largely controlled by Ticketmaster.
Despite widespread public criticism and political scrutiny of Ticketmaster parent company Live Nation Entertainment, solutions are not forthcoming.
Taylor Swift’s Eras Tour only began in March and is set to become the biggest tour of all time only a third of the way through its worldwide run, having already grossed over $2.2 billion in North America. According to Live Nation, Beyoncé’s Renaissance World Tour finished having earned north of half a billion at the box office.
Are these mega-star anomalies or is such success replicable?
“This feels like a cultural moment we’re living in,” says Adam Chitwood of TheWrap, joining a conversation that assessed trends in live entertainment.
“It’s not enough to just go to the concert. People wanted to experience the concert before they go and after they’ve been.”
Everyone agrees that the pandemic has influenced how we now view live events. The fact that for two years fans couldn’t go out has generated a pent-up desire (mania) to ensure that they do now that they can.
But it’s not just about live events or music. The billion-dollar global box office takings of “Barbie” were in part propelled by audiences participating in the experience more than they would any standard movie — dressing up and attending multiple showings.
From “Barbie,” written and directed by Greta Gerwig. Cr: Warner Bros.
We want to enjoy an experience in the company of others, including strangers.
“[Audience members] want to go with a bunch of friends, they want to buy the merch, and they want to participate in every way with their full energy,” confirms Levi Jackson, head of music marketing at WME (William Morris Endeavor). “Now they want a shared experience.”
For artists — or do we call them brands? — like Beyoncé, Harry Styles or Taylor Swift — as well as entertainment companies — business has become much broader than selling tickets or merch.
“We have all these different products and verticals that are involved with each actor or event,” explains Ross Gerber, Co-Founder, President and CEO of Gerber Kawasaki Wealth and Investment Management.
Jeff Clanagan, president of Hartbeat, Kevin Hart’s production company, notes that despite the higher cost of living, the demand for live experiences has rocketed: “Ticket prices have never been at this level. Fans are paying $300 to $1,000, you know, sometimes more depending on the artist.”
Compelling, Authentic Experiences
That Taylor Swift’s concert film is releasing into cinemas while her tour still has a year to run was never going to diminish the demand to see her live. “Absolutely zero chance it’s going to impact ticket sales,” says Clanagan. “There’s still a huge audience that might not have gone to that stadium to see Taylor or Beyoncé because of the ticket prices, but also people who went to the shows want to really have that experience in a theater. So it’s just another touch point for the consumer to share that experience.”
Not every artist can command this volume, however. Fri Forjindam, who leads global business development, branding and communications for Mycotoo, thinks that’s down to artist authenticity: “You can’t quantify an emotional connection that resonates with people. That means there’s a promise that’s being made. [The artist is] saying you’re going to get all of me, you’re gonna get my full catalog, you’re gonna get performance showmanship, tech, everything VIP. There’s an experiential overlay that is delivering on that promise, as opposed to just gouging.”
Rather than just blindly consuming anything an artist does, she thinks fans are extremely discerning. “They don’t want bulls**t. They want to come and have a compelling, profound experience that allows them to have agency and authenticity, and to see that in the things that they’re engaging with.”
Mycotoo has worked with studios on IP ranging from Netflix’s “Stranger Things” global tour to “The Mandalorian” to Prince’s Paisley Park, creating experiences from theme parks to live events to brand activations.
Forjindam says the job is to leverage IP into an ecosystem that engenders loyalty. Whether it’s a concert, a museum or a theme park, how do you take all those principles and turn them into a revenue-based experience or entertainment destination?
boygenius performing “Not Strong Enough”
Leveraging Ideology and Mythology
One ingredient to success is understanding context. It’s vital, she says, “to have a shared emotional experience align with a brand and artists that reflects who they are in their ideology, in their consumer spending, in their way of life, in their sexuality in all the things that make you whole. It can’t just be about seeing the artists, there needs to be something deeper.”
For example, when working with Netflix on the “Stranger Things” global tour, the intent wasn’t to recreate the show, but to give fans a reason to get excited about the next season of the show on Netflix.
The goal was to “give them a physical place where they can commune with others and have this sort of ‘choose your own adventure’ [experience] and be the hero of their own story, using live performance. It’s redefining what live entertainment is first and then figuring out what the revenue verticals are to make it a viable business proposition.”
Ticketing Trauma and Technology to the Rescue?
It’s true that fans continue to have difficulties getting tickets to live events. The global market is pretty much a monopoly run by Ticketmaster, and parent company Live Nation Entertainment received widespread public criticism and political scrutiny over blunders in selling tickets to the Eras Tour. There’s no easy answer.
Jackson says, “We’ve worked with a bunch of tours and talent, and we worked with every ticket company, and I think the challenge is actually too complex for an individual artist to fix, even for someone like Taylor Swift or Beyoncé. These companies are so big, you know, the contracts that they have, the tickets are difficult, but the technology of ticketing is actually so challenging.”
From “Kevin Hart: Reality Check,” courtesy of Peacock
Gerber admits he can’t express his true feelings about Ticketmaster, “because it’s, controversially, you know, negative,” but suggests that an individual’s smartphone could be a better way of validating tickets.
Forjindam agrees: “We’re using technology to attempt to solve the climate crisis. We’re using technology for automated vehicles and smart cities and literally building Ukraine from the ground up. Why can’t we use technology to figure this [ticketing issue] out?
“How do we allow it to be able to maybe learn what someone’s user pattern is, or fandom level is, as a way to give them additional points to get ahead of the line because they are a legitimate fan, regardless of whether they can afford $1,500 or not.”
From “U2:UV Achtung Baby Live At Sphere”
Not coincidentally, there is a trend toward upgrading and building venues with new technology, not just giant LED screens but also better sound and lighting systems, to give fans a more immersive experience. The pinnacle of this right now is the Las Vegas Sphere.
“The Sphere is challenging artists to really think about that experience,” emphasizes Jackson. “Every show that’s going in there at the moment has to be bespoke to that venue. It’s making a unique experience as a destination at that venue — people are flying in from around the world to go to the Sphere.”
“In the end, this is about trying to make a connection with our audience,” says U2’s Bono. “It’s Las Vegas or bust, baby.”
October 15, 2023
Taylor’s Version: How the “Eras Tour” Concert Film Could Change Cinema
From Taylor Swift’s film “Taylor Swift | The Eras Tour”
TL;DR
Taylor Swift’s “The Eras Tour” is a nearly three-hour concert film that’s receiving both critical acclaim and big box office dollars.
The studios are reportedly less than pleased about the distribution deal Swift struck with AMC Theatres. (Some might even say there’s a likelihood of bad blood between Swift and the studios…)
Concert films must strike a balance between recreating the concert-going experience and improving upon it, giving attendees the sense that they both had the best seat in the house and an additive, cinematic experience.
Variety’s Chris Willman says Swift’s performance in the concert film is “2 hours and 45 minutes of nearly nonstop acting, writ large for the back row of SoFi Stadium and, now, Imax and Dolby.”
The concert film “will be playing AMC, Regal and Cinemark theaters this fall. AMC also is releasing Taylor Swift: The Eras Tour on its own in what is a first distribution initiative for the circuit, however, it has tapped Variance Films to book the title. The pic will also be booked at Cineplex theatres in Canada and Cinépolis in Mexico,” according to Deadline.
Swift’s film “set a single-day domestic record for AMC after selling $26 million in less than 24 hours, besting the all-time benchmark previously held by ‘Spider-Man: No Way Home’ ($16.9 million),” Variety reports.
“The world’s biggest music star teaming with the world’s biggest theater company on a film that could play for months is as close to a no-brainer as exists in entertainment,” Puck’s Matthew Belloni writes about the AMC deal. (Also a no-brainer, Belloni reports, was Universal’s decision, under duress, to change “The Exorcist: Believer’s” release date from the targeted Friday the 13th that would have competed directly with Swift’s movie.)
“For traditional studios, ‘The Eras Tour’ might be the most profound example of the money that’s been left on the table because of dragged-out negotiations with the Writers Guild of America and the Screen Actors Guild,” David Sims writes in The Atlantic.
“Perhaps it feels like a stretch to claim that concert films will be what saves cinema, but with Hollywood running on fumes, it’s much more possible for their movies to have an impact—or at least for the large impact they would have no matter what to seem like the only thing happening at the multiplex,” writes Wired’s Angela Watercutter. “And not for nothing, but finance types are literally out here claiming these two artists [Taylor Swift and Beyoncé] boosted the US gross domestic product in the third quarter of this year.”
The Making of an “Eras” Experience
The creators behind the film deserve massive praise.
Willman writes, “A team of five editors gets credit for assembling all this work in such a hurry following a shoot at her U.S. tour finale in L.A. just two months ago. But the movie reflects the ethos [director Sam] Wrench has shown off in other concert films, like the recent ‘Billie Eilish Live at the O2,’ in not cutting just to create excitement where it already exists.”
In terms of the visuals, Willman observes, “A healthy balance is struck between knowing there’s a hell of a lot to take in in this stage production and knowing that the thing we most want to take in is Swift herself.”
From Taylor Swift’s film “Taylor Swift | The Eras Tour”
Swift herself gave the creators an enthusiastic cosign during the surprise early premieres at AMC Theaters in the Grove on Thursday. THR’s Kristen Chuba reports that Swift told audiences, “It is the perfect capture of what this show was like for me.”
She also told attendees, “I think that you’ll see that you’re absolutely a main character in the film, because it was your magic and your attention to detail and your sense of humor and the ways that you lean into what I’m doing and the music I create.”
Per Calum Marsh at the New York Times, “Filmed over three nights in August at SoFi Stadium in Inglewood, Calif., and directed by Sam Wrench, ‘The Eras Tour,’ like most concert films, aims to capture some of the magic of seeing the artist perform live.”
Despite this very tough assignment, “Apart from maybe those digital title superimpositions, you’d be hard-pressed to point to any wrong moves Wrench makes in transferring the show from stage to screen,” Willman writes. He’s unconvinced that the announcements of the different eras (AKA albums) were truly additive for most viewers.
From Taylor Swift’s film “Taylor Swift | The Eras Tour”
“The demands of a film are incredibly complicated, and faithfully reproducing the look and sound of a concert for the screen is an arduous and painstaking process for the filmmakers and their crews,” writes Calum Marsh.
“You can make [the concert film] an equally good experience, but it has to be a filmic or cinematic experience rather than trying to compete with the live experience,” film and music video director Jonas Akerlund told Marsh.
Akerlund explained that time and money and opportunity are the secret to making a concert film cinematic, followed by editing “with the precision of a four-minute music video.”
Part of that is ensuring the audio is superb — maybe even better than it was live. “The main thing we’re trying to do is provide the theatrical audience with the best seat in the house,” John Ross, the rerecording mixer on “The Eras Tour,” explains.
Variety’s Willman argues that the film achieves this: It “magnifies all of it, in a next-best-thing-to-being-there way (even though no one who missed it will completely shed their FOMO).”
To achieve this, Marsh says, “the tone of the room is essentially applied like a filter to the raw sounds recorded from the artist onstage. This filter, known as impulse response, takes readings from actual physical places, then ‘synthetically reproduces the sound of a real space like a club or stadium,’ said Jake Davis, the lead mix engineer at SeisMic Sound, an audio facility in Nashville that specializes in concert films.”
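For readers curious what applying an impulse response “like a filter” means in signal terms, here is a bare-bones convolution sketch in NumPy; it shows only the core operation, not the actual tools or settings used on “The Eras Tour,” and the dry stem and room tail below are placeholder data.

import numpy as np

# Convolution reverb in miniature: the dry stage recording takes on the acoustic
# character captured in the impulse response. All signals here are placeholders.
def apply_impulse_response(dry, impulse_response):
    wet = np.convolve(dry, impulse_response)
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet       # normalize so the result doesn't clip

sample_rate = 48_000
dry_stem = np.random.randn(sample_rate)                        # one second of stand-in audio
room_tail = np.exp(-np.linspace(0.0, 8.0, sample_rate // 2))   # toy decaying "stadium" response
print(apply_impulse_response(dry_stem, room_tail).shape)       # dry length + tail length - 1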
From Taylor Swift’s film “TAYLOR SWIFT | THE ERAS TOUR”
Mixers are also tasked with making “the concert film sound more like what the artist wanted than what necessarily occurred on the night it was filmed,” Marsh says.
Precision – but not perfection – is the name of the game for concert films. Marsh writes, “It’s a bit like touching up a portrait in Photoshop: it’s tempting to clear blemishes, but too much airbrushing can make you look fake.”
Despite all of the behind-the-scenes effort, Marsh writes, “The best concert films will, to the unwitting viewer, seem like nothing more than filmed concerts — the filmmaking itself remains invisible.”
For artists like Beyoncé or Taylor Swift, the business of live entertainment has become much broader than simply selling out a tour.
October 6, 2023
Something in (Around) the Atmosphere: Refik Anadol’s AI Sculpture “Machine Hallucinations”
From Refik Anadol’s “Machine Hallucinations: Sphere”
TL;DR
“Machine Hallucinations: Sphere” is an immersive digital experience projected onto the Las Vegas Sphere featuring dynamic abstract imagery of space and nature.
Artist Refik Anadol uses data taken from seven years of his “Machine Hallucinations” series, merging it via an equirectangular projection technique to make two different works.
Anadol likened the process to how Claude Monet was “inspired by the atmosphere and became this incredible impressionist painter.”
“This is one of the most Blade Runner moments ever,” media artist Refik Anadol says of his AI-generated animation wrapping the outside of the Las Vegas Sphere. “A science fiction moment that, finally, merges media arts and architecture, embedding technology into a physical environment that exists in the real world.”
“Machine Hallucinations” debuted on September 1 and will run through New Year’s Eve. The piece, like other works by Anadol, uses publicly available data and machine learning algorithms to create large-scale animated abstractions.
Anadol and his team created “two chapters,” or two versions of the work, that run one after the other, repeatedly, using dynamic programming. “Meaning they play at different speeds, with different forms, colors and shapes each time. It’s generative art,” Anadol told Deborah Vankin at the Los Angeles Times.
The first “chapter” uses about 1.1 million publicly available images taken by satellites and spacecraft, including from the International Space Station and NASA’s Hubble telescope. The AI transforms these beautiful images of the Earth, the universe, their colors, and their forms into what Anadol calls “data pigments” to create an animated image that morphs organically over time.
From Refik Anadol’s “Machine Hallucinations: Sphere”
In “Machine Hallucinations: Nature,” Anadol uses 300 million publicly available photographs of flora and fauna to create a different form of “pigment.” These natural blocks are then animated using wind and gust speed, precipitation, and air pressure data captured by sensors in Las Vegas. Speaking to Jesus Diaz at Fast Company, Anadol likened the process to how Claude Monet was “inspired by the atmosphere and became this incredible impressionist painter.”
For the Sphere (or rather the Exosphere, since the piece is on the outside of the building), the artist had to deploy a totally new projection mapping technique and rebuild his AI models. He explained to Diaz that his studio applied his real-time generated AI artwork to something called an equidistant cylindrical projection — a type of model that is used to make maps from spheres like the Earth. With the map in hand, the projection can then be wrapped onto the sphere.
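The projection Anadol describes is the same one cartographers use for world maps: every pixel of the flat, equirectangular frame corresponds to a longitude/latitude pair, which in turn lands on a single point of the sphere’s surface. Below is a minimal, generic sketch of that mapping, included purely to illustrate the geometry rather than anything from the studio’s actual pipeline.

```python
# Generic equidistant cylindrical (equirectangular) mapping: pixel (u, v) of a
# flat frame -> longitude/latitude -> point on a unit sphere.
import numpy as np

def equirectangular_to_sphere(u: float, v: float, width: int, height: int) -> np.ndarray:
    """Map pixel (u, v) of a width x height equirectangular frame to a unit-sphere point."""
    lon = (u / width) * 2.0 * np.pi - np.pi     # -pi .. +pi across the frame
    lat = np.pi / 2.0 - (v / height) * np.pi    # +pi/2 (top) .. -pi/2 (bottom)
    return np.array([
        np.cos(lat) * np.cos(lon),  # x
        np.cos(lat) * np.sin(lon),  # y
        np.sin(lat),                # z
    ])

# Two pixels on the frame's horizontal midline land on the sphere's equator:
print(equirectangular_to_sphere(0, 2048, 8192, 4096))     # left edge of the frame
print(equirectangular_to_sphere(4096, 2048, 8192, 4096))  # center of the frame
```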
“To me, it’s questioning reality,” he added to Vankin. “[It’s] this incredible architectural form in public urban space and this incredible art form. We’re used to canvas and sculpture and paintings and video, but this time, the whole building is a canvas — and not one with corners. It’s challenging our perceptions. It’s a really powerful statement and experiment reinterpreting the limits of our understanding of what is a canvas.”
“Refik Anadol: Unsupervised” uses an AI model trained on 180,000 artworks from MoMA’s collection to produce a stream of digital images.
October 6, 2023
Step Inside: The Spectacle of “U2:UV Achtung Baby Live At Sphere”
From “U2:UV Achtung Baby Live At Sphere”
TL;DR
U2’s residency at Las Vegas’ Sphere opened September 29 and is a multisensory experience that seems to be living up to the critical hype of the new venue.
MSG Entertainment built its own camera system and a post workflow in order to accommodate its 16K by 16K screen. The company also designed a custom media recorder to capture all that data. And Sphere’s experience wouldn’t be complete without a tailored audio setup: a 164,000-speaker audio system that can isolate specific sounds, or even limit them to certain parts of the audience.
Artist Marco Brambilla’s “King Size” video is also playing during U2’s performance of “Even Better Than the Real Thing.” He used AI to make the highest resolution video collage of all time, dedicated to rockstar Elvis Presley.
In addition to “U2:UV Achtung Baby Live at Sphere,” the megadome features a short film by director Darren Aronofsky and its Exosphere is covered in a Refik Anadol installation.
U2 debuted its show, “U2:UV Achtung Baby Live at Sphere,” on September 29 and, writes Rolling Stone’s Andy Greene, “The Sphere somehow managed to live up to years of hype with its dazzling 16K resolution screen that transported 18,600 fans from the stars in the night sky to a surreal collage of Vegas images, the arid deserts of Nevada, and the information overload of Zoo TV.
“And the sound wasn’t the sludgy, sonic assault you typically get at an arena or stadium concert. It is clear, crisp, and pristine, making earplugs completely unnecessary. As advertised, this was a quantum leap forward for concerts.”
U2 guitarist the Edge told Wired’s Steven Levy that the sound quality is a step above because Sphere “was designed with audio in mind, whereas most of the venues we end up playing … were designed primarily for sports, where the sound is a very kind of low priority. It’s really paid off.”
Learn more about U2’s residency here, and watch the band’s performance of “The Fly” from the show here.
From U2’s video for “Atomic City,” courtesy of U2
Sphere is described by Deluxe SVP of Innovation Richard Welsh as “probably the biggest, most elaborate manifestation of all of the technologies that you might experience in a cinema-like environment now, translated to a huge, huge space.”
For more on the technology that enables Sphere, you can watch our video of Welsh in conversation with Eric Cantrell and Roman Sick.
Developers of the 366-foot-tall, 516-foot-wide dome are aiming to reinvent every aspect of the live event experience, and Sphere is the culmination of seven years of work, with a budget that reportedly stretched to more than $2.3 billion (making “it the most expensive entertainment venue in Las Vegas history, beating out the $1.9 billion Allegiant Stadium,” per The Impossible Build.)
Inside, that translates to a venue that can seat 17,600 people, and 10,000 of them will be in specially designed chairs with built-in haptics and variable amplitudes: each seat is essentially a low-frequency speaker. There’s also the option to shoot temperature- and direction-controlled (or scented!) air at fans.
In addition to the massive dome and custom seating, MSG Entertainment built its own camera system and a whole post-production workflow, which together comprise a system it calls Big Sky, in order to accommodate its 16K by 16K screen, believed to be the world’s highest resolution. Because this screen “covers almost the entire interior’s curved walls and roof,” The Impossible Build YouTube channel predicts it will be like stepping inside “a real life metaverse.”
The Big Sky single-lens camera boasts a 316-megapixel sensor capable of a 40x resolution increase over 4K cameras. It can capture content up to 120 frames per second at the 18K square format, and even higher frame rates at lower resolutions.
These special cameras also require custom lenses to do the job: one with a 150-degree field of view, which is true to the view of the sphere where the content will be projected, and another with a 165-degree field of view, designed for overshoot and stabilization and particularly useful when the camera is in rapid motion or mounted in a helicopter.
Additionally, MSG Entertainment designed a custom media recorder to capture all that data, including uncompressed RAW footage at 30 gigabytes per second, with each media magazine containing 32 terabytes and holding approximately 17 minutes of footage.
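Those recorder figures check out with some quick arithmetic: a 32-terabyte magazine filling at 30 gigabytes per second holds a little under 18 minutes of footage, in line with the “approximately 17 minutes” quoted above.

```python
# Sanity check of the Big Sky recorder figures quoted in the article.
data_rate_gb_per_s = 30     # uncompressed RAW capture rate
magazine_tb = 32            # capacity of one media magazine

seconds = magazine_tb * 1000 / data_rate_gb_per_s   # decimal TB -> GB
print(f"{seconds:.0f} s ≈ {seconds / 60:.1f} minutes per magazine")
# -> 1067 s ≈ 17.8 minutes
```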
Sphere’s immersive sphere-ience, err, experience wouldn’t be complete without a tailored audio setup. The 164,000-speaker audio system, which can isolate specific sounds or even limit them to certain parts of the audience, was designed by German start-up Holoplot. (That means certain audience sections could listen to a movie in different languages, or even hear different instruments.)
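The article doesn’t describe how Holoplot steers sound toward particular seats, but the classic mechanism behind that kind of spatial control is delay-and-sum beamforming: every driver plays the same signal with a slightly different delay, so the wavefronts reinforce only along a chosen direction. The sketch below is a simplified, generic illustration of that idea for a straight line array in the far field, not Holoplot’s actual system.

```python
# Toy delay-and-sum beamforming: compute the per-driver delays that steer a
# line array's beam toward a chosen angle. Purely illustrative values.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air
spacing = 0.1            # distance between adjacent drivers, meters
n_drivers = 16
steer_deg = 25.0         # direction the beam should point, degrees off axis

positions = np.arange(n_drivers) * spacing
delays_s = positions * np.sin(np.radians(steer_deg)) / SPEED_OF_SOUND

print(np.round(delays_s * 1000, 3))  # per-driver delays in milliseconds
```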
From U2’s “U2:UV Achtung Baby Live at Sphere” promo
Make Way for the King
“We’re pushing the envelope in the visual area as far as you can. All the artists we worked with have given us incredible material that we feel really connects with our music,” the Edge told Wired. “But in the end, the songs dictate what we put on the screen and what we do as a band in performance.”
For example, artist Marco Brambilla’s “King Size” video is playing at Sphere now through December, during U2’s performance of “Even Better Than the Real Thing.”
Brambilla used AI to make the highest resolution video collage of all time, dedicated to rockstar Elvis Presley. The 16K resolution, AI-generated immersive artwork for the venue’s opening celebrates the king of rock’n’roll with glorious, exaggerated excess.
From Marco Brambilla’s “King Size” video. Cr: Marco Brambilla Studio
To create “King Size,” Brambilla started out by feeding over 12,000 film samples of Presley’s performances, including the 33 movies he starred in, into Stable Diffusion. This allowed him to catalog hours of footage and select what he needed with ease. He also utilized a combination of Stable Diffusion and Midjourney to generate “fantastical versions of Elvis” for the short.
“I knew it was going to be something epic and iconic, and I knew it would be odd at the highest level,” Zane Lowe said on Apple Music. But Lowe added, “What I didn’t realize though, was that it was going to create such a profound feeling.”
Aronofsky and Anadol
What other content can be seen at Sphere in its debut year?
Director Darren Aronofsky was commissioned to shoot “Postcard From Earth,” the first piece of cinematic content for the Sphere, with the Big Sky camera wielded by Oscar-nominated cinematographer Matthew Libatique. It’s featured now as the marquee show for The Sphere Experience.
“At its best, cinema is an immersive medium that transports the audience out of their regular life, whether that’s into fantasy and escapism, another place and time, or another person’s subjective experience. The Sphere is an attempt to dial up that immersion,” Aronofsky tells Carolyn Giardina.
In another article for The Hollywood Reporter, Aronofsky explained, “I wanted to shoot macro shots [for ‘Postcard’] because to present them in 18K to audiences with that level of detail would be something no one’s ever seen before.” (One end result of this aim: jumping spiders. You were warned!)
All told, these intensive captures resulted in “a whopping half-petabyte of data,” per THR.
As Wired’s Steven Levy put it: “While U2 used the Sphere to create a genuine concert, ‘Postcard’ is more of a mind-stretching theme-park attraction.”
The Sphere’s interior isn’t the only way to experience groundbreaking content; viewable from parts of the Strip, the Exosphere currently displays an “AI data sculpture” created by Refik Anadol and dubbed “Machine Hallucinations: Sphere.”
From Refik Anadol’s “Machine Hallucinations: Sphere”
The Exosphere is covered with nearly 580,000 square feet of fully programmable LED paneling, creating the largest LED screen in the world. The paneling consists of approximately 1.2 million LED pucks, spaced eight inches apart. Each puck contains 48 individual LED diodes, with each diode capable of displaying 256 million different colors.
With the Sphere, Jesus Diaz writes for Fast Company, “the building is the canvas — a bland engineering marvel that transforms into something visually arresting once Anadol gets his hands on it.”
Artist Refik Anadol employs machine learning algorithms to create an immersive digital experience projected onto the Las Vegas Sphere.
October 6, 2023
Elvis Has Definitely Not Left the Building: Marco Brambilla’s AI-Generated “King Size”
From Marco Brambilla’s “King Size” video. Cr: Marco Brambilla Studio
TL;DR
Artist and filmmaker Marco Brambilla used AI to make the highest-resolution video collage of all time, dedicated to rock star Elvis Presley. The work debuts at the Sphere, a new venue for immersive entertainment in Las Vegas, as part of a new limited residency by U2.
With a storyline about the gradual growth and collapse of an icon, Brambilla employed what he calls “the language of excess” for “King Size,” inspired by Hollywood spectacle and the work of painters Hieronymus Bosch and Pieter Bruegel.
Brambilla calls AI “a blunt instrument” that helps with references and inspirations, but it doesn’t really create intention. “That’s still the artist’s department. For now.”
From Marco Brambilla’s “King Size” video. Cr: Marco Brambilla Studio
A 16K resolution AI-generated immersive artwork for the opening of Sphere in Las Vegas celebrates the “king of rock’n’roll” with suitably exaggerated excess.
The latest Elvis extravaganza is the work of Italian artist Marco Brambilla. He was commissioned to create the largest video collage ever to fit the giant Sphere screen and to display during U2’s inaugural residency at the $2.3 billion entertainment complex.
The four-minute, hyper-detailed, image-dense video depicts Presley in different incarnations from young soldier to swaggering movie star to bloated has-been — as well as Vegas itself, “which somewhat similarly evolves from a small desert oasis into the neon epicenter of debacle-spectacle,” writes Jori Finkel at The Art Newspaper.
“I’m using the language of excess,” Brambilla told Finkel. “I wanted it to be a spectacle in the tradition of Hollywood, Busby Berkeley and Irwin Allen. It’s really over the top.”
“The storyline is really the gradual growth and collapse of an icon and also how Las Vegas went from being a desert to a glamorous destination to a mega Disneyland,” Brambilla told Jo Lawson-Tancred at Artnet. “Those two hyperboles seemed very well connected to me.”
“I’ve always been inspired by Bruegel and Bosch and this idea of multiple storylines existing in the same frame, but with video.”
How He Made It
Given that he had only four months to make it, which is much less time than it has taken to create his previous video artworks, Brambilla turned to AI to help out.
“The AI allowed me to work much faster in finding the material I wanted. The process became a kind of stream-of-consciousness exercise between myself and the AI model,” he explains to Lea Zeitoun at designboom.
He started out by feeding over 12,000 film samples of Presley’s performances, including the 33 movies he starred in, into Stable Diffusion. This allowed Brambilla to catalog hours of footage and select what he needed with ease. For example, he could simply search his dataset for “crowd in a concert,” and the AI model would pull all of the related clips up immediately for his sampling.
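The article doesn’t spell out how that “search the dataset” step works under the hood; a common way to build this kind of text-to-footage lookup is with CLIP-style joint image/text embeddings, where frames and a text query are scored in the same space. The sketch below is a generic illustration of that approach, not Brambilla’s actual pipeline; the model name, frame paths, and query are placeholder assumptions.

```python
# Generic text-to-frame retrieval with CLIP: score thumbnail frames against a
# text query and rank them. Frame paths and the query are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

frame_paths = ["clip_0001.jpg", "clip_0002.jpg", "clip_0003.jpg"]  # one thumbnail per clip
images = [Image.open(p) for p in frame_paths]

inputs = processor(text=["crowd in a concert"], images=images,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Higher logits_per_image means the frame matches the text query more closely.
scores = outputs.logits_per_image.squeeze(-1)
for path, score in sorted(zip(frame_paths, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:6.2f}  {path}")
```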
From Marco Brambilla’s “King Size” video. Cr: Marco Brambilla Studio
He then used both Stable Diffusion and Midjourney, another AI model, along with his own text prompts to generate the fantastical versions of Elvis: an Elvis rising from a casino table out of piles of coins; a surrealist Salvador Dali-style guitar; a statue of the singer’s head modeled off the stainless steel sculptures at Rockefeller Center. He also “revamped some looks from Elvis (2022), the biopic by Baz Luhrmann, who shares something of the artist’s aesthetics of excess,” per Finkel.
One prompt was “What would Elvis look like if he were sculpted by the artist who made the Statue of Liberty?”
Another was “Elvis Presley in attire inspired by the extravagance of ancient Egypt and fabled lost civilizations in a blissful state. Encircling him, a brigade of Las Vegas sorceresses, twisted and warped mid-chant, reflect the influence of Damien Hirst and Andrei Riabovitchev, creating an atmosphere of otherworldly realism, mirroring the decadence and illusion of consumption.”
The artist also used CGI to edit the samples and inject more details into the video collage, collaborating with a post-production studio in Paris.
After stitching together all of these images, Brambilla ran tests to make sure the video did not feel dizzying. He switched from a vortex-like format, which he found to be too intense, to a scrolling model much like how we view content on phones. He also slowed down a number of the clips so they would be easier to digest at this scale.
From Marco Brambilla’s “King Size” video. Cr: Marco Brambilla Studio
“It’s like looking through a window. If the work doesn’t cut too quickly or move too fast, it’s actually quite soothing,” he says. He described the Sphere’s screen, consisting of thousands of LEDs, as more membrane than wall. “There’s no feel of architecture when you’re inside. This is the first time I’ve seen something that is impossible to replicate at home or in a conventional movie theatre.”
By the end, Elvis is represented by a monument that towers over the video’s frenetic activity. “It’s almost like we’re in Elvis’s head,” he tells Artnet. “It’s his own memory of Vegas, of how it started. It’s a very subjective point of view so it’s all the neurons firing and everything coming together.”
AI: Tool or Collaborator?
What are Brambilla’s thoughts on using AI as a tool or a collaborator?
“I see it more as a tool at this point,” he tells designboom, “[but] I assume it may become more of a two-way dialogue at some point. Technically, this project is more of a hybrid – it uses the collage technique combined with AI and computer graphics to create a more seamless ‘canvas.’ The process of making it was also informed by the ‘collaboration’ with the AI tool, which often led to unexpected associations that found their way into the work.”
He reports that only about 20% of the output images actually looked like Elvis. “But some really interesting accidents came out of it,” he told TIME. “It was a stream-of-consciousness experiment: You’re working with a tool prompting you to make associations you wouldn’t have made. AI can exaggerate with no end; there’s no limit to the density or production value.”
Brambilla continued this line of thought with Finkel: “What I found is that it was very good at sketching, making conceptual sketches and hybrid images. It often comes up with things that are very magical.”
He found that AI was a huge help in speeding up the process of ideation. “What it doesn’t do well,” he told Artnet News, “is make an output that’s really specific. It fights back, so you never quite get the exact result but you get options. What I chose to do is take these imaginations and use them as a sketch for a CGI artist to modify.”
He added, “AI is a blunt instrument that helps you get references and inspirations, but it doesn’t really create intention. That’s still our department. For now.”
From Marco Brambilla’s “King Size” video. Cr: Marco Brambilla Studio
“King Size” will play during U2 concerts at the Sphere (September 29-December 16), appropriately enough during the group’s 1991 hit, “Even Better Than the Real Thing”; Brambilla then plans to show the work at his Berlin gallery, Michael Fuchs Galerie, using an Elvis-inspired soundtrack.
Led by industry pros, sessions also include in-depth looks at how to build your own mobile or home studio from the ground up.
The winner of the “Empire State of Mind” photo competition, showcasing stories about New York, will be announced, along with a presentation of a case study from the participants.
Through the lens of her latest collaboration with Fujifilm, director of photography Sarah Whelden will lead a pair of organic discussions about productivity, performance and morale.
NAB Show New York‘s newest show floor attraction connects photography and video and offers opportunities for professionals working in both disciplines to connect and learn from each other.
The Photo+Video Lab is designed for “all who leverage a hybrid mix of equipment to capture and produce content” and is also perfect for those who want to learn videography or photography.
This lab will offer workshops, Q&A sessions, photowalks, demos of Fujifilm and Adorama products, meetups and more:
Join Frederick Van Johnson, founder and editor-in-chief of the This Week in Photo (TWiP) podcast network, as he explores a contemporary approach to mobile content creation in this insightful session. Discover how he effortlessly crafts high-quality videos, engaging courses, and captivating podcasts using a laptop, tablet, or smartphone. It’s time to break free from the conventional home office setup!
In this session, you’ll gain insights into the essential hardware to support a mobile lifestyle, the best software options and how to utilize them, a comprehensive production workflow from inception to completion, strategies for publishing and sharing your content effectively, and much more. You can watch his video, “Making Content as a Mobile Creator,” here.
Join director of photography Sarah Whelden in an organic discussion about productivity, performance and morale. Through the lens of her latest collaboration with Fujifilm, and drawing on her decade of experience DPing narrative features, commercials and documentaries, Sarah discusses consensus, setting a vibe and navigating communication with everyone on set at various career stages. You can watch her introductory video here.
Step into the future of digital asset management with AI and discover how AI can elevate your video and photo game. In this immersive session, featuring XeroSpace XR/Web3 Producer Elena Piech, attendees will experience and explore how AI tools can be integrated into photo and video workflows. Piech will explain how artificial intelligence tools can streamline content organization, efficiently automate editing tasks and even enhance your creative focus.
You’ll explore the transformative capabilities of AI tools in streamlining and enhancing your photography and video workflows. From intelligently organizing and tagging vast image libraries to automating time-consuming editing tasks, AI empowers you to focus on what you love most — capturing breathtaking moments.
Do you want to set up a home studio but are overwhelmed by the complexity of it all? In this concise presentation, Frederick Van Johnson, founder and editor-in-chief of the This Week in Photo (TWiP) podcast network, will break it down for you and provide a simple, step-by-step approach to get your home studio up and running in no time.
Gain a comprehensive overview that covers all aspects of setting up a home studio, from hardware and software to techniques, scheduling, and more. Whether you’re a budding content creator, musician, podcaster, or someone interested in recording high-quality audio and video, this presentation will give you a practical blueprint to set up your studio quickly and efficiently.
“Empire State of Mind presented by Bamboo” is seeking the next great photographer and their creative collaborators to shoot an exclusive creator merch drop. The twist is, the creator will be taking to the streets of NYC for an epic fan-meets-creator photoshoot. On Bamboo, creators will build a collaborative feed with their team to showcase their artistic style.
They can include their chosen collaborators to present their creative, photographic approach to fashion on the streets of New York. The winner will get to lead the merch shoot for the renowned creator, Avori Henderson, and be tagged in the final post by Avori. The exciting journey will be documented and shared on the Bamboo platform and social media. The winner of the contest will receive a cash prize of $4,000 to help fund their creative journey. Learn more about the contest here.
The “Empire State of Mind: Photo Contest Finale” session at NAB Show New York will demonstrate new storytelling techniques, allowing attendees to discover innovative ways to blend visual mediums. The session will also cover social media tools, providing concrete practices for leveraging photos and videos for social media channels, as well as strategies for content monetization and winning methods for crowd-sourcing photo/video projects.
Reprising her session from Wednesday, October 25, director of photography Sarah Whelden leads an organic discussion about productivity, performance and morale. Through the lens of her latest collaboration with Fujifilm, and drawing on her decade of experience DPing narrative features, commercials and documentaries, Sarah discusses consensus, setting a vibe and navigating communication with everyone on set at various career stages.
This new and exclusive-to-NAB Show New York attraction is for all who leverage a hybrid mix of equipment to capture and produce content. A full-on immersion into the photo and video world, this integrated workflow experience will allow you to sample the latest tech and gear side by side from iconic brands and innovative newcomers.
From “Reflections,” a short film by Fujifilm photowalk guide Yolanda Hoskey
Interested in exploring Hudson Yards with a camera, lighting, a model, and a guide? Sign up for the Fujifilm Photowalks at NAB Show New York.
October 2, 2023
How Yolanda Hoskey Visually Catalogs the Black Experience
TL;DR
Yolanda Hoskey is a New York City-based photographer who specializes in portraiture and street photography. Prior to working as a still photographer, she worked in theater and film.
She is dedicated to cataloging the diversity of the Black experience. Fujifilm’s color science and Capture One editing software both help her to capture her vision.
Hoskey will be one of the guides for the Fujifilm Photowalks at NAB Show New York. Register to attend here (use the code AMP05 for free admission) and then sign up for your preferred time: October 25 at 10:30 a.m., 1 p.m., or 3:30 p.m. and October 26 at 10:30 a.m., 1 p.m., and 3:30 p.m.
“Why I found the camera, it’s a little bit layered,” says New York City portrait photographer Yolanda Hoskey.
“I’ve been in the creative industry for the last 10 years,” Hoskey says. But the majority of her career has not been working in still photography.
Prior to working as a full-time photographer, Hoskey worked in both theater and film for seven years: first as a stage manager, theater set designer, costume designer and director, then transitioning to film, where she worked as a production designer and creative producer.
“Because of my experience in the theater industry, the film industry, I’m very much drawn to storytelling,” Hoskey says.
Additionally, the seeds for her photography career were actually sown in her first year of college, when her mother passed away.
“I was thinking about just trying to remember her, her legacy, our time together, and I realized I only had the same six pictures, and one video of her because we didn’t … document our existence,” Hoskey recalls.
That experience shapes her work today. “I want to be able to give that back to the community that I identify with and create, kind of, this catalog and this collection of memories of real people, to say that they were here, they mattered and that they were loved, as an ode to my mom.”
Hoskey understands that not everyone is drawn to her community of origin, which she describes as “the projects in East New York,” so she says, “part of the work that I do as a photographer, is trying to kind of debunk and de-stigmatize and create new representations of the Black experience as nuanced, as non-monolithic, as multidimensional. And so I’m trying to create this catalog of this vast representation of the Black identities that are different from mine.”
YOUTUBE UNIVERSITY
Because Hoskey did not go to film school, she says, “I definitely am a graduate of YouTube University. I am always one pulling on my community” to level up her game.
“I just YouTube questions and [watch] videos that answer my questions, and I don’t really go to one person’s specific channel,” she explains.
One of her favorite creators is Adrian Per, also known on social media as @OMGAdrian. Hoskey says, “He is so helpful. His videos are so engaging. And he’s talking about video making, adding coloring, how do you [do] sound and he’s like telling you exactly how he does the work that he does. … I[’m] always saving them. Oh, I’m gonna try this later.”
In addition to her peers, social media and YouTube, Hoskey says she’s gained knowledge from “an online learning platform called Domestika,” which she says is relatively affordable and offers a wide variety of classes.
“Whatever your industry is, there’s a course for that. And there’s someone who is doing the style of photography that you’re doing. And they’re literally giving you, like, this is what I do, this is why I do it. These are the tools I use, and then they encourage you to go and practice.”
HOSKEY’S KIT
Hoskey may have started shooting on her iPhone, but she knew that wasn’t her end game.
“I feel my photography is very personal. And I applied that same way of thinking to when I bought my official camera,” the Fujifilm X-T3, she explains. “A lot of my early work is on the X-T3, X-T4.”
Hoskey has stayed loyal to Fujifilm’s X line. “About last year, I switched to the Fujifilm X-H2, and that has been my go-to camera for editorial work, for in-studio work.” She chooses this camera, she says, because she feels like “that’s the clean, the polished, the sharp, the vibrant colors… That’s more of my polished, professional work.”
For a different vibe, she says, “I’ll use my X-T5, if I just want to go out and take street photography photos or just, you know, just capture everyday life. …those photos feel like the most film out of the collection of cameras that I have…on the X series. They all feel like film photos, those nostalgic film photos.”
“And then recently,” she says, “I’ve actually tried full frame cameras, and I’m kind of obsessed. You might see a lot of work from me using full frame cameras.”
Hoskey also shares her “trusty three lenses that I will never leave the house without” because she says they offer “the most range with the type of portrait photography I do.”
“So the first is my 56 millimeter. It’s a great portrait lens. The level of detail it gives me, and it’s just super sharp. I love that lens.” So much so that she has two versions of the same lens! She explains, “I got the newer version, and it’s even better than the original one that I had.”
Also, “instead of a 35 millimeter I’ve been using a 33 millimeter. It’s a slight difference, but that is my favorite lens of all time that I have. That is the lens that I always shoot with. It is always in my bag.”
“And then for a little razzle dazzle,” she says, “I [like] to add a[n] 8 millimeter, because I[’m] getting super wide shots, but I love using wide lenses for close shots, just because of how it elongates the body creatively.”
Lighting accessories are also important to Hoskey.
“Because I started as a natural light photographer, I always have a reflector in my bag,” Hoskey says. She is especially enamored with the silver side of the reflector to get the desired effect.
“I think I use continuous lighting because I came from film and theater,” she posits.
EDITING
Hoskey uses Capture One and Photoshop to process her images. She says she loves to select her photos, but “it becomes grueling when I get to the editing process. Because I am not a person who edits all my photos the same way.”
“My editing is purely based off the vibe” of the image, she says. And that means there’s “no short way to do it.”
She says she primarily relies on Capture One (as opposed to Lightroom) because of one feature in particular: it enables users to easily “isolate editing the skin tone from editing the entire picture.”
She also compliments Fujifilm’s color science and says “Capture One just amplifies it.”
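For readers curious what isolating the skin tone actually involves, one rough way to see the idea is to build a hue/saturation mask and confine an adjustment to it, as in the sketch below. This is a simplified, generic illustration rather than Capture One’s actual algorithm; the hue range and file name are arbitrary examples.

```python
# Generic "skin tone isolation" sketch: mask reddish-orange, moderately
# saturated pixels and lift exposure only inside the mask. Not Capture One's
# algorithm; thresholds and file name are arbitrary examples.
import numpy as np
from PIL import Image

img = Image.open("portrait.jpg").convert("RGB")           # placeholder file
hsv = np.asarray(img.convert("HSV"), dtype=np.float64)    # PIL hue/sat are on a 0-255 scale
rgb = np.asarray(img, dtype=np.float64)

h, s = hsv[..., 0], hsv[..., 1]
mask = (h < 35) & (s > 30) & (s < 180)   # rough skin-tone region in PIL's HSV scale

adjusted = rgb.copy()
adjusted[mask] = np.clip(rgb[mask] * 1.08, 0, 255)   # gentle lift, masked area only

Image.fromarray(adjusted.astype("uint8")).save("portrait_skin_lifted.png")
```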
BEST PRACTICES AND ADVICE
If you’re not ready to invest in certain tools, Hoskey has a few solutions that might come in handy.
“If you don’t have a reflector or can’t afford to buy a reflector, you can use aluminum foil, and it’ll have the same effect,” Hoskey says. “Or if you don’t have a bounce… I use tissue paper, or you can use white poster board; that’s 50 cents.”
She offers a third hack: “I tried one wildcard in place of the gels for the light. I used these colored binder dividers. They were plastic dividers, and if you put them in front of the light, it works the same as using the gel.”
A portraiture-specific tip from Hoskey is that moisturized skin is crucial. For a quick fix, she says “olive oil [or] hairspray [can be used in a pinch] to make the skin look lustrous.”
FUJIFILM PHOTOWALKS
If you’d like to learn more from Hoskey in person, NAB Show New York offers the perfect opportunity: She will serve as one of the guides leading the Fujifilm Photowalks around Hudson Yards, October 25-26.
Hoskey says attendees should expect a “good time, a lot of fun” and to jump into portraiture because there will be models available to pose during the walk.
“Definitely utilize the resources of the people who are there to support this photowalk, which is myself and other photographers and the [Fujifilm] techs,” Hoskey recommends, “to overall make yourself have the best experience.”
The Fujifilm technical experts will be on site, Hoskey says, to “tell you anything you need to know about the camera that you’re using,” which will either be the X-H2 or X-H2S or the brand new GFX100 II. “They’ll be there to let you know what the settings are like, if you’re confused about what burst mode is, or what film simulations to use.”
“And if you have questions creatively, the photographer who is leading the photowalk will help you,” Hoskey says.
She explains, “I want to answer any questions people may have on what disconnects? What learning curves are they hitting? What walls?”
Hoskey would also like to emphasize composition because “when I’ve done photo ops in the past, what I’ve noticed is people are scared to move or engage or use angle high, get low, and kind of really compose their shots.”
In addition to composition, she expects attendees “to play with lighting. I started with natural light, and I want to show people the different ways in which you can shape natural light.”
“The goal is for people to walk away with images that they’re proud of, and a new skill they can apply to their photography of the future,” she says.
Interested in exploring Hudson Yards with a camera, lighting, a model, and a guide? Sign up for the Fujifilm Photowalks at NAB Show New York.
October 2, 2023
“The Creator”: Making a Sci-Fi Epic Like an Independent Film
TL;DR
“The Creator” was made for $80 million — and looks like $300 million — and Hollywood is astonished.
Actually shooting on location proved far more cost-efficient than studio builds and volume work, not least because of writer-director Gareth Edwards’ savvy with creating VFX of realism and scale.
The film is IMAX and Super X screen-certified, yet was shot almost entirely on a $4,000 prosumer camera, giving the lie to the idea that blockbusters need blockbusting budgets and the highest-end gear.
In modern film economics, the entire $80-90 million budget of The Creator is roughly what would usually be allocated just to marketing a sci-fi blockbuster the scale of Avatar or Star Wars — and Hollywood is marveling at how director Gareth Edwards got away with it.
The Creator looks like it was made for double or quadruple the price and would have done had conventional methods been applied. Instead, Edwards reverse-engineered the process by filming with a relatively small kit and crew in multiple locations, locking off the edit, and only then calling in VFX.
“They said there’s no way we can really do this because… you can’t find these locations and build sets in a studio against green screen, and it’ll cost a fortune,” the British writer-director explained to AV Club, in remarks highlighted by John Owens at Frame Voyager.
“Instead, we went to real parts of the world that look closest to what we wanted, then afterwards when the film’s fully edited, we get production designer James Clyne and other concept and VFX artists to paint over those frames and put the sci-fi on top.”
Edwards spent a decade at the start of his career “doing computer graphics very cheaply in my bedroom,” he explained in the same video breakdown. “I tried to learn a lot of tricks as to how to make things look bigger than they are with very little effort. Like one of the things that’s free [to build] is scale. You learn that something’s only big when it’s relative to something else. The key is always having this something else in the frame.”
With Todd Gilchrist at Variety, he added, “Essentially, if you make sure everything in the immediate 10 or 20 meters [of the frame] is for real and that the stuff that you’re going to invent is in the distance. The way parallax works, the brain can’t tell motion beyond about 20 meters. It’s like putting digital matte paintings behind your foreground. That’s a really good bang for your buck.”
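A quick back-of-the-envelope calculation shows why that 20-meter rule of thumb holds up: for a given camera move, the apparent angular shift of an object falls off roughly in proportion to one over its distance, so distant elements barely move in frame and can be swapped for what is effectively a 2D painting. The camera move and distances below are arbitrary example values, not figures from the production.

```python
# Parallax falloff with distance for a fixed lateral camera move.
import math

camera_move_m = 0.5   # lateral camera translation during a shot (example value)
for distance_m in [2, 5, 10, 20, 50, 200]:
    shift_deg = math.degrees(math.atan2(camera_move_m, distance_m))
    print(f"object at {distance_m:>3} m: apparent shift ≈ {shift_deg:5.2f}°")
# The shift at 200 m is roughly 1/100 of the shift at 2 m, which is why only
# the nearest 10-20 meters of the frame needs to be physically real.
```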
“The Creator,” directed by Gareth Edwards. Cr: 20th Century Studios
The shooting process could really only be called guerrilla filmmaking, with Edwards taking cues from his original low-budget feature Monsters.
One of the most indie methods he opted for was a rejection of how VFX is normally approached in the industry today, which is to use green screen and volume stages and to push iterations of VFX right up until release.
What was unorthodox was that the locations did much of the visual heavy lifting, with environments, lighting, architecture and geology for the most part looking just as they were filmed. Only after the film was edited and locked was VFX allowed to go to work.
“There was a little bit of volume work at Pinewood but very low,” Edwards admitted to Owens. “And if you do the maths, if you keep the crew small enough, the theory was that with the cost of building a set, which is typically like $200 grand, you can fly everyone anywhere in the world for that kind of money.”
Numerous VFX houses were contracted to work on The Creator, including ILM, Folks VFX, Outpost VFX, VFX Los Angeles, and Misc Studios, among others.
“We’re not saying that there wasn’t work done with the backgrounds, but VFX [didn’t] have to fully digitally recreate every scene. This allowed more effort to be put into making the robots of this world feel even more real [because] locations, props and characters are already in the shot. Often there was no additional work or relatively minimal labor needed in finalizing a character or environment.”
According to Edwards, this approach saved them tens of millions of dollars and stands in contrast to the way CGI and VFX heavy productions are made.
To illustrate the indie nature of the shoot, Edwards says a scene near the beginning, in which Joshua (John David Washington) runs down the beach under crossfire, was shot in a location still open to the public.
“We didn’t close that beach. If you look at our feet in the background, you can see bars and tourists. One person came over, and was like ‘What are you doing?’ because it was just the four of us with a camera running around and it didn’t look like this big massive movie.”
“The Creator,” directed by Gareth Edwards. Cr: 20th Century Studios
Concept Cinematography
The Creator’s original cinematographer was Greig Fraser (The Batman), before he was lured away by the mega-budget Dune (for which he wound up winning the Oscar).
Before launching the production to shoot on location across the globe, Edwards filmed a proof-of-concept reel, in which he captured actors and extras in real environments without preparing them in advance to accommodate digital replacement technology.
“We wanted to do it not in a traditional system, and it was really important that we stuck with that, because sometimes in filmmaking you make small compromises, and before you know it, your small compromises become big compromises,” Fraser said in Variety. “So we worked for a number of years just doing testing and talking and looking at the way we could actually achieve this.”
He elaborated on working the concept through with Edwards in The Playlist: “We try to keep the camera-facing crew as small as possible, allowing for those resources to be put into areas that need them later on, like post-production or VFX.
“A lesson I’m hoping everybody who’s reading this will learn is this: it’s possible to completely turn the filmmaking process on its head, and when we do, there are massive cost savings to be done, but also quality improvements.
“Films work because every person knows the parameters of their job, and that’s why you don’t have people stumbling over each other. It’s an established relationship; it’s efficient from a personnel standpoint, but unfortunately, with those efficiencies also come inefficiencies that we’ve hopefully turned on their heads with this movie.”
Unusual Camera Choice
Equally unusual was their choice of a prosumer mirrorless camera, the Sony FX3.
“It’s a camera you can buy at Best Buy,” Edwards says. “It looks like film. It’s full frame, full IMAX resolution [for certification on IMAX screens], and has filmic photographic quality to it.”
Gray Kotzé at IndepthCine went deeper on the camera choice, explaining that the difference between the $3,900 Sony FX3 and a $75K Alexa Mini is remarkably small.
“The Creator,” directed by Gareth Edwards. Cr: 20th Century Studios
The FX3 can record in 4K resolution, while the Mini LF can do 4.5K. In terms of dynamic range, Sony reports 15+ stops, while ARRI claims 14+ stops. When it comes to bit depth, the FX3 shoots 10-bit 4:2:2 internally in S-Log, whereas the ARRI can shoot 12-bit 4444 XQ in Log-C.
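To unpack the bit-depth part of that comparison: each extra bit doubles the number of tonal levels per channel, so the jump from 10-bit to 12-bit is a factor of four in levels (and a factor of 64 in total RGB combinations). A minimal arithmetic check, using only the figures quoted above:

```python
# Tonal levels implied by the quoted bit depths.
for bits in (10, 12):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels:,} levels per channel, {levels ** 3:,} possible RGB values")
# 10-bit: 1,024 levels  -> ~1.07 billion RGB values (FX3 internal S-Log)
# 12-bit: 4,096 levels  -> ~68.7 billion RGB values (ARRI 4444 XQ in Log-C)
```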
“While the ARRI does outperform the Sony visually, especially in the color department, the point remains that the gap between them, comparing a prosumer and a professional camera, is pretty slim and seems to be closing more and more every year.”
Does this spell the end of the Alexa forever and mean that all future Hollywood productions will use the FX3? Well, no, probably not.
Kotzé thinks the workflow and philosophy of using high end camera gear is too ingrained.
“The entire industry is set up around working with high end production cameras and I don’t think that this will change any time soon,” he says.
“Studios know what they will get when shooting with an Alexa, and producers are used to budgeting for gear in terms of an Alexa rental fee,” he says.
However, what we may see is features from prosumer cameras — such as their high ISO base and smaller form factor — filtering into higher-end cameras, and prosumer gear increasingly being adopted across lower-budget projects.
Questions Over One VFX Shortcut
Shortly after the trailer was released, allegations emerged that the filmmakers had used footage of the real-life 2020 Beirut explosion as the basis for a shot in which a nuclear warhead detonates over LA.
It’s not unusual to use original source material, stock footage for example, in the VFX pipeline, but it’s not clear whether the use of this footage, from a disaster that killed hundreds of people, was an oversight.
“While The Creator’s team favored practicality and realism over excessive effects, this is one instance where they went a little too far in pursuit of realism, using footage of a real-life tragedy, especially a recent one, for a purely fictional movie. It seems a little bit out of touch, to say the least, and it proves that we may need more discussion when it comes to this side of digital effects in the future.”
Edwards doesn’t appear to have addressed this anywhere, but Owens does, condemning its use.
“The Creator,” directed by Gareth Edwards. Cr: 20th Century Studios
Told in real time, Hijack is the Apple TV+ thriller starring Idris Elba that follows a hijacked plane as it makes its way to London over a seven-hour flight while authorities on the ground scramble for answers.
With so much of the action taking place in midair the production made extensive use of virtual production stages and techniques. The show could have been shot on blue/green screen, though this would have necessitated far more VFX and would not have given the actors the experience of “seeing” the film’s environment during their performance.
In addition, director and co-creator Jim Field Smith was keen to achieve as much in-camera as possible, as production VFX supervisor Steve Begg explained to Vincent Frei at Art of VFX.
“He hates, as I do, the giveaway camera positions and moves that signpost the unreality of a lot of CG shots, no matter how beautifully they are lit and rendered. For example, shots like flying up to a jet and passing through the window into the cockpit. We tried as much as possible to make the shots look feasible in the real world. We never have shots just outside the aircraft looking in, for example.”
Sequences featuring Eurofighter jets never have the cameras outside their canopies when we see the fighter pilots. All are shot with locked-off cameras on wide lenses within their cockpit space. All other shots of those fighter jets are on long lenses for POVs or wide in non-subjective shots.
Idris Elba in “Hijack.” Cr: Aidan Monaghan/Apple TV+.
Idris Elba in “Hijack.” Cr: Aidan Monaghan/Apple TV+.
Jude Cudjoe and Christine Adams in “Hijack.” Cr: Aidan Monaghan/Apple TV+.
Max Beesley in “Hijack.” Cr: Aidan Monaghan/Apple TV+.
Idris Elba in “Hijack.” Cr: Aidan Monaghan/Apple TV+.
Hattie Morahan in “Hijack.” Cr: Aidan Monaghan/Apple TV+.
Jeremy Ang Jones in “Hijack.” Cr: Aidan Monaghan/Apple TV+.
Kate Phillips in “Hijack.” Cr: Aidan Monaghan/Apple TV+.
Jeremy Ang Jones in “Hijack.” Cr: Aidan Monaghan/Apple TV+.
Kaisa Hammarlund in “Hijack.” Cr: Aidan Monaghan/Apple TV+.
Director of photography Ed Moore elaborated on the need for authenticity in a case study posted to the Lux Machina website. “Almost everyone has been on a plane, so if something doesn’t feel real to the viewer, they’ll immediately be taken out of the story,” he said. “There are visual cues we all recognise when we’re on a flight, like the sun’s beams coming in and hitting your TV screen, for example. Little things like that make the plane’s world feel real. It was important for me to immerse the audience in it as much as possible, so when the hijack occurs, they feel like they are in this pressure cooker with the passengers.”
The series was shot on four Volume stages in the UK, all operated by Lux Machina and featuring a combination of LED configurations. One stage was for the cockpit itself, situated on a fully automated gimbal that faced an LED wall. Another stage was used to film the air traffic control room and featured a large LED screen that played back the flight map in real time, mimicking a real air traffic facility.
The set piece was a full-scale 230-seat Airbus A330. LED screens were installed on tracks on either side of the plane, providing moving sky content and giving the actors the illusion that they were in flight.
To create this illusion, Lux’s Virtual Art Department (VAD) created assets containing an array of digital clouds hovering above land, water or desert.
“We were able to create and control the content that was put onto these LED screens, and put them on either side of the plane to really create an immersive space for the actors to be in,” Lux Machina producer Kyle Olson explained in a promo video.
Previs on the two main VFX sequences was completed by NVIZ. As this was not an overt VFX project (though still containing 900 shots), the main creative work, comprising the jet shots and crash and a handful of matte shots, was assigned to one major vendor, Cinesite UK.
The imagery on the LED backgrounds consisted mainly of cloudscapes and landscapes generated in Unreal Engine and rendered as 12K Notch LC plates for playback.
One episode featured the plane flying from the Thames estuary in London, approaching Northolt and then landing at Denham aerodrome in a spectacular crash. All the cockpit views of London and the estuary were created using a stitched six-camera array shot from a helicopter that was sped up two times on playback.
“The plane crash was originally going to be a lot wilder and more audacious than the one we ended up with, with the A330 crash landing onto a motorway through loads of recently abandoned cars,” Olson reveals in the video. “Then there was a bit of a reality check figuring out this will not be believable and countering the reality factor we’ve built up in the rest of the show, ultimately switching the location to a crash landing at Northolt [on the outskirts of London]. Northolt is too short for an A330 landing, we were told, adding to the jeopardy.”
Idris Elba in “Hijack.” Cr: Aidan Monaghan/Apple TV+.
Christine Adams in “Hijack.” Cr: Aidan Monaghan/Apple TV+.
Mohamed Elsandel in “Hijack.” Cr: Aidan Monaghan/Apple TV+.
Nasser Memarzia in “Hijack.” Cr: Aidan Monaghan/Apple TV+.
Eve Myles in “Hijack.” Cr: Aidan Monaghan/Apple TV+.
Neil Maskell in “Hijack.” Cr: Aidan Monaghan/Apple TV+.
Archie Panjabi in “Hijack,” now streaming on Apple TV+.
Eve Myles in “Hijack.” Cr: Aidan Monaghan/Apple TV+.
Nikkita Chadha in “Hijack.” Cr: Aidan Monaghan/Apple TV+.
During pre-production, Moore had a four-screen flight simulator in his office, complete with cockpit controls. He played it for seven hours, tracking the same route the show’s plane was journeying to get a sense of the lighting.
“It gave him a lot of ideas for the type of imagery that he was interested in, and that imagery was provided to us as a mood board, so to speak,” Lux CTO Kris Murray explained. “Our team could recreate portions of that, or take inspiration from them, to create customized versions of images that Ed could control.”
The system’s playback technology was at the core of the virtual shoot, allowing LED screens to display footage of cloudscapes, air traffic maps, and airport information monitors.
“We took the 3D workflow we used on large-scale productions like House of the Dragon and mashed them with the type of work we’d previously done using 2D plates,” said Murray. “That meant building a pipeline that allowed us to export content, in the same format as Unreal’s nDisplay, that could be played on a pre-rendered playback system.”
The VFX department shot master plates of the London skyline from 5:00 a.m. to 9:00 p.m., which provided a range of lighting options from sun-up to sundown, depending on what time of day the script called for. With that content on the LED wall, no post-production work was needed.
“This set piece could easily have been the most expensive location in the entire production — because prime real estate in London with views of Big Ben and the River Thames would cost you an arm and a leg to rent out,” said Spencer Chase, VP operator and technical director for Lux Machina.
Begg said that the scenes featuring the Eurofighter cockpit were the most challenging.
Having elected to shoot them in the 270-degree volume on a motion base, they assumed they’d “get a nice ambient light wrap-around the cockpit and pilot” — which they did to an extent, but the canopy reflected more than 300 degrees, so you could just see over the edges of the volume.
“Being a TV show it was shot in a mad hurry (i.e., no testing time) and although everyone was initially quite happy with the results, after closer scrutiny we saw all sorts of issues that needed fixing. Lots!” Begg said.
“I’d anticipated we’d have problems with the reflections in the visors so we had them high-res scanned in order to get a good track and replace everything in them with CG versions of the cockpit and the pilots’ arms and sky. The moment the reflections were sorted the shots really started to come together, with added high-speed vapor on top of the Volume sky along with a high-frequency shake. I stopped them doing [the shake] in-camera as I had a feeling we’d be fixing these shots, big time. If we had, they would have been a bigger nightmare to track.”
Pablo Larraín’s new film “El Conde” satirizes Chilean dictator General Augusto Pinochet as an immortal vampire, as a way for the country to come to terms with its past.
A natural artistic choice was to shoot in black-and-white, as an ode to the early days of on-screen vampires, notably Carl Dreyer’s Vampyr.
ARRI made an LF monochromatic camera specifically for the project, which arrived ten days before shooting began in Chile.
Satire was probably the only way to face up to a dictator like Augusto Pinochet.
The Chilean general who took power in a U.S.-backed coup in 1973 died in 2006, still with the blood of thousands on his hands. In the new film “El Conde,” Chilean director Pablo Larraín turns the story into a stomach-turning tragicomic melodrama-horror movie.
“The chain of thought involved the fact that Pinochet died in complete freedom — and with the most vile and absurd impunity,” Larraín told The Hollywood Reporter’s Patrick Brzeski. “And that impunity made him eternal in a way — we still feel broken by his figure, because he’s not really dead in our culture.”
Veteran Chilean actor Jaime Vadell stars as Pinochet, who is reimagined here as a 250-year-old vampire who faked his own death and absconded to a dilapidated estate in the Patagonian countryside.
Argentina suffered a similar fate when a military junta took power, but civil society was able to bring justice to bear on some of its members in a story filmed by director Santiago Mitre as “Argentina, 1985” last year.
“We never had that in Chile, so his figure remained very vivid and alive,” Larraín says. “So, that idea took us to the figure of the vampire, and that satire was the only way to approach him.”
Larraín describes his film’s Pinochet as “an absurd superhero of evil” and says he knew early on that he wanted the film to be shot in black-and-white.
A precedent for using dark humor and black and white was set for Larraín by Stanley Kubrick’s 1964 classic “Dr. Strangelove,” as he explained to THR: “One of the smartest things Kubrick does in that film is how the satire and farce can help you face those characters without creating empathy.
“When you have a protagonist who is played by such a sensitive, interesting human being like Jaime, the big danger is that you could end up feeling empathy for him. It would be completely immoral and dangerous to do something like that. So, the satire, absurdism and filming in black and white allowed us to have the right distance from these people.”
He expounds on the decision to shoot black and white in conversation with Bilge Ebiri at Vulture: “Black and white is not only beautiful and poetic and artistic, but also creates a parallel reality. It’s a fable that you could observe from afar, and that allows you to be dark, be funny, talk about this difficult and painful subject in a way where if you are able to smile a little bit, maybe there’s a strange and awkward form of healing.”
Cinematography
In conceiving the look of the film, Larraín turned to cinematographer Ed Lachman, ASC, who has twice been Oscar-nominated for period films with Todd Haynes, “Far From Heaven” and “Carol.”
They talked about landmark early horror films like F.W. Murnau’s 1922 “Nosferatu” and Carl Theodor Dreyer’s 1932 “Vampyr,” as well as work by photographers from different eras including Sergio Larraín, Fan Ho and Maura Sullivan.
Larraín fought not only for the movie to be in black and white but to actually shoot it in black and white, a rarity in the digital age in which studios insist on color cinematography that can later be desaturated in post-production.
“The reason producers do it that way is because then they can always fall back on the color if they have markets that they can’t show it in black and white,” explained Lachman to IndieWire’s Chris O’Falt. “The contrast and the saturation, the subtlety of mid-range in the blacks and whites, aren’t the same [and] that’s why I think [‘El Conde’] really has a different look than a number of black and white films over the last few years.
“When you can actually shoot in monochromatic, you can reach back to black and white filters to modify the contrast and the mid-tones.”
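Lachman’s point about reaching back to physical filters has a simple digital analogue that makes the idea concrete: a colored filter changes how strongly each wavelength contributes to the single recorded channel, which is what reshapes contrast and mid-tones. The sketch below mimics that by reweighting RGB channels from a color source before collapsing to grayscale; on a true monochrome sensor like the one used for “El Conde,” that weighting can only be done optically with glass in front of the lens, which is exactly why the filters matter. The weights and file name are illustrative assumptions, not the production’s workflow.

```python
# Simulate the contrast effect of colored black-and-white filters by
# reweighting RGB channels before converting to grayscale. Weights are
# illustrative, not calibrated to any real filter.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("frame.png").convert("RGB"), dtype=np.float64) / 255.0

filters = {
    "no_filter":   (0.30, 0.59, 0.11),  # roughly luminance-weighted
    "red_filter":  (0.80, 0.15, 0.05),  # blue skies darken, skin tones lighten
    "blue_filter": (0.05, 0.15, 0.80),  # skies lift, reds go dark
}

for name, (r, g, b) in filters.items():
    mono = img[..., 0] * r + img[..., 1] * g + img[..., 2] * b
    Image.fromarray((np.clip(mono, 0, 1) * 255).astype("uint8")).save(f"frame_{name}.png")
```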
A Time-Sensitive Custom Sensor
Knowing that Larraín also wanted to have the mobility of a light camera that could be used on a technocrane (to facilitate the film’s flying scenes), Lachman needed ARRI to develop a black and white sensor for its smaller (but still large format) camera, the ALEXA Mini LF.
ARRI’s large-format Alexa 65 offers such a monochrome sensor, but its body would have been too large for the job, while the Alexa XT, also available with a black-and-white sensor, didn’t meet Netflix’s 4K mandate. Lachman could have chosen to shoot with a RED camera and its black and white Helium sensor (used to shoot Netflix Oscar winner “Mank” in 2020), but it seems he preferred to push ARRI to develop a new version of its camera.
Per IndieWire, the problem was less ARRI’s willingness to build a new camera — in theory, its color scientists were confident it could work — and more that Lachman’s request came two months before the start of production, which is less time than it had taken to develop previous prototypes.
Luckily, according to ARRI’s Marko Massigner and Manfred Jahn, the new chip came together more quickly than anticipated, and they were able to deliver three working cameras in time for “El Conde.”
Building on his vision, Lachman used lenses retrofitted with vintage glass from the 1930s and modified to work on the ARRI camera. This unique combination of equipment was then used with Lachman’s own patented EL Zone System, which employs concepts utilized by photographer Ansel Adams to control different exposure values throughout an image.
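To make that zone-style thinking concrete, here is a minimal sketch (our illustration, assuming nothing about Lachman’s patented EL Zone System or ARRI’s implementation): it simply expresses linear pixel values as whole stops above or below middle (18%) gray, the shared idea behind Adams’ zones and exposure tools like Lachman’s.

```python
# Rough illustration only -- not Lachman's patented EL Zone System or any ARRI tool.
# It shows the zone-style idea the article references: expressing each pixel's
# exposure in whole stops above or below middle (18%) gray.
import numpy as np

def stops_from_middle_gray(linear_values: np.ndarray, middle_gray: float = 0.18) -> np.ndarray:
    """Convert linear scene values to exposure zones, in stops relative to middle gray."""
    safe = np.clip(linear_values, 1e-6, None)      # avoid log2(0)
    return np.round(np.log2(safe / middle_gray))   # 0 = middle gray, +1 = one stop brighter

frame = np.array([0.02, 0.09, 0.18, 0.36, 0.72])   # hypothetical linear pixel values
print(stops_from_middle_gray(frame))                # -> [-3. -1.  0.  1.  2.]
```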
CineD has more details on the development of the new Alexa, noting that the re-housed Baltar lenses contained the same glass used to shoot the classics “Citizen Kane” and “Touch of Evil.”
Lachman and Larraín have known each other for several years, but this is the first feature they’ve worked on together.
“Ed can create a very particular visual poetry, but he never loses the focus on the narrative,” Larraín told Mark Olsen at Variety. “That is very important because sometimes you see beautifully photographed films that don’t have a strong and powerful narrative. It was often very moving to see the images he was creating.”
Unusually, the director himself operated the camera for the entire shoot.
“It helps me to be closer to the actors,” says Larraín. “I’m too anxious to be seated at a monitor. Even when I’m not operating, I’m standing and working and walking. I can’t just see the world created in front of me. I have to be right there. You’re part of the process.”
Vulture critic Bilge Ebiri judged the film “an unrepentantly gore-filled horror flick… a piece of agitprop provocation [that] might be the most perverse project Netflix has ever signed off on.”
Nick Vivarelli for Variety says, “The film is somehow sparse and flamboyant at the same time; viewers may feel conflicting impulses of being charmed and repulsed.”
From the VFX breakdown for the “Black Mirror” “Metalhead” episode, Cr. Netflix and DNEG
There are alien minds among us. Not the little green men of science fiction, but the alien minds that power the facial recognition in your smartphone, determine your creditworthiness and write poetry and computer code. These alien minds are artificial intelligence systems, the ghost in the machine that you encounter daily.
But AI systems have a significant limitation: Many of their inner workings are impenetrable, making them fundamentally unexplainable and unpredictable. Furthermore, constructing AI systems that behave in ways that people expect is a significant challenge.
If you fundamentally don’t understand something as unpredictable as AI, how can you trust it?
Why AI is Unpredictable
Trust is grounded in predictability. It depends on your ability to anticipate the behavior of others. If you trust someone and they don’t do what you expect, then your perception of their trustworthiness diminishes.
In neural networks, the strength of the connections between “neurons” changes as data passes from the input layer through hidden layers to the output layer, enabling the network to “learn” patterns. Cr: Wiso via Wikimedia Commons
Many AI systems are built on deep learning neural networks, which in some ways emulate the human brain. These networks contain interconnected “neurons” with variables, or “parameters,” that affect the strength of connections between the neurons. As a naïve network is presented with training data, it “learns” how to classify the data by adjusting these parameters. In this way, the AI system learns to classify data it hasn’t seen before. It doesn’t memorize what each data point is, but instead predicts what a data point might be.
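As a rough illustration of that learning process, here is a minimal sketch (ours, not drawn from any production system) of a tiny network adjusting its weights and biases to fit the classic XOR examples; scale the same mechanism up to billions or trillions of parameters and the opacity described below follows.

```python
# Minimal sketch (not from the article): a tiny neural network "learning" by nudging
# the parameters that connect its neurons until its predictions match the targets.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # training inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # target labels (XOR)

# The "parameters": connection strengths (weights) and biases between layers.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10_000):
    h = sigmoid(X @ W1 + b1)       # hidden-layer activations
    out = sigmoid(h @ W2 + b2)     # the network's prediction
    d_out = (out - y) * out * (1 - out)   # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error signal at the hidden layer
    # "Learning": nudge every parameter a little to reduce the error.
    W2 -= lr * (h.T @ d_out);  b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h);    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0] as the parameters settle
```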
Many of the most powerful AI systems contain trillions of parameters. Because of this, the reasons AI systems make the decisions that they do are often opaque. This is the AI explainability problem — the impenetrable black box of AI decision-making.
Consider a variation of the “Trolley Problem.” Imagine that you are a passenger in a self-driving vehicle, controlled by an AI. A small child runs into the road, and the AI must now decide: run over the child or swerve and crash, potentially injuring its passengers. This choice would be difficult for a human to make, but a human has the benefit of being able to explain their decision. Their rationalization — shaped by ethical norms, the perceptions of others and expected behavior — supports trust.
In contrast, an AI can’t rationalize its decision-making. You can’t look under the hood of the self-driving vehicle at its trillions of parameters to explain why it made the decision that it did. AI fails the predictive requirement for trust.
AI Behavior and Human Expectations
Trust relies not only on predictability, but also on normative or ethical motivations. You typically expect people to act not only as you assume they will, but also as they should. Human values are influenced by common experience, and moral reasoning is a dynamic process, shaped by ethical standards and others’ perceptions.
Unlike humans, AI doesn’t adjust its behavior based on how it is perceived by others or by adhering to ethical norms. AI’s internal representation of the world is largely static, set by its training data. Its decision-making process is grounded in an unchanging model of the world, unfazed by the dynamic, nuanced social interactions constantly influencing human behavior. Researchers are working on programming AI to include ethics, but that’s proving challenging.
The self-driving car scenario illustrates this issue. How can you ensure that the car’s AI makes decisions that align with human expectations? For example, the car could decide that hitting the child is the optimal course of action, something most human drivers would instinctively avoid. This issue is the AI alignment problem, and it’s another source of uncertainty that erects barriers to trust.
AI expert Stuart Russell explains the AI alignment problem.
Critical Systems and Trusting AI
One way to reduce uncertainty and boost trust is to ensure people are in on the decisions AI systems make. This is the approach taken by the US Department of Defense, which requires that for all AI decision-making, a human must be either in the loop or on the loop. In the loop means the AI system makes a recommendation but a human is required to initiate an action. On the loop means that while an AI system can initiate an action on its own, a human monitor can interrupt or alter it.
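As a simple sketch of that distinction (hypothetical code, not the Department of Defense’s actual systems), the functions below contrast the two modes: in the loop, the AI only recommends and a person must initiate the action; on the loop, the AI acts on its own but a human monitor can interrupt it.

```python
# Illustrative sketch only, with hypothetical names: "in the loop" vs. "on the loop."
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    action: str
    confidence: float

def human_in_the_loop(rec: Recommendation, approve: Callable[[Recommendation], bool]) -> str:
    # The AI only recommends; nothing happens until a person initiates the action.
    return f"executed: {rec.action}" if approve(rec) else "no action taken"

def human_on_the_loop(rec: Recommendation, veto: Callable[[Recommendation], bool]) -> str:
    # The AI initiates the action itself; a monitoring human may interrupt it.
    return f"interrupted: {rec.action}" if veto(rec) else f"executed autonomously: {rec.action}"

rec = Recommendation(action="reroute power around failed substation", confidence=0.93)
print(human_in_the_loop(rec, approve=lambda r: r.confidence > 0.9))
print(human_on_the_loop(rec, veto=lambda r: r.confidence < 0.5))
```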
While keeping humans involved is a great first step, I am not convinced that this will be sustainable long term. As companies and governments continue to adopt AI, the future will likely include nested AI systems, where rapid decision-making limits the opportunities for people to intervene. It is important to resolve the explainability and alignment issues before the critical point is reached where human intervention becomes impossible. At that point, there will be no option other than to trust AI.
Avoiding that threshold is especially important because AI is increasingly being integrated into critical systems, which include things such as electric grids, the internet and military systems. In critical systems, trust is paramount, and undesirable behavior could have deadly consequences. As AI integration becomes more complex, it becomes even more important to resolve issues that limit trustworthiness.
Can People Ever Trust AI?
AI is alien — an intelligent system into which people have little insight. Humans are largely predictable to other humans because we share the same human experience, but this doesn’t extend to artificial intelligence, even though humans created it.
If trustworthiness has inherently predictable and normative elements, AI fundamentally lacks the qualities that would make it worthy of trust. More research in this area will hopefully shed light on this issue, ensuring that AI systems of the future are worthy of our trust.
Crime Scenes: Editing “Only Murders in the Building”
From “Only Murders in the Building,” Cr. Hulu
Editing a mystery can be a delicate business. A reaction shot held a few frames too long can be a giveaway, too short and the eventual payoff could feel too obvious. This is challenging enough in a TV episode or a movie, but even more so in a ten-episode arc.
But that’s the kind of work the editors of “Only Murders in the Building” are responsible for. The well-respected series starring Steve Martin, Martin Short and Selena Gomez is about to finish its third season, and editors Shelly Westerman, ACE, Peggy Tachdijian, ACE, and Payton Koch not only have to keep the reveals coming amid the show’s often absurdist humor and moments of pathos and drama, but they also have to attack some major musical numbers for the show-within-the-show “Death Rattle Dazzle,” at the heart of the season’s story arc.
Season Three kicks off with the frustrated director Oliver Putnam (Short) finally landing a long-awaited return to Broadway to direct the mystery, starring the major movie star Ben Glenroy (Paul Rudd), Charles Haden-Savage (Martin) and an ensemble of characters played by Meryl Streep and Ashley Park, among others. Glenroy drops dead on opening night and after a brief and miraculous recovery, winds up tumbling through an elevator shaft at the Arconia, the luxury apartment building the lead characters all live in, crashing to his final doom on the roof of the elevator as Oliver, Charles and Mabel (Selena Gomez) are exiting.
The work of editing for the series involves close collaboration among executive producers John Hoffman (the showrunner), Dan Fogelman and Jess Rosenthal; the writers, directors and actors; and the trio of editors, each of whom takes responsibility for particular episodes.
Westerman, who a few years ago was adamantly opposed to the idea of remote editing (“I always said you must be in the room for creative collaboration,” she’d frequently asserted), has completely revised her feelings on the subject. She acknowledges she couldn’t have taken this plum job without the ability to work remotely while spending a significant amount of time in Florida caring for her parents. In fact, all three editors and their assistants work remotely, too.
The editing process starts before cameras roll, when they receive that week’s script and subsequently attend (virtually) the table read in New York — an essential part of the process, Westerman stresses. “Once you hear the word spoken, you hear the rhythms, you start to get an idea in your head, and you can begin visualizing an episode.”
This is followed by a concept meeting, in which all the department heads confer. “We talk at a high level about the look and tone of the episode, and then we have a tone meeting specifically with the episode director and executive producers, and we go through the script scene-by-scene and talk about what’s happening. The director will propose all their questions and editors will chime in with questions, so there are a lot of very helpful discussions that happen early on.”
Each director helms two episodes, which are cross-boarded and shot in New York, usually with six or seven days allotted for each.
“Once they’re shooting one of your episodes,” Westerman explains, “we’ll start to get the dailies. What we see might match everything we’ve talked about to that point, or they might have discovered things on set that made the scenes go a very different way. But at least all the preparation lets us start with a grounding from which to work.”
Editors are given roughly two days to get their editor’s cut together and sent off to the director. “Then on a half-hour show like ‘Only Murders,'” she says, “the directors get about two days to work with the editor, before we need to turn that [cut] over to John and the other executive producers for their feedback.”
DRILLING DOWN TO THE WORKFLOW
Image gallery: Steve Martin, Martin Short, Selena Gomez, Meryl Streep, Paul Rudd, Ashley Park, Jesse Williams, Matthew Broderick, Jane Lynch, Tina Fey and the ensemble of Season 3 of “Only Murders in the Building.” (Photos by: Patrick Harbron/Hulu)
While the editors and assistants all work remotely, they are not actually editing on their own computers or keeping any of the media with them. The Avid workstations and media all sit securely inside the facility Pacific Post, where they are networked together via Avid NEXIS.
Westerman, who works on a Mac “trashcan” wherever she’s set herself up to work, uses Jump Desktop to access her hardware and the network, as do the other two editors, though they happen to work on Mac Minis.
When dailies are ready, Westerman’s assistant, Jamie Clarke, is the first one notified. He also has access to camera and sound reports and script notes, and he QCs the material to ensure that it’s all in sync and there are no technical issues. Then he organizes the scenes within Westerman’s system: anything shot with more than one camera (most scenes in the show are covered by two, and some of the musical numbers by three) goes into Group Clip projects, and he loads footage into bins to her specifications (each editor has their preferred method of organizing material).
“I don’t get the scenes in order,” Westerman says, “but I’ll start to build sequences pretty quickly, so that I can see how it’s flowing. By the time they finish shooting the episode, I’ve got a rough sketch of the acts put together. Then, for my two-day editor’s cut, I’m really trying to polish and tighten.”
MAINTAINING A SEASON-LONG MYSTERY
Westerman received a detailed briefing from Hoffman prior to commencement of production for the season. This provided her with a broad overview of all the episodes, “so we had some idea of what was going to happen as we got into the season. We knew they would still be tweaking the script here or there, but we had an idea very early on what was going to happen and how the mystery would be solved.”
But that isn’t the only way to proceed with the work, Westerman acknowledges. “Peggy said she didn’t want to know who the killer was,” she recalls. “She wanted to be surprised by each script. I felt like I really wanted to know. We would laugh about ‘How can you not want to know?’ But she felt it helped her with the surprises because she was surprised, as well.”
Regardless, in a story propelled by constant revelations and clues, there needs to be an ongoing overview as episodes come together. Hoffman and the other executive producers, Westerman says, “will sometimes look at a version we present and say, ‘Hey, we need to see this in Episode Three because we’re going to refer to it in Episode Six.’ So then, we go back and fine-tune the episode.
“The moment that Ben Glenroy falls down the elevator shaft and the [three lead characters] run out of the elevator, turn back around to see what happened and Mabel says, ‘Are you fucking kidding me?’ — that scene comes back into play in a later episode where she’s looking at a hanky Ben is holding. I didn’t use that and one of the executive producers said, ‘The hanky’s important. We have to see her looking at it at that point.'”
There is also a moment where Charles gets into a fight on the fateful opening night that kicks the season off.
“They shot the fight scene for use in Episode Nine, but then it turned out I needed to use some of it in Five and Payton needed some of it for Six. But Peggy hadn’t cut Nine yet, so we all wound up pulling from her footage, using bits and pieces from the fight scene that worked for our episodes. Later, we went back to make sure we were all in sync with one another in terms of what we were using from the scene.”
This back-and-forth happens frequently, particularly for the recaps that show important moments from previous episodes. “One of us might do a recap and another one will say, ‘You’ve got to change that. That isn’t in the show anymore.’”
POLISHING PICTURE AND SOUND
Long gone are the days when editors turned in rough cuts with “insert effect here.” The final sound editing and VFX creation continue after picture is locked, but directors and producers expect the editors to deliver scenes that are complete and work as is. Much of that work commences while Westerman and the other editors are still sketching out scenes.
“The schedule is so accelerated compared to a feature,” the editor notes, “so as I’m going along and stringing together and polishing scenes, we’re also doing sound work, adding score, adding VFX. We’re doing all of that together so that by the time I get to the end of my editor’s cut, I’m hopefully in pretty good shape with a polished cut to present to the director.”
Editing assistants are generally skilled at basic VFX work, such as wire removal, and the show has a VFX artist on staff from the beginning of production who can step in and handle quite a lot of the work as it comes up.
“There’s a scene where one of the characters is in a basement threatening Charles and Mabel with a blowtorch,” Westerman recalls. “Of course, they couldn’t shoot with a real flame for safety reasons, so the VFX artist handled that.”
Sound Supervisor Matt Waters gets involved early on to build a wide variety of sound effects. As the season progresses, there are more and more sounds that can be re-worked and re-used. Fairly early in the season, the editors already had access to quite a few sounds of the theater where much of Season Three takes place. SFX such as specific doors opening and closing, and hallway background sounds were accessible to the editors and sound editors.
CUTTING MUSICAL NUMBERS
While the musical numbers are meant to be dramatic and advance the plot, they need to be approached differently from regular dialogue scenes, especially as “Death Rattle Dazzle” really gets on its feet and the routines become more elaborate.
Cutting musical sections requires a different set of muscles, Westerman explains. “These are big numbers, and big Broadway people like Sara Bareilles, Michael R. Jackson, Marc Shaiman and Scott Wittman were stepping in to help with the songs, so I’m not going to lie, it was intimidating at times.”
These scenes are generally shot with three cameras, and Westerman not only Group Clips all the angles from a take so she can watch them together, but also has her assistant build what she calls a “super group” comprising all the takes of a certain setup, as well as all the coverage, so she can observe every possible permutation of picture for each moment of the song.
When Steve Martin, Meryl Streep or one of the other performers sings, the song is generally pre-recorded by the artist, who then sings live during the shoot while being fed the playback through an earpiece, so both the playback and the live audio are available on their own clean tracks.
This approach leaves open the possibility of using the prerecorded audio or the live audio, depending on which plays best. In fact, many of the numbers are the result of the music and sound departments cutting extensively, sometimes syllable by syllable, to come up with the very best rendition.
“The performers went in and did recordings of all the songs a while before they were used,” the editor explains. “We get those early on so we can listen to them and get them in our heads and know the songs themselves. Then, once we get the scenes, we start assembling those right away because they take a little bit longer to craft. They’re technically more challenging. I’ll get it laid out first, and then I can go back and find these little moments that help tell the story.”
Once the musical scenes are cut, the music production team and music editor Michah Liberman go in and re-work the sound, sometimes alternating between the prerecorded and the live versions.
“Sometimes, they’re literally cutting syllable by syllable in a very exacting, precise way. Finally, our sound mixer, Lindsey Alvarez, ties it all together.”
“There is a lot of teamwork on the show,” Westerman sums up, “and it’s been rewarding and fun to work on a show this good and be part of that collaboration.”
NAB Show New York Appearance
Westerman will chat with writer and film historian Bobbie O’Steen on stage at NAB Show New York’s Insight Theater. The duo will discuss the “meticulous art of film editing” on October 25 at 3 p.m.
Westerman has earned ACE and BFE nods for her work on the second season of the Selena Gomez-helmed series. Season 3 of “Only Murders in the Building” is streaming now on Hulu.
Westerman got her big break in film working under Nora Ephron and Richie Marks. Recent notable work includes a full slate of Ryan Murphy projects, including the Emmy-winning limited series “Halston,” for which she served as producer and editor.
Family Dynamics: Cinematographer Paul Daley Gets Into It for “The Righteous Gemstones”
Danny McBride, Adam Devine, Edi Patterson and John Goodman in Season 3 of “The Righteous Gemstones.” Cr: Jake Giles Netter/HBO
TL;DR
Director of photography Paul Daley discusses his work on Season 3 of “The Righteous Gemstones,” HBO’s dark comedy series created by Danny McBride.
Daley adopted an “evolutionary” approach to the cinematography for Season 3, sticking with the show’s large-format ARRI Alexa camera package but introducing new Supreme lenses.
Signature Prime lenses were generally used for daytime scenes and Supremes for nighttime.
Season 3 features a monster truck called “The Righteous Redeemer,” which presented unique challenges for filming, including the loss of a camera during a particularly intense scene.
The Righteous Gemstones, HBO’s dark comedy series created by Danny McBride, is more than just entertainment; it’s a cultural critique. With its sharp wit and biting humor, the show exposes the underbelly of religious commercialism. The series, now streaming on Max, has garnered a strong following since its 2019 debut. A key contributor to its visual spectacle is director of photography Paul Daley, who shot several “micro-episodes” in Seasons 1 and 2 and all 10 episodes in Season 3. Daley recently broke down his visual approach to the third season in an episode of The Go Creative Show.
The Emmy-nominated series stands shoulder to shoulder with other critically acclaimed shows from HBO. While the show has received ample praise for its writing and performances, its visual storytelling in the third season is equally compelling. Allison Herman, in her review of the new season for Variety, says “the show has a visual panache that defies our expectations for a half-hour comedy.”
She continues, neatly laying out the show’s narrative arc. “The Gemstones are not the Roys, forced to confront the emotional poverty beneath their material riches, nor Barry Berkman, scrambling for redemption without real accountability. They’re buffoons, their idiocy only amplified by the motorcycle chases, musical sequences and megachurch sermons that immerse us in their world.”
The Righteous Gemstones tackles themes like greed, family dynamics, and the commodification of religion, all of which are amplified through Daley’s lens. “It’s funny, but it gets very dark,” the DP says of the show’s third season and its focus on the relationship between Gemstone patriarch Eli’s (John Goodman) daughter, Judy (Edi Patterson), and her mediocre husband BJ (Tim Baltz). “And then it gets super awkward. And then it gets super uncomfortable. And there are some things there’s nothing funny about. Like ‘This is horrible.’ This guy’s completely humiliated, is cuckolded, he’s embarrassed, he gets his ass kicked. And we shoot it that way. We let the actors bring the comedy. We bring the drama.”
An Evolutionary Approach
For Season 3 of The Righteous Gemstones, Daley decided to adopt an incremental approach. “When we got to Season 3, my thought was, ‘Well, let’s change it up a little bit.’ Nothing revolutionary, just make it evolutionary,” he told Go Creative host Ben Consoli.
The cinematographer continued to rely on the show’s large-format ARRI Alexa camera package, a mainstay since the series’ inception. This package included Signature Prime lenses, which Daley praises for their warmth and softer focus. “The Signatures are a little bit warmer than the Supremes,” he said. “I found you can actually reduce your filtration because they’re a little softer. They drop off the bokeh, the background is so definitive, nothing turns into a blob back there,” he elaborated.
Image gallery: John Goodman, Walton Goggins, Danny McBride, Cassidy Freeman, Tony Cavalero, Tim Baltz, Steve Zahn, Stephen Dorff, Skyler Gisondo, Kelton Dumont, Edi Patterson and Adam Devine in Season 3 of “The Righteous Gemstones.” Cr: Jake Giles Netter/HBO
To add a new layer to the show’s visual storytelling, Daley introduced Supreme lenses. “For the nighttime stuff, we would lean toward the Supremes. They were cooler, a little sharper, and they just lend themselves to night work a little better,” he explained.
Daley emphasized that lens choices were made with the context of the scene in mind. Signature lenses were generally used for daytime sequences due to their warmth, while Supremes were chosen for their cooler, sharper quality in nighttime scenes. “We would never shoot particularly wide open because we always wanted to see where we were,” he added.
When it came to capturing the show’s varied and dynamic scenes, Daley opted for a flexible multi-camera setup. “Always three. But then we’d have multiple units. Those two would have three. And when we were in the Coliseum, I believe we had 12,” he revealed. This approach allowed for a wide range of shots, from intimate dialogues to grand set pieces.
Daley’s philosophy on camera motion aligns with the show’s focus on character-driven storytelling. “It needs to be motivated. Moving for the sake of it, there’s something wrong,” he said.
Monster Trucks, Militias and Mayhem
Season 3 amped up the spectacle with monster trucks, militias and mayhem. Capturing the essence of the real-life monster truck McBride had built for the series, “The Righteous Redeemer,” was no small feat. “That monster truck is absolutely massive,” Daley marveled. He noted that the sheer size of the vehicle presented a challenge in conveying its scale on screen. “When I watch it, I sometimes wonder if we didn’t actually get a sense of how massive that thing really is.”
While filming, Daley faced a filmmaker’s worst nightmare: losing a camera to the very subject he was capturing. During a shot where a cow gets run over by the monster truck, a camera was placed too close to the action. “The frame is just full of black tire and then gone,” Daley recounted. Despite the loss, the memory card survived, providing a unique, albeit costly, point of view.
“The Righteous Redeemer” in Season 3 of “The Righteous Gemstones.” Cr: Jake Giles Netter/HBO
J. Gavin Wilde driving “The Righteous Redeemer” in Season 3 of “The Righteous Gemstones.” Cr: Jake Giles Netter/HBO
J. Gavin Wilde driving “The Righteous Redeemer” in Season 3 of “The Righteous Gemstones.” Cr: Jake Giles Netter/HBO
Because of the monster truck’s unique structure, traditional rigging was out of the question. “On the monster truck, you can’t rig to it. There’s no way to rig, so they built a vehicle that was on — imagine how a mechanical bull moves around. They built the chassis of one of those on that,” Daley explained. This allowed the crew to mount real cameras for the driving shots, as the actors couldn’t actually drive the monster truck themselves.
Teased at the beginning of the season and making a grand reappearance in the finale, the monster truck served as more than mere spectacle. “It’s also kind of a metaphor for these characters, because they just are so imposing and destructive,” Daley observed. “But there’s so much heart to it.”
Should We Learn to Stop Worrying and Love AI? Ummm, Actually… Experts Say No.
TL;DR
There is growing public concern about the role of artificial intelligence in daily life, finds a new Pew Research report.
Experts fear that human knowledge will be lost or neglected in a sea of disinformation, and that people’s cognitive skills will decline.
There is hope that humans will come out on the right side of the human+tech future of evolution, but the time to embed human values, checks and balances into AI is now.
A growing share of Americans are worried about the role artificial intelligence is playing in daily life — and they have every right to be concerned, according to two new studies from Pew Research Center.
More than half of US adults say they feel more concerned than excited about the increased use of AI, according to Pew’s poll of 11,201 people, which ran from July 31 to August 6, 2023. That’s up by 14 points since the last such survey in December.
For instance, 53% of Americans say AI is doing more to hurt than help people keep their personal information private. The survey supports previous Pew analyses, which found public concerns about a desire to maintain human control over AI and a caution over the pace of AI adoption in fields like health care.
Although AI’s impact remains vague in the general public consciousness, concern has reached a fever pitch among academics, policymakers and scientific communities, who worry deeply about people’s and society’s overall well-being. These experts also expect great benefits in health care, scientific advances and education — with 42% saying they are equally excited and concerned about the changes in the “humans+tech” evolution they expect to see by 2035.
In sum, the 350 experts canvassed by the Pew Research Center have great expectations for digital advances across many aspects of life by 2035 but all make the point that humans’ choices to use technologies for good or ill will change the world significantly.
Andy Opel, professor of communications at Florida State University, frames the dichotomy in comments for the report.
“We are on the precipice of a new economic order made possible by the power, transparency and ubiquity of AI. Whether we are able to harness the new power of emerging digital tools in service to humanity is an open question.”
We’re on the cusp of making those decisions right now.
The experts who addressed this fear wrote about their concern that digital systems will continue to be driven by profit incentives in economics and power incentives in politics. They said this is likely to lead to data collection aimed at controlling people rather than empowering them to act freely, share ideas and protest injuries and injustices. They worry that ethical design will continue to be an afterthought and digital systems will continue to be released before being thoroughly tested. They believe the impact of all of this is likely to increase inequality and compromise democratic systems.
A topmost concern is the expectation that increasingly sophisticated AI is likely to lead to the loss of jobs, resulting in a rise in poverty and the diminishment of human dignity.
Cr: Pew Research Center
Reliance on AI for the automation of story creation and distribution poses “pronounced labor equality issues as corporations seek cost-benefits for creative content and content moderation on platforms,” says Aymar Jean Christian, associate professor of communication studies at Northwestern University, in the report.
“These AI systems have been trained on the un- or under-compensated labor of artists, journalists and everyday people, many of them underpaid labor outsourced by US-based companies. These sources may not be representative of global culture or hold the ideals of equality and justice. Their automation poses severe risks for US and global culture and politics.”
Several commentators look for the positives of digital advances and in particular point to the democratizing and socially levelling effect that this might have.
“The development of digital systems that are credible, secure, low-cost and user-friendly will inspire all kinds of innovations and job opportunities,” says Mary Chayko, professor of communication and information at Rutgers University. “If we have these types of networks and use them to their fullest advantage, we will have the means and the tools to shape the kind of society we want to live in.”
Yet Chayko has her doubts. “Unfortunately, the commodification of human thought and experience online will accelerate as we approach 2035,” she predicts. “Technology is already used not only to harvest, appropriate and sell our data, but also to manufacture and market data that simulates the human experience, as with applications of artificial intelligence. This has the potential to degrade and diminish the specialness of being human, even as it makes some humans very rich.”
Herb Lin, senior research scholar for cyber policy and security at Stanford University’s Center for International Security and Cooperation writes, “My best hope is that human wisdom and willingness to act will not lag so much that they are unable to respond effectively to the worst of the new challenges accompanying innovation in digital life. The worst likely outcome is that humans will develop too much trust and faith in the utility of the applications of digital life and become ever more confused between what they want and what they need.”
David Clark, senior research scientist at MIT’s Computer Science and AI Laboratory, says that to hold an optimistic view of the future you must imagine several potential positives coming about.
These include the emergence of a new generation of social media, with less focus on user profiling to sell ads, less emphasis on unrestrained virality, and more of a focus on user-driven exploration and interconnection.
“The best thing that could happen is that app providers move away from the advertising-based revenue model and establish an expectation that users actually pay,” he asserts. “This would remove many of the distorting incentives that plague the ‘free’ Internet experience today. Consumers today already pay for content (movies, sports and games, in-game purchases and the like). It is not necessary that the troublesome advertising-based financial model should dominate.”
A whole section is devoted to expert thoughts on the impact of ChatGPT and other generative AI engines.
Author Jonathan Taplin echoes many in the creative community: “The notion that AI (DALL-E, GPT 3) will create great ORIGINAL art is nonsense. These programs assume that all the possible ideas are already contained in the data sets, and that thinking merely consists of recombining them. Our culture is already crammed with sequels and knockoffs. AI would just exacerbate the problem.”
Posted
September 12, 2023
How “The Creator” Creator Gareth Edwards Is Thinking About AI
From “The Creator,” directed by Gareth Edwards. Cr: 20th Century Studios
TL;DR
New sci-fi feature “The Creator” is the latest project to use artificial intelligence as a backdrop, but all may not be as dystopic as it first appears.
Although the project has been in the works for years, director Gareth Edwards points out that releasing a film that so heavily involves debates around AI feels particularly prescient in 2023.
Edwards believes AI will fundamentally change filmmaking, allowing anyone to realize big-budget-looking visions on a shoestring, but wonders where this will leave craft skills.
Edwards’ filmic influences for “The Creator” span “Rain Man” to “Apocalypse Now” to “Baraka.”
“The Creator” – Gareth Edwards’ Vision | 20th Century Studios
The backdrop to new sci-fi film The Creator is AI, which could be benevolent or could be evil. One AI manifests as a very cute young kid.
“This film will challenge what you believe,” says actor John David Washington. “It’s hard to know whose side to be on.”
That’s exactly what director Gareth Edwards, who wrote the script with Chris Weitz, wanted. Although the future 50 years hence looks “as if someone made Apocalypse Now in the Blade Runner universe,” according to Ryan Scott at SlashFilm, Edwards doesn’t paint AI as black or white.
“Should we embrace it or should we destroy it?” he asks.
Releasing in cinemas on September 28, this is Edwards’ first film since Star Wars: Rogue One in 2016, around the same time he began writing the film. The starting point was to make an allegory about robots, “a fairy tale for people who are different, that look different from us, and that we treat as the kind of the enemy or the inferior, and that they do the same back to us,” he says in an interview with Joe Deckelmeier for Screen Rant.
He’s kept the robots but added AI so that the robots are sentient. It was only as they were shooting the film in the first half of 2022 that the latest wave of AI technology became front page news.
“I thought I was making a subject matter that was like three decades away. Like, there’s no way we’re going to witness this. And then whilst we were filming, people are sending me links to news items about whistleblowers in big tech saying that the AI was sentient. And it was like, Whoa, what’s going on?”
Artificial Intelligence in “The Creator” | 20th Century Studios
“As we’re making The Creator, AI is getting better and better,” Edwards says. “It feels like we’re at that tipping point now and this movie questions what does that look like 50 years from now, when AI is more embedded as part of society.”
Equally presciently perhaps, the film also depicts half of the world having developed AI and the other half being actively against it, following a catastrophic malfunction. Interestingly, it is the West that wants to ban AI while a region of Southeast Asia fights to keep it as a force for good.
Scenes in the film depict an anti-AI movement “with people with protest signs, for and against AI,” the director told Collider’s Perri Nemiroff, adding that at the time he thought this was absurd. “And now, I live very near the Studios [in LA] and we drove past and that’s exactly what’s happening [with the writers’ strike].”
The movie is set in 2070, but Edwards told Nemiroff he should have picked 2024. “But I picked 2070 because I didn’t want to make the mistake Kubrick made with 2001: A Space Odyssey [which was made in 1968 but the Jupiter mission it depicted remains distant even now]. So I was like, I’m gonna pick something way downstream.”
Making a Smaller Budget Go Further
Distributed by 20th Century Studios, the film itself was shot on a relative shoestring budget of $80 million, but looks like a blockbuster costing significantly more. For Edwards, this was a welcome retreat from the huge budget he handled for Godzilla in 2014 and a return to the more DIY approach with which he made his breakthrough sci-fi horror Monsters in 2010.
Counterintuitively, the secret was to rely less on CGI and LED volumes (though both were used) and to focus on shooting more in actual locations, with world-building production design added after the fact.
“The Creator,” directed by Gareth Edwards. Cr: 20th Century Studios
“What you normally do is you have all this design work and people say, ‘You can’t find these locations,’” he explained during a Q&A session hosted by IMAX and reported by Slash Film’s Vanessa Armstrong.
“[They’d say,] ‘You’re going to have to build sets in a studio against greenscreen. It’ll cost a fortune.’ We were like, ‘What we want to do is go shoot the movie in real locations, in real parts of the world closest to what these images are. Then afterwards, when the film is fully edited, get the production designer, James Clyne, and other concept artists to paint over those frames and put the sci-fi on top.’”
So they did, and the crew went to 80 locations, which is far more than one would normally use for a movie of this size.
“We didn’t really use any green screen,” he said. “There was occasionally a little bit here and there, but very little. If you do the maths, if you keep the crew small enough, the theory was that the cost of building a set, which is typically like $200,000, you can fly everyone to anywhere in the world for that kind of money. So it was like, ‘Let’s keep the crew small and let’s go to these amazing locations.’”
“The Creator” | Official Trailer | 20th Century Studios
Edwards shot the film using Sony FX3 cameras, a budget choice but, as he points out, one that is barely distinguishable in performance from far more expensive so-called cinestyle cameras.
“The difference between the greatest digital cinema camera you can buy and a camera like the FX3 is minute, hardly anything,” he told Nemiroff.
The big advantage for the production was the camera’s ability to record in different light scenarios, including the capability of shooting at ISO 12,800 “so we can shoot under moonlight.”
That in turn enabled the production team to shoot with fewer lights, cutting costs and increasing mobility. The filmmakers developed a lightweight lighting rig that a crew member could move in seconds, rather than minutes, as Edwards explained to Armstrong.
“I could move and suddenly the lighting could re-adjust. And what normally would take 10 minutes to change was taking four seconds.”
“The Creator,” directed by Gareth Edwards. Cr: 20th Century Studios
This afforded room for the actors to improvise and for Edwards to capture more of a documentary feel. “We would do 25-minute takes where we would play out the scene three or four times and just give everything this atmosphere of naturalism that I really wanted to get, where it wasn’t so prescribed. You’re not putting marks on the ground and saying, ‘Stand there.’ It wasn’t that kind of movie.”
Edwards’ lighting operator would move with the camera, just as a boom operator would, he told Nemiroff.
“We’d do a little dance together in real time,” he said. “Normally [changing light setups] would take half an hour. So it just liberated us completely. And I’m never gonna go back, to be honest.”
He started the project with one of the world’s most in-demand cinematographers, Greig Fraser ASC, who won the Oscar for his work on Dune last year. As Edwards tells Nemiroff, “Greig is one of the few people in the world I would trust to give a camera to and say, you shoot it, and just hand it over. He’s got an amazing eye. The whole world seems to know that now.”
“The Creator,” directed by Gareth Edwards. Cr: 20th Century Studios
Having done prep for the movie in 2019/2020, Fraser got the offer to shoot Dune and its sequel for Denis Villeneuve, and suggested DP Oren Soffer take over behind the camera.
According to Edwards, Soffer was a protege of Fraser’s. “So I looked at his work, it was really strong. We chatted and I really liked him. And so basically, there’s this transition where Greig carried on remotely but Oren picks up the reins.”
In conversation with Armstrong, Edwards revealed that the visual design of The Creator was inspired by the simple idea, “What if the Sony Walkman won the tech war instead of the Apple Mac?”
“The way we tried to quickly describe the design aesthetic of the movie is that it’s a little bit retro-futuristic.”
Likewise for the insect-like robots in the film, which they tried to design as if an insect had been made by Sony. “We took products and tried to turn them into organic-looking heads. We took things like film projectors and vacuum cleaners and just put them together, deleted pieces and kept experimenting. It was like DNA getting merged together with other DNA, trying to create something better than the previous thing.”
“AI Democratizes Filmmaking”
When it comes to AI in filmmaking, Edwards is equally even-handed. He was surprised by what is already possible with AI tools.
“My initial thoughts were that AI will never be able to understand the beauty of an image but actually websites like Midjourney are pretty good. [So, then you think] soon it’ll be moving footage. And then maybe you won’t need cameras,” he told GameSpot senior editor Chris Hayner in an interview for Fandom.
“The Creator,” directed by Gareth Edwards. Cr: 20th Century Studios
“It’s going to change filmmaking so much,” he continues, way beyond the CGI-realism breakthrough of Jurassic Park. “It’s going to be a big seismic shift. My hope is that it sort of democratizes filmmaking, like it doesn’t cost $200 million anymore to go make something that’s in your head. You can kind of do it from your bedroom. But then the question is when everyone can make Star Wars from their bedroom, will there be any specialist [crafts] anymore?”
Edwards is known for using literal reminders of cinematic lodestones for his projects. Slash Film’s Jenna Busch reports that he’s hung different movie posters in the edit suite as inspiration for each film.
Which posters made the cutting room for “The Creator?” Some are predictable, others may (at first) seem like head scratchers (we’ll have to wait until its debut to see the full logic). Here they are, in order of predictability:
Posted
September 11, 2023
Translating Anime to a Live-Action Adventure for “One Piece”
From the Netflix series “One Piece”
TL;DR
One of the most expensive series ever made, Netflix’s $144 million live-action adaptation of bestselling manga pirate adventure “One Piece” needs to appeal to die-hard fans and newcomers alike.
Editor Tessa Verfuss discusses the editorial decisions behind the series’ action scenes, which employ dramatic close-ups and cool angles. These provide a level of self-awareness rather than trying to be realistic and self-serious.
She also explains how she used framing, rhythm and pacing to help create exaggerated, larger-than-life sequences and hero or villain moments.
The series was shot on the ARRI Alexa LF outfitted with custom-made Hawk MHX Hybrid Anamorphic lenses to lend it a comic book style and help add weight to certain moments.
The production was located at Cape Town Studios and employed 1,000 local crew members, many of whom had worked on Starz pirate adventure series “Black Sails.”
Netflix reportedly spent $144 million on the eight-episode series One Piece, making it one of the most expensive shows ever made. It’s also the biggest show the streamer has made in Africa. Production was located at Cape Town Studios, where it employed 1,000 local crew members, many of whom had worked on Starz adventure series Black Sails.
Editor Tessa Verfuss was one of these Black Sails alums. “There was a lot of buzz around Cape Town that there was this massive show coming in,” she told Nerds and Beyond. “The president came to visit the set, that’s how big a deal it is for us. When you hear something that big is coming, obviously you’re gonna be interested whether or not you’re familiar with the IP. And then when I heard it’s pirates, I was like, “Oh, well I’ve done pirates on Black Sails!” Sword fights, that’s totally right up my alley, but this is a bit different from Black Sails — a different vibe.”
One Piece is a live-action adaptation of the bestselling manga of all time. Debuting in 1997, this serialized pirate adventure is about the search for the elusive One Piece treasure, led by Monkey D. Luffy. The story sprawls across 105 volumes, all written and illustrated by Eiichiro Oda, who is an executive producer on the Netflix show.
“There’s this strong fantasy element, it’s not dark and twisty,” describes Verfuss. “It’s optimistic, joyful, funny, sincere — it’s so heartfelt, and everyone really loves that about One Piece. You couldn’t get much more different when it comes to pirate properties.”
Image gallery: Iñaki Godoy as Monkey D. Luffy, Emily Rudd as Nami, Mackenyu Arata as Roronoa Zoro, Taz Skylar as Sanji, Jacob Romero Gibson as Usopp, Vincent Regan as Vice-Admiral Garp and others, on set and behind the scenes of Season 1 of “One Piece.” Cr: Netflix
Although filled with VFX, much of the budget went on large-scale sets on soundstages and water tanks. The production design department was given a head start on the massive ships required for the series by repurposing ones initially built for Black Sails.
Verfuss was one of several editors on the series, with credits for cutting four episodes. “With an anime adaptation it’s not trying to be serious or realistic,” she says. “You can use the framing, the rhythm, the pacing to kind of make things a little bit larger than life, a little bit exaggerated. If you’re introducing someone, you get to give them those hero or villain moments. You can go to the extreme closeup and it doesn’t have to have this completely naturalistic feel that you would have in a different kind of show.”
One Piece is shot on the ARRI Alexa LF with custom-made Hawk MHX Hybrid Anamorphic lenses. The show’s cinematographers, led by Nicole Hirsch Whitaker, ASC, who shot the first two episodes with director Marc Jobst, covered each scene with two cameras to provide editorial with a choice of angles and performance.
“In terms of getting those characters to really work you are looking through those takes, finding those moments, and choosing your angles,” says Verfuss.
“We’ve got these wide-angle lenses that are used throughout the show, and they make these really strong frames, which I’m hoping people feel is a bit more like a comic book style. They really help add weight to certain moments.”
Elaborating on the process in an interview with Screen Rant, Verfuss said her goal was to bring an anime style to the fight scenes.
“You have these dramatic close ups and cool angles so I think it’s bringing a level of cool and self-awareness to it [rather than] trying to be realistic and self serious. The mechanics of a fight scene are one side of it, but it’s really about the emotions of the scene and understanding the character’s motivation.
“What you want from a live action [compared to animation] is you want your characters to feel human, characters that you can root for and get behind and not a cartoon.”
Netflix needed to appease die-hard fans — of which there are millions around the world — and reach an audience who had never heard of the property.
“You have to be aware of when the show is making the conscious decision to put an Easter egg in or is paying homage to something,” she said. “Sometimes it’s very subtle.”
It is content that could end up being glossed over given the fast pace of the show, which is where co-showrunner Matt Owens played a role.
Owens has “a really deep knowledge of the property,” Verfuss says. “So having someone steering who can say, ‘Wait, no, there’s something important here,’ was a huge help. We also had stacks of manga in the corners of the production office that you could go and flip through, and I watched some of the anime during my research.”
The proof of their success will be whether Netflix greenlights a second season. After all, the first eight episodes barely scratch the surface of the One Piece universe.
“Hopefully we got this one right, and that fans are gonna love it and think this is the best version,” she signed off to Screen Rant, “or if not the best version of One Piece, then the best version of a live action anime that you could possibly want.”
The PPC NYC programming is produced in partnership with FMC and comprises 24 sessions designed for production and post pros.
September 8, 2023
Street Scenes: Join Our Fujifilm Photowalks at NAB Show New York
From “Reflections,” a short film with Fujifilm photowalk guide Yolanda Hoskey
Would you be interested in exploring Hudson Yards with a camera, lighting equipment, a model, and a pro guide? Register for NAB Show New York and then RSVP for one of the Fujifilm Photowalks at the event.
These one-hour experiences are designed to highlight the integration between Fujifilm and Frame.io, in particular the ability to transfer images from your Fujifilm camera directly online via Camera-to-Cloud (C2C) technology.
Participants will be loaned either a Fujifilm X-H2 or X-H2S camera for the walk.
Each interactive tour group will be limited to six people, plus a guide and a model. Reservations are required.
THE TIMES
These photowalks are scheduled for:
Wednesday, October 25, 10:30-11:30 a.m.
Wednesday, October 25, 1-2 p.m.
Wednesday, October 25, 3:30-4:30 p.m.
Thursday, October 26, 10:30-11:30 a.m.
Thursday, October 26, 1-2 p.m.
Thursday, October 26, 3:30-4:30 p.m.
Participants should head to the Fujifilm booth at least 20 minutes before the walks.
So what is the Photo+Video Lab? It’s a space where the worlds of photo and video converge, where image-based, still photography fuses with motion capture, where you trade in existing for expansive, or simply find the inspiration to try something new.
Best of all, it’s a space to connect — not only with the end-to-end workflow for your craft, but with your community. Content creators. Photographers. Videographers. And so many others through photo walks, meetups, Q&A sessions, demos, exhibits, workshops and more.
Dive into the photo and video world with an integrated workflow experience that will allow you to sample the latest tech and gear side-by-side from iconic brands and innovative newcomers.
The HBO Camera Assessment Series is a feature-length movie that employs staged scenes to demonstrate the strengths—and weaknesses—of cameras.
February 16, 2024
Posted August 21, 2023
How Do You Fairly Compare Cameras? Well, Here’s HBO’s CAS
From the HBO Camera Assessment Series, courtesy of HBO
If you want to see the HBO Camera Assessment Series, register for NAB Show New York, sign up to attend the screening and bring your badge to the free off-site event on Wednesday, October 25. Only NAB Show New York attendees are eligible for the viewing. Find full details and how to register here.
NAB Show New York will then host a hands-on follow-up, The Making of The HBO Camera Assessment Test (CAS) Seminar, on Thursday, October 26, at 11 a.m. on the show floor at the Javits Center featuring CAS leads Stephen Beres and Suny Behar. They will explain the testing methodology and dive into the technology advancements that changed the style and type of analysis required.
Currently in its sixth installment, the HBO Camera Assessment Series is a feature-length movie that employs staged scenes that each clearly demonstrate the strengths—and weaknesses—of cameras such as the Sony Venice 2, the RED V-Raptor, the new ARRI Alexa 35 configuration, the Blackmagic Ursa 12K, and many more.
Of course, the nature of the gear has certainly evolved since HBO started doing this as a deep dive into digital cinematography, which was only then seriously starting to challenge motion picture film as the most viable medium for the network’s shows.
Cinematographer/director Suny Behar has overseen these assessments from the start. Together with HBO and under the leadership of Stephen Beres, senior vice president of Production Operations at HBO, MAX and Warner Brothers Discovery, Behar has created new installments in the series when the state of camera technology has advanced enough to warrant it.
“When we started 10 years ago,” recalls Behar, “a lot of the questions weren’t about comparing the performance and the quality of the cameras as much as it was comparing whether or not some cameras even could perform.
“There was a vast difference between a camera that could record even 10-bit 4:4:4 versus a [Canon] 5D that was 8-bit 4:2:0, so you couldn’t do green screen work; there was significant motion artifacting; and it was difficult to focus. Those larger differences aren’t what we’re looking at now because all the cameras can do at least 10-bit 4:2:2. They at least have 2K, Super 35-sized sensors.”
Steve Beres (left) and Suny Behar (right)
There continue to be differences, some quite significant, among the tested cameras, Behar adds, “but it’s in different realms. The tests are no longer about [finding] where the picture just breaks, but as people expect more, there are other issues we investigate.”
There are circumstances people wouldn’t have tried to shoot a decade ago that are now standard expectations for a DP.
“You are going to care about signal to noise if you’re trying to shoot with available light, where some cameras will be significantly noisier than others. In the world of HDR, if you’re shooting fire or bright lights, you are going to care about extended dynamic range in the highlights, if you hope to not have to comp all your highlights in with the effects because [the highlights] broke.”
Stephen Beres explains that these tests, which have screened at various venues, serve as the start of discussions for his networks’ productions, not as any kind of dictate.
“We don’t have a spreadsheet of allowed and disallowed,” Beres explains. “What we have is projects like this, so when we sit down together — the studio and the creative team on the show — and we look at these kinds of things as a group, it can help us start the discussion about the visual language of the show. ‘What visual rules should be set up for that world that that show exists in?’
“And then we sort of back that into the conversation about ‘what technology are we going to use to make that happen?’. And that’s not just about cameras. It’s the lensing. It’s what we do in post, and it’s how we work with color. It’s how we work with texture. All those things go together to create the visual aesthetic of the show.”
Once they complete a new installment in the CAS, the company is delighted to share the results with all who are interested. Beres and Behar have both taught about production and post on the university level, and they clearly enjoy sharing their knowledge.
From the HBO Camera Assessment Series, courtesy of HBO
From the HBO Camera Assessment Series, courtesy of HBO
The Assessments
A great deal of thought goes into designing these camera tests in order to display apples-to-apples comparisons, with elements such as color grading and color and gamma transforms all handled identically.
“I think all of the cameras we tested this time shot RAW,” Behar says, “so then you have to make decisions about how you’re going to get to an intermediate [format for grading].”
They decided to use the Academy Color Encoding System (ACES) as a working color space. While there are certainly some people in the cinematography and post realms who still have various issues with ACES, Behar says, it has been useful in some ways because ACES forced every manufacturer to declare an IDT whether they liked it or not.
The IDT, or Input Device Transform, along with the ODT (Output Device Transform), provides objective numerical data quantifying the exact responses of a given sensor so that it can be transformed perfectly into ACES space.
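As an illustration of the role an IDT plays, here is a minimal sketch in Python, assuming a made-up camera: log code values are first linearized with the vendor's decoding curve, then a 3x3 matrix rotates the camera's native primaries into ACES. Both the decoding curve and the matrix below are hypothetical placeholders, not any manufacturer's published transform.

```python
import numpy as np

# Hypothetical 3x3 matrix mapping a camera's native RGB primaries into ACES.
# A real IDT publishes this matrix (and the log-decoding curve) for each sensor.
CAMERA_TO_ACES = np.array([
    [0.75, 0.15, 0.10],
    [0.05, 0.90, 0.05],
    [0.02, 0.08, 0.90],
])

def decode_log(code_values):
    """Placeholder log-to-linear curve; real cameras define their own (S-Log3, LogC, etc.)."""
    return 0.18 * 2.0 ** (4.0 * (np.asarray(code_values, dtype=float) - 0.5))

def idt(code_values_rgb):
    """Simplified Input Device Transform: linearize, then rotate into ACES primaries."""
    return decode_log(code_values_rgb) @ CAMERA_TO_ACES.T

# A middle-gray pixel in camera log lands at roughly scene-linear 0.18 in ACES
print(idt([0.5, 0.5, 0.5]))
```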
While some manufacturers were reluctant to subject their sensors to such scrutiny (where little tricks involving after-the-fact contrast and saturation, etc., can’t hide their flaws), all did come around because of the growing adoption of ACES and its support from the Academy of Motion Picture Arts and Sciences and the American Society of Cinematographers.
Because of this, the ACES imagery upstream of any color grading really does provide a look into a sensor’s dynamic range, color and detail rendering.
Then the CAS applied the same across-the-board grade (no secondaries, no Power Windows) and transform to deliver final Rec. 709 images for every tested camera, exposing the different sensors’ attributes and liabilities. Next, to test in HDR, they derived a PQ curve from the same picture information and opened it up without any further adjustments.
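For reference, the PQ curve used for HDR delivery is defined by SMPTE ST 2084, and its encoding side fits in a few lines. The constants below are the published ST 2084 values; exactly how the CAS maps its picture information onto the curve isn't detailed here, so treat this as a general sketch rather than their pipeline.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_encode(luminance_nits):
    """Encode absolute luminance (cd/m^2, 0 to 10,000) into a PQ signal value in 0-1."""
    y = np.clip(np.asarray(luminance_nits, dtype=float) / 10000.0, 0.0, 1.0)
    return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2

# 100 nits (roughly SDR reference white) lands just above 0.5 on the PQ scale
print(pq_encode([0.1, 100, 1000, 10000]))
```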
“The only test that we did not go through that exact pipeline for,” says the cinematographer, “was the dynamic range test. I’ve always felt that the ACES-rec 709 transform is too contrasty, meaning it has a very steep curve and a very high gamma point, which tends to crush blacks and push up mids. It does give you a punchy image, but if we’re testing dynamic range, and especially in low light, the first question the viewer would have would have been, ‘is there more information in the blacks?’ or ‘how did you decide what to crush?’ and those are very valid points.”
For this, Behar shot a very large number of test charts, which gave the team the ability to map its own gamma transform. Shooting in log at key exposure and at many steps over and under, the team was able to lock in an across-the-board standard for middle gray based on each camera system’s log profile. Once each camera is set up for perfectly exposed middle gray, tests of over- and under-exposure can be objectively compared.
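A minimal sketch of that normalization idea, with made-up numbers: once you know the code value each camera's log profile assigns to a correctly exposed 18% gray chart, any chart reading can be expressed as stops over or under that reference and compared across systems. Real reference values and per-stop slopes come from each manufacturer's log specification; the figures below are hypothetical.

```python
# Hypothetical code value each camera's log curve assigns to properly exposed 18% gray.
REFERENCE_GRAY_CODE = {"camera_a": 0.41, "camera_b": 0.38, "camera_c": 0.45}

# Hypothetical slope of each log curve, in code value per stop of exposure.
CODE_PER_STOP = {"camera_a": 0.075, "camera_b": 0.080, "camera_c": 0.070}

def stops_from_middle_gray(camera, measured_code_value):
    """How many stops over (+) or under (-) middle gray a chart reading sits."""
    return (measured_code_value - REFERENCE_GRAY_CODE[camera]) / CODE_PER_STOP[camera]

# A chart shot one stop hot on camera_a reads back as roughly +1.0
print(round(stops_from_middle_gray("camera_a", 0.485), 2))  # -> 1.0
```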
Given that a number of the cameras tested reached approximately 18 stops of dynamic range, I enquired whether such a capability is overkill. Circumstances where a cinematographer would actually use that much dynamic range are few and far between. More likely, they’ll want to use lighting and grip gear to limit such situations, as they always have.
“That’s right,” says Behar. “I think most DPs won’t need more than 12, maybe 13, stops of dynamic range to tell a story. You can’t hide a stinger in the shadows if you’re seeing 10 stops under. You can’t have a showcard in the window if you’re seeing 12 stops over.
“But then it stands to reason that the camera manufacturers should allow us to use that information to create soft knee rolloffs and toe rolloffs for lower dynamic range, but with beautiful rolloffs into the highlights and the shadows.
“You can’t create a look [digitally] that is like Ektachrome, with maybe four stops over and three and a half under, if you’re clipping at four stops. You need to burn and roll and bleed and have halation. With the dynamic range on some of these cameras we’ve tested, you can do more than just light for an 18-stop range.”
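To put those figures in perspective, and to show what a soft-knee highlight rolloff means in practice, here is a short illustrative sketch. The rolloff function is a generic shoulder curve chosen for demonstration, not a transform HBO or any camera maker actually uses.

```python
import math

# N stops of dynamic range corresponds to a 2**N : 1 scene contrast ratio
print(f"18 stops = {2**18:,}:1 scene contrast")   # 262,144:1
print(f"12 stops = {2**12:,}:1 scene contrast")   # 4,096:1

def soft_knee(linear, knee=1.0, shoulder=4.0):
    """Illustrative highlight rolloff: linear below the knee, then a gentle
    compression of everything above it so highlights roll off instead of clipping."""
    if linear <= knee:
        return linear
    over = linear - knee
    return knee + (1.0 - math.exp(-over / shoulder)) * shoulder

# Values far above the knee compress smoothly toward knee + shoulder (here, 5.0)
for stops_over in (0, 1, 2, 4, 6):
    x = 0.18 * 2**stops_over
    print(f"{stops_over} stops over gray: {x:6.2f} -> {soft_knee(x):.2f}")
```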
Behar and Beres both take great pride in these CAS films, which are shot and produced to feel like high-quality HBO-type programming, not just charts and models sitting in front of charts.
“This is real scenes with moving cameras, moving actors,” says Behar, promising the cinematography and production is of the highest caliber. “The number one feedback response we’ve gotten so far has been, ‘Holy crap! I thought this was going to be a camera test!’”
Building the sets for the HBO Camera Assessment Series, courtesy of HBO
If you want to see the HBO Camera Assessment Series, register for NAB Show New York, sign up to attend the screening and bring your badge to the free off-site event on Wednesday, October 25. Only NAB Show New York attendees are eligible for the viewing. Find full details and how to register here.
NAB Show New York will then host a hands-on follow-up, The Making of The HBO Camera Assessment Test (CAS) Seminar, on Thursday, October 26, at 11 a.m. on the show floor at the Javits Center featuring CAS leads Stephen Beres and Suny Behar. They will explain the testing methodology and dive into the technology advancements that changed the style and type of analysis required.
Additionally, post-discussion sessions and demos will feature a Sony Venice 2, ARRI Alexa 35, Panasonic AK-PLV 100 and a pair of Sony FR7 camera packages, along with gear from Mark Roberts Motion Control and Vinten, and supporting sponsors Fujinon, LUX, Multidyne, and Seagate.
Getting ready to plan your journey at NAB Show New York? You won’t want to miss this opportunity to explore the synergies between live broadcast and cinema with the Cine+Live Lab!
This destination features a variety of educational sessions and production demonstrations centered on the trend of translating cinematic techniques to live broadcast productions. All sessions and demos are open to all-access badge holders, but off-site bonus programs may require prior registration.
Don’t-miss sessions include Color Accuracy: From On Set to Post, featuring colorist Warren Eagles in conversation with AbelCine Camera Technology Specialist Geoff Smith, and Hybrid Broadcast in the Real World, a conversation moderated by AbelCine director of rental Gabriel Mays exploring use cases and projects that blend broadcast and cinema aesthetics and tools at top tech conferences. There’s also a chance to check out the latest HBO Camera Assessment Series, and much more.
NAB Show New York attendees are encouraged to “explore the synergies between live broadcast and cinema” through Cine+Live Lab.
September 6, 2023
Posted August 17, 2023
AI and the Future of the Creative Industries
“The Treachery of Images,” 1929, by Rene Magritte
TL;DR
Generative AI is experiencing an unprecedented growth trajectory, and is anticipated to expand from $40 billion in today’s marketplace to a staggering $1.3 trillion by 2032.
The rise of generative AI has sparked complex ethical and legal debates, including issues surrounding copyright, intellectual property rights, and data privacy.
With its potential to disrupt traditional creative work, AI’s role in the creative industries has become a critical point during the Hollywood strikes.
Scholars argue that while generative AI can mimic certain aspects of human creativity, it lacks the ability to produce genuinely novel and unique works.
In the future, generative AI could be embraced as an assistant to augment human creativity, be used to monopolize and commodify human creativity, or place a premium on “human-made” — or all three.
In the age of artificial intelligence, the concept of creativity has become a fascinating battleground. The Hollywood strikes have thrown the debate into sharp relief, pitting guild members against major studios seeking to disrupt a successful century-old business model by adopting Silicon Valley’s “move fast and break things” ethos.
Madeline Ashby at Wired deftly outlines the stakes for the striking writers and actors. “Cultural production’s current landscape, the one the Hollywood unions are bargaining for a piece of, was transformed forever 10 years ago when Netflix released House of Cards. Now, in 2023, those same unions are bracing for the potential impacts of generative AI,” she writes.
“As negotiations between Hollywood studios and SAG heated up in July, the use of AI in filmmaking became one of the most divisive issues. One SAG member told Deadline ‘actors see Black Mirror’s “Joan Is Awful” as a documentary of the future, with their likenesses sold off and used any way producers and studios want.’
From the “Joan is Awful” episode of “Black Mirror,” Cr: Netflix
“The Writers Guild of America is striking in hopes of receiving residuals based on views from Netflix and other streamers — just like they’d get if their broadcast or cable show lived on in syndication. In the meantime, they worry studios will replace them with the same chatbots that fanfic writers have caught reading up on their sex tropes.”
AI researcher Ahmed Elgammal, a professor at the Department of Computer Science at Rutgers University, where he leads the Art and Artificial Intelligence Laboratory, recently sat down with host Alex Hughes on the BBC Science Focus Instant Genius podcast to discuss the limits of AI against human creativity.
In the episode “AI’s fight to understand creativity,” which you can listen to in the audio player below, Elgammal explores the capabilities and limitations of AI in generating images, the ethical dilemmas surrounding copyright, and the profound distinction between AI-generated images and human-created art. This conversation sheds light on the complex relationship between machine learning and the uniquely human quality of creativity, setting the stage for the ongoing debate at the intersection of art and technology.
“The current generation of AI is limited to copying the work of humans. It must be controlled largely by people to create something useful. It’s a great tool but not something that can be creative itself,” the AI art pioneer tells Hughes.
“We must be conscious about what’s happening in the world and have an opinion to create real art. The AIs simply don’t have this.”
The Growth and Impact of Generative AI
Generative AI is experiencing an unprecedented growth trajectory, with Bloomberg Intelligence forecasting its expansion from $40 billion in today’s marketplace to a staggering $1.3 trillion by 2032.
Bloomberg’s latest report, “2023 Generative AI Growth,” demonstrates that this growth is not confined to mere numbers, but represents a fundamental shift in the way industries operate.
“The world is poised to see an explosion of growth in the generative AI sector over the next 10 years that promises to fundamentally change the way the technology sector operates,” Mandeep Singh, senior technology analyst at Bloomberg Intelligence and lead author of the report, emphasizes. “The technology is set to become an increasingly essential part of IT spending, ad spending, and cybersecurity as it develops.”
Generative AI is poised to expand its impact from less than 1% of total IT hardware, software services, ad spending and gaming market spending to 10% by 2032. Cr: Bloomberg Intelligence
Reflecting a broader trend in the technological landscape, Bloomberg predicts that generative AI will move revenue distribution away from infrastructure and towards software and services.
Voicebot.ai editor and publisher Bret Kinsella analyzes this shift on his Synthedia blog. “The report shows that 85% of the revenue in 2022 was related to computing infrastructure used to train and operate generative AI models,” he writes. “Another 10% is dedicated to running the models, also known as inference. Generative AI software and ‘Other’ services (mostly web services) accounted for just 5% of the total market.”
But that revenue distribution will change radically over the next decade, he notes. “In 2027, researchers estimate infrastructure-related revenue to decline from 95% of the market to just 56%. The figure will be about 49% in 2032.”
As a result, says Kinsella, “generative AI training infrastructure will become a $473 billion market while generative AI software [will] reach $280 billion, and the supporting services will surpass $380 billion. This may seem outlandish to forecast a software segment of a few hundred million dollars will transform into a few hundred billion in a decade. However, the impact of generative AI is so far-reaching it makes everyone rethink old assumptions.”
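Those headline figures imply an extraordinary compound growth rate. A quick back-of-the-envelope check, treating the forecast as roughly a decade of growth from the article's ~$40 billion starting point to $1.3 trillion by 2032:

```python
start_market = 40e9      # ~$40 billion today
end_market = 1.3e12      # ~$1.3 trillion projected by 2032
years = 10               # treating the forecast as roughly a decade of growth

cagr = (end_market / start_market) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # ~42% per year
```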
Specialized generative AI assistants and code generation software are emerging as powerful tools, allowing businesses to leverage AI in innovative ways. However, this growth also brings challenges. Kinsella highlights the need for caution, pointing out that the rapid adoption of AI technologies raises questions about accessibility, ethics, and regulation. Balancing innovation with responsible development will be a key consideration as generative AI continues to evolve, requiring new regulations and ethical considerations in areas such as copyright, data privacy, and algorithmic bias.
Creativity is a complex and multifaceted subject, inspiring fierce debate since long before the days of Dada artists such as Marcel Duchamp, who in 1917 displayed a sculpture comprising a porcelain urinal signed “R. Mutt.” Much like pornography, people find art — and creativity, the driving force behind all art — difficult to define. “I don’t know much about art, but I know what I like,” the popular saying goes.
Elgammal draws a clear line between AI-generated images and human-created art. “AI doesn’t generate art, AI generates images. Making an image doesn’t make you an artist; it’s the artist behind the scene that makes it art,” he tells Hughes. In other words, human creativity will always trump AI.
AI’s ability to generate realistic images is both impressive and concerning. While the technology has advanced significantly, now capable of delivering lifelike, photorealistic “AI clones,” it is not without flaws. Errors in AI-generated images, such as distorted fingers and hands, and the inability to produce something truly new are both significant limitations.
An article by Chloe Preece and Hafize Çelik in The Conversation explores AI’s inability to replicate human creativity, arguing that while AI can mimic certain aspects of creativity, it falls short in producing something genuinely novel and unique.
The key characteristic of what they call AI’s creative processes “is that the current computational creativity is systematic, not impulsive, as its human counterpart can often be. It is programmed to process information in a certain way to achieve particular results predictably, albeit in often unexpected ways.”
The duo cites Margaret Boden, a research professor of cognitive science at the University of Sussex in the UK, on the three types of creativity: combinational, exploratory, and transformational.
“Combinational creativity combines familiar ideas together. Exploratory creativity generates new ideas by exploring ‘structured conceptual spaces,’ that is, tweaking an accepted style of thinking by exploring its contents, boundaries and potential. Both of these types of creativity are not a million miles from generative AI’s algorithmic production of art; creating novel works in the same style as millions of others in the training data, a ‘synthetic creativity,’” they write.
“Transformational creativity, however, means generating ideas beyond existing structures and styles to create something entirely original; this is at the heart of current debates around AI in terms of fair use and copyright — very much uncharted legal waters, so we will have to wait and see what the courts decide.”
But the main flaw Preece and Çelik find with generative AI is its consumer-centric approach. “In fact, this is perhaps the most significant difference between artists and AI: while artists are self- and product-driven, AI is very much consumer-centric and market-driven — we only get the art we ask for, which is not perhaps, what we need.”
The Three Body Problem: Copyright, Intellectual Property and Ethics
The rapid evolution and widespread adoption of generative AI has also given rise to a complex web of ethical and legal challenges. As AI systems become more sophisticated in generating content that closely resembles human creativity, questions surrounding copyright, intellectual property rights, data privacy, and algorithmic bias have come to the forefront.
“This is far more than a philosophical debate about human versus machine intelligence,” technology writer Steve Lohr notes in The New York Times. “The role, and legal status, of AI in invention also have implications for the future path of innovation and global competitiveness.”
The Artificial Inventor Project, a group of intellectual property lawyers founded by Dr. Ryan Abbott, a professor at the University of Surrey School of Law in England, is pressing patent agencies, courts and policymakers to address these questions, Lohr reports. The project has filed pro bono test cases in the United States and more than a dozen other countries seeking legal protection for AI-generated inventions.
“This is about getting the incentives right for a new technological era,” Dr. Abbott, who is also a physician and teaches at the David Geffen School of Medicine at UCLA, told Lohr.
Elgammal spends a considerable amount of time delving into these complex issues. He identifies a three-way copyright problem that has emerged with the current generation of AI image tools. The stakeholders include the innovator, who might inadvertently violate the copyright of other artists; the original artist, whose work might be transformed or mixed without consent; and the AI developer, the company that develops and trains the AI system based on these images.
This is a new and significant problem, and one that current copyright laws are not equipped to handle. The situation is further complicated by the fact that the latest models are trained on billions of images, often without proper consent, leading to a messy situation where copyright infringement is difficult to track and enforce.
“The copyright problem comes with the current generation of imagery tools that are mainly trained on billions of images,” he says. “However, this wasn’t an issue a couple of years ago, when artists used to have to use AI through certain models that were trained using the artist’s own images.”
While training models on billions of images taken from the internet might not directly violate copyright law (since the generated images are transformative rather than derivative), it is still considered unethical, Elgammal insists.
AI’s role in misinformation is another major consideration, he says. “How can we control the data given to an AI?” he asks. “There are different opinions on everything from politics to religion, lifestyle and everything in between. We can’t censor the data it’s given to support certain voices.”
Elgammal also raises concerns about the environmental impact of AI, noting that just “training the API these models use takes a lot of energy.”
Running on Graphics Processing Units (GPUs), known for their high energy consumption, this training can last for days or weeks, iterating over billions of images. This phase forms the bulk of the energy footprint, reflecting a significant demand on power resources. The generation of images, even after training, continues to require substantial energy. Running the models on GPUs to create images adds to the energy consumption, making the entire process from training to generation a power-hungry endeavor.
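For a rough sense of scale, here is a back-of-the-envelope estimate with deliberately round, hypothetical numbers; real training runs vary enormously by model size, hardware, and data-center efficiency.

```python
# Hypothetical training run: the numbers are illustrative, not a measurement of any real model.
num_gpus = 256            # GPUs running in parallel
gpu_power_kw = 0.4        # ~400 W draw per GPU under load
days = 21                 # a multi-week training run
pue = 1.4                 # data-center overhead (cooling, networking, etc.)

energy_mwh = num_gpus * gpu_power_kw * 24 * days * pue / 1000
print(f"Estimated training energy: {energy_mwh:.0f} MWh")

# For comparison: a typical US household uses roughly 10-11 MWh of electricity per year.
print(f"Roughly {energy_mwh / 10.5:.0f} household-years of electricity")
```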
Lost in Translation
As generative AI tools become increasingly more sophisticated, the potential for collaboration between humans and artificial intelligence increases exponentially. Elgammal explains how platforms designed for artists to train AI on their own data can lead to new forms of artistic expression, where the machine becomes an extension of the artist’s creativity.
But the newest text-to-image tools, trained on billions of publicly available images, actually decrease the level of creativity possible with generative AI, he argues. “With text prompting, as a way to generate, I think we are losing the fact that AI has been giving us ideas out of the box, or out of [the] ordinary, because now AI is constrained by our language.”
One of the most interesting things about generative AI, says Elgammal, is its ability to visually render our world in novel ways. But text-to-image tools compel AI to look at the world “from the lens of our own language,” he explains. “So we added a constraint that limit[s] the AI’s ability to be imaginative or be engineering interesting concepts visually.”
Language is still useful in other contexts, however, “because if you are using AI to generate something, linguistic text or something very structured like music, that’s very important to have language in the process,” he says. “So we have a long way to go in terms of how AI can fit the creative process for different artists. And what we see now is just still early stages of what is possible.”
AI as Creative Assistant
Artificial intelligence, it turns out, might be best employed as a creative assistant.
Elgammal likens generative AI to a “digital slave” that can be used to help artists increase their creative output. “Fortunately, the AI is not conscious. So having a digital slave in that sense is totally fine,” he says. “There’s nothing unethical about it.”
He compares the relationship between artist and AI to a director and film crew, or Andy Warhol’s Factory, which had dozens of assistants in varying capacities that allowed Warhol to carry out his creative vision. “But an emerging artist doesn’t have this ability. So you can use AI to really help you create things at scale.”
An article in the Harvard Business Review by David De Cremer, Nicola Morini Bianzino, and Ben Falk explores this idea further in one of three different — but not necessarily mutually exclusive — possible futures they foresee with the use of generative AI.
In this scenario, AI augments human creativity, facilitating faster innovation and enabling rapid iteration, but the human element remains essential.
“Today, most businesses recognize the importance of adopting AI to promote the efficiency and performance of its human workforce,” they write, citing applications such as health care, inventory and logistics, and customer service.
“With the arrival of generative AI, we’re seeing experiments with augmentation in more creative work,” they continue. “Not quite two years ago, GitHub introduced GitHub Copilot, an AI ‘pair programmer’ that aids the human writing code. More recently, designers, filmmakers, and advertising execs have started using image generators such as DALL-E 2. These tools don’t require users to be very tech savvy. In fact, most of these applications are so easy to use that even children with elementary-level verbal skills can use them to create content right now. Pretty much everyone can make use of them.”
The value proposition this represents is enormous. “The ability to quickly retrieve, contextualize, and easily interpret knowledge may be the most powerful business application of large-language models,” the authors note. “A natural language interface combined with a powerful AI algorithm will help humans in coming up more quickly with a larger number of ideas and solutions that they subsequently can experiment with to eventually reveal more and better creative output.”
The Future of Generative AI
De Cremer, Bianzino and Falk outline two other possible scenarios for the future of generative AI: one where machines monopolize creativity, and another where “human-made” commands a premium. Again, these aren’t mutually exclusive; any or all of them could occur — and be occurring — at the same time.
What the writers call a nascent version of this first scenario could already be in play, they caution. “For example, recent lawsuits against prominent generative AI platforms allege copyright infringement on a massive scale.”
Making the issue even more fraught is the gap between technological progress and current intellectual property laws. “It’s quite possible that governments will spend decades fighting over how to balance incentives for technical innovation while retaining incentives for authentic human creation — a route that would be a terrific loss for human creativity.
“In this scenario, generative AI significantly changes the incentive structure for creators, and raises risks for businesses and society. If cheaply made generative AI undercuts authentic human content, there’s a real risk that innovation will slow down over time as humans make less and less new art and content.”
The resulting backlash could put even more of a premium on “human-made,” they argue. “One plausible effect of being inundated with synthetic creative outputs is that people will begin to value authentic creativity more again and may be willing to pay a premium for it.”
Businesses that find success using generative AI tools “will be the ones that also harness human-centric capabilities such as creativity, curiosity, and compassion,” according to MIT Sloan senior lecturer Paul McDonagh-Smith.
The essential challenge, he said during a recent webinar hosted by MIT Sloan Executive Education, lies in determining how humans and machines can collaborate most effectively, so that machines’ capabilities enhance and multiply human abilities, rather than diminish or divide them.
It’s up to humans to add the “creativity quotient” to use technologies like generative AI to their full potential. For organizations, this means creating processes, practices, and policies that empower people to be creative to maximize the power of transformative technologies.
“Boosting your creativity quotient will optimize the use of large language models and generative AI,” he said. “It will also put all of us in a much better place in terms of how we interface with AI and technology in general.”
Founders Fund principal Jon Luttig sees three distinct patterns of behavior in regards to the rapid rise of generative AI: hope, cope, and mope.
“With a sudden phase change driven by very few people, many technologists fear that the new world we’re entering will leave them behind,” he writes on Substack. “This fear creates hallucinations including hope (the LLM market deployment phase will unfold in a way that benefits me), cope (my market position isn’t so bad), and mope (the game is already over, and I lost).”
These hallucinations, as the venture capitalist dubs them, “propel hyperbolic narratives around foundation model FUD [Fear, Uncertainty and Doubt], open source outperformance, incumbent invincibility, investor infatuation, and doomer declarations. It’s hard to know what to believe and who to trust.”
Luttig seeks to dispel these myths, contending that there’s still plenty of room for AI startups to flourish alongside players like Microsoft and Google. The people who want to slow AI down, he argues, are just the copers and mopers who fear that they’re on the wrong end of the equation.
Tell that to the writers and actors out on the picket lines.
Aimed at empowering creatives, the AI Creative Summit will be held at NAB Show New York Oct. 24-25.
January 4, 2024
Posted August 14, 2023
Editor Joanna Naugle Says “Yes, Chef!” to Chaos (and Collaboration) for “The Bear”
Jeremy Allen White as Carmen “Carmy” Berzatto, Ebon Moss-Bachrach as Richard “Richie” Jerimovich. CR: Chuck Hodes/FX.
TL;DR
Joanna Naugle has edited 12 of the 18 episodes of The Bear’s two seasons. She shares her experience working on the project with creator Chris Storer, whom she calls post-production house Senior Post’s favorite collaborator.
Naugle used pacing and sound design to differentiate the mood at The Beef from the other restaurants where Carmy (and sous chef Sydney) had worked in the past, showing how those tools set the scene.
Naugle also discusses the importance of collaboration, the benefits of cutting while production is ongoing, and talks workflows in the age of LucidLink.
If The Bear reminds you of a classic, gritty Martin Scorsese film, you won’t be surprised that editor Joanna Naugle is a devotee and used multiple films as a reference for the project.
Naugle told No Film School she is one of those who “worship at the temple of Schoonmaker,” referring to Scorsese’s longtime editor, Thelma Schoonmaker. Showrunner Chris Storer wanted to emulate the vibe and pace of those films. Naugle was in, calling it “an honor” to try to imitate Schoonmaker’s style.
(She also cites Roderick Jaynes, the Coen brothers’ fictional editor alter ego, as another film role model.)
Naugle edited five episodes in the first season, and she is credited on seven episodes of The Bear’s sophomore season, so it’s no wonder her fingerprints and Schoonmaker’s influence are all over The Bear. (For context, SNL’s Adam Epstein cut seven of the total 18 episodes).
Storer (who Naugle describes as her favorite collaborator) runs a very interactive production, so she and the other editors got to see dailies very early in the process. Although they worked remotely for the majority of the shoot, Naugle says their process enabled them to catch opportunities for which they’d want different footage — and do so at a point when it was still workable.
“I always try to [be] a day behind the daily,” Naugle said to No Film School’s Gigi Hawkins. That means she’s able to tell Storer and executive producer Josh Senior, “If we just had this one shot, like, this would really make the scene or, like, we’re missing like a little bit of transitional material there.”
Naugle’s prime example of this benefit shines in season one, episode two:
Head chef Carmy Berzatto is working ungodly, lonely hours, scrubbing every inch of the kitchen with a toothbrush. The scene is set by watching all the employees clock out in rapid succession. But that punching out montage wasn’t in the original script. When Storer, Senior, and Naugle discussed the need to show how alone Carmy felt in the normally frenetic restaurant, Storer suggested the punch cards. And Naugle knew, “This is such a succinct, easy way of just saying, like, ‘Oh, everyone [else] is home,’” as she told Hawkins.
Because they were still on set, it was easy for the crew to shoot the necessary footage.
“The scene is just so much stronger for it,” Naugle says. “So that’s definitely a pro to having your editor working while you’re also shooting, and they can be reviewing what’s there or not.”
“THE BEAR” — “Beef” — Season 2, Episode 1 (Airs Thursday, June 22nd) Pictured: (l-r) Jeremy Allen White as Carmen “Carmy” Berzatto, Ayo Edebiri as Sydney Adamu. CR: Chuck Hodes/FX.
THE BEAR — “Bolognese” — Season 2, Episode 8 (Airs Thursday, June 22nd) Pictured: Jeremy Allen White as Carmen “Carmy” Berzatto. CR: Chuck Hodes/FX.
“THE BEAR” — “Beef” — Season 2, Episode 1 (Airs Thursday, June 22nd) Pictured: Ayo Edebiri as Sydney Adamu. CR: Chuck Hodes/FX.
“THE BEAR” — “Beef” — Season 2, Episode 1 (Airs Thursday, June 22nd) Pictured: (l-r) Ebon Moss-Bachrach as Ricard “Richie” Jerimovich, Jeremy Allen White as Carmen “Carmy” Berzatto. CR: Chuck Hodes/FX.
THE BEAR — “Bolognese” — Season 2, Episode 8 (Airs Thursday, June 22nd) Pictured: Liza Colón-Zayas as Tina. CR: Chuck Hodes/FX.
THE BEAR — “Forks” — Season 2, Episode 7 (Airs Thursday, June 22nd) Pictured: (l-r) Matty Matheson as Neil Fak, Jeremy Allen White as Carmen “Carmy” Berzatto. CR: Chuck Hodes/FX.
SETTING THE PACE
Much of The Bear is compressed and distilled like a fancy compote.
The infamous three-minute montage that begins the pilot? Naugle explains it started out as 10 minutes of scripted scenes, but when Storer first reviewed it, he said something to the effect of “‘This is great. Like it’s [a] 10 minute intro. Can we make it three minutes?’ And I was, like, ‘That’s an interesting and intense note. Let’s try it,’” as she recounted to Ray Richmond of Gold Derby.
What resulted was indeed intense and a bit, intentionally, off-putting. “We basically threw everything in there, all at once, to kind of just, like, establish ‘This is our show. You’re in or you’re out. It’s gonna be crazy.’”
And based on buzz (and the fact that it was greenlit for a second season), many of us decided to stick with Carmy and the crew of The Original Beef.
“Each new cut brings you further into the stressful kitchen at The Beef, or into Carmy’s frazzled mind. It’s the kind of editing you notice, but only because it’s so successful at drawing you further in,” Katey Rich writes in her summary of the Little Gold Men episode featuring Naugle.
Those of us who continued watching through episode two were rewarded with a glimpse into Carmy’s training. This scene, featuring Carmy working in New York under Joel McHale’s unnamed head chef, is another masterclass in demonstrating how editing can drive a story.
The Bear season 2, episode 9 “Omelette”. Pictured: Jeremy Allen White as Carmen “Carmy” Berzatto. CR: Chuck Hodes/FX.
They differentiated the restaurants, not in pacing, but in other subtle, significant ways to demonstrate that the NYC restaurant is like “a symphony and everybody is doing exactly what they need to.”
The scene in the New York restaurant exuded a seamless, smooth-like-butter quality. Naugle told Filmmaker U’s Gordon Burkell: “I basically added a couple of like digital zooms to a lot of those close ups to just make it feel like every shot was moving from left to right, left to right. And just adding that little bit of movement just made them all feel a lot more connected. And then when we got back to The Beef, a lot of the close ups are just like static, and they feel, like, disconnected. And they’re dirty, obviously.” Through some light touches in Premiere, they “were able to make everything feel a little bit more fluid and like it was one shot, even though there were a lot of cuts within it.”
While the New York kitchen might seem more functional than The Beef, the score makes it clear that the atmosphere is one where creativity is stifled and terror is just below the surface.
Also, Naugle turned to sound design, electing to make NYC much quieter than The Beef. The background music is dissonant and disquieting, in contrast to the practiced movements of the staff.
Back in Chicagoland, by contrast, not only do Carmy and Richie duke it out in a shouting match, but the score also features two competing songs, each representing a different actor, with the volume increasing when that character is speaking, to mimic “two people trying to like turn the radio dial to their station,” Naugle told Burkell.
Another scene from The Bear with a very distinct tone and pacing comes in episode four of season two, when (spoiler alert) Marcus travels to Copenhagen to train at Noma (a soon-to-be-dated reference).
“We really wanted to feel a departure from the pace that we’d set, you know,” she told Hawkins. “We linger a lot more in that episode. There’s a lot more quiet. There’s a lot more like silence. And that really was intentional to try to make it feel like, ‘OK, we’re leaving Chicago. We’re leaving this world. Marcus is having his own experience.’”
She notes this echoes the season one montage when Marcus meticulously crafts and decorates the donuts. Slow, colorful, almost meditative.
CREATING IN A COLLABORATIVE ENVIRONMENT
In the era of “prestige television,” it’s not uncommon for creatives to refer to episodes as 30-minute films. “I think Chris always wanted it to feel like it was a film in bite-sized segments or less, because they put so much time and energy and attention into all the little details,” Naugle told Hawkins.
And she is no different. Naugle’s regular involvement on The Bear meant she had “to think about it within the context of the whole season. So you’re editing a bunch of short films back-to-back that, hopefully, become an anthology,” she said.
The Bear, season 2, episode 10. Pictured: (l-r) Liza Colón-Zayas as Tina, Abby Elliot as Natalie “Sugar” Berzatto, Ayo Edebiri as Sydney Adamu. CR: Chuck Hodes/FX.
Another benefit of TV work, Naugle says, is the collaborative environment. Naugle enjoys “being able to just like, pass the baton back and forth and say, ‘OK, I got my first two episodes, now, I can’t wait to see what Adam [Epstein] did.’ And then I get to, you know, feel creatively inspired all over again.”
And that inspiration (and those breaks) are crucial for The Bear. After all, Naugle told Filmmaker U, “The pace is so crazy. And there’s flashbacks and montages, and makes you want to, you know, scream sometimes because it’s so intense. But that’s really, really fun.”
In practical terms, these hand-offs are facilitated by an all-Premiere workflow and LucidLink.
“We use Adobe products, which has worked super well because we have all the shared media on LucidLink and then are able to see who’s in which project,” Naugle said.
Naugle appreciates the flexibility of modern ways of working, but notes that there are times when an email or even a Zoom conversation just won’t cut it for collaboration, as far as she’s concerned. Being physically together, she says, is “so important, especially, I find, for that last push.”
“I think it’s our job as editors to help directors and writers — in a very supportive way — step away from what was intended on the page. They always say that the movie is shot three times, or written three times; the original version, when you shoot it, and then what comes through in the edit,” Naugle told We Got This Covered’s Scott Campbell. “So I always try to be the first most unbiased audience to the footage.”
She takes that responsibility seriously. “[O]nce you’re in the edit room, it’s just like, ‘OK, what’s working, what’s coming through, let’s lean into the strengths and not be afraid to discard the things that didn’t totally work.’ And I think it takes a really creative and trusting collaborator to do that,” Naugle said.
You can check out more of Naugle’s work in the indie film Molli and Max in the Future (which debuted this spring at SXSW) or review episodes of Ramy, Two Dope Queens, Some Good News, and other movies and shows edited by Senior Post.
The editing team on “Barbie” and writer-director Greta Gerwig talk about their process for combining humor, emotion and anarchy.
October 4, 2023
Posted August 2, 2023
Whether You’re a Photographer or Cinematographer or Both, Here’s How to “Think Like a Filmmaker”
From “The Wrangler,” a short film shot by Jeff Berlin
The transition from still photographer to cinematographer should be simple, right? Cinematography is just 24 still images per second.
That’s what Jeff Berlin thought when he began expanding from successful fashion and editorial photographer, shooting internationally recognized talent in beautiful locations throughout the world, to professional cinematographer.
But as he explains in this talk presented by B&H Photo, Berlin learned that the two jobs have quite a few differences, both in terms of the approach to the artistry and the specifics of the job description.
After presenting some of his work in fashion and celebrity portraiture, along with a backstory peppered with details about the six years he spent jetting off to plum assignments from his home bases in Paris and Milan, he talked about a number of the still photographers who provided him with references and inspiration in his work.
On the path to becoming any kind of visual artist, he says, “you develop and cultivate your sensibility, and you educate your eye.” In the world of stills, he developed a strong familiarity with such greats as Irving Penn, William Eggleston, Dorothea Lange and Richard Avedon.
While those artists’ work will likely always inform Berlin’s, as a cinematographer, he says, you’ve got to “find your references.” He speaks about classics of cinematography such as Days of Heaven from director Terrence Malick and shot by Nestor Almendros and The Danish Girl, directed by Tom Hooper with cinematography by Danny Cohen — “It’s just a really, really lovely film,” he enthuses — or Mike Nichols’ The Graduate, for which Robert Surtees served as cinematographer. These and a number of others, he says, “have become touchstones.”
Cinematographer Steven Bernstein (Monster, directed by Patty Jenkins), “has been my mentor through a lot of this journey,” Berlin says, noting that the DP was among the first to explain to him some of the differences between the two skills.
Berlin says he comes “from a world where you’re looking to shoot the most beautiful images,” but while that is sometimes what directors are looking for, it certainly isn’t always.
Now that he’s shot a number of different projects, he references a director’s treatment for a short film he shot: “‘I don’t like super sharp images. Ever.’ People talk so much about resolution and sharpness, but that isn’t necessarily what filmmakers want [in order] to tell their story.”
Berlin also touches on the different vocabulary in the two fields. “A tripod,” he says, “becomes ‘sticks.’ People don’t talk about the F-stop; they talk about the T-stop.” There are cinematography-specific composition terms such as “French overs” and “dirty overs.”
Not that it’s terribly challenging learning the new argot, but it can be interesting going from being a highly in-demand still photographer to having to learn some basic terminology to shoot motion pictures.
Of course, cameras are an indispensable part of either profession, and Berlin goes into some depth about his findings as a cinematographer seeking “the best camera for the mission.”
The very top tier of motion picture cameras is perfect for situations that can justify something costing many tens of thousands of dollars and requiring a crew of a certain size just to move it around and ensure its smooth operation. But there are some cameras costing only a few thousand dollars that can be perfect for shoots requiring a smaller crew and more modest footprint. Often, he points out, a number of feature and TV productions mix and match.
Berlin speaks in terms of the Sony gear he uses, but the underlying concepts can easily be transposed to equipment from other major manufacturers.
He speaks about the image quality, from sensor to encoding, of his Venice 2, which completely “kitted out” runs about 90 grand, and the FX series (3, 6 and 9), the cheapest of which costs about $4000.
Berlin also talks about dual base ISO, which many Sony cameras (as well as models from Panasonic and others) offer. It can allow for extreme low-light shooting and day-exterior cinematography without most of the quality sacrifices involved in the traditional approach to exposure index, which generally meant pumping the gain way up to enable really low-light shooting.
And he goes into the importance of ND (neutral density) filters as a way of controlling exposure in brightly lit conditions without having to change T-stop (and thus alter the depth of field) or ISO setting. (Sony cameras’ built-in ND filters do offer a convenience some competitors do not).
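The arithmetic behind ND filters is straightforward once you know that each 0.3 of optical density cuts the light by roughly one stop (a factor of two). A small sketch of the conversion, using the common density markings:

```python
import math

def nd_stops(optical_density):
    """Stops of light reduction for a neutral density filter of a given density.
    Density is log10 of the attenuation factor, and one stop is a factor of 2."""
    return optical_density / math.log10(2)   # 0.3 ND ~= 1 stop, 0.9 ND ~= 3 stops

def nd_factor(optical_density):
    """Linear light-reduction factor (e.g., 0.9 ND passes roughly 1/8 of the light)."""
    return 10 ** optical_density

for density in (0.3, 0.6, 0.9, 1.2, 1.8):
    print(f"ND {density}: {nd_stops(density):.1f} stops, 1/{nd_factor(density):.0f} of the light")
```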
In sum, he says, it’s comparatively easy to be a solo still photographer. “When you’re in a studio doing a campaign, you have your crew, you have your hair, your makeup, your stylist, your assistant…I would sometimes have three assistants, depending on the kind of job that I was doing.
“But filmmaking is always a team sport. The still photographer is the director, the DP and the gaffer. On a film set, everyone has their own role.”
Furthermore, “as a still photographer, you really want to have a style that identifies you, that individualizes you and gives … an identity to your work. A cinematographer really shouldn’t have a style; you are there to support the vision of the director.”
From “Barbie,” written and directed by Greta Gerwig. Cr: Warner Bros.
TL;DR
The editing team on “Barbie” and writer-director Greta Gerwig talk about throwing lots of ideas at the wall to find the right combination of humor, emotion and anarchy.
Some of the ideas were “completely abstract works of art that could be in the Tate Modern or in a ‘70s avant garde screening,” which is why some of it survived and some of it didn’t.
“The whole fun of this job is trying crazy ideas,” says editor Nick Houy. “It might be terrible and you’ll do six things and one of them will be great.”
“Barbie” Timeline Tour: Breaking down the post and VFX workflows with first assistant editor Nick Ramirez and VFX Editor Matt Garner
The art of the edit is about selecting which material to keep out as much as what to retain, and so it proved with the year’s runaway hit, Barbie.
In charge of the cutting room was Nick Houy, ACE, making Barbie his third film with actor-writer-director Greta Gerwig, after Lady Bird and Little Women.
“I felt blessed every day to have such a magical group of editors figuring out this movie. It was a very sweet group, very diligent and talented, but I think the thing I remember most about it is that everyone had such heart,” Gerwig said in an interview with CineMontage’s Kristin Marguerite Doidge.
The editors spent 14 months cutting the project, a reflection of the film’s complexity.
Doidge writes, “That tireless work paid off when it came to achieving the right tone and pacing to keep the story moving during both the poignant moments and the hilarious ones.”
The normal challenge of rhythm and pacing becomes even more acute with a story that is as intentionally arch and anarchic as the one written by Gerwig and Noah Baumbach. Since every joke had to count and had to work while the film is moving at speed, it was important for Houy to stress-test them over and over again.
“Barbie was so much more a comedy than Lady Bird and Little Women,” Houy told IndieWire’s Sarah Shachat. “So we were just, like, ‘Let’s put it in front of people and see how they react.’ Everyone’s different and every screening’s different and we’ve definitely learned, over the years, that you really have to let things have their fair chance and then act accordingly. Once you know it’s dead, you have got to get it out of there.”
Houy also spoke with Matt Feury at The Rough Cut, where he again picked up the idea. “The whole fun of this job is trying crazy ideas. It might be terrible and you’ll do six things and one of them will be great,” he said.
The editor relates one experiment where Kate McKinnon, who plays Weird Barbie, is looking down at Barbie, who is lying on the ground.
“She’s like, ‘Hey, how’s it going Barbie.’ And then we flash to like a Weird Barbie with makeup all over her face and this like horror music sting. You know, it’s such a weird idea. But it was so great. And that ended up in the movie.”
From “Barbie,” written and directed by Greta Gerwig. Cr: Warner Bros.
The beach scene near the beginning of the movie where the dialogue is basically lots of “Hi Barbie” apparently went through more than 50 iterations.
“Some of them are completely abstract works of art that were worked on by multiple people with multiple different ideas like literally things that could be in the Tate Modern or they could be in a 1970s Avant Garde screening. We went there with everything. And so that’s why some of it survived and some of it didn’t. But it was all kind of amazing.”
Houy agreed that the main challenge of editing Barbie was providing clarity over the course of a number of turns across the film — some of which hinge on expressive internal realizations of Barbie confronting the reality of women in the real world.
“There’s always some person that has an issue with these structures. Getting it down to that one person instead of half the audience was a big challenge,” Houy told IndieWire. “But it’s worth it. We get excited by that. We’re always talking about Charlie Kaufman movies and trying to do [things like] that in a way that feels like our own voice.”
There are roughly 1,500 VFX shots in the film, which added wrinkles to the post workflow, VFX editor Matt Garner explained to Feury.
“We had to basically turn over everything once early on so that the executives could see it without blue screen in it. And then we had to redo all the work again. So tracking and managing that with all the various vendors we had was quite an undertaking, the most I’ve ever had to deal with.”
“All of those were done at a movie theater in London while they were shooting,” says Houy. “So it was like every Sunday, they would go do that. Our whole crew was in New York, but we watched them all. And those are all things that we talked about early on. I would often just sit and watch a scene of The Godfather, and be like, ‘They’re not cutting at all… we really should do that.’
“The tone of things like Singin’ in the Rain was very helpful to understand this crazy dream dance sequence.”
Writer-director Greta Gerwig behind the scenes of “Barbie.” Cr: Warner Bros. Pictures
“Barbie” writer and director Greta Gerwig. Cr: Warner Bros.
Behind the scenes of “Barbie.” Cr: Warner Bros. Pictures
The non-stop jokes and surrealism of much of the movie give way in a couple of places to contemplative pauses that are in many ways the film’s emotional core.
The final montage, for example, began life as a script note along the lines of “a Terrence Malick-esque sequence occurs” and went through various iterations in the edit before the filmmakers agreed to try selects of home movies from the people who worked on the film.
Houy told Feury, “We just tried a bunch of stuff. We tried stock footage and never did [find anything] that ever quite worked. And so we started using old Super 8 footage and our own footage. It was a constant evolution. In that sense it was like a film school where we’re all just putting together little pieces of footage and trying things out.
“And where we landed was ultimately the right place where it’s just women. It’s telling the story of becoming human and becoming a woman. And that was what we needed to tell at that moment.”
Houy told CineMontage, “We ended up actually using personal videos of everyone who worked on the movie. … And whenever I see it now, and I see all the people that worked on the movie and their families, and my own family, it just hits so hard.”
“Even though we don’t have a sign up that says, ‘This is footage [of] the people who made the film,’” Gerwig adds to IndieWire. “I think in some unconscious way, it’s a reminder that films are only ever made by people. And these were the people that made this one.”
July 31, 2023
ChatGPT Tells All: “Predicting”/Prompting the Future of Media
TL;DR
From immersive virtual reality experiences to the integration of artificial intelligence into our lives, the future of media promises to push the boundaries of human creativity.
The once-futuristic notion of holographic displays has become an everyday reality, making traditional screens relics of the past.
Gone are the days of one-size-fits-all content. In 2050, advanced algorithmic systems will drive media personalization to unparalleled heights.
From holographic broadcasts to neural storytelling and even interplanetary communications, the media landscape of 2050 will be an immersive, algorithmically customized, and boundary-pushing experience.
At least according to AI.
Fahri Karakas, associate professor of Business & Leadership at the University of East Anglia in the UK, had the (excellent) idea to prompt ChatGPT 4 to make predictions about the future of media, and the ideas the machine came up with are mind-blowing insofar as they do not seem at all like science fiction.
Responding to a prompt by Karakas for “Media trends of 2050,” the AI asks us to imagine watching TV or a live sports event with holographic images projected right into our living room. Thanks to advancements in neural networking and AI, video content will be generated in real time by analyzing the preferences and emotions of individual viewers.
That’s not far-fetched, and neither is the idea of interplanetary media, given the ascent of several commercial space initiatives and the planned missions to the Moon and Mars. In 25 years, “interplanetary communication networks will enable real-time news, entertainment, and cultural exchanges between different colonies and settlements across our solar system,” the AI predicts.
In 2050, synthetic media stars will take center stage, says ChatGPT 4. AI-generated characters with unique personalities and appearances will become cultural icons, captivating audiences in movies, music, and even influencing fashion trends.
Media platforms will implement advanced AI algorithms that understand our preferences, values, and emotions. These algorithms will curate content across different mediums (articles, videos, podcasts) specifically tailored to our tastes, saving hours of scrolling and searching. Genetic tests will reveal our predispositions towards certain genres, styles, or creators, resulting in highly curated content recommendations and personalized media experiences for each individual.
Our clothing will incorporate media capabilities, “allowing users to display digital content, share messages, and interact with others through their garments.”
ChatGPT 4 invites us to imagine being able to change the design of our clothes with a few taps on your wrist and conveying emotions through animated patterns.
Individuals will have the option to “micro-dose media,” consuming bite-sized content experiences designed to boost mood, enhance focus, or provide relaxation. These personalized micro-experiences will be carefully crafted, offering a tailored media diet that suits individual needs and desires.
And of course, in the future, social media experiences will extend beyond the screen. Users will be able to physically immerse themselves in virtual reality environments, attending parties, concerts, and interacting with friends from around the world, blurring the lines between physical and digital reality.
Karakas also asked the AI to imagine what the media ecosystem looks like in 2050. To no surprise, the machine reckons that the traditional media industry has undergone a profound transformation.
“Traditional television networks and print publications have largely become relics of the past. With the ubiquitous adoption of AR/VR technologies, media consumption has transitioned into an immersive and personalized experience. Users can now create their own tailored media environments, blurring the lines between reality and fiction, and leaving behind the one-size-fits-all approach that defined earlier iterations of media consumption.”
Rather than relying on traditional screens, individuals now access media through smart contact lenses or eyewear that overlays digital content onto their physical environment.
In 2050, there has been a seismic shift from passive consumption to active participation in media creation. User-generated content (UGC) has become the “lifeblood” of the media ecosystem, with individuals sharing their stories, opinions, and experiences.
Social media platforms have evolved into immersive multi-sensory spaces, allowing users to curate their media channels and generate content through neural interfaces that directly translate thoughts into digital form.
If you believe the AI, this “democratization of media production” has transformed the dynamics between creators and consumers, fostering a new era of collaboration and shared narratives. Users will become active participants, exploring dynamic environments, and shaping the outcome of the story through their choices and actions.
Much like in Minority Report, augmented reality advertising will blend with our surroundings. AR glasses or contact lenses will overlay digital content onto our physical world, providing personalized ads tailored to our preferences and location as we go about our daily lives.
Journalism and news media will also see a transformative shift. This includes ubiquitous AI news anchors and nanobots “capable of infiltrating high-risk situations, capturing visual data, and transmitting information in real-time.”
According to ChatGPT 4, this technology will provide unparalleled reporting from conflict zones, natural disasters, and other dangerous environments.
By 2050, news will not only be delivered through traditional written articles or broadcast segments, but also through immersive virtual reality experiences. People will be able to “step into the news,” witness events firsthand, and interact with virtual objects.
Journalists and anyone else won’t need to use a keyboard anymore, either. Instead of typing or even speaking, expect people to be able to communicate directly through thoughts. That’s because brain-computer interfaces will become the norm, “allowing us to transmit ideas, emotions, and even memories to others. This technology will revolutionize storytelling, as authors can share their stories directly from their minds to the readers.”
July 11, 2023
It’ll Be a New (New) Media Experience in the MSG Sphere
TL;DR
The Sphere in Las Vegas is an experiential medium featuring an LED display, sound system and 4D technologies that require a completely new approach to filmmaking.
U2 is headlining the Sphere’s opening nights and Darren Aronofsky has made the first film in the patented Big Sky format.
Will the custom nature of the technology prove more of a straitjacket than a freedom for creatives?
More than just another giant screen or an upgrade to 4D cinema, the latest Las Vegas attraction is being touted as a new experiential entertainment format.
“We are redefining the future of entertainment through Sphere,” MSG Entertainment executive chairman and CEO James L. Dolan says. “Sphere provides a new medium for directors, artists, and brands to create multi-sensory storytelling experiences that cannot be seen or told anywhere else.”
“This will be a quantum leap forward in the sense of what a concert can be,” U2’s The Edge told Andy Greene at Rolling Stone. “Because the screen is so high-res and so immersive, we can actually change your perception of the shape of the venue. It’s a new genre of immersive experience, and a new art form.”
U2 will be the opening act for the Sphere on September 29, the first of a largely sold out 25-date residency running through the end of the year.
The developers of the 366-foot-tall, 516-foot-wide dome are aiming to reinvent every aspect of the live event experience. The venue is the culmination of seven years of work, with a budget that reportedly stretched to $2.3 billion.
Virtual reality without the goggles was the elevator pitch, MSG Ventures CEO David Dibble recalls to Rolling Stone.
“We thought, ‘Wouldn’t it be great to have VR experiences without those damn goggles?’ That’s what the Sphere is,” says Dibble.
It had to have the world’s highest resolution screen, and so it does at 16K by 16K. There is no commercial camera capable of recording at that resolution without having to stitch together images from a camera array. So MSG Entertainment built its own camera system and a whole postproduction workflow, which together comprise a system it calls Big Sky.
The Big Sky single-lens camera boasts a 316-megapixel sensor capable of a 40x resolution increase over 4K cameras. It can capture content at up to 120 frames per second in the 18K square format, and at even higher frame rates at lower resolutions.
The team designed a custom media recorder to capture all that data, including uncompressed RAW footage at 30 gigabytes per second, with each 32-terabyte media magazine holding approximately 17 minutes of footage.
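Those figures hang together: a quick back-of-the-envelope check (a minimal Python sketch using only the numbers quoted above and decimal storage units) shows how a 32-terabyte magazine works out to roughly 17 minutes of recording at 30 GB/s.

```python
# Back-of-the-envelope check of the Big Sky recorder figures quoted above.
# Assumes decimal units (1 TB = 1,000 GB), as storage vendors typically use.

DATA_RATE_GB_PER_S = 30       # uncompressed RAW capture rate, GB/s
MAGAZINE_CAPACITY_TB = 32     # capacity of one media magazine, TB

capacity_gb = MAGAZINE_CAPACITY_TB * 1_000
recording_seconds = capacity_gb / DATA_RATE_GB_PER_S
print(f"~{recording_seconds / 60:.1f} minutes per magazine")  # prints ~17.8 minutes
```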
According to David Crewe at Petapixel, who saw the tech first hand, “since the entire system was built in-house, the team at Sphere Studios had to build their own image processing software specifically for Big Sky that utilizes GPU-accelerated RAW processing to make the workflows of capturing and delivering the content to the Sphere screen practical and efficient. Through the use of proxy editing, a standard laptop can be used, connected to the custom media decks to view and edit the footage with practically zero lag.”
Big Dome content demo. Cr: Sphere Entertainment
Specialist lenses have been built, too, including one with a 150-degree field of view, which matches the view of the sphere where the content will be projected, and another with a 165-degree field of view designed for “overshoot and stabilization,” which is particularly useful when the camera is in rapid motion or mounted in a helicopter.
The 164,000-speaker audio system can isolate specific sounds, or even limit them to certain parts of the audience. It was designed by German start-up Holoplot following MSG’s investment in the company.
According to Rolling Stone, the patented audio technology they created allows them to beam waves of sound wherever they want within the venue in stunningly precise fashion. This would allow, for example, one section of an audience to hear a movie in Spanish, and another side to hear it in English, without any bleed-through whatsoever, almost like fans are wearing headphones. “It can also isolate instruments,” says Dibble. “You can have acoustics in one area, and percussion in another.”
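Holoplot’s specific technology is proprietary, but the general principle such systems build on, steering sound by feeding each speaker a slightly delayed copy of the same signal so the wavefronts reinforce toward one zone and largely cancel elsewhere, can be illustrated with a minimal delay-and-sum sketch. Everything below (array size, spacing, sample rate, steering math) is an illustrative assumption, not the Sphere’s actual configuration.

```python
import numpy as np

# Illustrative delay-and-sum transmit beamforming: a simplified sketch of the
# general idea behind audio beam steering, NOT Holoplot's proprietary system.
# A line of speakers plays the same signal with per-speaker delays so the
# wavefronts add constructively toward one angle and partially cancel elsewhere.

SPEED_OF_SOUND = 343.0   # m/s in air (assumed)
SAMPLE_RATE = 48_000     # Hz (assumed)
SPACING = 0.1            # speaker spacing in meters (assumed)
NUM_SPEAKERS = 16        # assumed; the Sphere uses vastly more drivers

def steering_delays(angle_deg: float) -> np.ndarray:
    """Per-speaker delays (seconds) that steer the beam toward angle_deg off-axis."""
    positions = np.arange(NUM_SPEAKERS) * SPACING
    delays = positions * np.sin(np.radians(angle_deg)) / SPEED_OF_SOUND
    return delays - delays.min()  # keep all delays non-negative

def speaker_feeds(signal: np.ndarray, angle_deg: float) -> np.ndarray:
    """Return one delayed copy of `signal` per speaker (rows = speakers)."""
    delays = steering_delays(angle_deg)
    feeds = np.zeros((NUM_SPEAKERS, signal.size))
    for i, delay in enumerate(delays):
        shift = int(round(delay * SAMPLE_RATE))
        feeds[i, shift:] = signal[: signal.size - shift]
    return feeds

# Example: steer a 1 kHz tone 30 degrees to one side of the array.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE          # one second of samples
tone = np.sin(2 * np.pi * 1_000 * t)
feeds = speaker_feeds(tone, angle_deg=30.0)
```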
Render of the Sphere at The Venetian skyline. Cr: Sphere Entertainment
The Sphere skyline. Cr: Sphere Entertainment
The Sphere at sunrise. Cr: Sphere Entertainment
The venue can seat 17,600 people, and 10,000 of them will be in specially designed chairs with built-in haptics and variable amplitudes: Each seat is essentially a low-frequency speaker. There’s also the option to shoot cold air, hot air, wind, and even aromas into the faces of fans.
“There’s a noise-dampening system that we used in the nozzles of our air-delivery system that NASA found really interesting,” Dibble tells Rolling Stone. “They were like, ‘Do you mind if we adapted that for the space program?’ We went, ‘No, knock yourself out.’”
Render of an underwater sequence at the Sphere. Cr: Sphere Entertainment
Director Darren Aronofsky (The Fountain, The Whale) was commissioned to shoot Postcard From Earth, the first piece of cinematic content for the Sphere, with the Big Sky camera wielded by Oscar-nominated cinematographer Matthew Libatique.
“At its best, cinema is an immersive medium that transports the audience out of their regular life,” Aronofsky told Carolyn Giardina at The Hollywood Reporter. “The Sphere is an attempt to dial up that immersion.”
He added, “Like anything, there are some things that Sphere works particularly well with and others that present new problems to solve. As different artists play with it, I’m sure they’ll find innovative ways to use it and affect audiences in different ways.”
He adds, “We just recently figured out how to shoot with macro lenses and we filmed a praying mantis resting on a branch. Imagine what that may feel like when we present it 20 stories high.”
The venue could house events like mixed martial arts bouts and will also be a centerpiece of the Formula One Grand Prix in October. MSG has announced plans to build similar venues in London and elsewhere.
It is too early to say but perhaps the highly bespoke nature of the venue and the workflow required to produce “experiences” for it may work against it. Will the technology prove more restrictive than flexible?
The Edge made this comment to Rolling Stone: “Unfortunately, because of the amount of time and expense in creating some of these set pieces visually, it’s quite hard to be as quick on our feet and spontaneous as we might have been on other tours.
“But we still are determined that there will be sections of the show that will be open to spontaneity.”
“Invisible Worlds:” Inside the Immersive Museum Experience
The “Invisible Worlds” exhibit at the Richard Gilder Center for Science, Education, and Innovation.
TL;DR
The American Museum of Natural History in New York City recently launched the Richard Gilder Center for Science, Education, and Innovation, a new 230,000-square-foot expansion.
The $465 million project was designed by Chicago-based architecture firm Studio Gang and took nine years to complete.
One of the key features of the center is “Invisible Worlds,” an immersive exhibit that invites visitors to explore unseen networks of life from the microscopic to the cosmic.
Designed by Berlin-based Tamschick Media+Space, the exhibit is housed inside a vast virtual production volume nicknamed “the bowl,” featuring 23-foot-high walls that surround visitors with projections at all scales.
The Richard Gilder Center for Science, Education, and Innovation, a recent addition to the American Museum of Natural History in New York City, stands as a testament to the fusion of modern architecture and innovative design. The 230,000-square-foot expansion was meticulously constructed to bridge the gap along the museum’s western edge, introducing 30 new access points to the institution’s sprawling 20-building complex. The result is a harmonious blend of old and new, extending the museum’s central axis westward from the iconic Roosevelt Rotunda and aligning the fresh facade with the bustling 79th Street.
The “Invisible Worlds” exhibit at the Richard Gilder Center for Science, Education, and Innovation. Cr: Iwan Baan
James Panero, writing at The New Criterion, explains the seemingly impossible problem represented by the museum’s patchwork construction, describing warrens seemingly built with no rhyme or reason. “Anyone who has walked through the American Museum of Natural History might have sensed something was wrong,” he says. “Just go through its Hall of Gems and Minerals, or its Hall of South American Peoples, or its Hall of Pacific Peoples. At the end of each of these long rooms, which were only reached through other long rooms, you found nothing less than a dead end.”
But the reason for the museum’s confusing layout, Panero reveals, was — in a way — by design. “The master plan of this museum, founded in 1869 and first envisioned by Calvert Vaux and Jacob Wrey Mould in 1872, has never been fully realized,” he notes. “Much like the Metropolitan Museum of Art, the Brooklyn Museum, and other grand 19th century American edifices, New York’s natural-history museum was laid out on a massive cross-in-square plan, which has only been partially built out over time.”
Amid unrealized plans filled with dead ends, the museum’s evolution has only compounded these issues, Panero continues. “What this means is that, over a century and a half after its founding, the street-facing facades and infill architecture of this museum have been created in a progression of styles that have reflected, for better and worse, the ideals of their times.”
In other words, the architecture was a hot mess, and the new Gilder Center was specifically designed to undo a multitude of practical and aesthetic sins. But don’t call it a makeover.
Unveiled to the public on May 4, 2023, the center embodies the transformative potential of immersive technology in education. The center owes its existence to the patronage of the late financier Richard Gilder, a philanthropist renowned for his support of numerous New York-based institutions. Its mission is to broaden the museum’s educational horizons, offering additional classroom spaces and a platform to display a greater portion of the museum’s vast collection of scientific specimens and artifacts. The center represents a significant leap in the museum’s evolution, inviting visitors to engage in a dynamic, interactive experience that transcends the confines of traditional museum exhibitions.
Fast Company’s Nate Berg explores the $465 million project, which was designed by the Chicago-based architecture firm Studio Gang, and stretched over nine years to reach completion.
The center “has a rib cage of a facade and a bird bone of an interior,” Berg writes. “The atrium’s walls bend and move to become staircases and bridges, and open up to form entrances to exhibition halls or views through and out of the buildings. It resembles the hollow-but-buttressed inside of birds’ bones sliced open and viewed through a microscope, strong enough to provide structure but light enough to allow them to take off.”
Inside the expansive center, “there are exhibitions on zoology, paleontology, geology, anthropology, and archaeology, as well as a butterfly vivarium and a decidedly of-the-moment immersive experience looking at networks of life in nature at scales both visible and invisible. These spaces all radiate out from a connecting central atrium that is the Gilder Center’s showpiece, which is a swooping, Swiss cheese monolith of raw concrete.”
The atrium itself is a soaring, four-story civic space that serves as a new gateway into the museum from Columbus Avenue.
That “of-the-moment” immersive experience Berg references is Invisible Worlds, both a key feature of the center and a game-changer in the world of immersive exhibitions. It invites visitors to explore the unseen networks of life, from the microscopic to the cosmic, in an unprecedented interactive journey.
The extraordinary 360-degree immersive science-and-art experience offers a breathtakingly beautiful and imaginative yet scientifically rigorous window into the networks of life at all scales. It was designed by the Berlin-based Tamschick Media+Space (TMS) in conjunction with the Seville-based Boris Micka Associates, who worked closely with data visualization specialists and scientists from the museum and around the world to bring the exhibit to life.
Building on the museum’s iconic habitat dioramas and Hayden Planetarium Space Shows, the Invisible Worlds exhibit takes place inside what looks like an enormous virtual production volume, nicknamed “the bowl.” The space is as large as a hockey rink, featuring 23-foot-high walls that surround visitors with projections at all scales and mirrors at ceiling height suggesting infinity.
A looping 12-minute immersive experience reveals that all life on Earth is interconnected: from the building blocks of DNA to ecological interdependencies in forests, oceans, and cities to communication made possible by trillions of connections within the human brain.
Accompanied by responsive audio, the imagery ranges from LiDAR scans of New York City and airplane-traffic data to 3D renderings of a dragonfly’s nervous system, models of fish schooling behavior, and a map of the human brain. At key moments, visitors become part of the story as their own movements affect the images of living networks depicted all around.
The “Invisible Worlds” exhibit at the Richard Gilder Center for Science, Education, and Innovation. Cr: Iwan Baan
The “Invisible Worlds” exhibit at the Richard Gilder Center for Science, Education, and Innovation. Cr: Alvaro Keding
Alex Pasternack at Fast Company takes a deep dive into the exhibit, asking us to “imagine an immersive Planet Earth, or Powers of Ten, built with powerful GPUs, 3D software, and a constellation of speakers and laser projectors that respond to you.”
Moving inside the bowl, visitors see a fungal network branching across the walls and floor. “As you get your bearings, your sense of perspective and scale, you step on one route and water glides along it,” Pasternack writes. “Then you’re floating up to the rainforest canopy, with lemurs and snakes and a giant hummingbird and a leaf in its microscopic glory, and flocks of birds that fly around you on their way north. Next, flying from space down to Manhattan, you see humans’ own digital signals, not unlike those fungal networks, or elsewhere, the webs of neurons that also signal when you step on them. Later, deep in San Diego Bay, a giant humpback, blue in a veil of glowing plankton, emerges from the darkness with a low wail, dwarfing you and everyone else.”
Marc Tamschick, founder and creative lead of TMS, explains to Pasternack how the goal was to use real-world data without pummelling visitors over the head with mountains of tech and science. “If you dig deeper, you’ll find relationships to all the galleries in the museum. And I think people will add their own memories to what they see,” he said, “and then it kind of starts to open the heart.”
Vivian Trakinski, director of science visualization at the American Museum of Natural History, told Pasternack that during the development process she was afraid the experience would be “completely overwhelming,” as she put it. “But when we came in for the first time, it was peaceful, it was contemplative.”
Pasternack calls Invisible Worlds “a feat of modern science communication.” Comparing it to a recent real-world trip to observe humpback whales off the coast of Southern California, he says, “It’s hard to describe the vertiginous feeling of glimpsing these majestic animals even at a distance; now I was underwater with one of them, and I wanted to stay there.”
Tim Deakin at MuseumNext asked Tamschick how advancements in technology helped shape the Invisible Worlds exhibit over the nearly seven years it took to complete the project. “We always have to start with the idea. Only when we know what we want to share with the audience do we look at technologies to carry that message. I think that projects should never be driven by the hardware; instead it should always be driven by the content,” Tamschick said.
“That gives us greater ability to choose how we want to progress the project. In saying that, we are also fortunate in our line of work to collaborate with experts who can see what technologies are coming in the future. In this instance, that helped us to understand that what we wanted to visualize was only going to become more precise and more high quality as the project developed.
“We also planned the project in such a way that, if the technology improves significantly in the next five years, the museum could simply upgrade the hardware but deliver the same content.”
For Tamschick, the aim of Invisible Worlds is to create an environment visitors can explore in an authentic way, in contrast with immersive installations that serve purely as contemplative spectacles or learning opportunities. “We hope that people will connect with topics they didn’t know they were interested in before,” he says. “People may never have considered what happens when you shrink to the size of a molecule or sink to the depths of the ocean.”
“Kagami” is a mixed-reality presentation showcasing a performance by the late composer Ryuichi Sakamoto, created by Tin Drum and directed by Todd Eckert.
The Japanese composer, who won an Academy Award for “The Last Emperor,” died earlier this year, but has been resurrected as a hologram via mixed reality goggles for limited theatrical runs.
Wired writer Elissaveta Brandon judged the result “an experience that feels material and ethereal at once.”
The latest art-meets-tech experience presents the late Ryuichi Sakamoto in a digital concert.
The Japanese composer, who won an Academy Award for The Last Emperor, died in March at the age of 71, but he has been resurrected as a hologram for a special theatrical experience viewable in situ at New York’s The Shed using mixed reality goggles.
Called Kagami, which translates to “mirror” in Japanese, the production was a five-year work in progress with Sakamoto and is directed by Todd Eckert, a former content executive at Magic Leap.
A digital rendering of the in-headset experience of “Kagami,” featuring Academy Award-winning composer Ryuichi Sakamoto. Cr: Tin Drum
Following its debut at The Shed, the show went on tour with scheduled stops at the Big Ears Festival in Knoxville, Tennessee and the Manchester International Festival in the UK, continuing in 2024 to the Sydney Opera House.
Sakamoto’s performance of 10 solo piano pieces was recorded in a volumetric capture system, but the artist died before the project was completed at mixed-reality studio Tin Drum.
According to Wired, it took about six months for Eckert’s team to process the raw data captured in the session. In the meantime, they also had to sculpt Sakamoto’s hair from scratch and recreate his iconic glasses. But the most challenging thing to recreate was Sakamoto’s face, which was blocked from the cameras for a large part of his performance because he was hunched over. The team had to make up for the missing data by reconstructing his face using reference segments of complete data.
What of the 45-minute show itself? The show begins with 80 people sitting in a circle around absolutely nothing. After each concertgoer has slid on a Magic Leap 2 headset, a virtual Sakamoto appears in the center of the circle. The musician then performs while guests can move “around” him.
“Kagami” by Ryuichi Sakamoto and Tin Drum at The Shed in New York City. Cr: Ryan Muir Photography/The Shed
Forbes writer Charlie Fink attended and reports that when Sakamoto’s hologram ends a piece, there is silence. “One person claps before they realize this is not a live performance.”
In one piece, a tree grows out of the piano and its roots reach far below the floor, Fink describes. It dissolves into stars, the Milky Way, and soon we find ourselves standing above the Earth as seen from space. In another composition we’re surrounded by iconic New York images: skylines, bridges, unexpected terracotta lions. Other times, Eckert shows the viewer black-and-white warplanes flying in formation and other iconic World War imagery.
Fink says he shed a tear, and Wired’s Elissaveta Brandon judged the result “an experience that feels material and ethereal at once.”
Brandon found the headset’s 70-degree field of view limiting and also tricky to get used to, “considering you’re walking around in a room full of strangers wearing the equivalent of dark sunglasses.”
She ponders whether a mixed-reality concert can ever make up for the sad reality of someone’s absence. What do we lose when technology becomes so integral to the most intimate of human experiences — and what do we gain?
A look at the collaborative process: Ryuichi Sakamoto, director Todd Eckert, and the Rhizomatiks Tokyo capture team on the final day of dimensional photography. Cr: Tin Drum
“Replicating the unique energy and atmosphere that stems from the collective presence of performers and the audience being in the venue together is another challenge,” she writes.
“This is only partly addressed in Kagami, which may be designed for a small crowd to take in together, but remains a mostly solitary experience.”
“Asteroid City,” Wes Anderson’s new film, is set in a Southwestern desert and involves a group of scientists, military personnel and students who encounter an alien at a meteor crater.
Cinematographer Robert Yeoman faced challenges shooting under bright, direct sunlight in the desert, maintaining consistent light in day exteriors, and working with a minimal crew and no artificial lighting.
Yeoman used techniques from early filmmakers, such as shooting outside with some bounce to compress the contrast range and using sunlight for interiors.
Anderson’s preference for single camera setups and choreographed camera movements contribute to his distinct filmmaking style.
Despite Anderson’s films being meticulously planned, Yeoman emphasizes the importance of flexibility and spontaneity on set, pushing himself to create visually interesting shots.
Pictured above: Jason Schwartzman as “Augie Steenbeck” and Scarlett Johansson as “Midge Campbell” in Wes Anderson’s “Asteroid City,” a Focus Features release. Credit: Courtesy of Pop. 87 Productions/Focus Features
Wes Anderson’s new film Asteroid City is set in a Southwestern desert in 1955, but really in a place that could only exist in Anderson’s imagination.
The story, involving a number of scientists, military personnel, and “Junior Stargazer” science students, finds the group gathered at a giant meteor crater for a ceremony honoring the children’s inventions, only to find everything thrown into chaos when an actual alien arrives.
Shot in the flat deserts of Chinchón, Spain — a small town not far from Madrid — cinematographer Robert Yeoman, ASC had to face a lot of things that people in his field generally try to avoid like the plague: shooting under the kind of bright, direct sunlight that tends to yield blown-out highlights and extremely deep shadows.
But Anderson’s entire aesthetic and approach to filmmaking would make it highly unlikely he would ever try to make life easier by shooting on a soundstage (or, God forbid, in front of a volume!).
Yeoman, who has shot all of Anderson’s live-action films since his directorial debut Bottle Rocket, decided to address the situation in the way the earliest filmmakers did. They shot outside with just some bounce to compress the contrast range somewhat and they used sunlight for interiors, too.
As Yeoman told Jim Hemphill at IndieWire, “I knew that inside locations like the diner would need light, so I just asked if we could put skylights in the diner and any building where we knew we were going to be shooting day interiors.
“[Production designer] Adam Stockhausen put skylights in and we covered them with very soft diffusion material and it worked out beautifully. We never used any lights, and that was Wes’ dream,” he said.
Writer/director Wes Anderson on the set of “Asteroid City.” Cr: Roger Do Minh/Pop. 87 Productions/Focus Features
Writer/director Wes Anderson, Jason Schwartzman and Tom Hanks on the set of “Asteroid City.” Cr: Roger Do Minh/Pop. 87 Productions/Focus Features
Bryan Cranston as Host in writer/director Wes Anderson’s “Asteroid City.” Cr: Roger Do Minh/Pop. 87 Productions/Focus Features
Hong Chau as Polly and Adrien Brody as Schubert Green in writer/director Wes Anderson’s “Asteroid City.” Cr: Roger Do Minh/Pop. 87 Productions/Focus Features
“It was a constant challenge maintaining consistent light in the movie’s many day exteriors. On other movies, I would be tempted to bring in some big lifts and put giant silks up to soften the light, but it was too windy out there in the desert.”
The filmmakers watched some movies shot in similar locations with minimal amounts of artificial light, such as Wim Wenders’ Paris, Texas, shot by Robby Müller, and the DP started to feel comfortable leaning into the approach.
He also explains to Hemphill that it’s not just lighting that Anderson likes to keep simple, it’s also crew. And the virtual elimination of lighting units went a long way to keeping things light and nimble.
“Often,” Yeoman says, it was “just [Anderson], myself operating, a focus puller, a second AC with a slate, a dolly grip and a boom guy.” While the director does use a small, handheld monitor as a concession to 21st-century convenience, “he sits next to the dolly [and] watches the actors.”
From the time Anderson directed the animated Fantastic Mr. Fox (shot by Tristan Oliver), he fell in love with the idea of creating animatics. On his next live-action film, he swapped out the storyboards he’d previously relied on for the more fleshed-out, moving approach of the animatic, and he’s never looked back.
In an interview with Tim Molloy at MovieMaker Magazine, Yeoman notes that while Anderson’s films are famously planned out in great detail, with camera positions and movements designed into the animatics, he still likes to show up to the set with some sense of flexibility, rather than having an exact plan for everything that happens that day.
“‘I want to be open to something that spontaneously might happen on set,’” he explains. “‘I do make my own diagrams of camera positions and lighting ideas that I share with my gaffer. But there’s always some nervousness when I show up in the morning, because I want to push myself to that place where you’re not sure about things.
“When I feel totally secure and confident and sure, I might lapse into something a little more conventional, whereas if I keep my edge, I might come up with something that’s a little more interesting visually.’”
Scarlett Johansson as Midge Campbell in writer/director Wes Anderson’s “Asteroid City.” Cr: Roger Do Minh/Pop. 87 Productions/Focus Features
Jake Ryan as Woodrow, Jason Schwartzman as Augie Steenbeck and Tom Hanks as Stanley Zak in writer/director Wes Anderson’s “Asteroid City.” Cr: Roger Do Minh/Pop. 87 Productions/Focus Features
Mike Maggert as Detective #2, Fisher Stevens as Detective #1, Jeffrey Wright as General Gibson, Tony Revolori as Aide-de-Camp, and Bob Balaban as Larkings Executive in writer/director Wes Anderson’s “Asteroid City.” Cr: Roger Do Minh/Pop. 87 Productions/Focus Features
Grace Edwards as Dinah, Scarlett Johansson as Midge Campbell and Damien Bonnaro as Bodyguard and Driver in writer/director Wes Anderson’s “Asteroid City.” Cr: Roger Do Minh/Pop. 87 Productions/Focus Features
Liev Schreiber as J.J. Kellogg, Steve Carell as Motel Manager, Stephen Park as Roger Cho, and Hope Davis as Sandy Borden in writer/director Wes Anderson’s “Asteroid City.” Cr: Roger Do Minh/Pop. 87 Productions/Focus Features
Pere Mallen as Cowboy, Rupert Friend as Montana, Jean-Yves Lozac’h as Cowboy, Jarvis Cocker as Cowboy, Seu Jorge as Cowboy and Maya Hawke as June in writer/director Wes Anderson’s “Asteroid City.” Cr: Roger Do Minh/Pop. 87 Productions/Focus Features
Jake Ryan as Woodrow, Grace Edwards as Dinah, Ethan Josh Lee as Ricky, Aristou Meehan as Clifford, and Sophia Lillis as Shelly in writer/director Wes Anderson’s “Asteroid City.” Cr: Roger Do Minh/Pop. 87 Productions/Focus Features
Jason Schwartzman as Augie Steenbeck and Scarlett Johansson as Midge Campbell in writer/director Wes Anderson’s “Asteroid City.” Cr: Roger Do Minh/Pop. 87 Productions/Focus Features
The cinematographer is very supportive of Anderson’s single camera approach, despite the fact that second, third and even fourth cameras are very common on movie sets. “I find that we can often move faster with one camera than with two cameras,” he says.
“Two cameras often means you’re getting coverage. Coverage is great, but with Wes, every shot is specific for the particular moment that he wants. A lot of great directors have a distinct style because they believe there’s one place to put a camera and to tell a story, and that’s the place we’re going to commit to.”
Of course, a Wes Anderson movie wouldn’t be a Wes Anderson movie without enough choreographed camera movement to make a Max Ophüls film feel like Andy Warhol’s Empire in comparison. “There’s a scene where Jeffrey Wright is giving a speech,” the DP told Esquire’s Tom Nicholson. “Wes was eager to do it all in one shot, as he [Wright] walks from one end of the stage to the other, so we had to make a track that would go sideways and in and out — typically you might use a Steadicam or a Technocrane to do a shot like that. But because Wes is so precise on the framing and compositions, even if you’re off a little bit, that’s not acceptable.”
To continue the conversation about camera movement, it would be appropriate to cite what may well be the only profile of a dolly grip ever to appear in The New York Times. Written by Melena Ryzik, the piece about Mumbai-born Sanjay Sami, who started out working on Bollywood movies and has been designing track and pushing and pulling dollies on Anderson’s movies since 2006, references a scene in which Adrien Brody makes his way “through a long theater space in an exquisitely detailed choreography of sets, props, walls, actors, dialogue and camera,” which, as Brody explains to Ryzik, “has to come off of a set of tracks and then be loaded seamlessly onto another set of tracks and hit numerous precise marks at very specific timings.”
“The thing I love is, with Sanjay, we essentially are using the same equipment that we might have used on a movie 75 years ago,” Anderson told Ryzik, “but we’re arranging it in a way that it hasn’t been arranged before.”
“In Paris, any time I walk down a street I don’t know well, it’s like going to the movies,” says the director.
June 29, 2023
Perpetual Motion: Getting That 21-Minute Take for “Extraction 2”
From “Extraction 2,” courtesy of Netflix
TL;DR
“Extraction 2” has proven itself a worthy sequel to the original, with stuntman-turned-director Sam Hargrave delivering intensified action sequences.
The Netflix action franchise film features an extraordinary 21-minute “oner” that includes a riotous prison break, a multi-explosion car chase in a forest, and a helicopter landing on a moving train.
The movie relied heavily on a well-prepared stunt team, with sequences featuring more than 400 performers. The car chase through the forest alone involved over 260 vehicles, some of which were uniquely modified for shooting. The audacious helicopter and train stunt required a novel approach, with rehearsals conducted on a semi-truck before moving to the actual train.
The action genre is currently working overtime, with healthy franchises and new ones emerging, including Netflix’s Extraction. The original 2020 film, directed by stuntman Sam Hargrave, had no right to expect a sequel, even though the filmmakers had hedged their bets with a scene at the end hinting at a continued narrative.
It was only after the original Extraction was produced and was being audience-tested that there was talk of a second movie and a possible franchise, as Hargrave told Brian Davids at The Hollywood Reporter. “When it was testing internally, we saw the desire of Netflix to have an action franchise in their stable. They were excited about the possibility and loved working with Chris [Hemsworth]. So we did three test screenings with audiences around town, and then they started to talk about a second movie internally.”
Director Sam Hargrave on the set of “Extraction 2.” Cr: Jasin Boland/Netflix
Director Sam Hargrave and Golshifteh Farahani on the set of “Extraction 2.” Cr: Jasin Boland/Netflix
Director Sam Hargrave (center) and Chris Hemsworth on the set of “Extraction 2.” Cr: Jasin Boland/Netflix
Director Sam Hargrave on the set of “Extraction 2.” Cr: Stanislav Honzík/Netflix
Director Sam Hargrave and cinematographer Greg Baldi on the set of “Extraction 2.” Cr: Jasin Boland/Netflix
Director Sam Hargrave on the set of “Extraction 2.” Cr: Jasin Boland/Netflix
When a sequel was greenlit, Hargrave and his cast and crew achieved something unusual; they produced a film that was better than the original in almost every way and featured an impressive 21-minute “oner” in the middle.
One-take sequences are not unusual, but 21 minutes of pure fight-or-flight action is. Apart from the continuity demands, Extraction 2 needed months of preparation to make the three scenes work, including setting lead actor Chris Hemsworth on fire in the middle of a prison break, staging a multi-explosion car chase in a forest, and landing a helicopter on a moving train.
Hargrave summed up the prep work to Matt Fowler at IGN, “The rehearsal process (for the oner) was four or five months from conception to finding the locations,” he revealed, “mapping out the path and then getting the actors making all their moves.
“Then shooting took 29 days, I believe, to complete.”
During early conversations with producer Joe Russo, the idea of a more extended sequence came up. Hargrave told Variety’s Jazz Tangcay, “Joe said, ‘It’d be cool if we opened the film with Tyler (Chris Hemsworth) extracting someone from prison.’ He wrote this into the script: ‘And thus follows the greatest oner in cinema history.’”
Hargrave devised a plan forward, resulting in a 21-minute one-take sequence that sees black ops specialist Tyler Rake entering a prison to rescue the family of a violent gang member. As Tyler and the family members escape the prison and jump into armored vehicles, a chase ensues, and when they board a train, it comes under attack by gangsters who land a helicopter on the train.
As a stunt coordinator, Hargrave had been involved with one-take action before. He had been an MCU stunt master and Captain America stunt double, and he worked on Charlize Theron’s Atomic Blonde with its 10-minute oner fight scene. The first Extraction had its own 12-minute one-take, but nothing as complex and lengthy as the second movie’s.
The movie drew on his experience tying all his disciplines together, including operating the camera with fellow operator Nate Perry. Hargrave explained to Polygon writer Brandon Streussnig how his stunt regimen was a bonus when he carried the camera.
Hargrave’s stunt past prepared him for scenes like the “oner” in Extraction 2 — most notably as a camera operator. Hargrave takes an almost Buster Keaton-esque approach toward creating these incredible feats of human achievement and shooting them personally.
“The real challenge, truthfully, for me, is that a lot of operators and camera people could do a better job than I did, but there’s a certain weight of responsibility because of where I want to put the camera,” he says. “Sometimes it’s in a pretty dangerous spot. For example, in the second movie, when we were landing a real helicopter on a moving train, I wanted the camera to walk underneath the helicopter as it landed and then wrap around and see it leave. That’s a fairly dangerous stunt to pull off. I was blown off the side of the train. Luckily, I had a harness and a cable on during rehearsal.
“Truthfully, the main reason I do many of those things is not because I’m a better operator, per se. It’s just that I feel more comfortable putting myself in harm’s way.”
More details breaking down these three primary sequences come from Anna Menta at Decider. The prison break scene was filmed in two locations. The interior was filmed at Mladá Boleslav Jail in the Czech Republic, a former working prison now used exclusively for movie shoots, including Mission Impossible: Ghost Protocol. The exterior courtyard riot — the far more difficult portion of the scene — was filmed at an 18th-century grain storage facility.
The prison riot alone took more than four months of prep and featured upwards of 400 stunt performers and “special ability action extras,” some of whom were fighting Hemsworth in the foreground while others fought each other in the background.
Chris Hemsworth as Tyler Rake in “Extraction 2.” Cr: Jasin Boland/Netflix
Chris Hemsworth as Tyler Rake in “Extraction 2.” Cr: Netflix
Golshifteh Farahani as Nik Khan and Chris Hemsworth as Tyler Rake in “Extraction 2.” Cr: Netflix
Golshifteh Farahani as Nik Khan and Adam Bessa as Yaz in “Extraction 2.” Cr: Netflix
Levan Saginashvili as Vakhtang, George Lasha as Sergo, Tornike Gogrichiani as Zurab and Daniel Bernhardt as Konstantine in “Extraction 2.” Cr. Larry Horricks/Netflix
Golshifteh Farahani as Nik Khan and Chris Hemsworth as Tyler Rake in “Extraction 2.” Cr: Jasin Boland/Netflix
Adam Bessa as Yaz, Andro Jafaridze as Sandro, Chris Hemsworth as Tyler Rake, Tinatin Dalakishvili as Ketevan and Golshifteh Farahani as Nik Khan in “Extraction 2.” Cr. Larry Horricks/Netflix
Tornike Gogrichiani as Zurab in “Extraction 2.” Cr: Jasin Boland/Netflix
From “Extraction 2.” Cr. Jasin Boland/Netflix
Chris Hemsworth as Tyler Rake and Tinatin Dalakishvili as Ketevan in “Extraction 2.” Cr: Jasin Boland/Netflix
Chris Hemsworth as Tyler Rake, Miriam and Marta Kovziashvili as Nina, Tinatin Dalakishvili as Ketevan and Andro Jafaridze as Sandro in “Extraction 2.” Cr: Jasin Boland/Netflix
Chris Hemsworth as Tyler Rake, Andro Jafaridze as Sandro, Miriam and Marta Kovziashvili as Nina and Tinatin Dalakishvili as Ketevan in “Extraction 2.” Cr: Jasin Boland/Netflix
“There were 75 stunt performers and a bunch of specialty backgrounds woven in there,” Hargrave told Menta. “That took three nights to do that exterior, that whole thing. I relied heavily on our amazing stunt team to choreograph the background fights. There are layers and layers and layers of stunts and background and extras.”
The car chase through the forest used many of the same techniques Hargrave used for the car chase in the first movie, but there were a lot more cars. The production rented over 150 vehicles and purchased 116 more that were modified for shooting, including SUVs rigged to be driven from the top and SUVs rigged to be driven from the rear. Co-stunt coordinator Noon Orsatti operated the SUV “driven” by Chris Hemsworth in the movie while, in reality, driving the car from the backseat.
Hargrave also wanted to up the stakes from the first film by bringing the camera in and out of the cars more often and getting closer to the actors during the chase.
Operating the camera was sometimes a matter of handing it off to himself for a different shot. “In the car, for instance, it was like a contortionist ballet. I’m falling all over the place, hitting my head, and jamming my finger, but it all adds to the chaos of the sequence.”
As for the helicopter and moving train sequence, the team rehearsed the stunt by first landing the helicopter on a flatbed truck. “We started stationary with a semi-truck, and then the truck moving in a large open parking lot, and got to the speed we wanted to show that he could consistently do it. It took a couple of days,” Hargrave said. But beyond the pilot Fred North learning the stunt, Hargrave had to figure out how to film it.
“I was up on the top of the train, handheld, and as (the helicopter) came in, my biggest concern was not getting in the way of those moving blades,” the director said with a laugh. “I had to wait for the right moment, as he flares out, and then I ran towards it. It felt like running into a hurricane because the downdraft of that helicopter is powerful. So I have to run through that. I was as close as three feet, maybe four feet — I could reach out and touch Fred if I wanted to.”
Ultimately Hargrave’s philosophy on a successful oner is based on a different type of role playing. “A oner is more like a video game or an interactive play. Traditional cutting and coverage is traditional filmmaking, and that’s how it’s been done for a long time. So the idea of the oner is to follow a character through a scenario and experience it with that character in real-time,” he tells Menta.
“It’s its own version of forced perspective. Yes, the camera is looking where I want you to look. However, it feels more organic than putting the camera over here and then cutting over there and forcing the audience to see and feel something. The oner allows the audience to feel something during these action sequences, and it’s one way to differentiate yourself.
“I’ll never be able to out-kick, out-punch, and out-choreograph [Chad Stahelski or David Leitch] because they’re the best. So the best I can do is to offer a slightly different perspective on the action and say, ‘This is how I see it. This is how it’s fun for me to experience it.’ And hopefully, audiences appreciate that when they watch it.”
Polygon’s Streussnig, however, recognizes the stuntman-turned-director as a savior of sorts. “With innovators like Sam Hargrave running around, throwing themselves underneath helicopters to get the perfect shot, the oner has been rescued just as it was getting stale,” he writes. “He’s found a way to extract it, if you will, from thoughtless, CGI-laden exercises and propel it to explosive new heights. If Extraction 2 proves anything, not everyone can pull these sequences off — at least not in ways that feel like they’re worth the effort.”
Twisted (Twin) Sisters: Double the Cinematography for “Dead Ringers”
https://youtu.be/FA_XOruRFfU
TL;DR
Starring Rachel Weisz as the Mantle twins, Amazon Prime Video series “Dead Ringers” nods to the 1988 David Cronenberg movie but changes the fundamentals.
Laura Merians Gonçalves, one of the show’s DPs, said showrunner Alice Birch and lead director Sean Durkin referenced Ingmar Bergman’s “Persona,” Robert Altman’s “Three Women,” and Andrzej Zulawski’s “Possession” for the series.
The work of Thomas Eakins, who executed some of the first paintings of surgeries, the photography of Gregory Crewdson, and Heji Shin’s birth portraits also served as points of reference.
VFX supervisor Eric Pascarelli employed a motion control rig with a Panavised Sony Venice camera to replicate Cronenberg’s “twinning,” or shooting one actor who plays twins.
2023’s Dead Ringers gave the nod to David Cronenberg’s 1988 movie throughout its six-episode Amazon Prime Video run, but changed the fundamentals. This time the twin OB-GYNs would be female, played by Rachel Weisz, with an emphasis on exposing some of the horrors hidden in plain sight for pregnant mothers in the 21st century. The new Dead Ringers was happy to delve into those but still cherish what had come before.
The production would follow the sisters’ mental descent as all their dreams come true with the opening of their birthing centers. One of the show’s DPs, Laura Merians Gonçalves, explained the narrative changes and the references that guided them in an interview for the Panavision website.
“Every episode has a slightly different tone, so the references changed as the visual style expanded with the narrative. The showrunner Alice Birch and lead director Sean Durkin and I discussed Bergman’s Persona, Altman’s Three Women, and Zulawski’s Possession.”
The color red was also a touchstone for the show and another salute to the 1988 movie. Gonçalves explains how the color grounded the plot: “When I started prepping, we knew the style of the show would shift to incorporate a lot more color, specifically reds. Dialing in the red was a huge focus for us. It was sort of a nod to the original, that these characters are wearing red scrubs, and many different reds — from blood red to scarlet — get introduced in the Mantle birthing center.”
The design of the new birthing centers needed to be starkly different compared to the bland Westcott Memorial, where the sisters used to work. Gonçalves described to Daniel Eagan at Filmmaker Magazine the multiple locations that completed the new center. “We needed to pay attention to how the environment would be observational and private simultaneously, a kind of bespoke experience they wanted to create for their clients. A challenge was that the center was a combination of several locations.
“The exterior was a carousel in Battery Park City with VFX added on, the atrium was a gym, and the embryology lab was the old TWA terminal at JFK. We had about six sets that we used for different things.”
The age-old technical hurdle of twinning, shooting one actor who plays twins, was also something Dead Ringers duplicated from the 1988 movie. They would use a motion control rig, but this time with a Panavised Sony Venice camera.
Split-Screen Magic Trick
Using motion control dictated a highly disciplined process on set, as all departments had to act as one. The show would also need a VFX supervisor to determine when composites were required, especially when the sisters were close to each other.
Rachel Weisz stars as the Mantle twins in “Dead Ringers.” Cr: Amazon Prime Video
Rachel Weisz on the set of “Dead Ringers.” Cr: Amazon Prime Video
“We wanted to make sure that everything was laid out so that the acting could shine,” said Pascarelli, the show’s VFX supervisor. “It was not necessarily the most efficient way, but the things we could have done to save time on set would have required cheats [in post] that would have kept us from knowing what we were getting on the day.”
For each scene where Weisz acted opposite herself, the screen would essentially be divided into an “A” and a “B” side. The first takes would be Weisz on the “A” side of the frame — typically portraying Elliot, the more prominent personality — opposite her double, Kitty Hawthorne. Once the filmmakers had a few takes they liked, the best one was selected and the production sound mixer quickly edited together a track of Weisz’s dialogue while Weisz changed into costume as Beverly.
Then, Weisz returned to shoot the “B” side of the frame with an earwig playing the dialogue in her ear while Hawthorne and the other actors pantomimed the scene.
“The video assist triggered everything,” Pascarelli said. “It was the timecode master of the whole process, and it synchronized audio and the lighting board.”
The filmmakers created what Pascarelli described as “a crude, hand-operated split screen” to watch the interaction between the twin characters on the video assist system and ensure the eyelines matched and everyone was happy with the performances.
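The “A side/B side” assembly Pascarelli describes is, at its core, a simple compositing operation: each half of the frame is taken from the pass in which Weisz occupied that side, which is only possible because the motion control rig repeats the camera move exactly. A minimal sketch of the principle, assuming two already-aligned plates and using OpenCV; the file names are hypothetical and this is not the show’s actual pipeline:

```python
import cv2

def split_screen(plate_a, plate_b, split_x):
    """Combine the left portion of plate A with the right portion of plate B.

    plate_a, plate_b: aligned frames of identical size, e.g. the "A side" pass
    with Weisz as Elliot and the "B side" pass with Weisz as Beverly.
    split_x: the column where the seam sits.
    """
    if plate_a.shape != plate_b.shape:
        raise ValueError("plates must be the same size")
    out = plate_b.copy()
    out[:, :split_x] = plate_a[:, :split_x]  # A-side pixels fill the left of frame
    return out

# Hypothetical usage: preview the seam on one frame from each pass.
a = cv2.imread("take_a_frame.png")
b = cv2.imread("take_b_frame.png")
if a is not None and b is not None:
    preview = split_screen(a, b, split_x=a.shape[1] // 2)
    cv2.imwrite("split_preview.png", preview)
```

In production the seam would be placed and feathered by a compositor, or judged live on the video assist as described above, but the repeatable camera move is what makes such a direct combination work at all.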
https://youtu.be/smRrEqRShsc
The key was to get clean shots of Weisz as both Mantle twins, so occasionally Hawthorne would duck in the middle of a shot while Weisz crossed her; on a few rare occasions, the blocking required that Pascarelli use deepfake technology to map Weisz’s face onto Hawthorne’s. However, that was only done five or six times, and the filmmakers tried to avoid it.
“The problem with that technology is that whatever Kitty’s acting becomes Rachel’s. Since this is a show about Rachel’s performance, we tried to limit those situations to ones where the second twin was standing there stone-faced,” Pascarelli said.
He felt the series shared Cronenberg’s original instinct to keep things simple whenever possible. “We wanted to be as sparing as possible with the effects and keep the show-offy things to a minimum.”
Cadence Effects’ split-screen work on Dead Ringers
The split-screen magic trick works beautifully, with an outstanding performance by Weisz to cement it. Angelica Jade Bastién at Vulture had this to say about her portrayals:
“Her performances shimmer with a feminist current that charges every rise of her voice and every gesture of her body.
“It feels like the apotheosis of what she has demonstrated before and then some — a gentle beauty complicated by fierce intelligence, a graceful presence stitched through with ungainly wants, a voice that flows like the tides. Dead Ringers is the ultimate acting opportunity: Rachel Weisz topping herself.”
Cinematographer Larkin Seiple goes from restrained to “full Coen Brothers mayhem” for the Netflix series “Beef.”
June 20, 2023
Capturing the Chaos: Cinematography on the Next-Level Bonkers “Beef”
https://youtu.be/AFPIMHBzGDs
TL;DR
Starring Ali Wong and Steven Yeun, the Netflix series “Beef” was shot by “Everything Everywhere All at Once” DP Larkin Seiple.
Seiple opted to use the ARRI Alexa LF equipped with Zeiss Supreme Primes to shoot the series, with the Sony Venice 2 employed for the nighttime car crash scene.
The production team worked to find the most realistic way to light, design, and clothe the actors while at the same time also trying to give hints and clues and absorb their personalities in those choices.
Creator and showrunner Lee Sung Jin said his idea for the show was loosely based on a road rage incident he experienced.
It is perhaps fitting that a giant Netflix episodic should have an aesthetic that lets its story breathe at a moment when, amid a WGA strike, the value of writing itself is being reconsidered. Beef was shot by Everything Everywhere All at Once DP Larkin Seiple; he deliberately downplayed the presence of his cameras to “get out of the way” of the story.
Larkin explained what that meant in practice. “We opted to minimize the amount of coverage we did and stick to the basics. Can we do this whole scene as a close-up? Can the camera stay with either Amy [Ali Wong’s character] or Danny [Steven Yeun’s character]? We didn’t want to cover it as much as most shows cover things; we wanted to see how little we could do.”
This condensing of the coverage extended to all departments. “In collaboration with our costume designer Helen (Huang), and our production designer Grace (Yun), we tried to find the most realistic way to light, design, and clothe the actors while at the same time trying to give hints and clues and absorb their personalities in those choices as well.”
It’s a shooting design without ego, shying away from extreme filtering, flares, and shallow depth of field — but finding that they are much more effective if and when they appear.
Even Larkin’s lens choice shunned the limelight. “I think this is the first show that I picked lenses that were sterile,” he said. “We wanted lenses that could get close to the actors that didn’t distort. Normally I’m a fan of using vintage glass with more character. On this show, it just felt that the intentions had to be bare and honest. We didn’t want to create subjective imagery; we wanted to create subjective shot structure. We wanted to be with the actor the whole time. You’re bound to these two terrible people.”
Iain Blair at Post Perspective had more details on the gear choices Larkin made: “We shot it on the [ARRI] Alexa LF but with the Sony Venice 2 for the night car crash scene, and we used Zeiss Supreme Primes. We went with sharper glass because, in our testing, we found out that we could degrade the image with more control in post,” he said.
“We also shied away from super-flary or vintage lenses, as we felt it was too affected for the story and put too much emphasis on the filmmaker instead of keeping the audience with the characters.”
Beef follows the aftermath of a road rage incident between two strangers. Danny Cho, a failing contractor with a chip on his shoulder, goes head-to-head with Amy Lau, a self-made entrepreneur with a picturesque life. From there, it’s an actions-have-consequences blowout.
Lee Sung Jin is the creator and showrunner of Beef, and his idea for the show was loosely based on a road rage incident he experienced. “I thought there was a show there about two people who are very much stuck in their perspectives and have a lot going on in their individual lives that this incident unravels.”
In her review of the series, IndieWire’s Sarah Shachat describes a camera that traps the characters. “The show’s visual language remains restrained throughout most of Beef, showing the characters only in the light that they deserve; the camera traps them in single shots that go on just long enough to hint at how tenuous a grip Danny and Amy have on their lives but without the sweeping camera movement that announces a Capital-O Oner. But these shots are still precisely timed so the audience can be thunderstruck,” she writes.
“But Beef can’t live in the moment forever. The visual language of the show expands from, in Seiple’s words, invisible and observational to ‘full Coen Brothers mayhem’ by the series’ end, with some especially wonderfully bold color choices taking place in Episode 9 during a heist gone wrong at the home of the wealthy Jordan (Maria Bello).”
Beef’s road rage sequence at the start of Episode 1 also acted as the trailer for the show, so it had to entice you into the story — the coverage meter ramped up to 10 for it.
Larkin broke down the scene. “Our original choice was to keep the camera in the van with Danny to see his POV and face. So we could feel what it was like actually to chase someone. But we felt that shot this way you wouldn’t understand how dangerous it was,” he said.
“Most of the sequence is very much with Danny, and we put him on a car rig, which is a platform with another driver behind it. Steven’s reactions are real; he asked us to slow down because it was terrifying. The other camera angles were very close to Danny’s car; there’s never a wide shot, drone, or Technocrane shot, nothing fancy.
Behind the scenes of “Beef.” Cr: Netflix
From Episode 10 of “Beef.” Cr: Netflix
“We kept it grounded and concentrated on showing how much he wanted to see who was in the car he was chasing.”
Larkin recounted how the scene was shot to interviewer Nathaniel Goodman, ASC during an episode of the ASC Clubhouse Conversations podcast. “So we mounted three cameras to his car. And it’s not like a process trailer. It’s like he’s swerving through traffic. And you’re seeing him adjust to the inertia and dodge traffic. And that’s what makes his POV shots successful, the shots on him, and it feels like he’s making these terrible choices,” he said.
“It took some convincing to do because it’s always tricky to ask production to spend the extra resources on something that you couldn’t just do on a process trailer; it’s not going to feel the same. It has to feel 100% like he’s driving.”
After that scene, Larkin’s coverage dials back down to the minimum, replaced by scenes that linger with the leads in either sweaty close-ups or their own style of wide shots, as Larkin explained.
“We shot Ali and Stephen on wider lenses than all the other characters in the show. So you’d naturally get that subjective sense that you’re closer to them, and maybe their ideas are a little wonky.
“It sounds weird, but we built the concept of it, and then we also just felt it, like we would walk up and watch the rehearsal and say that ‘the camera should be here’ and then find the lens that would make that work rather than saying that it’s a 52mm or whatever. We’d feel the relationship with the actors and find the lens that matched that.”
Beef is only Larkin’s second TV show, and he is still getting used to shooting at the speed of an indie movie for 60 to 70 days. “It’s tricky for me, different.” His next project was a movie that shot for a similar length of time but for a two-hour film. Let’s hope we don’t lose him to the longer form.
“Black Mirror:” Charlie Brooker Sees a Different Reflection
TL;DR
After years of exploring society’s dark absurdities, the sixth season of Netflix’s dystopian anthology series “Black Mirror” gazes at its own reflection.
In the episode “Joan Is Awful” we see deepfakes generating content tailored to individual users.
Charlie Brooker, the show’s sardonic mastermind, says he toyed with generative AI during the making of the show and found it lacked any semblance of original thought.
Like all good sci-fi, Black Mirror reflected our present into the future, but in the four years since the last run of episodes on Netflix the world seems to have become so dystopian that you couldn’t make it up.
The pandemic forcing everyone indoors, the riots on Capitol Hill fed on social media conspiracy, the rise and rise of generative AI, entrepreneurs commercializing space, and, of course, the metaverse.
The Emmy-winning anthology series is back and writer-creator-showrunner Charlie Brooker has been talking about how he took the opportunity to mix things up.
Production on season six of “Black Mirror,” Cr: Netflix
“It feels like the dystopia is lapping onto our shores at the present moment,” he told GQ’s Brit Dawson of the five-episode instalment.
“I definitely approached this season thinking, ‘Whatever my assumptions are about Black Mirror, I’m going to throw them out and do something different,’” Brooker said.
This included more comedy, particularly in the episodes “Joan Is Awful” and “Demon 79,” a horror story subtitled “Red Mirror” that draws on staples like Hammer and the work of Dario Argento.
“I sort of circled back to some classically Black Mirror stories as well,” Brooker said. “So it’s not like it’s a bed of roses this season. They’re certainly some of the bleakest stories we’ve ever done.”
He’s also perhaps not as wary of the future or of technology as his Black Mirror persona might suggest. He recalls how frightening it was in the 1980s during the height of the nuclear cold war.
“That didn’t quite happen! The other thing I would say, I do have faith in the fact that the younger generation seem to have their heads screwed on and seem to be pissed off. So that’s going to be a tsunami of people, it’s just that they’re not at the levers of power yet,” he says.
“We have eradicated lots of diseases and generally lots of things are going well that we lose sight of but it’s just a bit terrifying if you think democracy is going to collapse. That and the climate breaking down.”
From “Joan is Awful,” Episode 1 of “Black Mirror: Season 6.” Cr: Netflix
From “Beyond the Sea,” Episode 3 of “Black Mirror: Season 6.” Cr: Netflix
From “Mazey Day,” Episode 4 of “Black Mirror: Season 6.” Cr: Netflix
From “Demon 79,” Episode 5 of “Black Mirror: Season 6.” Cr: Netflix
Zazie Beetz in season six of “Black Mirror,” Cr: Netflix
In another pre-season interview with Amit Katwala at Wired, Brooker continues, “I am generally pro-technology. Probably we’re going to have to rely on it if we’re going to survive, so I wouldn’t say [Black Mirror episodes] necessarily warn, so much as worry, if you know what I mean. They’re maybe worst-case scenarios.”
Three of the five episodes are set in the past, with seemingly no connection to the evils of the internet from past seasons.
“I think there was a danger that Black Mirror was becoming the show about consciousness being uploaded into a little disc,” Brooker explains to Emma Stefansky at Esquire. “Who says I have to set this in a near-future setting, and make it all chrome and glass and holograms and, you know, a bit Minority Report?” he asks. “What happens if I just set it in the past? That opens up all sorts of other things.”
However, one episode does openly “worry” about a near-future in which AI takes control of our lives in ways we hadn’t imagined. “Joan Is Awful” is about a streaming service called Streamberry — cheekily mirroring Netflix and clearly with the streamer’s consent — that makes a photoreal, AI-generated show out of a woman’s life.
From “Beyond the Sea,” Episode 3 of “Black Mirror: Season 6.” Cr: Netflix
From “Joan is Awful,” Episode 1 of “Black Mirror: Season 6.” Cr: Netflix
From “Demon 79,” Episode 5 of “Black Mirror: Season 6.” Cr: Netflix
It was specifically inspired by The Dropout, the Hulu mini-series about Theranos founder and convicted fraudster Elizabeth Holmes, along with the possibility of all of us having the ability to generate personalized media using AI. Except in Black Mirror’s take, this is another example of Big Tech using our private data for the entertainment of others.
People prefer viewing content “in a state of mesmerized horror,” the CEO of Streamberry says in the episode.
“Obviously the first thing I did was ask [generative AI] to come up with a Black Mirror episode to see what it would do,” Brooker told GQ. “What it came out with was simultaneously too generic and dull for any serious consideration. There’s a generic quality to the art that it pumps out.
“That was the first wave [of generative AI], when people were going, ‘Hey, look at this, I can type “Dennis Nilsen the serial killer in the Bake Off tent” into Midjourney’ and it’ll spit out some eerily quasi-realistic images of that, or ‘Here’s Mr. Blobby on a water slide,’ or ‘Paul McCartney eating an olive.’
“It’ll be undeniably perfect in five years, but at what point it’ll replace the human experience? It does feel now like we’re at the foothills of new, disruptive technology kicking in again.”
In an interview with Vox and Peter Kafka’s Re/code podcast, Brooker put his thoughts on generative AI and emerging tech another way: “It’s like we’ve suddenly grown an extra limb, which is amazing because it means you could juggle and scroll through your iPhone at the same time. But it also means that we’re not really sure how to control it yet.”
In general, it seems more accurate to say that Brooker is skeptical about people, rather than certain technologies. “Usually our technologies give with one hand and sort of slap us round the back of the head with the other,” he told Kafka. He thinks people tend to be the problem, rather than the tech, so “I wouldn’t want to delete this stuff from existence necessarily.”
The episode “Loch Henry” is set in the present day, following a pair of documentary filmmakers who plan to give a shocking hometown murder the lurid true crime treatment. “Loch Henry” relies on VHS tapes to build its narrative, instead of a smartphone app or webcam.
From “Loch Henry,” Episode 2 of “Black Mirror: Season 6.” Cr: Netflix
“It’s a weird one, because it is about the archive of the past that people are digging into,” Brooker tells Stefansky. “But it is also about the way all that stuff is now hoovered up and presented to you on prestige TV platforms — that we’re mining all these horrible things that happened and turning it into a sumptuous form of entertainment.”
He continues, “There’s nothing more frustrating than when you’re watching a true crime documentary, and it starts to dawn on you somewhere around Episode 3: They’re not going to tell me who did this. Not what I want. I want to see an interview with the killer. Go and generate one on ChatGPT.”
At Wired, Katwala speculates that maybe the next step is personalized content about personalized content. Society and social media have been moving in this direction for years, he says.
“One of the supposed benefits of generative AI is that it will enable personalized content, tailored to our individual tastes: your own algorithmically designed hell, so horribly well-targeted that you can’t tear your eyes away.”
But, he wonders, what happens to cultural commentary when everyone is consuming different stuff?
The irony is that while hyper-personalized content might be great for engagement on streaming platforms, it would be absolutely terrible for landmark shows like Black Mirror and Succession, which support a whole ecosystem including websites like Wired and NAB Amplify.
“We siphon off a portion of the search interest in these topics, capitalizing on people who have just watched something and want to know what to think about it. This helps explain the media feeding frenzy around the Succession finale and why I’m writing this story about Black Mirror even though we ran an interview with the creator yesterday,” Katwala argues.
“In a way, you could see that as the media’s slightly clumsy attempt to replicate the success of the algorithm.”
How the Cinematography of “Succession” Takes You Inside
Jeremy Strong as Kendall Roy, Sarah Snook as Shiv Roy, and Kieran Culkin as Roman Roy in Season 4 of “Succession.” Cr: HBO
TL;DR
In the wake of the season finale of “Succession,” cinematographer Patrick Capone, ASC, series director Mark Mylod and senior colorist Sam Daley reflect on what made working on the series so special.
Both Mylod and Capone tried to not get in the way of the action that unspooled in the pivotal boardroom meeting in the final episode, letting it unfold as naturally as possible.
Following the devastation of the Season 4 finale, fans of Succession waited for a hint of more from the Roys. In its wake, we looked for signs of an offshoot story from supremo Jesse Armstrong or an origin tale, maybe examining how Marcia had earned her ultimate prize or how Kendall would finally put to use the half-Loganized skills he had begun to exhibit. But like the wait for a concert encore, the creeping realization arrived that that was that.
But Succession fandom means never having to apologize for picking over seasons, episodes, or even furtive glances contained in scenes. We’re all guilty of that, with enough room left over for even more dissection.
One of the show’s cinematographers, Patrick Capone, ASC, who had worked on the HBO series since its first season in 2018, recently commented on his time there. He summed up his postpartum feelings. “Listen, we’ve all had jobs that we love working on, and they were lousy movies or really good movies, but they were tough to work on. This is the perfect storm. It was great people. It had phenomenal scripts. I had a voice. I’m proud of my work. We traveled all around the world. And hopefully, there’ll be another job like this somewhere.”
Capone was talking to American Cinematographer’s Iain Marcks immediately following the show’s final grading session with senior colorist Sam Daley; they were both feeling raw and emotional, which made talking with them even more heartfelt, but they were ready to genuflect. It’s a poignant interview.
Marcks began with a question that hung in the air, “So what is there to say about the making of Succession that hasn’t already been said?”
Both interviewees settled on tributes to the medium of film. All four seasons had stuck to celluloid, but a world without Succession may mean tough times ahead, especially for New York’s film production community. “I realized I don’t know any other episodics that primarily shoot film,” Daley said. “I know of a few that may have done it for certain scenes or episodes, but not primarily as the acquisition source. I thought it would be nice to remark on that so that people can appreciate, when they watch our final episodes, that a lot of time and effort and craft went into making these images that were captured on film.”
Jeremy Strong as Kendall Roy in Season 4 of “Succession.” Cr: HBO.
Jeremy Strong as Kendall Roy, Sarah Snook as Shiv Roy, and Kieran Culkin as Roman Roy in Season 4 of “Succession.” Cr: HBO
Matthew Macfadyen as Tom Wambsgans and Sarah Snook as Shiv Roy in Season 4 of “Succession.” Cr: HBO
Capone added a layer of worry to Daley’s point, “And my concern with Succession ending is we have enabled smaller films in the New York area to be able to shoot film because there is an active lab that’s staying above water because we shoot 30,000 feet a day. So I’m concerned with this lack of volume going to the Kodak lab, which might affect the film community in New York City if, indeed, this lab can no longer stay above water.”
The show used Kodak Vision3 5203 50D daylight stock. “But often we had to use the (Kodak Vision3 5219) 500T tungsten depending on where we are, what type of day it is,” Capone added.
In another post-finale session, Capone shared his thoughts with series director Mark Mylod. They spoke with Chris Murphy at Vanity Fair about the showdown of the Roy siblings, especially the scene where Shiv hurriedly exits the boardroom with the final vote for control of Waystar Royco hanging in the balance.
“We wanted to use the reflections and the glass bowl within the glass bowl within the glass building,” says Capone. The result is the Roy children on full display, putting on a show of their worst qualities and deepest insecurities in front of both board members and their employees. “With camera placement and the actors blocking, we were able to make their fight kind of on a little bit of a stage where the board members can see it, and yet they can’t hear it,” continues Capone. “Only we can hear it until it gets vocal toward the end.”
“It was probably the most important scene of the entire series,” added Mylod.
Both Mylod and Capone tried not to get in the way of the action that unspooled, letting it unfold as naturally as possible. “That’s the beauty of our style,” says Capone, “It’s just so subjective, and the camera has the ability to point the audience where we feel they should go to get what’s being told, like a fly on the wall.”
To properly capture the end of an American dynasty, finding a location that was appropriately devastating and stately was important. “We found these two offices. The main boardroom, and we found this other office that I had never shot in,” says Capone. “It’s one of the World Trade Center buildings. I want to say it’s number seven. And it’s about 35 floors up.”
To create that fishbowl effect, Capone had to change some of the overhead lighting in both the boardroom and the adjacent room where the siblings fight, filling the overhead with Astera lights. “When you show up at these buildings, you have no idea what kind of day it’s going to be,” Capone said. “It could be a hot, sunny, cool, cloudy day. So that was important to us and to be able to use as much negative fill as we could.”
“Some of my favorite shots are very low depth of field, and the figures in the background through the glass are almost ET-like,” Capone continues. “They’re very, very thin slivers of line out of focus, and with the light from outside just wrapping around them. And I love stuff like that. It gives it a lot of depth.”
Capone and Mylod also unpacked their Succession journey with IndieWire’s Sarah Shachat, sharing anecdotes with a focus on Logan’s funeral in Episode 9, “Church and State.”
Brian Cox as Logan Roy in Season 4 of “Succession.” Cr: HBO
Jeremy Strong as Kendall Roy in Season 4 of “Succession.” Cr: HBO
Jeremy Strong as Kendall Roy, Sarah Snook as Shiv Roy, and Kieran Culkin as Roman Roy in Season 4 of “Succession.” Cr: HBO
Kieran Culkin as Roman Roy and Alexander Skarsgård as Lukas Matsson in Season 4 of “Succession.” Cr: HBO
Sarah Snook as Shiv Roy in Season 4 of “Succession.” Cr: HBO
Sarah Snook as Shiv Roy and Kieran Culkin as Roman Roy in Season 4 of “Succession.” Cr: HBO
“The tonal tug between tragedy, comedy, and a sort of breathless incredulity is both very funny and very sad and is often carried through camera movement and the way the operators find character reactions. Multiple cameras covered all the eulogies at once, but the energy of the performances impacts how shaky or stable the camera feels to the viewer,” Shachat writes.
“Even Ewan’s (James Cromwell) eulogy was handheld, but it’s like a dance. When you feel that it’s emotional, when you feel that Roman’s becoming unhinged, you know, we feel a gut instinct (to have) more movement and move around him more. Ewan was a more stable foundation of a eulogy, so even though we’re still handheld, we didn’t feel necessarily that we had to dance around him as much,” Capone said.
The shift to Kendall, who has not had a great day with the women in his life, and the choice to emphasize his non-reaction to that moment, was also the kind of unplanned kismet that comes from leaving the camera operators open to where the emotion of a scene takes them. “Mark Mylod and I are sitting next to monitors all the time, and we’ll know something’s coming up, or the operators do it instinctively,” said Capone.
“At one point, we pretty much did a 20-minute take, similar to the day Logan died,” he continued. “The A camera operator and assistant had two cameras, and as soon as one ran out of film, they would just pick up another (camera body), and another team would reload it for them. We had five cameras at the funeral but six bodies because we would flip the A camera and keep going.”
If there is one takeaway from this superb television series, it is how we, as the audience, were positioned to witness these selfish monsters’ often terrible behavior.
Many words and videos have endorsed how the cameras move through the Succession story; it’s ultimately a cynical eye, an anti-capitalist stance. Capone explains the original idea, “Billionaires cannot control the weather, they cannot control health, things like this. So the fly on the wall camera effect was the rest of the world watching these billionaires,” he says.
“They have no idea how good they have it. So we tried to create a naturalistic environment of classic films where the actors could move around. This is the most amazing ensemble I’ve ever been exposed to. The operators, I feel, fall into that ensemble,” Capone continues.
“Cinematography is more than just lighting for the cinema. Cinematography is camera placement, camera movement, and the ability to take the audience and point them in the direction that you think they should be watching. And that’s what we do so well, I think.
“So, we like mistakes, but we like mistakes that just happen in real life, that happen a lot of the time. Whether it’s a late focus or a camera getting to an actor a moment after his words. I was brought up in a classic cinema business where some DPs said, you know, the crosshairs have to be just to the left of the nose, and this far, and the headroom is here, and the horizon has to be there.
“If you look at artwork, there are no rules. You just need to have an image that helps tell that story and, more importantly, the emotions of that moment. And I think that’s something we’ve done pretty well, and people have picked up on. And they think they’ve picked up on things where they’re totally wrong. But other times, they pick up on things that are right on.”
HBO’s “Succession” takes cues from Dogme95, cinema verité, and other styles that use documentary techniques to create fictional stories.
June 15, 2023
June 12, 2023
Is Generative AI Bad for the Environment? A Computer Scientist Explains the Carbon Footprint of ChatGPT and Its Cousins
BY KATE SAENKO, BOSTON UNIVERSITY
AI chatbots and image generators run on thousands of computers housed in data centers like this Google facility in Oregon.
Tony Webster/Wikimedia, CC BY-SA
Generative AI is the hot new technology behind chatbots and image generators. But how hot is it making the planet?
As an AI researcher, I often worry about the energy costs of building artificial intelligence models. The more powerful the AI, the more energy it takes. What does the emergence of increasingly more powerful generative AI models mean for society’s future carbon footprint?
“Generative” refers to the ability of an AI algorithm to produce complex data. The alternative is “discriminative” AI, which chooses between a fixed number of options and produces just a single number. An example of a discriminative output is choosing whether to approve a loan application.
Generative AI can create much more complex outputs, such as a sentence, a paragraph, an image or even a short video. It has long been used in applications like smart speakers to generate audio responses, or in autocomplete to suggest a search query. However, it only recently gained the ability to generate humanlike language and realistic photos.
Using More Power Than Ever
The exact energy cost of a single AI model is difficult to estimate, and includes the energy used to manufacture the computing equipment, create the model and use the model in production. In 2019, researchers found that creating a generative AI model called BERT with 110 million parameters consumed the energy of a round-trip transcontinental flight for one person. The number of parameters refers to the size of the model, with larger models generally being more skilled. Researchers estimated that creating the much larger GPT-3, which has 175 billion parameters, consumed 1,287 megawatt hours of electricity and generated 552 tons of carbon dioxide equivalent, the equivalent of 123 gasoline-powered passenger vehicles driven for one year. And that’s just for getting the model ready to launch, before any consumers start using it.
Size is not the only predictor of carbon emissions. The open-access BLOOM model, developed by the BigScience project in France, is similar in size to GPT-3 but has a much lower carbon footprint, consuming 433 MWh of electricity in generating 30 tons of CO2eq. A study by Google found that for the same size, using a more efficient model architecture and processor and a greener data center can reduce the carbon footprint by 100 to 1,000 times.
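A quick back-of-the-envelope check puts those two training runs side by side: dividing the reported emissions by the reported electricity use gives the implied carbon intensity of the power each model was trained on. A minimal sketch using only the figures quoted above:

```python
def implied_intensity(mwh, tons_co2eq):
    """Implied carbon intensity in grams of CO2eq per kWh."""
    return (tons_co2eq * 1_000_000) / (mwh * 1_000)

print(f"GPT-3: {implied_intensity(1287, 552):.0f} g CO2eq/kWh")  # roughly 430
print(f"BLOOM: {implied_intensity(433, 30):.0f} g CO2eq/kWh")    # roughly 70
```

The gap between the two implied intensities illustrates the point above: where a model is trained, and how clean that data center’s grid is, matters as much as how big the model is.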
Larger models do use more energy during their deployment. There is limited data on the carbon footprint of a single generative AI query, but some industry figures estimate it to be four to five times higher than that of a search engine query. As chatbots and image generators become more popular, and as Google and Microsoft incorporate AI language models into their search engines, the number of queries they receive each day could grow exponentially.
AI chatbots, search engines and image generators are rapidly going mainstream, adding to AI’s carbon footprint. (AP Photo/Steve Helber)
AI Bots for Search
A few years ago, not many people outside of research labs were using models like BERT or GPT. That changed on Nov. 30, 2022, when OpenAI released ChatGPT. According to the latest available data, ChatGPT had over 1.5 billion visits in March 2023. Microsoft incorporated ChatGPT into its search engine, Bing, and made it available to everyone on May 4, 2023. If chatbots become as popular as search engines, the energy costs of deploying the AIs could really add up. But AI assistants have many more uses than just search, such as writing documents, solving math problems and creating marketing campaigns.
Another problem is that AI models need to be continually updated. For example, ChatGPT was only trained on data from up to 2021, so it does not know about anything that happened since then. The carbon footprint of creating ChatGPT isn’t public information, but it is likely much higher than that of GPT-3. If it had to be recreated on a regular basis to update its knowledge, the energy costs would grow even larger.
One upside is that asking a chatbot can be a more direct way to get information than using a search engine. Instead of getting a page full of links, you get a direct answer as you would from a human, assuming issues of accuracy are mitigated. Getting to the information quicker could potentially offset the increased energy use compared to a search engine.
Ways Forward
The future is hard to predict, but large generative AI models are here to stay, and people will probably increasingly turn to them for information. For example, if a student needs help solving a math problem now, they ask a tutor or a friend, or consult a textbook. In the future, they will probably ask a chatbot. The same goes for other expert knowledge such as legal advice or medical expertise.
While a single large AI model is not going to ruin the environment, if a thousand companies develop slightly different AI bots for different purposes, each used by millions of customers, the energy use could become an issue. More research is needed to make generative AI more efficient. The good news is that AI can run on renewable energy. By bringing the computation to where green energy is more abundant, or scheduling computation for times of day when renewable energy is more available, emissions can be reduced by a factor of 30 to 40, compared to using a grid dominated by fossil fuels.
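The scheduling idea in that last point can be sketched very simply: given an hourly forecast of grid carbon intensity, shift a deferrable training or batch-inference job into the cleanest contiguous window that fits it. A toy illustration with invented forecast numbers; a real system would pull the forecast from a grid operator or a carbon-intensity API:

```python
def cleanest_window(intensity_by_hour, job_hours):
    """Return (start_hour, avg_intensity) of the lowest-carbon contiguous window.

    intensity_by_hour: forecast grid carbon intensity (g CO2eq/kWh) per hour.
    job_hours: how many consecutive hours the job needs.
    """
    best_start, best_avg = None, float("inf")
    for start in range(len(intensity_by_hour) - job_hours + 1):
        avg = sum(intensity_by_hour[start:start + job_hours]) / job_hours
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg

# Hypothetical 24-hour forecast: midday solar makes the late-morning hours cleanest.
forecast = [420, 410, 400, 395, 390, 380, 350, 300, 250, 200,
            150, 120, 110, 115, 130, 170, 220, 280, 330, 370,
            390, 400, 410, 420]
start, avg = cleanest_window(forecast, job_hours=4)
print(f"Schedule the 4-hour job at hour {start} (avg {avg:.0f} g CO2eq/kWh)")
```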
Finally, societal pressure may be helpful to encourage companies and research labs to publish the carbon footprints of their AI models, as some already do. In the future, perhaps consumers could even use this information to choose a “greener” chatbot.
June 11, 2023
Apple’s New Vision Pro Mixed-Reality Headset Could Bring the Metaverse Back to Life
BY OMAR H. FARES, TORONTO METROPOLITAN UNIVERSITY
The Apple Vision Pro headset is displayed in a showroom on the Apple campus on June 5, 2023, in Cupertino, California.
(AP Photo/Jeff Chiu)
After the struggles Meta has faced in driving user engagement, many have written off the metaverse as a viable technology for the near future. But the technological landscape is a rapidly evolving one and new advancements can change perceptions and realities quickly.
The Vision Pro headset is a spatial computing device that allows users to interact with apps and other digital content using their hands, eyes and voice, all while maintaining a sense of physical presence. It supports 3D object viewing and spatial video recording and photography.
The Vision Pro is a mixed-reality headset, meaning it combines elements of augmented reality (AR) and virtual reality (VR). While VR creates a completely immersive environment, AR overlays virtual elements onto the real world. Users are able to control how immersed they are while using the Vision Pro.
A video from Apple introducing the Vision Pro headset.
The new R1 chip processes input from 12 cameras, five sensors and six microphones, minimizing input delays and reducing the likelihood of motion sickness.
The Vision Pro display system also features a whopping 23 million pixels, meaning it should be able to deliver an almost real-time, lag-free view of the world.
Why Do People Use New Tech?
To gain a better understanding of why Apple’s Vision Pro may throw the metaverse a lifeline, we first need to understand what drives people to accept and use technology. From there, we can make an informed prediction about the future of this new technology.
The first factor that drives the adoption of technology is how easy a piece of technology will be to use, along with the perceived usefulness of the technology. Consumers need to believe technology will add value to their life in order to find it useful.
The second factor that drives the acceptance and use of technology is social circles. People usually look to their family, friends and peers for cues on what is trendy or useful.
The third factor is the level of expected enjoyment of a piece of technology. This is especially important for immersive technologies. Many factors contribute to enjoyment such as system quality, immersion experiences and interactive environment.
The last factor that drives mainstream adoption is affordability. More important, however, is the value derived from new technology — the benefits a user expects to gain, minus costs.
Can Apple Save the Metaverse?
The launch of the Vision Pro seems to indicate Apple has an understanding of the factors that drive the adoption of new technology.
Apple CEO Tim Cook poses for photos in front of a pair of the company’s new Apple Vision Pro headsets in a showroom on the Apple campus on June 5, 2023, in Cupertino, California. (AP Photo/Jeff Chiu)
When it comes to ease of use, the Vision Pro offers an intuitive hand-tracking capability that allows users to interact with simple hand gestures and an impressive eye-tracking technology. Users will have the ability to select virtual items just by looking at them.
The Vision Pro also addresses another crucial metaverse challenge: the digital persona. One of the most compelling features of the metaverse is the ability for users to connect virtually with one another, but many find it challenging to connect with cartoon-like avatars.
The Vision Pro is attempting to circumvent this issue by allowing users to create hyper-realistic digital personas. Users will be able to scan their faces to create digital versions of themselves for the metaverse.
The seamless integration of the Vision Pro into the rest of the Apple ecosystem will also likely be a selling point for customers.
Lastly, the power of the so-called “Apple effect” is another key factor that could contribute to the Vision Pro’s success. Apple has built an extremely loyal customer base over the years by establishing trust and credibility. There’s a good chance customers will be open to trying this new technology because of this.
Privacy and Pricing
While Apple seems poised to take on the metaverse, there are still some key factors the company needs to consider.
By its very nature, the metaverse requires a wealth of personal data collection to function effectively. This is because the metaverse is designed to offer personalized experiences for users. The way those experiences are created is by collecting data.
Users will need assurances from Apple that their personal data and interactions with Vision Pro are secure and protected. Apple’s past record of prioritizing data security may be an advantage, but there needs to be continuous effort in this area to avoid loss of trust and consumer confidence.
Price-wise, the Vision Pro costs a whopping US$3,499. This will undoubtedly pose a barrier for users and may prevent widespread adoption of the technology. Apple needs to consider strategies to increase the accessibility of this technology to a broader audience.
As we look to the future of this industry, it’s clear the metaverse is anticipated to be fiercely competitive. While Apple brings cutting-edge technology and a loyal customer base, Meta is still one of the original players in this space and its products are significantly more affordable. In other words, the metaverse is very much alive.
Where Sony Is Taking Virtual Production Development (Next)
TL;DR
Sony expands its Digital Media Production Center at Pinewood Studios with the first virtual production unit in the UK using Sony Crystal LED technology.
The new Pinewood virtual production stage employs Sony’s ultra-fine 1.2mm and 1.5mm pixel pitch Crystal LED panels, allowing closer camera placement than the current industry standard of 2.6mm.
Sony also recently unveiled its Virtual Production Tool Set, a suite of software products that act as a failsafe for working with LED volumes.
Sony’s recent NAB Show and post-NAB Show announcements cement its central role in the virtual production industry. Not only has Sony opened a dedicated LED volume at Pinewood Studios outside of London, it has also released a range of production-specific products that advance the workflow.
Sony has expanded its DMPC, or Digital Media Production Centre, demo space at Pinewood Studios into an LED volume to showcase its latest Crystal LED panels and demonstrate how its new products work with the Venice range of cameras.
The opening follows Sony’s partnership with Studio de France to equip Europe’s first virtual production studio early last year in Seine Saint Denis, north of Paris.
As well as showing customers its new Crystal LED panels, in particular the ultra-fine 1.2mm and 1.5mm pixel pitch versions, the DMPC lets visitors see how the new products work with them. The current market standard is judged to be 2.6mm.
Sony’s new virtual production space at Pinewood Studios. Cr: Sony
Virtual Venice
The first of the new products announced at NAB is what Sony is calling the Virtual Production Tool Set, a suite of software products that act as a failsafe for working with LED volumes.
There are two parts to the toolkit; the first is a camera and display plugin for Unreal Engine that works in a number of ways. When users open the plugin they will see a virtual Venice camera interface with pull-down menus for its feature set. Users can virtually adjust those parameters, exporting them to the camera once they’re ready to shoot.
So this is a previs tool for this particular camera. As it’s part of the Unreal Engine, users can start to simulate shooting scenarios. To help with this, they’ll see a mannequin-type figure inside the Unreal Engine scene, which produces reflections from the virtual lights the user turns on. The idea is that users rehearse the shots they have planned, save the settings, and then export them to their Venice camera.
Deciding on an exposure setting or lens choices ahead of a shoot should give users a better sense of accuracy than before. “Fix it in pre” they’re calling it.
Another part of the toolkit is a Moiré alert. Moiré is a big issue when shooting in LED volumes and can ruin a shot. It’s a problem mainly for CMOS sensors when shooting highly repetitive patterns, on clothing or brick walls, for instance, and it is a by-product of how close the screens are and the resolution they have. Sony’s new tool warns users when Moiré is about to appear.
Again, this feature is available inside the Unreal plugin when users are simulating their shooting — but this time it is available for cameras other than the Venice. Users will have to fill in some parameters, such as the LED type, contrast ratio, and the correct pixel pitch of the panel. Other changeable parameters include the color of the alerts. Red is usually selected for “Moiré is present” warnings.
It’ll already know the sensor and lens to give it depth of field information. As users move the camera virtually they will see a series of different colors from green, which means no Moiré, to yellow, “a chance of Moiré,” to red, “you are experiencing Moiré.”
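Sony hasn’t published how the alert actually works, but the underlying geometry is straightforward to sketch: project the LED wall’s pixel pitch through the lens onto the sensor and compare that imaged pitch with the sensor’s own photosite pitch; as the two approach one another, aliasing (moiré) becomes likely. The rough estimate below assumes a thin-lens magnification model, an assumed ~6 micron photosite pitch, and made-up green/yellow/red thresholds, and it ignores defocus of the wall, which a real tool would also take into account:

```python
def moire_risk(led_pitch_mm, distance_m, focal_length_mm, sensor_pitch_um):
    """Very rough moiré risk estimate for an LED wall behind the subject.

    Projects the LED pixel pitch onto the sensor using thin-lens magnification
    (m ~= f / (d - f)) and compares it with the sensor's photosite pitch.
    The thresholds below are illustrative assumptions, not Sony's criteria,
    and wall defocus (which suppresses moiré) is ignored.
    """
    d_mm = distance_m * 1_000
    magnification = focal_length_mm / (d_mm - focal_length_mm)
    imaged_pitch_um = led_pitch_mm * 1_000 * magnification
    ratio = imaged_pitch_um / sensor_pitch_um
    if ratio < 3:
        return "red: moiré likely"
    if ratio < 6:
        return "yellow: a chance of moiré"
    return "green: no moiré expected"

# Example: 1.5mm wall, camera 3m away, 50mm lens, assumed ~6 micron photosites.
print(moire_risk(led_pitch_mm=1.5, distance_m=3.0, focal_length_mm=50, sensor_pitch_um=6.0))
```

With those assumptions, a 1.5mm wall shot from three meters on a 50mm lens lands in the cautionary zone, which lines up with the camera-distance figures Sony quotes below.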
Color Calibrator
Calibrating a camera against an LED volume usually takes around two hours. Sony’s new calibrator claims to do the same job in about 15 minutes. The calibrator plays a series of test patterns on the wall, which are recorded by the camera. The resulting clip is imported into the calibrator and, through a series of alignments and corrections, a profile is produced. This profile is then exported as a 3D LUT, which in turn is imported into Unreal Engine and applied as a .look file to the LED wall.
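Sony hasn’t detailed the math inside its calibrator, but the general shape of such a tool (measure known test patches off the wall through the camera, solve for a correction, bake it into a 3D LUT) can be sketched. The toy example below fits a simple 3x3 matrix by least squares, which is far cruder than a real calibration, and writes a standard .cube LUT; the patch values are invented for illustration, and the final conversion to an Unreal .look file is not shown:

```python
import numpy as np

# Measured vs. intended RGB values (0-1) for a few test patches recorded off
# the LED wall through the camera. These numbers are made up for illustration.
measured = np.array([[0.92, 0.05, 0.04], [0.06, 0.88, 0.07],
                     [0.05, 0.06, 0.90], [0.45, 0.44, 0.43], [0.18, 0.18, 0.19]])
target = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0], [0.5, 0.5, 0.5], [0.2, 0.2, 0.2]])

# Fit a 3x3 correction matrix in the least-squares sense: measured @ M ~= target.
M, *_ = np.linalg.lstsq(measured, target, rcond=None)

# Bake the correction into a small 3D LUT in .cube format (red varies fastest).
N = 17
with open("wall_correction.cube", "w") as f:
    f.write(f"LUT_3D_SIZE {N}\n")
    for b in range(N):
        for g in range(N):
            for r in range(N):
                rgb = np.array([r, g, b]) / (N - 1)
                out = np.clip(rgb @ M, 0.0, 1.0)
                f.write(f"{out[0]:.6f} {out[1]:.6f} {out[2]:.6f}\n")
```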
Sony’s new products, including the recently launched 1.2 and 1.5mm pixel pitch panels, mean real forward movement in LED volume development. The finer pixel pitch allows cameras to get closer to the wall; a 1.5mm pixel pitch puts the camera’s minimum distance from the volume at around three meters. This could also mean that stages could become smaller with the same visual results. A 2.5mm pixel pitch puts the camera at eight meters from the wall, making the stage proportionally larger.
With tools like the color calibrator, setup times can be shorter and the work can be completed by staff who are not color specialists, saving time and staffing costs.
Color, Psychology, and the Age of Image Permafrost
From HBO’s “Euphoria”
TL;DR
Independent creative studio Nice Shoes delves into the relationship between color and narrative, exploring its applications for storytelling, communication and brand-building.
Color is one of the fundamentally human ways in which we experience the world, functioning biologically and culturally; storytelling is the other.
There is an innate link between color and our perception of characteristics, such as the stimulating effects of orange and red and the calming effects of green and blue.
“If the question is how you want your audience to feel, color can give you the answer,” says Phil Choe, a senior colorist at New York’s Nice Shoes.
The link between color and human psychology is well known to filmmakers and the advertising industry. A new white paper released by independent creative studio Nice Shoes highlights research showing the extent to which it can transform the success and effectiveness of creativity.
“As well as influencing our moods, color has a habit of reflecting our mental state. Throughout history, it’s provided a fascinating glimpse into our cultural psyche,” the paper states.
Referencing films and TV shows ranging from Natural Born Killers and Midsommar to Euphoria, Marriage Story and Atomic Blonde, it explains how color dictates human behavior and psychology, how it can be used as a tool for storytelling, the trends which are driving innovation in the art of color today, how color defines culture, and the best practices for getting the most out of color on set.
“Whether you are joyful, grieving, relieved or anxious, color has always played an important role in the fabric of our lives,” says Phil Choe, a senior colorist at Nice Shoes. “There’s no question that this is translating over into the digital worlds which are at the forefront of modern culture. The ability to set the tone of those worlds begins with color.”
For example, scientists have noted that warm-colored placebo pills are more effective than those colored blue or white. Conversely, blue-colored streetlights have been linked to a reduction in both crime and suicide rates.
According to researchers, this is because of an innate link between color and our perception of characteristics. For example, we see orange and red as “stimulants” whereas green and blue are more likely to be considered as “calming” colors.
“And so it’s not only creativity that relies on color: All forms of communication do,” states the paper. “The link between color and human psychology is so established that any communications strategy must utilize its power.”
There’s even science behind this. Research shows that the right hemisphere of our brain deals with vibrant, warmer colors whereas the left side ‘prefers’ sharp imagery standing out in front of more dulled backgrounds.
For Orlando Wood, chief innovation officer at System1, who contributed to the paper, these are troublesome findings.
From Netflix’s “Squid Game.” Photo by Youngkyu Park
“Unfortunately, my sense is that we are living through a cool kind of cultural darkness, or something of a permafrost in imagery,” he says. “Think of the work which has caught the cultural zeitgeist in the past ten years or so — Scandinavian noir dramas, for example, or Squid Game last year. The color tends to be either sharp or cold, with not a lot of warmth around.”
Wood says, “My contention is that our culture has been speaking so prominently to the left brain for so long now that we’ve neglected our right brains. We are missing those warmer colors to which we know, for example, audiences respond more positively in advertising.”
From Cloud-First to Virtual Production: Amazon Studios’ Next-Generation Approach
Watch the full NAB Show 2023 session “Amazon Studios: Building a Next Generation Studio” above.
TL;DR
As part of its Intelligent Content Experiential Zone, the 2023 NAB Show assembled a panel of Amazon Studios execs to share their insights into constructing the next-generation studio.
In a session moderated by Jessica Fernandez, head of tech & security communications at Amazon Studios, panelists included head of technology workflow strategy Christina Aguilera, worldwide head of visual effects Chris Del Conte, and head of product strategy Eric Iverson.
From the adoption of a cloud-first approach to the extensive use of in-camera VFX, the panelists highlighted how Amazon Studios is redefining the entertainment studio model.
Espionage thriller “All the Old Knives,” and Amazon Prime series “Solos” and “The Lord of the Rings: The Rings of Power” served as key examples of Amazon’s next-generation studio approach.
Imagine having sunset for nine hours a day on a film set, or creating a bustling crowd scene in the midst of a pandemic. These aren’t scenes from a sci-fi movie, but real-life examples of how Amazon Studios is revolutionizing the production ecosystem.
As part of its Intelligent Content Experiential Zone, the 2023 NAB Show assembled a panel of key technology leaders from Amazon Studios to share their insights into the groundbreaking strategies and technologies they’re leveraging to build the next-generation production studio. The panel discussion, “Amazon Studios: Building a Next Generation Studio“ was moderated by Jessica Fernandez, head of technology & security communications at Amazon Studios, and featured head of technology workflow strategy Christina Aguilera, worldwide head of visual effects Chris Del Conte, and head of product strategy Eric Iverson.
L-R: Jessica Fernandez, Christina Aguilera, Chris Del Conte and Eric Iverson.
From the adoption of a cloud-first approach to the extensive use of in-camera VFX, the panelists highlighted how Amazon Studios is redefining the entertainment studio model and leading the way in the new era of film and TV production. The discussion centered around Amazon Studios’ pioneering use of a fully AWS-powered cloud infrastructure, its innovative virtual production facility, Stage 15, and the company’s commitment to sustainability and diversity, equity and inclusion. Watch the full NAB Show session in the video at the top.
Espionage thriller All the Old Knives, starring Chris Pine and Thandiwe Newton, served as a prime example of Amazon’s next-generation studio approach. The lead characters in the film frequently meet up in an oceanfront restaurant at different times of day. The production team initially considered shooting on location, but since they were filming in London, capturing authentic sunset views was challenging.
To solve this problem, they turned to virtual production. They shot plates of a sunset and projected these onto an LED wall that was placed outside the window of the set on a stage. This allowed them to control the lighting and weather conditions, effectively giving them “sunset for nine hours a day,” said Del Conte. This approach also offered significant efficiencies and sustainability benefits, as they didn’t have to fly the crew out to a beach location or chase the sun to capture the perfect shot.
The use of virtual production was so successful that Variety, in its review of the film, complimented the beautiful sunset scenes, not realizing that they were digitally created, Del Conte recounted. “Variety got fooled,” he said. “So, at the end of the day, [virtual production is] the right kind of tool to be using for these kind of conditions. You don’t have magic hour, you have magic day.”
The Amazon Studios virtual production volume. Cr: Amazon Studios
Aguilera highlighted the production of the Amazon Prime series The Lord of the Rings: The Rings of Power as her favorite example of the studio’s next-gen approach. She emphasized the studio’s proactive adoption of a cloud-first strategy, which ensured a seamless data flow from camera to final creative output. This strategy proved invaluable when the COVID-19 pandemic struck, allowing the production to continue unabated while much of the world came to a standstill. The experience underscored the critical importance of a cloud-first approach and system interoperability in today’s production ecosystem.
“The fact that we had the cloud-first approach, the interoperability between the systems, the data flow straight from the camera, all the way through final creative, and all of these concepts, you know, they took work,” she said. “But when COVID hit, [the production team] didn’t skip a beat. They didn’t have to stop. The rest of the world stopped. They kept going. So that was pretty amazing, the fact that we were able to be proactive and be in a position to keep moving forward.”
Amazon Prime anthology series Solos was another example of the studio’s innovative approach, Del Conte said, describing an episode featuring Helen Mirren. “The entire scene was her inside the space pod the entire time, all white interior bubble windows. She had a red reflective leather space suit on and she has whitish hair.”
The initial plan was to use green screen outside of the windows, but this approach would have resulted in green screen spill, changing the color of the pod interior and Mirren’s hair. The bubble windows themselves also presented challenges and, recognizing these issues, Del Conte proposed virtual production as the solution.
He recalled a gratifying moment towards the end of the shoot when a member of the post-production team thanked him for his suggestion, saying, “Not only did we save time and money, but we’re also able to start testing this episode in two weeks,” as opposed to going through iterative rounds of VFX shots and management.
Del Conte emphasized that this approach was not only more efficient and cost-effective but also resulted in in-camera final effects shots ready for testing. “This was really the only way to do this kind of shoot, and [resulted in] a better creative experience.”
If you want to make money as a content creator you’ll need to think and work like a business. Not only does it take five months to earn your first dollar and just over a year to begin working full time as a creator, but you’ll also most likely need upwards of $10,000 in the bank to support yourself before the dimes roll in.
“It’s a business, not a freelance gig, and it requires a business approach to revenue generation, management, operations, etc. — even at a small scale of one — to be successful.”
That’s according to new research into the creator economy from The Tilt. Specifically, its research asked creators themselves what it actually took to do their jobs.
“A content enterprise is not a get-rich-quick scheme… it’s not even an ever-get-rich scheme for most,” the report’s authors say.
Even if creators go full time on their content business there’s no bonanza in earnings. The average full-time creator earned $86,000 in 2022. On average, full-time creators expect to bring in approximately $108,199 in revenue in 2023 and will pay themselves $62,224 — a gross margin of 59%.
Most creators (86% in this report) say they think of themselves as entrepreneurs, and non-creative tasks take up nearly half of their time.
Cr: The Tilt
Content entrepreneurs spend a little less than half of their week on creative efforts. The rest of the time, they’re knee-deep in operations, marketing and sales, content distribution, and other unglamorous tasks.
As one creator reports, “I spend most of my time managing people, doing accounting, talking to sponsors, managing editorial calendars, fixing equipment, etc. People think being a creative is all puppies and rainbows — and no one wants to hear you complain.”
The biggest challenge, cited by 64% of respondents, was growing their online audience.
Tilt has some advice: The more niche the audience and the narrower the topic, the better the odds for success. Build relationships with your audience by responding to comments, asking for feedback, and creating more of the types of content they respond to, it suggests.
Focus on building connections with your audience and other creators. Someone following you is the start of a relationship, not the end result.
“It’s not actually about how good your content is, it’s about how you leverage it and monetize it. And that mostly comes down to marketing and publicity. The so-called ‘best’ creators are often those who are just best at doing their own marketing.”
The 2023 NAB Show session, “The Independent Age in the Creator Economy,” offered expert insights into brand partnerships and more.
May 7, 2023
Will Video Games Give Hollywood an “Extra Life”?
Pedro Pascal and Bella Ramsey in “The Last of Us,” courtesy of HBO
TL;DR
The success of HBO’s “The Last of Us” has streamers rapidly expanding their game-related series, with game IP providing culturally relevant content that can be adapted relatively easily.
While it seems likely that video games could be the next frontier for box office dominating big-budget adaptations, the medium has plenty of quirks that will make the adaptation process difficult.
At least 60 game-to-screen adaptations are currently in various stages of development.
HBO’s The Last of Us has been hailed as the first good video game adaptation. After years of trying, has Hollywood finally got the formula right — or is it just that video game developers have become better at storytelling than the studios?
Perhaps neither, but one thing everyone seems to agree upon is that The Last Of Us provides a benchmark for the rapidly growing number of game-related feature films and series coming our way.
Global video game adaptations soared by 47% from 2021 to 2022, according to analyst firm Omdia, and streamers are now increasing investments in bringing games to screens as high-end live action series.
“Streaming services and studios need more content to monetize their services and reach profitability,” Maria Rua Aguete, chief media analyst at Omdia, tells Richard Middleton at Television Business International. “Dedicated fan bases across IP such as games, books and podcasts are becoming increasingly valuable.”
In some quarters this business concept is being called “transmedia storytelling.” It’s not a new idea, as studios have consistently done their level best to wring as much IP out of a successful franchise as they can by spinning features into TV shows and video games and all sorts of other media merch.
In addition, game studios were already looking to Hollywood to spread their stories and have struck development deals with streamers. “Now they have an ideal to aspire to,” points out Will Bedingfield at Wired.
Broadly speaking, if a story is extended, it’s transmedia, so the third episode of The Last of Us, which explores the love between minor characters Bill and Frank, counts; other episodes constitute straightforward adaptation.
“Playing The Last of Us, few people thought of Bill as much more than a trap-setting maniac; watching The Last of Us, they saw him in a different light,” says Bedingfield. “The game’s universe grew deeper.”
To the game studios, it seems obvious why the TV sector is taking increased notice of the content and experiences they are creating.
“AAA video games are very close to TV series (not movies) in terms of creating complex narratives and building strong relationships between in-game characters and gamers/audience, which is a great foundation to build on,” Bartosz Sztybor, comic book and animation narrative director at Polish video game developer CD Projekt Red, tells TBI.
The size of the gaming market — two-thirds of US consumers are gamers across mobile, PC and console platforms — also provides huge cross-selling potential, meaning more adaptations can be expected.
Adaptations are also on the rise because gaming IP tends “to lean into the political and social zeitgeist,” the Omdia analyst adds, citing the fresh approach to LGBTQ+ representation and multigenerational lead characters in The Last of Us.
Yet the formula that made the HBO drama a hit could simply be the “genius” pairing of showrunner Craig Mazin with The Last of Us game creator Neil Druckmann. As HBO Max chief content officer Casey Bloys tells TBI, “there is nothing particular about video games that make them better or worse to develop,” and stresses that he was drawn to the project due to having had “a very good experience” with Mazin on his drama Chernobyl.
Helene Juguet, who runs the French division of Ubisoft Film & Television, says original creators should not necessarily serve as a showrunner or writer, “because those are two very different types of expertise.”
She does suggest, however, that it is “absolutely necessary for the show’s creative team to deep dive into the world of the game they are adapting” so they can understand what the creator was aiming to achieve and connect with “what makes the fans tick.”
“At the same time that video game adaptations have been waxing, superhero movies have been waning,” says Andrew King of The Gamer, who has crunched the numbers. Shazam! Fury of the Gods opened with a $65 million global weekend, $20 million under the studio’s lowest estimations, and was just the latest in a line of superhero movies that have underperformed.
Though Spider-Man: No Way Home was a massive hit, the MCU has now had two flops in close succession with 2021’s Eternals and last month’s Ant-Man and the Wasp: Quantumania, while none of its TV shows have lit up water coolers like The Last of Us.
With the growing sense of superhero fatigue, it is fair to speculate that video game adaptations could take their place.
“In a world where studios seem primarily interested in proven properties with an excited fan base already willing to pay for admission, it’s not hard to imagine video game adaptations becoming the new superhero genre,” says Devin Baird, writing at MovieWeb.
Studios may be tempted to leap onto games as the next big multi-universe franchise, but the same approach that worked with the MCU will be harder with a video game as the source material.
“Comics have always married graphic art and written stories, but games have often treated stories as an afterthought, a connective tissue added after the fact to make the transitions between levels and set pieces run more smoothly,” says King. “Hollywood execs quickly run into the inconvenient reality that games don’t tend to provide much story justification for why characters from different series are suddenly in the same world together.”
Plus, each games hardware manufacturer has its own exclusive series, but, unlike with DC and Marvel, just because Microsoft owns both Master Chief and Marcus Fenix doesn’t mean they exist in the same universe.
“The upside of this is that audiences likely won’t have to keep up with a bunch of interconnected movies and TV shows just to understand the next big thing. Mario isn’t going to end on a tease for The Last of Us season 2.”
The reality could also be, as an article in Time magazine points out, that video games have become so cinematic in their scope, storytelling and visuals that they’ve superseded some films in terms of their ambition and emotional resonance.
“The Last of Us stayed close to its source material exactly because the original game was designed, essentially, as interactive cinema with all the twists and heartbreaks one might expect from a prestige project.”
So, while it seems likely that video games could be the next frontier for box office-dominating big-budget adaptations, the medium has plenty of quirks that will make the adaptation process difficult.
There are currently upwards of 60 game-based productions in development and analyst firms like Newzoo are already reporting that game IP is climbing in value “as transmedia becomes more relevant.”
Time lists a number of game-to-screen adaptations, including Gran Turismo starring Orlando Bloom, which was released in August; a Tomb Raider reboot, which Fleabag writer and star Phoebe Waller-Bridge is putting together for Amazon Prime; Bioshock, a post-apocalyptic thriller with Hunger Games director Francis Lawrence attached for Netflix; a series based on Metal Gear Solid with Oscar Isaac attached; and Borderlands, a feature with Cate Blanchett, Jack Black, and Jamie Lee Curtis adapted by TLOU’s Craig Mazin.
This might just be the start of a new wave of video game movies that rule movie theaters and streaming services, becoming the new dominant media.
At NAB Show, “Last of Us” producer Craig Mazin discusses the “luck” involved in assembling the right team for the show’s production and post.
May 7, 2023
The First Wave of AI Software for Production and Post
Artificial intelligence generated a lot of excitement this spring. Its influence spread from Silicon Valley to Hollywood, with a confluence of technologies introduced in Las Vegas at this year’s NAB Show.
Generative AI, in particular, caught the attention of content creators and consumers, manufacturers and media companies. Its promises range from improved efficiency to truer independence for creatives. In Q2 2023, we’re seeing workflows and products shaped by the potential of neural networks, while creatives are starting to take advantage of AI tools.
However, that excitement has come paired with several flashing ⚠️ caution ⚠️ signs. The ethics of AI (and we’re not just talking about deepfakes), concerns about inadequate regulatory frameworks, and ever-present worries about how machines may change the job market persist.
But one expert, ETC’s Yves Bergquist, tells us to consider the source when listening to these worries: “Are they accountable to their audience, or to their code?”
That is to say, what do those who are actually working on the algorithms and data sets and workflows think — and contrast that information with context from those who might benefit from controversy and hyperbole.
Another generative AI proponent and creative practitioner, Pinar Seyhan Demirdag, thinks these tools will ultimately find their footing in use cases determined by the four Ds: tasks that are dull, dirty, dangerous, or dear for humans to perform. (Rotoscoping comes to mind as an example of dull.)
Demirdag joined Bergquist at NAB Show to discuss generative AI workflows, as well as their thoughts for how this tech may shape the industry’s near-term future. Watch the session below.
Read on: Adrian Pennington also explores a lot of AI myths in his coverage for NAB Amplify. His breakdown of What Ifs vs. What Ares for generative AI is especially clarifying (as of April 2023, anyway).
You’re most likely familiar with OpenAI’s ChatGPT and may even be a paying subscriber for GPT-4. DALL-E and Midjourney may have been in your arsenal since last summer. Many other companies have long integrated forms of AI and machine learning into their offerings, but here are a few newer generative AI tools targeted at M&E creatives.
Adobe: Firefly
Adobe announced its AI art generator ahead of this year’s NAB Show. The beta version of Firefly is focused on text and images, but the company expects to integrate its features across its video, audio, animation and motion graphics design apps in the near future.
Colourlab.ai
Colourlab promises film-quality color grading without the drudgery. It pitches this AI tool as an assistant that frees up creatives to spend more time unleashing their imaginations and returning to kid-like experimentation. There’s also a pitch for simplifying remote collaboration and reducing file storage for versioning (two other trends that we know aren’t going anywhere for M&E).
DeepBrain AI
Are you interested in beginning a “virtual human journey”? That’s how DeepBrain AI describes its tools for creating 2D and 3D avatars and related text-to-video solutions.
Flawless
This startup’s tagline is “Hollywood 2.0,” and it promises “magical new tools and emerging technologies” tailored for filmmaking. What does it currently offer in its generative AI toolbox?
NVIDIA & Getty Images collab: NVIDIA Picasso
NVIDIA and Getty Images announced a collaboration in late March, in which NVIDIA Picasso’s generative AI model will be trained on Getty’s image library of fully licensed images (and Getty contributors will be credited and paid accordingly). You can learn more via NVIDIA CEO Jensen Huang’s GTC keynote here.
Runway
This New York-based startup offers more than 30 of its so-called “AI Magic Tools” (are you sensing a theme in how we think about AI?) that aid creatives in generating and editing images, audio, and video content.
You may have heard about Runway during the 2022 Awards Season; some of the VFX work for “Everything Everywhere All at Once” was crafted using Runway’s editing suite (specifically the rotoscoping work for that rock scene, IYKYK).
Seyhan Lee: Cuebric
This browser-based app promises the ability to move “from concept to camera in minutes” using the Stable Diffusion image generation model. Forbes’ Tom Davenport shares a good overview of Cuebric.
Synthesia
Synthesia Studio is an AI video generation platform with a lens on the corporate video space. Its home page highlights the ability to transform PowerPoint presentations into training videos with avatars, as well as use cases for sales, onboarding and more. There’s less emphasis on creativity and more focus on efficiency and budgets.
This Week: Ad Pullback continues as YouTube revenue declines. New Gen AI breakthrough adds recursion and autonomy. A study debunks the middle class in the creator economy – but I see it differently (it’s all about the school of hard-knocks). Plus, updates on LinkedIn, TikTok, Wattpad, Bored Apes, and more. It’s the first week of May 2023 and here’s what you need to know NOW!
Ad Pullback Continues: Although Google had a good quarter, YouTube revenue continued to decline. Ad revenue was down 2.6% from last year’s 2nd quarter, albeit a better YOY decline than first quarter. It’s both structural and cyclical. Cyclical, as ad revenue has been broadly down (although Meta was up 3% YoY, with 24% higher Reels views which is a bright spot). But also structural as more viewers move to Shorts, which is likely cannibalizing longer-form views and revenue. Getting Shorts monetization right is critical, but so is blunting TikTok’s time-spent. I won’t rehash our Shorts updates from last week, but even as views are growing the clock is still ticking on unlocking Shorts revenue.
The latest Gen AI breakthrough: You need to know about Auto-GPT, which adds recursion, autonomy and self-prompting to ChatGPT. It’s well worth checking out. I could see a multi-step Auto-GPT process that evaluates and updates video clips for a daily news summary video or iterates headlines and thumbnails at light speed. Here are 5 early AutoGPT efforts to help you brainstorm potential creator economy applications.
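For flavor, here’s a minimal, hedged sketch of what that kind of self-prompting loop could look like — iterating a headline by having the model critique and rewrite its own output. The `generate()` helper is a hypothetical stand-in for whatever LLM API you use; this is illustrative only, not Auto-GPT’s actual code.

```python
# Illustrative only: a tiny self-prompting loop in the spirit of Auto-GPT.
# generate() is a hypothetical placeholder for a real LLM call (OpenAI, a
# local model, etc.); it simply echoes the prompt so the sketch runs without keys.

def generate(prompt: str) -> str:
    return f"[model output for: {prompt[:60]}...]"  # swap in a real API call here

def iterate_headline(topic: str, rounds: int = 3) -> str:
    """Draft a headline, then let the model critique and rewrite it several times."""
    headline = generate(f"Write a video headline about: {topic}")
    for _ in range(rounds):
        critique = generate(
            f"Critique this headline for clarity and click-through appeal: {headline}"
        )
        headline = generate(
            "Rewrite the headline using this critique. "
            f"Headline: {headline} Critique: {critique} "
            "Return only the improved headline."
        )
    return headline

if __name__ == "__main__":
    print(iterate_headline("YouTube Shorts monetization in 2023"))
```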
New Study Debunks the Middle Class: Seems like everyone wants to release a study about the Creator Economy. This week’s newest entry is from Goldman Sachs, predicting that the global creator economy will grow to $500 billion by 2027. Goldman, alas, doesn’t see the emergence of a middle class, predicting that only the top 4% will make over $100k a year even as the pie grows. Simon Owens refutes this report – and Citi’s March study – and says a middle class indeed does exist. According to Owens we need to see creators as startup CEOs, and shouldn’t expect profitability for a while. I see it slightly differently – see below.
The Value of a Creator Education: Lots of noise around the unlikely career prospects for creators – particularly in this FT story sent to me by reader Don Anderson last week. I don’t necessarily agree with the article’s thesis, though. Yes, many (most) creators won’t find it a sustainable long-term career. But it’s not a hopeless endeavor. Any creator with over 50,000 followers has (like it or not) become the CEO of a small – or not so small – direct-to-community business. There are many avenues to building sustainability even as your community growth stalls (see Kajabi, Orca, Gumroad and many others).
But even when the rainbows and unicorns fade, there’s value remaining. Becoming a creator isn’t a dead end. Stick at it, find some success (if fleeting), and it’s equivalent to a graduate degree in social video. Those skills you learned telling stories, optimizing thumbnails and headlines, planning and editing videos, scouring analytics, developing a community and building a support team are all eminently transferable to the corporate world.
That TikTok MBA, YouTube MS or PhD in LinkedIn is your ticket to a lucrative career telling corporate stories on digital video platforms. Smart companies are already realizing they need a chief TikTok officer, YouTube Director or social video strategy expert.
And if you run a business, perhaps your next hire should come from the ranks of the creator world. They might be a bit raw after working on their own, but with mentorship and support they just might be the best hire you make in 2023.
Being a creator isn’t a dead end. It’s a hard-knock entrée into even bigger success – both for you and the lucky company that hires you.
Thanks for reading and see you around the internet. Send me a note with your feedback, or post in the comments! Feel free to share this with anyone you think might be interested, and if someone forwarded this to you, you can sign up and subscribe on LinkedIn for free here!
FAST has evolved at speed in the US, proving a popular and profitable way for rights holders to incrementally increase revenue from existing library content. It is from this base that FAST is expected to develop into a $10 billion-plus industry, with few signs that growth will be stunted any time soon.
For channel owners, discoverability has become the key focus and it is apparent both in the US but also in less mature markets, such as the UK. Whether in Europe or the US, cutting through on the EPG means providing a curated experience for the viewer with content that pops and demands attention.
Markets outside the US are expected to create a $2 billion revenue opportunity. Indeed, it is in countries outside of the US that growth is strongest, with FAST revenue surging by almost 50 times between 2019 and 2022.
With more than 1,500 FAST channels in the rapidly maturing US market, the challenge for service providers is one of discoverability.
The US will remain dominant in absolute revenue terms, but the fastest growth will come from countries outside of the US, as FAST’s international momentum gathers pace.
These are the key findings from a new report, “Move FAST or Get Left Behind,” into the spread of the Free Ad-Supported TV ecosystem by Television Business International.
There is now an array of FAST services in the US, with the majority offering between 200 and 350 channels: LG Channels, Roku and Paramount-owned Pluto TV all hover around the top end of the spectrum, but there are also numerous niche audiences catered to via services such as current affairs-related Haystack News, The Weather Group’s Local Now, and Sinclair Broadcasting-backed STIRR.
“Clearly, with so many channels and such an array of topics now available, viewers are faced with vast choice while FAST channel owners are facing considerable challenges around discoverability,” says TBI Vision editor Richard Middleton, one of the report’s authors.
“Innovation is needed for discoverability of channels and providers need to try to work together with platforms on that,” says Bea Hegedus, global head of distribution at Vice, which operates FAST channels on Tubi and Samsung TV Plus. Hegedus, quoted in the report, says that FAST channel owners “that lack clear branding will need heavy investment to find and retain an audience.”
For the major players, TBI finds that the strategy of quantity is now shifting to quality. With FAST advertising revenue in the US expected to hit $7.3 billion in 2024, FAST services are looking to provide more premium fare and brand cut-through: WBD’s deal with Tubi, for example, will see the service launch 14 WB-branded FAST channels, as well as three curated FAST channels crossing reality, series and family.
“As platforms compete for viewers they will try and differentiate themselves from the competition by looking for exclusive content or an exclusive launch window for new content,” Bob McCourt, COO at Fremantle International, tells the report authors. “We are seeing this trend already, as some of the major studios are windowing more premium content into the FAST space, which is legitimizing its growing adoption as a free, cable replacement.”
Cr: TBI Vision
Models are also shifting, depending on the company and how it is approaching FAST. Per the report, Paramount increasingly sees Pluto TV, which it acquired in 2019 for $340 million, as a way to funnel viewers to its other streaming products, while the service itself carries numerous channels with Paramount content. However, most channels are striking multiple deals with FAST services to ensure “carriage.”
Models differ, but the US industry has tended towards an approach that sees channel owners selling a proportion of the ad inventory. Revenue share is also popular (often now around 50/50 between FAST service and the channel), while some rights holders may receive a fee for licensing a channel.
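As a back-of-the-envelope illustration (the figures below are invented, not drawn from the TBI report), here is roughly how a channel owner’s monthly take might compare under the three arrangements described above:

```python
# Hypothetical comparison of common FAST carriage models. All numbers are
# invented for illustration and do not come from the TBI report.

monthly_ad_revenue = 100_000  # gross ad revenue the channel generates on one platform ($)
owner_inventory_share = 0.30  # portion of ad inventory the channel owner sells itself
revenue_split = 0.50          # typical 50/50 split between FAST service and channel
flat_license_fee = 40_000     # fixed monthly fee for licensing the channel ($)

models = {
    "Inventory split (owner sells 30% of ads)": monthly_ad_revenue * owner_inventory_share,
    "50/50 revenue share": monthly_ad_revenue * revenue_split,
    "Flat licensing fee": flat_license_fee,
}

for name, owner_take in models.items():
    print(f"{name}: ${owner_take:,.0f} per month to the channel owner")
```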
Aggregators are also buying up programming rights and curating their own channels, which are then put back into the FAST channel ecosystem, with each party receiving a fee.
McCourt adds, however, that a “critical mass” of channels is being reached. He also expects “increased allocation of advertising dollars by agencies to FAST,” and “more innovations in advertising, as platforms adopt brand integrations as well as traditional adverts.”
Shaun Keeble, VP of digital at Banijay Rights, which operates more than 20 FAST channels globally, is quoted by the report as saying the need for exclusivity of channels will heighten across all platforms.
“I expect there will be more personalization of EPG offerings and more sophisticated tailoring of content down the line, too,” he says. Keeble also believes that first and second window runs, and even original series, “will increase in number as commercial models adapt and the need for viewer retention becomes more vital than ever.”
The Rest of the World
Europe has lagged behind the US in FAST uptake but the format is now growing in popularity, with advertising revenues expected to hit $500 million this year and more than twice that by 2027. There are limiting factors in some markets, such as the UK, where FTA is more common, but the opportunity to more closely target specific viewers means ad growth is widely expected.
Much of that growth will come from a select few markets, with UK revenue expected to quadruple over the next four years to hit $506 million by 2027; Germany is also expected to show potential, with revenues forecast to exceed $200 million within five years.
Marion Ranchet, founder of The Local Act, tells TBI, “It’s not easy to copy and paste the US winning formula. Each region is different when it comes to CTV penetration, advertising maturity and the like, all of which are key ingredients to FAST.
“In Europe, one major difference is the fact that free as a value proposition is nothing new. We have FTA broadcasters bringing us amazing content. Therefore, the immediate appeal of FAST in the US won’t win hearts as quickly here.”
And while English-language countries have dominated the FAST landscape to date, this could change: for example, Omdia has found that Tubi is particularly popular with US Hispanic audiences largely because it carries considerable amounts of Spanish-language content.
Executives in charge of four of the leading FAST services convened at NAB 2023 to give a full overview of the sector and where it is headed.
April 25, 2023
April 18, 2023
New Normal, Work/Life Balance: Remote Production Is Here to Stay
The benefits to news and sports include reduced equipment and personnel requirements
By Peter Suciu, TV Technology
It was three years ago that the industry pivoted to accommodate COVID-19. The world has returned to normal for the most part, but broadcast production remains changed forever. The pandemic forced the industry to adapt and overcome, and in the process, it discovered that having teams work remotely had some advantages.
“Remote production is here to stay,” said Sasha Zivanovic, CEO of Nextologies, which operates a broadcast video delivery network specializing in broadcast-grade video connectivity. “The pandemic showed that you could practically get away with consumer equipment from home to facilitate what is normally produced in a studio or broadcast center.”
The sentiment was shared by Nick Ma, CEO and CTO of Magewell, which is showing its Ultra Encode AIO advanced live media encoder at the show.
“The benefits of remote production for news and sports — such as reduced equipment and personnel requirements at the event site, which in turn lowers production costs — are compelling,” said Ma. “It also allows valuable resources such as experienced staff to be maximized. For example, with remote production, a producer or director may be able to work on multiple events taking place in different cities on the same day, all from a centralized production center.”
However, bandwidth will continue to be an important consideration when sending multiple remote feeds concurrently from these distant locations and the reliability of the IP connection is critical.
“The ongoing rollout of 5G technology promises to bolster both of these aspects in wireless production environments, delivering lower latency and higher bandwidth that can be combined with cellular bonding to provide network redundancy across multiple carriers,” added Ma.
LESS TRAVEL, SMALLER TEAMS
Remote production has continued to result in smaller teams heading out on the road, even for major events — while the bulk of the staff can also be available to work from a broadcast center or even from home — reducing costs in the process. This will continue to present opportunities, as well as some challenges.
“We are traveling a lot more than just six months ago, but even before the pandemic our customers were seeing the benefits of remote production,” said David Edwards, product manager at Vislink. The company is demonstrating its 5G-Link, a cellular video and data communication device that enables bidirectional data communication between free-roaming wireless cameras and production centers.
“Remote production allows teams to be more efficient and provide for that work/life balance, and this makes it easier to recruit better as not everyone is ready to constantly be on the road all the time,” said Edwards. “We also see that cloud production is enabling teams to work together on multiple events without the need to travel to each one.”
In other cases, smaller — possibly even single-person — teams can do what used to require a truckload of gear.
“Thanks to efforts in miniaturization, most everything that is in a grip truck can practically fit in a backpack,” said Douglas Spotted Eagle, UAS instructor and director of educational programming at Sundance Media Group. “The heaviest kit is just 65 pounds, which can be important to note if you’re traveling by airplane.”
There are still times when not all aspects of a production can be handled remotely.
“A lot of what our team discusses in pre-production is how to get the most authentic content.
Much of my directing work is interview-driven docu-style, and I can tell you that, as many remote interviews as I have successfully conducted during and since the pandemic, there is nothing like speaking face-to-face,” said Amy DeLouise, FMC programming director and founder of #GalsNGear.
“During the pandemic, we pivoted to producing entirely virtual events, which meant bringing a lot of our clients up to speed on effectively what it means to produce hours of broadcast television. You can’t just have talking heads for a day-long conference.”
IP-enabled workflows for remote video production will allow broadcasters to cut costs and innovate new ways to create and consume content.
March 11, 2024
April 18, 2023
Despite Challenges, NAB Chief LeGeyt Says Next 100 Years Look Positive
NAB President and CEO Curtis LeGeyt (left) and KMEX Los Angeles anchor Gabriela Teissier at NAB Show’s welcome session on Monday.
In welcome address, CEO says broadcast TV, radio have a street-level edge in media ‘arms race’
By Michael Malone, Broadcasting + Cable
Watch the full NAB Show welcome session above.
NAB President and CEO Curtis LeGeyt sat for questions from Gabriela Teissier, anchor at Univision’s KMEX Los Angeles, during NAB Show’s welcome session. LeGeyt said the show’s 100th anniversary allows NAB to reflect on the past and sort out how to best help members in the future.
“The thread throughout all of it is innovation,” he said. “Innovation in service to our local communities.”
LeGeyt spoke of the “arms race” in terms of the modern media age, as the likes of Google, Amazon Prime Video, Spotify and other tech giants fight traditional media for users and revenue dollars. LeGeyt stressed that broadcast is unique in that it covers news at the street level. “We have the boots on the ground in local communities,” he said, “and that’s something no one else is doing.”
The pandemic was a reminder, LeGeyt said, of local broadcasters’ role in the communities they serve. Each subsequent severe weather event or other major local story is another reminder. “It’s very, very clear where broadcasters thrive — free, local, live,” he said.
ATSC 3.0 enhances local broadcast’s role in the community, LeGeyt added, noting how 60 markets now feature the standard, also called NextGen TV.
Teissier asked about radio’s efforts to maintain its presence in automobiles. LeGeyt cited the “fierce competition for real estate on the dashboard” involving the newer media players. Making radio’s battle more difficult, he said, is that the industry has a vast array of owners, while the likes of Sirius XM and Spotify have but one.
“If we’re not all rowing in the same direction as an industry,” LeGeyt said, “we’re going to lose this arms race.”
Cable TV, podcasts and other digital media often offer users a forum among those who share their take on the world, LeGeyt said. Local broadcast, for its part, brings people with “different world views, different persuasions, different interests” together with unbiased content. Reporters in a given community know the best way to communicate with consumers in that region.
LeGeyt stressed that AM radio remains “very relevant,” reaching rural communities and serving as the backbone of the Emergency Alert System.
Teissier asked about Spanish-language consumers, and LeGeyt mentioned that that demographic is most reliant on local broadcast. “They are far and away the most susceptible to disinformation on social media,” he said.
Local broadcast is “filling an enormous void left by the newspaper industry,” said LeGeyt, and has the support of the lawmakers he encounters in Washington.
Moments before FCC Chairwoman Jessica Rosenworcel took the stage, LeGeyt pressed for “modernized” ownership regulations that better reflect the state of media in 2023.
Asked about NAB weighing in on the FCC’s handling of the proposed Standard General-Tegna merger, LeGeyt said NAB typically stays out of mergers and acquisitions. But he felt the “uncertainty” in the FCC’s regulatory process may get in the way of further investment in broadcasting. “Do some fundamental changes need to be made to that process?” he wondered.
Teissier also asked about AI, and LeGeyt said such technology can both help and potentially hurt broadcasters. AI can help stations better understand what’s going on in their communities and can free up reporters to be out in the field more. He’s also concerned about content creators being fairly compensated and the potential for mistakes in reporting rooted in AI.
“There are a lot of opportunities but I want to wave the caution flag,” he said.
The future looks bright for broadcast, the NAB chief said, but it will take hard work. “I couldn’t be more excited to lead that fight,” LeGeyt concluded.
The Library of American Broadcasting Foundation’s second annual Insight Award to CBS newsmagazine “60 Minutes” during Monday’s welcome session (from left): Session co-chair Heidi Raphael of the LABF; “60 Minutes” Executive Producer Bill Owens; session co-chair Jack N. Goodman and NAB President and CEO Curtis LeGeyt.
His words seemed to resonate with the audience. “I think that he’s got a great grasp of the issues, he communicates his point of view and NAB’s point of view, and he talks about broadcasters in the most favorable light — that we’re here to serve our local communities,” Phil Lombardo, CEO of Citadel Communications, said. “He makes that point time and time again and that’s what you want from the CEO of the National Association of Broadcasters.”
Immersive Technology Lands in the Spotlight Via the Metaverse
Growth to take off as virtual interfaces transition from tech toys into tech tools
BY Susan Ashworth, Tv technology
If you needed a specific definition for what the metaverse can do, you may be waiting awhile.
It’s an immersive embodiment of the internet. And it’s a shared, personalized experience. But it’s also an animated, interactive playroom, one that gives us the chance to experience our existence in ways we can’t in the physical world.
For broadcasters, game makers and content creators, the metaverse has the capacity to transform production across the industry.
By using affordable virtual reality technology, a media company might have the capacity to view, manage and interact with an ongoing production regardless of where it is happening. Or engage linear television viewers with an immersive, companion experience.
What’s clear is that the metaverse is poised to be something big.
“In general, it feels like the industry is on the doorstep of taking major strides toward delivering on truly immersive entertainment,” said Chris Brown, NAB executive vice president and managing director of Global Connections and Events.
GROWING MARKET
In its Tech Trends 2023 report, Deloitte Insights found that the metaverse is expected to be an $80 billion market by 2024 as companies begin to use the technology to create an enriched alternative to the flat, two-dimensional world we currently access via video feeds, email and texts.
“In other words, the metaverse is best thought of as a more immersive incarnation of the internet itself,” the authors of the Deloitte report wrote. “[It is an] ‘internet plus’ as opposed to ‘reality minus.’”
Growth is expected to take off as virtual interfaces transition from technology toys into technology tools, with new business models following closely behind.
In a recent panel discussion about the metaverse, Deloitte Consulting Principal Jessica Kosmowski said that industries are just now at the cusp of exploring unique initial use cases of the metaverse.
“We are essentially looking at the next evolution of the internet,” Kosmowski said. “Every aspect of the tech, media and telecom ecosystem is in for a major change in the next few years. Media companies will need to develop new business models [and] engage consumers with new content and experiences. Products and services will be reimagined at every layer of the technology stack.”
NAB Show is tackling the issue with a series of sessions, roundtable discussions and exhibitor displays. Leaders from Microsoft and Dentsu will take an in-depth dive into the metaverse and explore how companies have already begun to create destinations and experiences during the Tuesday session “Secrets of Building Your Brand in the Metaverse.”
Another Tuesday session, “East vs. West: How Will the Metaverse Evolve and Converge Globally,” will explore the commonalities and obstacles that exist between the Western entrepreneurial model and the Eastern centralized model and what businesses can expect when it comes to building within this new interconnected universe.
CREATIVE USES
What are the possibilities of all this? Consider scented packs that could be connected to a virtual reality headset to mirror the lush, scent-filled environment a user is watching on screen. Or a hyperreal augmented reality shopping experience led by an AI-powered avatar. Or the use of sensitive, interactive haptic gloves that would give a user a sense of touch.
There’s already demand for blending physical and virtual worlds in the media industry.
Sinclair Broadcast Group and Deloitte recently announced plans to launch a new metaverse sports fan community driven by a 3D creation tool. Beyond simply viewing a live game, fans can engage before the season and before each game. Sinclair called the partnership a key step in driving new revenue streams and deepening engagement with its viewers by redefining the sports viewing experience.
Almost universally, experts are saying that those interested in what the metaverse has to offer should start with strategy, whether the main goal is to develop new streams of revenue or to improve production operations through an augmented work experience.
On the show floor, exhibitors will spotlight their work in the metaverse and related experiences like Web3, AI and data-driven personalization.
“New immersive content experiences are imminent, from pure AR/VR or mixed reality variations to the full-blown promise of new digital worlds with users as the central character,” said NAB’s Brown.
While there are certainly hurdles ahead — including the challenge of syncing all sides of the ecosystem, from content creation and distribution to the consumer technology necessary to deliver the ultimate user experience — the industry looks to be ready to take strides toward delivering deeply immersive entertainment, Brown said.
On The Main Stage: A Case Study — Color and Finishing in the Cloud
“The Lord of the Rings: The Rings of Power.” Cr: Amazon Prime Video
A Case Study: Color and Finishing in the Cloud Today | 12:45–1:30 p.m.
Jesse Kobayashi, VFX producer on The Lord of the Rings: The Rings of Power, will showcase how Blackmagic Design, Company 3 and AWS collaborated to create an entirely cloud-based infrastructure for conform, color-grading and delivery on one of the largest television shows in history and how these learnings and values are leading to new use cases and opportunities for productions across the industry.
Jesse Kobayashi
Kobayashi is a visual effects producer with more than two decades of experience in the industry. In addition to The Rings of Power for Amazon Studios, his credits in visual effects include Kong: Skull Island and Warcraft for Legendary Pictures and Krampus for Universal Pictures. Kobayashi has also served as director of visual effects at Legendary Pictures and as a post producer at both Warner Bros. and Laser Pacific.
The session is part of Post|Production World (P|PW), produced by Future Media Concepts. Being held on the Main Stage, the session is open to all.
NAB Show: Learn How the Cloud Workflow… Worked on ‟The Lord of the Rings: The Rings of Power”
From “The Lord of the Rings: The Rings of Power,” courtesy of Amazon Studios
TL;DR
“A Case Study: Color and Finishing in the Cloud” is scheduled for April 16 at 12:45 p.m. on the Main Stage.
“The Lord of the Rings: The Rings of Power” VFX Producer and modern filmmaking consultant Jesse Kobayashi will share insights from that production.
Blackmagic Design, Company 3 and AWS collaborated to create a customized infrastructure, which Kobayashi will describe.
“The Lord of the Rings: The Rings of Power” VFX Producer Jesse Kobayashi will head to the NAB Show Main Stage to discuss how the production created a cloud-based infrastructure for conform, color-grading and delivery.
On April 16 at 12:45 p.m., he’ll deliver “A Case Study: Color and Finishing in the Cloud,” detailing how Blackmagic Design, Company 3 and AWS collaborated on this project. Kobayashi will also share best practices and takeaways from “The Rings of Power” ways of working.
This keynote is billed as a free “bonus” Post|Production World session, and is open to all show attendees. (P|PW is produced by Future Media Concepts.)
From “The Lord of the Rings: The Rings of Power,” courtesy of Amazon Studios
Kobayashi has two decades of experience as a visual effects producer, with credits for “Kong: Skull Island” and “Warcraft” for Legendary Pictures and “Krampus” for Universal Pictures.
Kobayashi also works as a consultant and advocates for the adoption of new filmmaking technology.
Kobayashi has also served as director of visual effects at Legendary Pictures and as a post producer at both Warner Bros. and Laser Pacific.
He is a graduate of Azusa Pacific University, where he helped found its first film courses.
Containing nearly 10,000 VFX shots, post-production on the first season of “The Lord of the Rings: The Rings of Power” was enabled by AWS.
April 7, 2023
NAB Show: Generative AI, Bringing Together the “Why” and “How”
TL;DR
Generative AI (think ChatGPT and DALL·E) is poised to change the media and entertainment industry in myriad ways.
Yves Bergquist and Seyhan Lee AI Director Pinar Seyhan Demirdag will discuss how creatives can use generative AI tools to facilitate their work today at a NAB Show Create session on April 17 at 3 p.m.
A NAB show panel discussion aims to separate the hype from the “how” and “now” of generative AI for M&E.
This panel, featuring AI & Blockchain in Media Project Director Yves Bergquist and Seyhan Lee AI Director Pinar Seyhan Demirdag, will discuss how generative AI tools can help the media and entertainment industry in 2023, and consider how this technology might disrupt and augment workflows in 2024 and beyond.
Discover where Bard, Whisper, and DALL·E might fit into your creative process, and learn about other AI tools that could soon automate microworkflows at a desk near you.
A NAB Show Conference Pass is required for this session. Register here.
Speakers
Yves Bergquist is a data scientist and the director of the AI & Neuroscience in Media Project at USC’s Entertainment Technology Center, where his team helps the entertainment industry accelerate the deployment of next-generation analytics standards and solutions, including artificial intelligence.
He is also the CEO of AI engineering firm Novamente, which applies neural-symbolic artificial general intelligence to large enterprise problems. Novamente is the AI developer behind Hanson Robotics’ “Sophia.” His team also built the world’s very first fully autonomous AI-driven hedge fund, Aidyia, which is now defunct.
Before Novamente, Bergquist managed business development at analytics firms Bottlenose and Ranker in Los Angeles. He was part of the founding team at Singularity University, a joint venture between Google and NASA.
Pinar Seyhan Demirdag is an AI director, multidisciplinary creator, visionary, outspoken advocate for the conscious use of technology, and opinion leader in generative AI.
In 2020, Demirdag and Gary Koepke founded Seyhan Lee, which has become the bridge between generative AI and the entertainment industry. Seyhan Lee created the first generative AI VFX for a feature film (“Descending the Mountain”) and the first brand-sponsored generative AI film (“Connections/Beko”).
In 2022, they announced Cuebric, a tool that combines several different AIs to streamline the production of 2.5-D environments for virtual production stages.
The panel will be moderated by NAB Amplify Senior Editor Emily M. Reigart.
It’s time! Come celebrate the 2023 NAB Show’s 100th anniversary.
Registration is now open for the 2023 NAB Show, taking place April 15-19 at the Las Vegas Convention Center. Marking NAB Show’s 100th anniversary, the convention will celebrate the event’s rich history and pivotal role in preparing content professionals to meet the challenges of the future.
NAB Show is THE preeminent event driving innovation and collaboration across the broadcast, media and entertainment industry. With an extensive global reach and hundreds of exhibitors representing major industry brands and leading-edge companies, NAB Show is the ultimate marketplace for solutions to transform digital storytelling and create superior audio and video experiences.
See what comes next! Technologies yet unknown. Products yet untouched. Tools yet untapped. Here the power of possibility collides with the people who can harness it: storytellers, magic makers, and you.
ChatGPT poses a fundamental question about how generative artificial intelligence tools will transform the workforce for all creative media.
April 6, 2023
“The Last of Us” Creative Team Takes the Stage at NAB Show
TL;DR
HBO’s adaptation of the dystopian video game “The Last of Us,” starring Pedro Pascal as Joel and Bella Ramsey as Ellie, is a 2023 fan favorite and is also hailed by critics for artful storytelling.
The panel will feature show creator and showrunner Craig Mazin, as well as Timothy Good, ACE, and Emily Mendez; Ksenia Sereda; Alex Wang; and Michael J. Benavente.
“The Last of Us” showrunner and creative team will discuss HBO’s small-screen adaptation of the hit video game on the Main Stage of the 2023 NAB Show.
The Sunday morning panel, presented by American Cinema Editors, will discuss the editing, cinematography, VFX, and sound artistry that brought Ellie and Joel to life.
Executive Producer Craig Mazin will be joined on stage by editors Timothy Good, ACE, and Emily Mendez; cinematographer Ksenia Sereda; VFX supervisor Alex Wang; and sound supervisor Michael J. Benavente.
The conversation will be moderated by The Hollywood Reporter’s Carolyn Giardina.
In addition to his role as executive producer, Mazin is also the multiple Emmy award-winning co-creator, writer and director of “The Last of Us.” Previously, he served as the creator, writer and executive producer of HBO limited series “Chernobyl,” for which he won Golden Globe, BAFTA, Writers Guild, Producers Guild and Peabody awards.
During “The Last of Us,” Timothy Good, ACE, was primarily responsible for editing the first season finale and the third episode, featuring the love story of Bill and Frank. In addition to editing a wide variety of TV series and miniseries, including ABC’s “When We Rise,” Netflix’s “The Umbrella Academy” and Fox’s “Fringe,” he has also worked on the original “Gossip Girl” on the CW and Fox’s “The O.C.”
Pedro Pascal and Bella Ramsey in “The Last of Us,” courtesy of HBO
While working on “The Last Of Us,” Emily Mendez rose from assistant editor to co-editor alongside Good for four episodes. She has also worked on the editorial teams for “The Umbrella Academy,” Fox’s “The Resident,” Hulu’s “Light as a Feather” and Fox’s “Rosewood.”
Sereda was listed as one of the “20 Cinematographers You Should Know at Cannes 2019” for her work on “Beanpole.” Before “The Last of Us,” she worked on films such as “Little Bird,” “Petersburg. A Selfie,” “House on Clauzewert’s Head” and “Acid.”
Wang is a 20-year veteran in the film and television industry. He has worked for VFX studios such as Digital Domain, DNEG and Industrial Light & Magic. Wang became a VFX supervisor for “Deadpool” in 2015 and counts “Jurassic World Dominion” as a recent project.
Benavente names Hulu’s “Under the Banner of Heaven” as one of his most recent projects. He sits on the Sound Branch Executive Committee of the Academy of Motion Picture Arts and Sciences.
Award-winning multihyphenate Brett Goldstein will dive into his creative process in a fireside chat on April 17 at 4:00 p.m.
April 8, 2023
April 4, 2023
When It’s All an Action Sequence: Editing “John Wick Chapter 4”
TL;DR
Director Chad Stahelski wanted to work with an editor who came with no preconceptions about how a John Wick action film should be put together.
Editor Nathan Orloff talks about sustaining a fantastic rhythm over a nearly three-hour run time.
Stahelski discusses cinematic influences including “The Good, The Bad and the Ugly” and MGM musicals.
With John Wick 2 and 3 editor Evan Schiff unavailable, franchise director and co-creator Chad Stahelski cast around for a new cutting room collaborator for Chapter 4. He alighted on Nathan Orloff (Ghostbusters: Afterlife), in part because Orloff had limited experience editing action movies.
“In my interview with Chad, we just really hit it off,” Orloff explains on the Next Best Picture podcast. “I found out many months later that one of the reasons he wanted to bring me on is because I don’t have extensive experience in action. He didn’t want someone to come in and do their thing that they’ve been doing on other action movies… because John Wick is sort of antithetical to how a lot of action movies are cut these days.”
To understand why, you have to appreciate that Stahelski’s vision for the fourth installment in the franchise was to expand the John Wick universe by bringing in multiple storylines and a longer run-time to let the action play out on screen, rather than having the editing dictate the action.
“The other films are very much like, you know, that John is on a direct rampage or running for his life. This film was intentionally designed to be more reflective and contemplating, that after his entire career as a hitman, he is forced to reckon with his past and what he’s done.”
Stahelski’s influences range from the lush visuals of Wong Kar-wai to the operatic staging of Sergio Leone westerns. As the director explained to Jim Hemphill at IndieWire: “I love the seventies movie style. I love four act operas. I love Kabuki theater. The Asian cinema kind of breaks a lot of rules that we adhere to in the three act version [of movies] and we’d like to think John Wick breaks a lot of those rules because we do go a little operatic.
“Lawrence of Arabia is a good example like that. That movie kind of flies by to me and it doesn’t feel like you need an intermission in it.”
The filmmaker’s homage goes so far as to mimic the famous “match cut” by editor Anne V. Coates in David Lean’s Lawrence of Arabia, in which Lawrence in profile blows out a match and Coates cuts to a blazing desert sunrise.
“I remember vividly when I went to set in Paris, Chad asked me ‘what’s the most famous cut in all of cinema?’ and said we’re going to do it our way,” Orloff relates to Next Best Picture. “I wanted to make sure we did the exact number of frames when the fire was blown out before cutting to the sunrise. You know, I wanted to do it justice.
“He told me he’d rather swing and miss than do the same thing over again. And so that match cut is indicative of [telling the] audience what we’re going for.”
Keanu Reeves as John Wick in “John Wick: Chapter 4.” Cr: Murray Close/Lionsgate
Bill Skarsgård as Marquis in “John Wick: Chapter 4.” Cr: Murray Close/Lionsgate
Donnie Yen as Caine, Bill Skarsgård as Marquis and Marko Zaror as Chidi in “John Wick: Chapter 4.” Cr: Murray Close/Lionsgate
Donnie Yen as Caine in “John Wick: Chapter 4.” Cr: Murray Close
Ian McShane as Winston, Lance Reddick as Charon and Clancy Brown as Harbinger in “John Wick: Chapter 4.” Cr: Murray Close/Lionsgate
Ian McShane as Winston, Keanu Reeves as John Wick, and director Chad Stahelski in “John Wick: Chapter 4.” Cr: Murray Close
Director Chad Stahelski with Laurence Fishburne and Keanu Reeves on the set of “John Wick: Chapter 4.” Cr: Murray Close/Lionsgate
Director Chad Stahelski and Bill Skarsgård on the set of “John Wick: Chapter 4.” Cr: Murray Close/Lionsgate
Ian McShane as Winston and Bill Skarsgård as Marquis in “John Wick: Chapter 4.” Cr: Murray Close/Lionsgate
Ian McShane as Winston and director Chad Stahelski in “John Wick: Chapter 4.” Cr: Murray Close
Another acknowledged influence on the director’s action style is classic MGM musicals, or those featuring Fred Astaire. In films like Singin’ in the Rain or Top Hat, the camera generally stays static and in wide shot with minimal edits so the viewer can take in all the dancing brilliance performed by the film’s stars.
“I love Bob Fosse here, one of my huge inspirations,” Stahelski tells IndieWire. “You take Gene Kelly, the old Sunday Morning Sunday Parade or something like that. You watch Fred Astaire do his thing. And if you watch the way we shoot, it’s very simple. The way we train people [to perform stunts] is very, very, very dance oriented.”
Orloff elaborates on what this means to decisions in the cutting room.
“Musicals like back then were sort of like you edited around the dancing,” he says on an episode of The Rough Cut podcast. “You showed them dancing. They would do a move, finish, cut, start something else. And the way Chad talked about that really inspired me to do that with our characters and not use the editing to try to punch anything up.”
There are times when the stunt performances, or Keanu Reeves’ own, aren’t quite perfect: “they slip or there’s something not great about the timing of this or that, but not being so obsessive about perfection makes it just so much more real. When you’re cutting less, you’re able to absorb everything more. You feel more empathy for the characters because you feel like you’re just there.”
John Wick: Chapter 4 clocks in at 169 minutes, more than an hour longer than the original. Stahelski explains why he wanted a movie of this length.
“In our heads we knew that we wanted to show this constant decreasing circle that spirals closer and closer as [the stories] come together. So every act brings us closer together. That was the plan. It sounds like a very genius plan, but you don’t know until you cut the whole thing together. Our first cut was 3 hours 45 minutes.”
So how did the edit team set about cutting that down, and knowing which killing to leave in or excise?
“When you have 14 action sequences, you can’t just edit that sequence,” the director explained. “You’ll never know if a five minute car scene or a ten minute car scene is good to watch in the two and a half hour movie.
“So the only way to truly know that you’re doing the right thing is to step back and take that half day. We’d edit all morning but by four p.m. we’re like, ‘Let’s watch the movie.’ And my editorial staff probably hates me. We’ve watched it so many times because literally even if we just took like 30 seconds out of something, I’d make everybody watch the movie again, because that’s the only way you know you have the right pace.”
He adds, “It’s the whole song that makes you rock out. I think that was a big learning experience with me and my editorial team to constantly watch a two-and-a-half-hour movie and feel where the slow parts were and to work on those parts.”
Because John Wick is dispatching henchmen left and right in intricately planned and executed stunts, deciding what to cut was tricky, admits the editor.
“There is definitely sometimes overkill when something is too similar to something else,” Orloff told Next Best Picture, “but going back to the music was a huge help in creating different tones and alternating what we were doing to avoid things feeling the same. And to Chad’s credit, especially in the last act, we go from a street fight to a car chase to a lengthy overhead shot, and even though the audience has watched non-stop action for 30-45 minutes, the movie is structured so skillfully that you’re seeing something you’ve never seen before.”
Step Into the Ring: Kramer Morgenthau’s Cinematography for “Creed III”
TL;DR
Even though the “Creed” movies are part of an expanded “Rocky” cinematic universe, this is the first of the nine films that doesn’t have the original character as part of the plot.
Director and star Michael B. Jordan collaborated with “Creed II” cinematographer Kramer Morgenthau to reinvent how boxing scenes are shot.
The filmmakers aimed for a heightened visual style influenced by Japanese anime, including what they called “Adonis vision,” a subjective POV from Adonis Creed as he’s clocking each fight.
“Creed III” was shot in IMAX format with Panavised Sony Venice cameras and a lens package that included both anamorphic and spherical optics.
Like the story arc of most boxing movies, Creed III had a number of challenges to overcome on its production journey. Firstly, even though the Creed movies were part of an expanded Rocky cinematic universe, this was the first of the nine films that didn’t have the original character as part of the plot; Rocky had left the story.
Flipping this negative into the positive feel of a fresh start gave first-time director and star Michael B. Jordan a chance to reinvent how to shoot the boxing scenes in particular. An obvious reference, subconsciously or not, was Scorsese’s Raging Bull, whose fight scenes are stylistically distinct from everything around them.
Also, a new POV suited the storyline of a fight between a retired Adonis Creed and a significant person reappearing from his past with major issues to resolve.
Previous Creed II cinematographer Kramer Morgenthau and Jordan laid plans for a new “in ring” aesthetic as Max Weinstein explains in American Cinematographer. “Settling into his duties as a director, Jordan determined early in prep that he and Morgenthau would need to take two ‘big artistic swings’ to fully engross audiences in Donnie’s next chapter.”
The intention was to aim for a heightened visual style. “Michael is hugely influenced by Japanese anime — that’s completely his stamp on the movie. So, he brought that into the way we cover the fights,” Morgenthau says. “There’s this thing we call ‘Adonis Vision,’ where you’re seeing subjective point-of-view from Adonis as he’s clocking each fight, and that plays out in an anime style, with these hyper-real close-ups.”
For that, they switched to very wide-angle lenses, a 12mm Panavision H Series and a 14mm VA. “That again was part of Michael’s vision from the beginning. It’s very much an anime approach.”
Michael B. Jordan as Adonis Creed in “Creed III.” Cr: Eli Ade/Metro-Goldwyn-Mayer Pictures Inc.
But the action in general had to be seen from the inside, not the outside, which is the problem for most sports action movies.
The Panavision website described how the boxing was shot within reach of the fighters. On both Creed II and III, Morgenthau was joined in his corner by A-camera and Steadicam operator, Michael Heathcote. “Mike and I came in early during prep to work with [2nd-unit director and supervising stunt coordinator] Clayton Barber and [assistant stunt coordinator] Eric Brown to help design the moves for the fight choreography. There’s an arc to what happens in the fights and the stories happening in the corners and in the ringside seats. That was all carefully choreographed, like shooting a piece of dance.”
Working with Panavision Atlanta, Morgenthau chose to shoot Creed III with Panavised Sony Venice cameras and a lens package that included both anamorphic and spherical optics. “We shot all the dramatic scenes with T Series and C Series anamorphic lenses, and for the fights, which are in the 1.90:1 aspect ratio for IMAX, we used [prototype] spherical VA primes that we customized to add a bit more softness and help them match the look of our anamorphic lenses,” the cinematographer explains.
Morgenthau also shot certain sections of Donnie’s bouts with the Phantom Flex4K, whose high-speed capabilities enabled him to create an “ultra-slow-motion analysis of some of the major moments in the fights, where we wanted to be inside the boxers’ heads.”
Other cameras used included prep cameras to rehearse moves, “We prepped by shooting each fight with small digital cameras, and shooting sketches of what it should be, figuring out the most impactful places to place a camera and trying to show what it’s like to be in the ring from a boxer’s perspective.”
Director Michael B. Jordan and cinematographer Kramer Morgenthau on the set of “Creed III.” Cr: Ser Baffo/Metro-Goldwyn-Mayer Pictures Inc.
The other big “artistic swing” was the unveiling of a new, taller aspect ratio to give the fighters almost god-like stature. “In the film’s dramatic scenes, intimate glimpses of Donnie’s and Dame’s out-of-the-ring lives are framed for the 2.39:1 aspect ratio, but whenever a match is underway, the frame is expanded to 1.90:1 Imax. The filmmakers opted to shoot most footage for both aspect ratios with Sony Venice cameras certified by the ‘Filmed for Imax’ program,” Weinstein notes in American Cinematographer.
With up to 26% more picture, this third installment in the Creed franchise became the first sports-based film included in the “Filmed for IMAX” program.
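The “up to 26%” figure follows from simple arithmetic on the two aspect ratios: at a constant frame width, a 1.90:1 frame is 2.39 ÷ 1.90 ≈ 1.26 times as tall as a 2.39:1 frame. A quick check (a throwaway calculation, not production code):

```python
# Rough check of the "up to 26% more picture" claim: at a constant frame width,
# switching from 2.39:1 to the 1.90:1 IMAX ratio adds height (and therefore area)
# in proportion to the ratio of the two aspect ratios.
scope_height = 1 / 2.39   # relative height of a 2.39:1 frame of unit width
imax_height = 1 / 1.90    # relative height of a 1.90:1 frame of unit width
print(f"{imax_height / scope_height - 1:.0%} more picture")  # prints: 26% more picture
```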
“It was really exciting to be able to integrate the IMAX cameras into the filmmaking process, especially the way we used them to open the world up and to make it very immersive and visceral for the fight sequences,” says Morgenthau, according to a report by ReelAdvice. “And that’s how we chose to use it; there was just something very magical, especially the scene at Dodger Stadium, where MBJ is walking out onto the field and the image aspect ratio expands in shot and the black bars recede, and you get this really tall, beautiful, powerful image. It just elevates everything, there is just something hyperreal about it. And to be the first sports movie doing that, it was a creative high.”
Director and star Jordan, speaking to American Cinematographer, says “We were looking at these old photos of Muhammad Ali by Neil Leifer, and [we called] the shots that he would get of these outdoor fights ‘clouds to the canvas,’ where you can see everything in the frame. So, we just wanted to recapture that — get all that information up on the screen. Then, we’d ask, ‘Okay, when is it going to open up? When is it going to transition into that ratio?’ It was about picking those moments and balancing them.”
Morgenthau spoke with something close to reverence for the sport and the fighters in an interview with Gary M. Kramer for Salon. “The way we photographed the bodies was like photographing sculpture. Their bodies are sculpted and beautiful, and covered in sweat and oil and very reflective. Shooting them was about how their bodies and faces were reflecting light, and honoring their performances was showing them in their ‘best light,’ so to speak,” he said.
“I studied paintings by George Bellows, and the Ashcan school of painting was an inspiration. There was an Eakins painting in a museum in Philadelphia that I was looking at, and I referenced great boxing photography, like some of the Ali color photographs. These images inspired how we lit the boxers.”
How “The Boy, The Mole, The Fox and The Horse” Won Hearts and Minds
TL;DR
Based on the bestselling illustrated book by Charlie Mackesy, the Oscar-winning animated short film “The Boy, The Mole, The Fox and The Horse” has been described as “‘The Little Prince’ for a new generation.”
The international animation team that brought the film to life spanned 20 different countries, with artists working remotely due to the pandemic.
The filmmakers wanted to retain the signature style of Mackesy’s ink and watercolor illustrations, with Mackesy closely involved in the process to ensure that the film stayed true to his vision.
The Oscar-winning short The Boy, The Mole, The Fox and The Horse is like receiving an “emotion bomb” when you first see it. If you have any pent-up sentiment left over from the pandemic, Charlie Mackesy’s animated story of a young boy and his animal friends might extract it from you, so be warned.
The award-winning animated story, now streaming on Apple TV+ and the BBC iPlayer, is the realization of Mackesy’s beautifully rendered ink and watercolor drawings, which were immortalized in an illustrated book that topped the bestseller lists in both the United States and the UK.
Filmmakers then approached Mackesy to take the story to the next level, but how do you turn such characterful ink-and-watercolor drawings into moving images while keeping the artist’s signature style?
Initially, Mackesy’s intentions were less about the bottom line than about more spiritual and Christian ambitions. He explained to Ryan Fleming at Deadline that helping people was his driving force, and he thought the film would add to that.
That the book even became a hit shocked him, Mackesy said. “When the book came out, I got so many emails, like thousands of emails, telling me how the book had moved them or helped them, particularly in the pandemic,” he said. “I felt like if the book had done that, could a film reach people in the same way?”
He soon had his answer. After reading Mackesy’s book in 2019, producer Cara Speller said she “completely fell in love with it and got in touch with Charlie and his partner, Matthew Freud, and talked to them about what we could potentially do in turning it into a short film.” After a discussion with the creators, Speller contacted Peter Baynton, who was ready to join as director.
Speller told Jérémie Noyer at Animated Views how important it was to have Mackesy front and center in the process. “It was always really important to me right from the start that Charlie be at the center of any team that we put together to make the film. You can tell immediately from the book that he has incredibly strong instincts about what works. To me, it didn’t make any sense to try and make that without having him so closely involved.”
The animation team worked remotely because of COVID, with a shared goal of creating a look that reflected as closely as possible the drawings in the book, which were ink and watercolor. “We wanted to make those drawings basically move but keep the spirit of the fluidity of the ink and the line and the varying thickness of line,” Mackesy says.
From “The Boy, the Mole, the Fox and the Horse,” now streaming on Apple TV+.
Director Peter Baynton underlines the connection between Mackesy’s style and his animation team: “Charlie’s drawing is underpinned by a great knowledge of anatomy. So, even though he draws extremely quickly and quite impressionistically, you can tell he knows horse or boy or fox anatomy so well. For the mole, it’s a little bit different.
“It was important not to lose that lovely loose quality and make things stiff. So, we came down to a system where we would animate quite tightly on detailed models, and then, on the ink stage, we encouraged the artists to find that looser way of inking. It was about finding that very fine line that sort of drifts around the characters.”
“It was a very international crew,” noted Speller, “coming from 20 different countries. We started the work on the film in the middle of the Pandemic, so everyone was working remotely from their homes. We built the team in the same way you build any team on a production. You’re always looking for the most talented artists you can find; it doesn’t matter where they are in the world, as long as you think they’re the right fit for the project and for the team.”
“The style warrants movement,” said Gladstone, “but how did you achieve it? The line halo that goes around the drawings, how is that translated to movement?” Director Peter Baynton explained the significance of the halos: “Charlie describes those lines as thinking lines and they’re very characteristic of his drawings,” he noted.
“The process is that we start with pencil rough character animation, to define the performance and then it goes through a clean-up stage where we adjust the model where everything looks like a model and then we go to an inking stage where we do these key ink drawings and at that point we would add these lines, these thinking lines or ‘thinkies,’” he continued.
“It was a careful balance as sometimes that would feel too stiff and attached like a wire so we found a way of making them dissipate and reappear.”
Art director Mike McCain summed up Mackesy’s style and how it was transferred to motion. “Charlie has such a beautiful economy with ink, and the book has such a minimal approach to storytelling; it’s just what’s needed on the page,” he said. “As we were looking to bring that wilderness to life, the biggest challenge was finding how to add and not over-add. Just put what’s needed on screen to make it feel like you’re surrounded by this world.”
Variety’s Peter Debruge calls the short “The Little Prince for a new generation.” He goes on to add, “Beautifully adapted from British illustrator Charlie Mackesy’s international bestseller. Those who know the book — a Jonathan Livingston Seagull-esque life preserver for many during the pandemic — will appreciate how the team managed to translate Mackesy’s unique ink-and-watercolor style, with its distinctive blend of thick brushstrokes and loose, unfinished lines.
“Isobel Waller-Bridge’s gentle score coaxes audiences into a receptive place, while the quartet of Jude Coward Nicoll (the Boy), Tom Hollander (the Mole), Idris Elba (the Fox) and Gabriel Byrne (the Horse) lend sincere voice to various affirmational ideas,” Debruge continues.
“Cynics may dismiss what one acquaintance called its ‘bumper sticker wisdom,’ but they miss how vital it is to plant ideas of this nature in the heads of young viewers: boosting their confidence and unpacking what it means to feel lost — or seen — before social media brainwashes them otherwise.”
NAB Show Leads an Exploration of the Evolving Internet
TL;DR
The 2023 NAB Show will explore the impact Web3 and other internet advances are having on the media and entertainment industry.
Attendees can learn about the next generation of the internet through educational programming, demonstrations, special events, networking sessions, and exhibitor participation on the show floor.
NAB Show will also feature the Intelligent Content Experiential Zone that will serve as a home base for attendees interested in new internet technologies. The area will allow visitors to participate in collaborative workshops and presentations.
NAB Show will explore the impact Web3 and other internet advances are having on the media and entertainment industry.
Exploration of the next generation of the internet at the 2023 NAB Show will include educational programming, demonstrations, special events, networking sessions, and exhibitor participation on the show floor.
“Web3 and other emerging technologies like generative AI and the metaverse are opening an entirely new chapter for content creators,” said Chris Brown, NAB executive vice president and managing director, global connections and events. “The 2023 NAB Show is the ideal platform to dive into these new tools and ideas by meeting face-to-face with the experts, innovators and companies that are unleashing the possibilities pushing the limits of our imagination.”
The 2023 event, which marks 100 years for NAB Show, takes place from April 15-19 at the Las Vegas Convention Center.
Event educational sessions will span multiple conferences and tracks, looking at key trends surrounding the next era of internet tech. Topics covered include Web3, data and analytics, generative AI, metaverse, and blockchain and NFTs.
“Web3 is a rapidly evolving technology, and the most successful companies will be those that are willing to experiment with new approaches and collaborate with other industry players,” said Andrea Berry, head of development at Theta Labs and a member of the NAB Show Web3 Advisory Council.
The Web3 Advisory Council, which offers guidance and expertise on NAB Show programming regarding the next generation of the internet, will provide an update on April 17 on the state of Web3, the impact of technology and the current economic and cultural trends that are driving the next phase of content.
NAB Show will also feature the Intelligent Content Experiential Zone, which will serve as a home base for attendees interested in new internet technologies. The area will allow visitors to participate in collaborative workshops and presentations with products from companies such as Interra Systems, Shotshack, Veritone and Wiland. The zone will also feature roundtables, theaters, the AWS Partner Village, the Innovation Village and NABiQ.
In collaboration with StoryTech, NAB Show will offer attendees guided, curated tours. Options include the Data, Data, Data tour, focusing on data and metadata management; the New Production Modalities tour, covering Web3, virtual production solutions and immersive content creation tools; and the Evolution of Video tour, showcasing the current state of video.
A variety of companies will be exhibiting new Web3 and other next-gen internet tech and solutions at NAB Show. These companies include Amagi, AWS, Digital Nirvana, Microsoft, Oracle, SDVI, SSimWave, TSV Analytics, Veritone and Vistex.
Resilience, Remote Collaboration, and Creativity on “The Boy, the Mole, the Fox, and the Horse”
TL;DR
The 2023 NAB Show will host a Main Stage conversation with the creative team behind Academy Award-winning animated short film “The Boy, the Mole, the Fox, and the Horse.”
Open to all attendees, the session “How to Win an Oscar With a Fully Remote Creative Team” will take place Sunday, April 16 at 2:00 p.m. and will feature visual artists for the production.
Art Director Mike McCain and Animation Senior Support Specialist Ben Wood will chat with session host Dave Leopold, strategic development director at LucidLink, about how cloud workflows allowed the film’s creatives to collaborate.
The 2023 NAB Show will host a Main Stage conversation with the creative team behind Academy Award-winning animated short film “The Boy, the Mole, the Fox, and the Horse.”
Open to all attendees, the session “How to Win an Oscar With a Fully Remote Creative Team” will take place Sunday, April 16 at 2:00 p.m. and will feature visual artists for the production.
Art Director Mike McCain and Animation Senior Support Specialist Ben Wood will chat with session host Dave Leopold, strategic development director at LucidLink, about how cloud workflows allowed the film’s creatives to collaborate.
“The Boy, the Mole, the Fox, and the Horse” first aired on the BBC in December 2022 to more than seven million live viewers.
McCain, who has worked with a variety of studios, has credits on “Spider-Man: Across the Spider-Verse” and “The Ghost and Molly McGee.” Before focusing on animation and painting, he directed video games.
Art director Mike McCain working on “The Boy, the Mole, the Fox and the Horse,” now streaming on Apple TV+.
Lead compositor Nick Herbert working on “The Boy, the Mole, the Fox and the Horse,” now streaming on Apple TV+.
Lead compositor Johnny Still working on “The Boy, the Mole, the Fox and the Horse,” now streaming on Apple TV+.
Wood, who has more than nine years of visual effects industry experience, has worked at multiple VFX studios including Smoke & Mirrors, DNEG, and NoneMore Productions. He began his career as a post-house runner and then progressed to senior-level IT positions.
Leopold has held roles across the media and entertainment industry, including editor, motion graphics artist, producer and post supervisor. He most recently worked at ViacomCBS where he created content of all types. At LucidLink, he brings remote collaboration solutions to the global creative community.
The Art of the Prompt
BY ROBIN RASKIN
TL;DR
Now that the initial knee-jerk reactions to having Generative AI as our companions have quieted down a bit, it’s time to get to work and master the skills so that Generative AI is working for us, not the reverse.
The Kevin Roose shockwave goaded every tech columnist to write something about how they broke AI, through a combination of provocation and beta testing the hell out of publicly released platforms.
Educational institutions are trying to figure out whether to ban Generative AI or teach it to their students. We’re rolling up our collective sleeves for the human/machine beta test.
Now that the initial knee-jerk reactions to having Generative AI as our companions have quieted down a bit, it’s time to get to work and master the skills so that Generative AI is working for us, not the reverse. The Kevin Roose shockwave goaded every tech columnist to write something about how they broke AI, through a combination of provocation and beta testing the hell out of publicly released platforms like Bing AI, Google’s Bard, and the wildly popular ChatGPT.
In the early days of ChatGPT’s general release, CNET had some faux pas, including plagiarism and misinformation seeping into its AI-generated journalism. This week Wired magazine very carefully spelled out its internal rules for how it will incorporate generative AI in its journalism. (No for images, yes for research, no for copyediting, maybe for idea generation.) Educational institutions are trying to figure out whether to ban it or teach it to their students. We’re rolling up our collective sleeves for the human/machine beta test.
Meanwhile, folks like Evo Heyning, creator/founder at Realitycraft.Live and author of a lovely interactive book called PromptCraft, have been doubling down to dissect, coach, and cheer us into the world of using Generative AI effectively. The book, co-written with a slew of AI companions like Midjourney, Stable Diffusion, ChatGPT, and more, looks at the art, science, and lots of iteration that will help get the most out of creative man/machine communications. You can watch some of her fast-paced Promptcraft lessons on YouTube. They’re something like a generative-AI answer to Bob Ross’s painting episodes on PBS.
A Magic Mirror for Collective Intelligence
Heyning has worked in AI, as a coder, storyteller, and world-builder since the early days of experimentation. She’s also been a monk, chaplain, and just about everything else that defines a renaissance woman who thinks deeply about AI. “Are the models merely stochastic parrots that spit back our own model? Or are they giving us something that’s a deeper level of comprehension?” she asks.
A prompt and resulting output using Midjourney.
“AI,” she continues, “is like querying our collective intelligence. Right now most of our chat tools are mirrors of everything that they’ve experienced. They’re closer to asking a magic mirror about collective intelligence than they are about any sort of unique intelligence.”
Our job is to learn the language of the query that coaxes the best out of the machine. “AI Whisperers,” those who can create, frame, and refine prompts, are out of the gate with a valued skillset.
While prompts for generating text, images, movies, and music will vary, there are certain commonalities. “A prompt,” says Heyning, “starts by breaking down the big vision of what you’d like to see created, encapsulating it into as few words as possible.” She likens a lot of the process to a cinematographer calling the shots. “You’re thinking about what the focal point of your creation will be. The world of the prompt is about our relationships with AI, and it includes shifts in language that come from both sides, not just from the human side, but also from alternative intelligences.”
Five Easy Pieces
Heyning talks about her process of including five pieces in a prompt. They include the type of content, the description of that content, the composition, the style, and the parameters.
Content Type: In art prompts, the type of content might be a poster or a sticker. For text it might be a letter or a research paper.
Description: The description of the content defines your scene (a frog on a lily pond).
Composition: The composition is the equivalent of your instruction in a movie (frog gulping down a fly or in the bright sunshine).
Style: The style might be pointillism (or, for text, the style of comedy writing).
Parameters: Finally, the parameter might be landscape or portrait, or, for text a word count.
Providing context is also a key component. Details about the setting, characters, and mood help you get the image you had in your mind’s eye. “Negative weights,” things that should not be in your creation, can be important, too. Heyning discourages using artists’ names, especially living artists’, in the prompt; such derivatives raise copyright questions. She also reminds us to use commas in our prompts to make them more intelligible to the machine: “They act as separators to help the generator parse a scene.”
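As a minimal sketch of how those five pieces (plus negative weights) might be assembled into a single prompt string, consider the following. The field names, the comma separators, and the Midjourney-style “--no” flag are illustrative assumptions, not Heyning’s exact workflow.

```python
# Illustrative only: assemble a prompt from Heyning's five pieces plus negatives.
from dataclasses import dataclass, field

@dataclass
class Prompt:
    content_type: str                 # e.g., "poster", "sticker", "research paper"
    description: str                  # the scene itself: "a frog on a lily pond"
    composition: str                  # the "shot call": "gulping down a fly, bright sunshine"
    style: str                        # "pointillism", "comedy writing", etc.
    parameters: str                   # "landscape", "portrait", a word count...
    negatives: list[str] = field(default_factory=list)  # things to keep out

    def render(self) -> str:
        # Commas act as separators that help the generator parse the scene.
        parts = [self.content_type, self.description, self.composition,
                 self.style, self.parameters]
        text = ", ".join(p for p in parts if p)
        if self.negatives:
            text += " --no " + ", ".join(self.negatives)
        return text

print(Prompt("poster", "a frog on a lily pond", "gulping down a fly in bright sunshine",
             "pointillism", "landscape", negatives=["text", "watermark"]).render())
```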
Heyning’s quite the optimist about how humans and AI will work together, even in much-debated areas like education. “Kids are learning about art history from reading prompts created using Midjourney,” she marvels. “They are introduced to impressionism, realism and abstract art. They’re using terms like knolling (arranging different objects at 90-degree angles to each other, then photographing them from above), once relegated to the realm of trained graphic designers.”
What did I learn from my crash course in prompting? The power of a good prompt is the power of parsimonious thinking: getting to the essence of what you want to create. It’s similar to coding, but different, because you don’t need to learn a foreign language; this is a much more Zen-like effort, stripping away all that’s unnecessary, down to the perfect phrase. (P.S. If you prompt ChatGPT to tell you how to write the perfect prompt, you’ll read even more about the Art of the Prompt.)
Blinding Lights: Creating Cinematic Beauty for The Weeknd’s Concert Special
TL;DR
Both nights of The Weeknd’s spectacular 90-minute show at Inglewood’s SoFi Stadium in LA were recorded for an HBO concert special, “The Weeknd: Live at SoFi Stadium,” which is now streaming on HBO Max.
Director Micah Bickham employed roughly 25 Sony Venice cameras outfitted with Angenieux and Canon Cine zoom lenses to capture footage of the live concerts.
The production team partnered with a company called Live Grain to add texture and grain to the concert footage to emulate 35mm film stock.
Last November The Weeknd, aka Abel Makkonen Tesfaye, put on a spectacular 90-minute-plus show across two nights at Inglewood’s SoFi Stadium in LA. Both nights were recorded for an HBO concert special, The Weeknd: Live at SoFi Stadium, which is now streaming on HBO Max.
It was the last stop on the first leg of the “After Hours til Dawn” tour, and Tesfaye pulled out all the stops to reinforce his performance-artist reputation while still confounding the critics as to what music genre to place him in.
Direction was by Micah Bickham, whose collaboration with the artist goes back to the Starboy album in 2015. He talked to NAB Amplify about how the show was created, recorded and broadcast.
“My focus with The Weeknd is particularly around live production,” Bickham said. “We have quite the partnership really from the Starboy era around 2015. It’s been a handful of years just to understand the world they’re creating from an album point of view and how that translates into live.”
SoFi Stadium was primarily chosen for the recording as The Weeknd was doing two nights there. Both nights would be recorded and then cut together. “Just thinking how I wanted to shoot it and present it, we had to shoot across the two nights, plus a handful of pickups that we ended up doing. Also being LA, it was just perfect.”
The discussion before the show about how they wanted the concert film to look took quite a few diversions but a cinematic theme was always front and center. “We talked a lot about cinematic integrity. Yes it’s a concert and yes it’s an artist performing these songs but with a world being created and shaped inside of it,” he said.
“We talked about what the DNA and visual language of this film was but in the end for me it had a lot to do with how we presented it more like a film and less like a concert. What I mean by that is when you watch the film the pace and the tension that the pace creates is pretty unusual for a typical concert film.
“We wanted you to sit with the artist and digest what was happening right in front of you not through an edit and cut that might pull you away too quickly. We wanted you to live in it, when you see it there’s something that resonates differently than a typical concert film.”
The Weeknd’s live shows have already made headlines, especially his 2021 Super Bowl halftime show, which he had reportedly underwritten to the tune of several million dollars. That show went on to be nominated for Emmys for Outstanding Variety Special (Live), Outstanding Lighting Design/Lighting Direction for a Variety Special, and Outstanding Technical Direction, Camerawork, Video Control for a Special.
The SoFi concerts were specifically staged to let viewers see as much of The Weeknd as possible. Tesfaye had the run of the center of the stadium, with an apocalyptic Toronto skyline at one end and a huge suspended moon at the other. There was no sign of the band, who were hidden out of sight. Tesfaye was on his own, apart from 33 dancers parading as red-shrouded sirens who walked as one.
“The Weeknd: Live at SoFi Stadium.” Cr: HBO Max
Concert films can be free-running, sometimes allowing the camera positions to operate without timecodes, picking and choosing shots as they go. Bickham wanted a tighter regime for SoFi. “For this particular project there were a couple of differences, just because of the scale of it. It was important to me early on that if I just monitored the board and did a pure shoot for capture and didn’t create a line cut, my feeling was that we weren’t going to be able to hold that many cameras accountable to each moment,” he said.
“So the way I directed it was a little bit of a hybrid in the sense that I did cut a line cut. When a director cuts a line cut there’s an immediacy that takes place from the operators that you’re working with. Everybody sits up a little straighter and there’s a little more tension than if I asked them to ‘just shoot and I’ll nudge you around.’
“Certainly there are times when that’s important and the best way to approach it. For this I felt creating a little bit of tension and immediacy was important so everyone stayed focused. It’s a long show; top to bottom it’s just under two hours. It’s an easy scenario even for the best team to kind of settle in and perhaps not necessarily be on top of every single moment throughout the show. So yes, we cut it, but with a series of pickups too.”
These were mostly single close-up Steadicam shots featuring The Weeknd and the dancers. Adding them to the two nights at the stadium gave Bickham a substantial editing job, but inevitably it was all about finding the show. “We wanted to break it apart and understand the shape of the narrative and how we could build it in the edit.”
With around 25 Sony Venice cameras in play — both first- and second-generation, but mostly second — there was a lot of footage to work with. Lenses were primarily Angenieux and Canon Cine zooms, with a couple of prime lenses employed on handheld cameras. “They were all human-operated and were all on my Multiview and so we’re cutting the full volume of the 25 throughout the night,” Bickham detailed. “From night one to night two we augmented the positions of some of those 25 cameras just to enlarge our coverage. That gave us a slightly different mindset going into night two. It would just accentuate what we had already done on the previous night.”
Designing the cut, it was always planned to let it breathe, especially around Tesfaye who was mostly alone in such a huge space. Bickham explains the thinking: “It was partly because it gives the audience an opportunity to be on stage with him. That’s a very unusual experience especially for a stadium film. Additionally by doing that it creates a tension. The audience are expecting you to cut, they’re expecting to be moved on to something wide or something different and when you don’t do that and you stay kind of locked in to that position something really interesting happens; it makes the next shot that much more impactful.
“So in other words, we kind of lingered even if the song ramped up and became more manic. We didn’t let the pace of the moment dictate the pace of the film.”
Bickham and Le Mar Taylor, The Weeknd’s creative director, had talked a lot about letting moments develop in front of the lens and not blasting through the coverage. “We wanted our performance to be more like a film edit.”
The concert film was meant to be a celebratory career moment for Tesfaye, and the means of capture was always up for discussion, with even IMAX put forward as a medium. “We did think about using 35mm film, in fact through our discussion we did end up partnering with a company called Live Grain,” Bickham recounted. “We wanted the concert film to have a representation of the texture of film, to push into a space where you don’t particularly see it. So that was a huge part of our decision making even through the grade and the finishing. It’s got a timelessness with this textural element to it and just feels different from a typical concert picture.”
The Live Grain process is usually applied to digitally shot movies. NAB Amplify previously reported on Steven Soderbergh’s No Sudden Move using the process, but for a live production it’s new.
“The cameras didn’t have any filtration in place just to make sure the process wasn’t disrupted. Live Grain is essentially real time texture mapping. A lot of great films that were shot digitally used Live Grain to make it feel like it’s 35mm. In a multi-camera almost two hour production 35mm itself isn’t really practical with the mag changes and the amount of film you use.”
The use of Live Grain was in fact introduced by HBO, which has an ongoing relationship with the company. “It’s been tested by them many times on films but our film was maybe the first time being used for a live concert application or certainly one of the first.”
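Live Grain’s actual process is proprietary, but the general idea of layering film-grain texture over digitally captured frames can be sketched roughly as follows. The synthetic noise, the midtone weighting, and the parameter values here are assumptions for illustration only, not the tool’s method.

```python
# Illustrative sketch: approximate the feel of 35mm grain on a digital frame.
# A real pipeline would map scanned film-grain plates per exposure level; here we
# stand in with Gaussian noise weighted toward the midtones, where photochemical
# grain tends to be most visible.
import numpy as np

def add_film_grain(frame: np.ndarray, strength: float = 0.04, seed=None) -> np.ndarray:
    """Blend a grain texture into a float RGB frame with values in 0..1."""
    rng = np.random.default_rng(seed)
    luma = frame.mean(axis=2, keepdims=True)        # approximate brightness per pixel
    midtone_weight = 4.0 * luma * (1.0 - luma)      # peaks at mid-gray, fades at extremes
    grain = rng.normal(0.0, 1.0, frame.shape)       # per-pixel, per-channel noise
    grained = frame + strength * midtone_weight * grain
    return np.clip(grained, 0.0, 1.0)

# Usage: apply the same operator to every frame of the graded master.
frame = np.random.default_rng(0).random((1080, 1920, 3))  # stand-in for a decoded frame
print(add_film_grain(frame, strength=0.05).shape)
```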
The film grain applied in post really nails the timeless cinematic effect, but was there ever an option to broadcast the concert live? “There was a time when we considered a one-day IMAX special but when HBO got involved we realized we had a great partner for what we eventually wanted to do and it tied in with the upcoming The Idol drama series.
“Ultimately we were able to bring a more caring approach to it, we could take our time.”
Devoncroft Executive Summit Set For April 15 at NAB Show
TL;DR
The 2023 Devoncroft Executive Summit will take place April 15 in Las Vegas.
Running from 12 p.m. to 6 p.m. on the NAB Show Main Stage, the conference will feature speakers and moderated panel sessions with C-level executives.
This year’s event, with the theme “The Business of Media Technology”, will bring together thought-leaders from across the media technology sector.
The 2023 Devoncroft Executive Summit will take place April 15 in Las Vegas.
This year’s event, with the theme “The Business of Media Technology”, will bring together thought-leaders from across the media technology sector to discuss the issues facing the industry, share best practices and network with peers.
Running from 12 p.m. to 6 p.m. on the NAB Show Main Stage, the conference will feature speakers and moderated panel sessions with C-level executives from broadcasters, service providers, media technology vendors, and IT vendors.
YouTube Unveils 2023 Priorities As Shorts Monetization Struggles, Plus TikTok’s Surprising New Feature for Teens
By JIM LOUDERBACK
TL;DR
Neal Mohan reveals YouTube’s creator-centric priorities while Shorts monetization lags.
TikTok rolls out time limits for teens while the U.S., Canada, U.K. and the EU ratchet up the pressure.
Twitch’s never-ending creator problems, the surprising upside of paid verification, a call to restrict AI research and new crypto and consumer research.
This Week: Neal Mohan reveals YouTube’s creator-centric priorities while Shorts monetization lags. TikTok rolls out time limits for teens while the U.S., Canada, U.K. and the EU ratchet up the pressure. Plus, Twitch’s never-ending creator problems, the surprising upside of paid verification, a call to restrict AI research and new crypto and consumer research. It’s the first week of March and here’s what you need to know. Oh, and how’s that “in like a lion” working out for you?
New YouTube Chief Lays Out Priorities: A few weeks late, but Neal Mohan has laid out YouTube’s 2023 priorities in a blog post. There’s not a lot new – the most important message was Mohan’s strong endorsement of getting creators paid. Mohan did announce new AI tools – about time – although with “guardrails”. I think that means it’ll be a while before we see anything useful.
Trouble at Twitch Amidst Abundance: Congrats to Kai Cenat for becoming Twitch’s biggest streamer. His month-long subathon – a throwback to the original Justin.TV mission – pushed him over 300,000 subscribers. But it also renewed calls for Twitch to properly compensate creators as Drake suggested he get a $50M payout. Cenat, who just signed with UTA, seemed to agree. Could this be a Ninja repeat all over again? Twitch is trying to do better by creators at least in some ways. For example, the new “experiments page” provides transparency to streamers and provides an interesting lens for the Twitch curious (like me) too.
The Upside of Paid Verification: Although many (including me) decried the paid verification initiatives at Twitter and Meta, a few experts see a silver lining. Brendan Gahan sees a lessening of sensationalist clickbait stories and a renewed focus on quality content and user experience. Alex Kantrowitz goes even further, positing that because most platforms are now dominated by professional creators, it’s time for them to pay for the privilege of making money. I think Gahan’s vision is idealistic but unrealistic, while Kantrowitz ignores the paltry creator middle class that will likely pony up for the check. Decide for yourself – both takes are worth reading.
For more on Mohan’s priorities, TikTok’s teen time limits, Jellysmack’s OTT plans and AI’s copyright dilemma, check out this week’s Creator Feed – the weekly podcast Renee Teeley and I produce – get it on Apple Podcasts, Spotify or Stitcher!
NAB Show’s BEIT Conference Dives Into Media Delivery
TL;DR
SMPTE President Renard Jenkins will deliver the opening keynote at the NAB Show Broadcast Engineering and IT (BEIT) Conference on April 15 at 10 a.m.
Running from April 15-18, the BEIT Conference will feature technical presentations geared toward broadcast engineers and technicians, media technology managers, contract engineers, broadcast equipment manufacturers and distributors, engineering consultants, and R&D engineers.
The conference is produced in partnership with the Society of Broadcast Engineers, the Society of Cable Telecommunications Engineers and the North American Broadcasters Association.
SMPTE President Renard Jenkins cr: NAB Show
SMPTE President Renard Jenkins will deliver the opening keynote at the NAB Show Broadcast Engineering and IT (BEIT) Conference on April 15 at 10 a.m.
Jenkins, currently senior VP of production integration and creative technology services at Warner Bros. Discovery, has spent more than 35 years in the television, radio, and film industries.
Running from April 15-18, the BEIT Conference will feature technical presentations geared toward broadcast engineers and technicians, media technology managers, contract engineers, broadcast equipment manufacturers and distributors, engineering consultants and R&D engineers.
“The BEIT Conference is the place for media professionals to discover the latest breakthroughs helping to make the content pipeline more effective, efficient and expedient,” said Sam Matheny, NAB executive vice president, Technology and chief technology officer. “We are looking forward to an impressive lineup of presentations at NAB Show that will provide our community with real-world insights into keeping pace with the rapid evolution in how content gets delivered.”
The conference is produced in partnership with the Society of Broadcast Engineers, the Society of Cable Telecommunications Engineers and the North American Broadcasters Association.
BEIT will also feature the presentation of technical papers on topics including NextGen TV, artificial intelligence, data analytics, cybersecurity, media workflows, innovation in radio, media in the cloud, hybrid radio, sustainability, streaming, 5G and video coding, among others. The papers will be included in the BEITC Proceedings, which will also be released by PILOT, the innovation wing of the National Association of Broadcasters, on April 15.
NAB Show Is Immersed in… Immersive Storytelling
From Dreamscape Immersive’s “Dragons Flight Academy” experience
TL;DR
NAB Show will explore how advanced technology is changing immersive storytelling experiences during a Main Stage session on April 18 at 1 p.m. at the Las Vegas Convention Center.
The session, titled “Immersive Storytelling: Expanding Audiences with XR in Games, Education, and Location-Based Entertainment,” will feature leaders in advanced technology.
Panelists include Aaron Grosky, president and COO of Dreamscape Immersive and COO of Dreamscape Learn; Ted Schilowitz, futurist, Paramount Global; and Jake Zim, senior VP, virtual reality, Sony Pictures Entertainment.
Dreamscape’s Aaron Grosky (l), Paramount Global’s Ted Schilowitz (c), and Sony Pictures Entertainment’s Jake Zim (r) will participate in a Main Stage session on immersive storytelling during the 2023 NAB Show.
NAB Show will explore how advanced technology is changing immersive storytelling experiences during a Main Stage session on April 18 at 1 p.m. at the Las Vegas Convention Center.
Immersive experiences have become easier to access than ever before. From headsets at home and in schools to location-based entertainment venues, consumers are embracing innovative ways to find their favorite stories.
Today’s entertainment technology has the ability to make every player the main character in their favorite worlds, expanding the universes they love and breathing new life into these stories and characters. We now have the ability to immerse our audiences’ minds into infinite architectures—from blasting ghosts in the Ghostbusters universe to teaching biology by having students solve the mystery of a dying species at an intergalactic wildlife sanctuary.
Sony Pictures Virtual Reality’s “Ghostbusters: Rise of the Ghost Lord” VR game
Sit down with our panelists as they discuss increasing convergence between traditional entertainment and advanced technology; how nostalgia fuels new technology adoption; and what’s next for VR/AR/XR in the entertainment industry.
Grosky oversees the creation of VR adventures for Dreamscape Immersive and Dreamscape Learn. The adventures are designed to give users the experience of watching a story unfold around them as they explore cinematic worlds, characters, and creatures. He previously served in strategic leadership and creative development roles for entertainment ventures focused on television, radio, music, online, and mobile productions.
Schilowitz, the first-ever futurist-in-residence for Paramount Global, works with the company’s technology teams to explore new and emerging technologies, including VR and AR. He previously served as consulting futurist at 20th Century Fox and worked on product development teams that have produced ultra-high resolution digital movie cameras, advanced hard-drive storage products, and desktop video software.
Paramount’s VR experience for “A Quiet Place”
In his role at Sony Pictures Entertainment, Zim oversees global VR production and strategy for the motion picture group. He has produced a variety of interactive projects released across a spectrum of distribution channels. In addition, he works across business units to develop partnerships with technology and production companies in the emerging immersive entertainment space.