Cancelling streaming services is no longer niche or occasional. Churn has gone mainstream and premium SVODs are going to have to employ new tactics to compete for a share of the household wallet.
Finished watching The Bear? Ditch Hulu. Want to watch Fallout? Sign up for Amazon Prime Video. Time for Slow Horses Season 3? Then cancel Amazon (having already binged Fallout) and get Apple for at least a couple of weeks. More and more of us are doing this, partly because price inflation has exhausted the amount people are willing to spend on stacking SVODs, most of whose content they don’t actually watch.
It’s also because the SVOD system of no-contract, one-month viewing — so important in kick-starting the streaming business — makes it so easy to do.
According to data from Antenna, at the end of 2023 nearly one in four US streaming consumers qualified as serial churners — individuals who have canceled three or more premium SVOD services in the past two years. That’s an increase of 42% year-over-year.
Antenna even identifies a group of “super heavy serial churners,” those who have made seven or more cancellations in the past two years, and found that they constituted 19% of subscribers in 2023.
More data: serial churners were responsible for 56.5 million cancellations in 2023, up a whopping 54.6% year-over-year, while cancellations by non-serial churners increased 18.5% to 82.8 million in the same period.
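For readers who want to poke at data like this themselves, here is a minimal sketch of the tiering logic in Python, using a hypothetical cancellation log. The thresholds come from the definitions above; the data and method are illustrative, not Antenna’s actual methodology.

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical cancellation log: (subscriber_id, service, cancellation_date)
cancellations = [
    ("u1", "Hulu", date(2023, 3, 1)),
    ("u1", "Max", date(2023, 7, 15)),
    ("u1", "Apple TV+", date(2023, 11, 2)),
    ("u2", "Netflix", date(2022, 5, 20)),
]

def churn_tiers(events, as_of=date(2023, 12, 31), window_days=730):
    """Bucket subscribers by cancellations in the trailing two years,
    using the thresholds described above (3+ serial, 7+ super heavy)."""
    cutoff = as_of - timedelta(days=window_days)
    counts = Counter(uid for uid, _svc, d in events if cutoff <= d <= as_of)

    def tier(n):
        if n >= 7:
            return "super heavy serial churner"
        if n >= 3:
            return "serial churner"
        return "non-serial churner"

    return {uid: tier(n) for uid, n in counts.items()}

print(churn_tiers(cancellations))
# {'u1': 'serial churner', 'u2': 'non-serial churner'}
```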
While consumers value flexibility, the implications could be significant for the major media companies, especially as it seems likely this behavior will become even more common.
One option outlined by John Koblin in The New York Times is to bring back some element of the cable bundle by selling streaming services together.
Executives believe consumers would be less inclined to cancel a package that offered services from multiple companies. Disney, for instance, is bundling Disney+, Hulu and ESPN+ into one package and, later this year, will launch a sports streamer jointly with Fox and Warner Bros. Discovery.
Another tactic is to promote “coming soon” content prominently on the home page. For instance, Apple TV+ is teasing Dark Matter, a science-fiction series that comes out in its app in May.
Peacock sought to deter new subscribers from cancelling with a special offer: sign up for a full year at a discount.
According to Antenna research, cancellation rates for those who did sign up did not drop off a cliff a month later, but instead were close to average.
Netflix appears immune, according to Antenna data. Or rather, it is the service most likely to be a fixture in household bundles, with every competitor part of a revolving carousel that consumers pick and mix according to the latest show to land.
Without a predictable revenue stream, it is harder for streamers to invest in new content, causing them to cut production and deliver fewer standout new releases to market, in a vicious cycle that will gather pace unless something changes.
Posted April 25, 2024
Podcast Listening is Hitting New Highs… So Who’s Listening?
TL;DR
Podcast reach is increasing across all age groups, according to an annual survey from Edison Research, with female listeners driving growth.
Nearly half of the adult population in the US has listened to a podcast in the last month, up 12% year-over-year.
Thirty-four percent have listened to a podcast in the last week, up 10% year-over-year. Online audio listening has also hit record highs, the report finds.
The number of Americans consuming podcasts has reached record highs and growth is being driven by female listeners, according to “The Infinite Dial,” an annual survey from Edison Research with support from Audacy, Cumulus Media, and SiriusXM Media. The study is based on a national survey in January 2024 of 1,086 individuals age 12 and older.
Nearly half of the adult population in the US has listened to a podcast in the last month, up 12% year-over-year. Thirty-four percent have listened to a podcast in the last week, up 10% year-over-year.
Increases in the number of monthly and weekly podcast listeners are seen across all age groups, but that growth is driven by large increases among women.
The study found that 45% of women have listened to a podcast in the last month, up from 39% in 2023, an increase of 15%. Thirty-two percent of women have listened to a podcast in the last week, up from 27% in 2023, an increase of 19%.
“Listening levels are up markedly despite changes in how downloads are being delivered and counted,” said Edison Research VP Megan Lazovick.
Online audio listening has also hit record highs, according to Edison Research. More than three-quarters of Americans have listened to online audio in the last month, an estimated 218 million people. Ninety percent of those aged 12-34 and 85% of those aged 35-54 have listened to online audio in the last month, it found.
Other findings include that 70% of those age 18+ who have driven or ridden in a car in the last month currently listen to radio as an audio source in their primary car; 55% listen to online audio and 32% listen to podcasts.
AM/FM radio is still popular, with 60% of those polled having a traditional set at home.
An outlier in this audio report: Twitter/X usage has seen a sharp 30% decline following Elon Musk’s acquisition.
Posted April 25, 2024
With “Cowgirls on the Moon,” the Workflow’s in the Cloud
TL;DR
A high-concept sci-fi movie trailer made using modern cloud computing showcases a raft of new filmmaking technologies from virtual production and cloud rendering to generative AI.
Ron Ames, producer of Amazon series “The Lord of the Rings: The Rings of Power,” and Katrina King, global strategy leader for content production at AWS, discussed the making of “Cowgirls on the Moon” at the 2024 NAB Show.
The principal generative AI tool was Cuebric, which was used to generate assets and import them into Unreal Engine, and to automate certain technical aspects of the production.
Many producers remain wary of putting their projects in the cloud, a reluctance that case studies like this aim to challenge.
It began as a joke, but putting cowgirls on the moon is a serious attempt to showcase what is possible by using a raft of new filmmaking technologies from virtual production and cloud rendering to generative AI.
Unveiled and demonstrated at NAB Show, the faux movie trailer for Cowgirls on the Moon is a goofy but high-concept challenge led by AWS that conforms to elements of MovieLabs 2030 Vision.
“It started off as a joke,” Katrina King, global strategy leader for content production at AWS, explained. “Let’s do something ridiculously out there that’s really going to force us to lean into modern cloud computing and generative AI. And I said something like ‘cowgirls on the moon.’ It was a joke, but nobody came up with a better idea so that’s what we went with.”
The aim was to demonstrate the power of three technologies working in tandem: generative AI-assisted virtual production, cloud rendering and VFX with the use of AWS Deadline, and holistic production in the cloud.
“At AWS, we believe very strongly in the responsible use of generative AI,” King continued. “So we used applications that allow artists to work more efficiently and to offload the mundane technical aspects.”
For instance, they used text-to-video generator Runway for concept art and storyboards, another AI tool for facial recognition, and an enhanced speech tool included within Adobe Premiere. The latter completely rebuilt the dialogue track as if it had been recorded in an ADR session. “The amount of time that saved us by not having to go into an ADR session was incredible,” King said.
The principal generative AI tool was Cuebric, which was used to generate assets and import them into Unreal Engine, and to automate certain technical aspects of the production. All of the backgrounds used in the virtual production and animated in Unreal Engine were generated with Cuebric.
“Once Cuebric exports it we have these different layers which are then presented in Unreal, so that as we move the camera and the camera tracks on the volume we get parallax,” King explained.
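To picture why layered exports produce parallax on the volume, consider a simple pinhole-camera sketch. This is illustrative only; Unreal Engine and Cuebric handle this internally, and the layer depths and focal length below are invented:

```python
# Each 2.5D layer sits at a different depth; when the tracked camera
# translates, nearer layers shift more pixels on screen than far ones,
# which is exactly the parallax cue King describes.

LAYER_DEPTHS_M = {          # hypothetical layer depths in meters
    "foreground rocks": 2.0,
    "crater midground": 10.0,
    "lunar mountains": 80.0,
    "starfield": 1e6,
}

def screen_shift_px(camera_dx_m: float, depth_m: float,
                    focal_px: float = 1200.0) -> float:
    """Pixel shift of a layer for a sideways camera move (pinhole model)."""
    return focal_px * camera_dx_m / depth_m

for name, depth in LAYER_DEPTHS_M.items():
    print(f"{name:>18}: {screen_shift_px(0.5, depth):8.2f} px")
```

A half-meter camera move shifts the rocks by hundreds of pixels while the starfield barely moves, which is what sells depth from flat layers.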
Visual effects facility DNEG delivered 36 VFX shots for the production in just eight days. The whole project was essentially run as a full studio and render farm in the cloud.
Project producer Ron Ames talked up the benefits of the virtual production, such as the ability to swap out entire infrastructures and to collaborate across locations.
“We first said, ‘We want these machines to be Linux.’ But then we changed our minds. ‘Now we want them to be Windows.’ Literally in minutes we had new machines up and running,” he said. “The ability to work quickly, to collaborate, to tear down walls. We had groups working in Vancouver, in London, LA, Boston, Idaho, Switzerland, Turkey, Tucson, Netherlands [on the project all linked to assets by cloud].”
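Ames doesn’t spell out the mechanics, but on AWS the Linux-to-Windows switch he describes is essentially a matter of launching instances from a different machine image. A minimal sketch with boto3 (the AMI IDs, instance type, and the call at the end are hypothetical placeholders):

```python
import boto3

# Hypothetical image IDs; real AMI IDs vary by region and OS build.
AMIS = {
    "linux": "ami-0123456789abcdef0",
    "windows": "ami-0fedcba9876543210",
}

def launch_workstations(os_name: str, count: int,
                        instance_type: str = "g4dn.xlarge"):
    """Launch artist workstations on EC2. Changing OS is just a
    different ImageId, which is why the swap takes minutes, not days."""
    ec2 = boto3.client("ec2")
    resp = ec2.run_instances(
        ImageId=AMIS[os_name],
        InstanceType=instance_type,
        MinCount=count,
        MaxCount=count,
    )
    return [inst["InstanceId"] for inst in resp["Instances"]]

# e.g., replace the Linux fleet with Windows boxes:
# ids = launch_workstations("windows", count=10)
```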
Ames previously used extensive AWS workflows as producer of Amazon series The Lord of the Rings: The Rings of Power. He thinks other producers remain unconvinced about choosing to put their next project into the cloud.
“Petrified would be the word, not reluctant. The notion that we’ve done it before this way, or we have investments in a certain infrastructure, is one of the impediments to moving forward,” Ames said.
“On Rings of Power, the great good fortune we had was a team of producers and [a team] at AWS supporting us to try new stuff, and if it doesn’t work, we’re not going to give up, we’re going to make it work and make it work in a way that actually has benefits. Once we saw the efficiencies, the creative possibilities, and truly the collaborative power of breaking down silos, walls, traditional ways we’ve been working, we realized that this had a great value.”
There’s an old tech industry joke that “the cloud” is a fancy way of saying “somebody else’s computer.” That’s a bit of an oversimplification since cloud computing services are a lot more involved than just providing access to a server someone else owns.
But the fact remains that the primary attribute of cloud computing is accessing computing resources — software applications, servers, data storage, development tools, and networking functionalities — remotely over the internet.
Increasingly that includes everyday post-production processes and crafts like editing. As with much of post-production, the real shift to the cloud came with COVID: if productions were to continue behind closed doors, remote and collaborative ways of working had to be found.
While many facility managers and editors found those ad hoc attempts at the start of 2020 to be just about workable, the way the technology was proven to work opened up people’s minds to the benefits of more permanent cloud-based editing.
Today, at the very least, hybrid work-office scenarios are common, with cloud-based workflows no longer considered unusual across all genres ranging from live news and sports to feature animation, scripted TV and documentaries.
In a series of primers (ostensibly to promote its cloud storage), LucidLink explains cloud video editing and outlines the benefits it can offer.
Much of what the company has to say will be familiar to industry pros, but there’s a no-nonsense clarity for anyone unsure.
Cloud video editing refers to workflows that leverage the cloud rather than on-premise infrastructure. Editors can share their data with the complete toolset of a desktop-based NLE such as Adobe Premiere, Avid or DaVinci Resolve. The key difference is that the data itself is stored in the cloud, rather than on local devices. With the right software, cloud-based video editing can also include tools installed on virtual machines that perform parts of an editing workflow.
One of the chief benefits of working this way is remote collaboration. Since cloud-based systems and storage are inherently accessible from anywhere in the world, this enables both hybrid and fully remote workflows for editing teams.
Configured correctly, cloud can save time and cash (though the article doesn’t delve into the costs of cloud storage and data transfer, which vary greatly depending on facility needs).
“Although the cloud offers clear advantages when it comes to smaller files (like low-res video proxies), until recently handling large files was an unsolved challenge for cloud video editors due to lengthy upload and download times,” LucidLink notes, before offering its tech as a solution.
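The underlying idea, streaming just the bytes an editor touches instead of downloading whole camera originals, can be illustrated generically with an HTTP range request. This is a sketch of the concept, not LucidLink’s actual protocol, and the URL is hypothetical:

```python
import requests

def read_slice(url: str, start: int, length: int) -> bytes:
    """Fetch one byte range of a large media file instead of the whole
    thing (the server must support the HTTP Range header)."""
    headers = {"Range": f"bytes={start}-{start + length - 1}"}
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.content

# e.g., pull only the first megabyte of a 500 GB camera original:
# header = read_slice("https://example.com/ocn/A001_C002.mxf", 0, 1_048_576)
```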
There’s also a look at the merits of cloud versus on-premise set-ups with files residing in a SAN or NAS system within a facility.
This latter approach, says the vendor, “requires copying large files to hard drives or using file transfer services if collaboration requires working with freelance talent in locations other than the facility itself.
“Even when working with large amounts of raw video data, editors often need to search, analyze and tag files, preferably in real time. The larger the file, the longer it takes to download, upload, render, or share. Beyond the costly hardware investment, these systems still don’t solve the problem of waiting for files to download or distribute.”
However, it’s not usually a zero-sum game. Most facilities currently prefer to keep a foot in both camps, in part as a safety net against data loss.
There are of course lots of choices when it comes to storage and the right strategy is vital for any production, says LucidLink.
“On-prem SAN and NAS systems can be very performant, but those benefits only exist in one location: a facility. The need to collaborate anywhere however is not addressed by these legacy approaches. This is where a cloud-based approach comes in.”
As we have seen from the recent NAB Show, more and more vendors are offering cloud-based workflows. These increasingly start at the camera, where proxies are uploaded directly via the internet to some form of media management platform, from which authenticated users anywhere can download or stream files to work from.
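A camera-to-cloud pipeline of this kind can be approximated in a few lines: transcode a proxy with ffmpeg, then push it to object storage. A minimal sketch under those assumptions (the file names and bucket are hypothetical, and real platforms add authentication, metadata, and notifications):

```python
import subprocess
import boto3

def make_and_upload_proxy(src: str, bucket: str, key: str) -> None:
    """Transcode a camera original into a lightweight 540p H.264 proxy,
    then upload it so remote editors can start cutting immediately."""
    proxy = src.rsplit(".", 1)[0] + "_proxy.mp4"
    subprocess.run(
        ["ffmpeg", "-y", "-i", src,
         "-vf", "scale=-2:540",        # 540p, preserve aspect ratio
         "-c:v", "libx264", "-crf", "23",
         "-c:a", "aac", "-b:a", "128k",
         proxy],
        check=True,
    )
    boto3.client("s3").upload_file(proxy, bucket, key)

# e.g.: make_and_upload_proxy("A001_C002.mxf", "dailies", "day01/A001_C002.mp4")
```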
In a few years, looking back at the heavy-duty, power-hungry monoliths of Silicon Graphics machines, Quantel boxes, or Autodesk hardware, we will wonder just how we ever worked without the internet.
Posted April 24, 2024
AI Is Definitely Changing (But Not Destroying) Hollywood
TL;DR
Generative AI is creeping in behind the scenes of film and TV production, but is not yet good enough to auto-generate an entire feature from scratch.
Hollywood has been here before, in the sense that disruptive technologies, from soundtracks to digital cameras, were co-opted in service of telling stories.
There’s a current backlash in Hollywood against using AI for anything more than ideation or to speed up certain processes, but that may change when the next generation of tools like Sora lands on the market.
The current consensus appears to be that generative video is not yet a Hollywood-killer and perhaps never will be. While AI is creeping into production, it is doing so to augment certain workflows or make specific alterations, with no sign of it being used to auto-generate entire feature films or push creatives out of a job. But these are still early days.
“It’s a fraught time because the messaging that’s out there is not being led by creators,” said producer Diana Williams, a former Lucasfilm executive and now CEO and co-founder of Kinetic Energy Entertainment, at the 2024 SXSW panel “Visual (R)evolution: How AI is Impacting Creative Industries.”
Certainly, AI is a disruptive technology, but M&E, of all industries, should be used to taking tech change on board.
Julien Brami, a creative director and VFX supervisor at Zoic Studios, spoke on the panel with Williams, as Chris O’Falt reports at IndieWire. Brami said the common thread with each tech disruption is that filmmakers adopt new tools to tell stories. “I started understanding [with AI] that a computer can help me create way faster, iterate faster, and get there faster.”
Speed. That’s what you hear, over and over again, as the real benefit of Gen AI imaging, writes O’Falt, who spoke to numerous filmmakers about the topic.
“Few see a viable path for Gen AI video to make its way to the movies we watch. Using AI is currently the equivalent of showing up on set in a MAGA hat.”
Finding actual artists who are willing to use AI tools with some kind of intention is tough, agrees Fast Company’s Ryan Broderick. Most major art-sharing platforms have faced tremendous user backlash for allowing AI art, and there’s even a new technology called Nightshade that artists are using to block their images from training generative AI.
Graphic designer and digital art pioneer Rob Sheridan tells Fast Company that the backlash against AI tech in Hollywood is directly caused by both tech companies and studios claiming that it will eventually be able to spit out a movie from a single prompt. Instead, Sheridan says it’s already obvious that AI technology will never work without people who know how to integrate it into existing forms of art, whether it’s a poster or a feature film.
“The thing that is hurting that progress — for this to kind of fold into the tool kit of creators seamlessly — is this obnoxious tech bubble shit that’s going on,” he says. “They’re trying to con a bunch of people with a lot of money to invest in this dream and presenting this very crass image to people of how eager these companies are, apparently, to just ditch all their craftspeople and try out this thing that everyone can see isn’t going to work without craftspeople.”
Media consultant Doug Shapiro tells Fast Company that AI usage will increase in Hollywood as studios grow more comfortable with the tech. He also suspects the current backlash against using AI is likely temporary.
“There’s this kind of natural backlash that tends to ease over time,” he says. “It’s going to get harder and harder to tell where the effects of humans stopped, and AI starts.”
Generative AI is cropping up most commonly in relatively small-stakes instances during pre- and post-production. “Rather than spend a ton of money on storyboarding and animatics and paying very skilled artists to spend 12 weeks to come up with a concept,” Shapiro adds, “now you can actually walk into the pitch with the concept art in place because you did it overnight.”
Studios have also begun using AI to touch up an actor’s laugh lines or clean up imperfections on their face that might not be caught until after shooting has wrapped. In both cases, viewers might not necessarily even know they’re looking at something that has been altered by an AI model.
David Raskino, co-founder and CTO of AI developer Irreverent Labs, suggests to Will Douglas Heaven at MIT Technology Review that GenAI could be used to generate short scene-setting shots of the type that occur all the time in feature-length movies.
“Most are just a few seconds long, but they can take hours to film,” Raskino says. “Generative video models could soon be used to produce those in-between shots for a fraction of the cost. This could also be done on the fly in later stages of production, without requiring a reshoot.”
AI is putting filmmaking tools in the hands of more people than ever, and who can argue that’s not a good thing?
Somme Requiem, for example, is a short film about World War I made by Los Angeles production company Myles. It was generated entirely using Runway’s Gen-2 model, then stitched together, color-corrected, and set to music by human video editors.
As Douglas Heaven points out, “Myles picked the period wartime setting to make a point. It didn’t cost anywhere near the $250 million of Apple TV+ series Masters of the Air, nor take anywhere like as long as the four years Peter Jackson took to produce World War I doc They Shall Not Grow Old from archive video.”
“Most filmmakers can only dream of ever having an opportunity to tell a story in this genre,” Myles’ founder and CEO, Josh Kahn, says to MIT Technology Review. “Independent filmmaking has been kind of dying. I think this will create an incredible resurgence.”
However, he says, he believes “the future of storytelling will be a hybrid workflow,” in which humans make the craft decisions using an array of AI tools to get to the end result faster and cheaper.
Michal Pechoucek, CTO at Gen Digital, agrees. “I think this is where the technology is headed,” he says. “We’ll see many different models, each specifically trained in a certain domain of movie production. These will just be tools used by talented video production teams.”
A big problem with current versions of generative video is the lack of control users have over the output. Producing still images can be hit and miss; producing a few seconds of video is even more risky. It’s why humans will need to be involved. But, of course, even as you read this, OpenAI’s Sora is getting better and better.
“Right now, it’s still fun, you get a-ha moments,” says Yishu Miao, CEO of UK-based AI startup Haiper. “But generating video that is exactly what you want is a very hard technical problem. We are some way off generating long, consistent videos from a single prompt.”
Posted April 24, 2024
“Civil War” and “The Creator:” Choose the Camera That Works for Your Story
Choice of camera and lens has always mattered to visual artists, but the reflex to reach for a high-end digital cine (or 35mm) package is being challenged by the success of recent filmmakers.
Gareth Edwards’ $80 million The Creator and Alex Garland’s $50 million Civil War are the prize examples of big-budget IMAX releases shot largely with unconventional prosumer-style cameras.
Although cameras like the Sony FX3 and DJI Ronin 4D are considerably less expensive than high-end digital cine gear like a Panavised RED or Sony Venice, cost was less a reason for their use on these films than their suitability for the job.
As Garland said about the Ronin 4D integrated camera and gimbal, “It’s a beautiful tool, not right for every movie, but uniquely right for some.”
That’s how filmmaker and photographer Patrick Tomasso thinks filmmakers should now approach all cameras.
“Not every camera is right for every movie,” he argues in a YouTube video. “I’m not suggesting that you go shoot your movie with an iPhone or a GoPro. I’m suggesting that you just find the camera that lets you accomplish the narrative and story that you wish to tell.”
It could be an ARRI Alexa but it could just as easily be a Blackmagic Ursa 12K or a Sony FX3 or maybe even a GoPro.
“It doesn’t really matter as long as you know the one that’s going to have the least amount of barriers between you and the thing that you want to create.”
Tomasso points to the 2013 thriller Blue Ruin, shot on the Canon C300, as another example: director and DP Jeremy Saulnier wanted kit that allowed rapid setups and control over lighting.
Steven Soderbergh is a past master in this field, having experimented by shooting an entire feature, Unsane (2018), on an iPhone. “I think this is the future,” he said of that film. “Anyone going to see this movie without any idea of the backstory to the production will have no idea this was shot on the phone.”
As The Guardian’s Charles Bramesco pointed out, “It’s the skill of a great artist to turn a limitation into a strength, and indeed, Soderbergh has harnessed the potential of the gizmo in your pocket to create a striking and affecting new visual dialect.”
Soderbergh is at it again with new psychological thriller Presence, which he shot on a Sony mirrorless camera to achieve a fluid point-of-view perspective in keeping with his creepy story.
Provided the camera meets the specs for distribution (theatrical, IMAX, streaming, etc.), what’s the big deal with using a camera that could be snapped up on eBay rather than rented for a hundred thousand dollars?
These films don’t look as good as they do because of the cameras being used, contends Tomasso. They look good because of the skill of the cinematographer and of the director because they’ve chosen the right tool for the particular story they are telling.
“It’s having an excellent eye. It’s knowing framing and composition. It’s shooting at the right hours of the day on top of great wardrobe and locations and props which is why the movie looks so good.”
He says filmmakers have been hampered by the assumption that the best of the best equipment is needed to make good, successful, high-end projects, when in fact filmmakers like Garland and Soderbergh are willing to experiment with things that are cheap, small, or DIY.
“It means the industry is getting off its high horse [with] the idea that you have to have an Alexa or a Sony Venice [or other] prestige cameras to make prestige content.”
Posted April 22, 2024
The (Subversive) Storytelling Style for Park Chan-wook’s “The Sympathizer”
HBO’s latest miniseries, The Sympathizer, delves into the Vietnam War through a lens rarely seen in U.S. media, presenting a story soaked in the vivid hues of Vietnamese experience and perspective.
Created by Don McKellar and visionary South Korean film auteur Park Chan-wook (Oldboy, Decision to Leave), the eight-episode series adapts Viet Thanh Nguyen’s Pulitzer Prize-winning novel, offering a new perspective on the war’s historical impact.
When Park first encountered Nguyen’s novel, he was struck by its explosively expressive prose, which he described to Laura Zornosa at the Los Angeles Times as like watching fireworks — an experience he aimed to translate into the visual and narrative style of The Sympathizer.
Park, who also directed the first three episodes of the series, is known for blending stark humor with somber themes, a method he believes intensifies the emotional resonance of his films, reflecting the complex spectrum of human experiences.
“Believe it or not, I’m a director who puts significant importance in humor when I go about making a movie, because I believe — when the humor is combined with tragedy or violence — it actually makes it even more powerful,” Park relayed. “And that composite is what enables you to express in totality what a human being is, or what life is.”
The Sympathizer centers on the life of The Captain, portrayed by Hoa Xuande. Born to a Vietnamese mother and a French priest, The Captain embodies Eastern and Western influences, reflecting the cultural and colonial complexities of Vietnam.
Robert Downey Jr. takes on multiple roles for the series, each embodying a different facet of American hegemony and stereotypes. Downey portrays characters like CIA agent Claude, the myopic Hollywood director known as The Auteur, and other symbolic figures that reflect “the melded faces of American imperialism and colonialism,” Zornosa writes.
This casting emphasizes the thematic exploration of identity and perception, showcasing Downey’s versatility, while critically examining the portrayal of historical narratives through a satirical lens.
Park was partially inspired by Stanley Kubrick, who did something similar with Peter Sellers in Dr. Strangelove, another Cold War-era political black comedy. “Our original novel’s intention was to have one and the same body having different faces,” he said.
This notion is pivotal, as the series explores themes of identity, the lasting impacts of colonialism, and the inherent contradictions within cultural understanding. Through The Captain’s eyes, viewers navigate the murky waters of allegiance and identity during and after the Vietnam War — or the American War, as it’s called in Vietnam — offering a multifaceted perspective that challenges conventional narratives.
The Sympathizer extends beyond the confines of the Vietnam War to reflect broader historical and cultural dilemmas. Through its detailed portrayal of characters and conflicts, the series highlights the persistent echoes of imperialism and cultural misunderstanding that resonate in contemporary global conflicts.
“When I was writing the book,” Nguyen notes, “it was always treating the war in Vietnam as an episode in a much longer history of American imperialism and colonialism.”
The Sympathizer is emblematic of a production greenlit “late in the era of so-called Peak TV,” Jia Tolentino writes for the New Yorker. She notes that the limited series “is the product of a marriage between two eminent tastemakers, A24 and HBO,” booked three years ago (before Discovery’s acquisition of the latter and the industry’s contractions).
Tolentino writes that Park’s “gift for sumptuous spectacle [is] underpinned by meticulous preparation,” from detailed storyboarding to on-set precision.
Production required 120 days, and time on set with Park is “notoriously calm,” often marked by days that wrap early, per production designer Alec Hammond. However, this approach resulted in an economy of coverage. “There’s not a lot of latitude in the edit, which is not usual for television executives to see,” Don McKellar, Park’s co-showrunner for The Sympathizer, told Tolentino.
In general, Tolentino observes, “Making television is a more bureaucratic process than filmmaking, involving more input from more people on more footage.” (Perhaps Park’s infamous precision is one way to combat this tendency?)
For his part, Park “recognizes different constraints and opportunities,” Tolentino writes. However, he told her that “you can waste your time ‘like a millionaire wasting their abundant fortune’” when crafting a TV series. Instead, he aimed “not to waste ‘one second, one minute, or even a frame.’”
Nonetheless, Tolentino questions whether Park is perfectly suited to the small screen. She writes, “Television, though, may never be quite the right medium for a filmmaker who casts a spell that’s not meant to be broken, and who rewards the viewer through destabilization and discomfort.”
“I tried very hard to make his colorful writing into a visual form,” Park told Salon senior critic Melanie McFarland during a Zoom interview, explaining his approach for the adaptation of Nguyen’s 2015 novel.
Additionally, Park “takes certain liberties with the story that invite new interpretations and meanings.”
For her part, McFarland wonders “whether parts of Nguyen’s story provided [Park with] a means of commenting on the audience’s tendency to revere certain filmmakers and excuse their excesses” (exemplified by the portrayal of The Auteur), or whether the series is considering the work of a critic (one of Park’s prior careers).
Park told her: “There’s some part of me that is very interested in that idea, and perhaps it has to do with my background as a critic too. But what is important for me whenever I go about making any kind of work is preserving the right amount of distance with the subject matter.”
He says that he seeks “to have the viewer be engaged in the story and align with the protagonist and his emotional state.”
Posted April 19, 2024
It’s Only the Meaning of Life: Wim Wenders and “Perfect Days”
TL;DR
Wim Wenders talks about his Oscar-nominated film “Perfect Days,” which began life as a commission for a promotional documentary about Tokyo’s unique public toilets.
Wenders strove to make a verité drama that is neither quite documentary nor quite fiction but hopefully offers a truthful encapsulation of its subject.
One metaphor Wenders uses is the Japanese concept of “komorebi,” which conveys the interplay of shadow and light and, by extension, becomes an act of ritual for the film’s janitor character.
The slow, meticulous and mundane routine of a lowly toilet cleaner in Tokyo is the subject of German auteur Wim Wenders’ latest film, and it has critics waxing lyrical about its transcendental and poetic exploration of life.
Perfect Days was nominated for an Oscar for best international feature, and won Japanese screen icon Kôji Yakusho the Best Actor award at Cannes 2023 for his turn as the lead character Hirayama.
“I see out of the corner of my eyes,” the director told A.frame. “Films very often tell what’s in the center of your vision, and Hirayama is a person who sees a lot more and pays attention to some of the little things we forget to. He sees the homeless man who lives next to one of these toilets; he respects, greets, and treats him like everybody else. Hirayama is a person who sees everything that gets lost so often in movies.”
Ahead of the 2020 Tokyo Olympics, The Tokyo Toilet Project invited contemporary designers and architects to create 17 public bathrooms in Japan’s capital. Wenders was initially invited to make a series of promotional documentary shorts about the unique facilities, but on seeing them himself he came up with an idea for a fiction feature.
With co-writer Takuma Takasaki, Wenders wrote a script in three weeks. The verité drama was shot in Tokyo over 16 days, with Wenders embracing a shooting style that exists between traditional documentary and narrative filmmaking.
The director added an arthouse touch by shooting in 4:3, a nod to the legendary Japanese director Yasujirō Ozu, but also a practical choice: “It is essential to see the floor,” he says of the toilets.
“Although it was a fictional story, we had a documentary approach. The camera was always on my DOP Franz Lustig’s shoulder — never on a tripod, never on tracks, never on a gimbal or a Steadicam or a crane. This fictional film with the totally fictitious character, Hirayama, was shot like a documentary.”
To make it feel even more authentic, their approach features mostly handheld camera work, and includes a lot of exterior scenes using available light.
The work of Ozu also inspired Wenders’ approach, as he describes in a video featurette. “There was this heightened attention [in Ozu’s films] that looked at every object as if it only existed once and only there, and he was so attentive to every detail.”
Since each of the toilet buildings is designed by a different architect, “they all have the specific presence,” he said. “And that is also captivating. It’s not as if Hirayama is treating all these toilets as if they were toilets, but he sees them specifically. When you go to these places, you realize the beauty is that they’re all so different, and that each of them forces you to see the world differently. That is a beautiful thing. So, in that way of paying attention, Ozu was ever present.”
Wenders explained to Mascha Deikova at CineD that he wanted to keep the protagonist’s history secret because “this story is not about any possible drama he had in the past. It’s about here and now and the universal humanity of the character.”
“Generally, routine is considered something boring that you do automatically, without really being ‘present.’ To Hirayama, routine means beautiful processes you love doing and that give your life shape.”
With NPR’s Scott Simon, the director goes further, describing toilet cleaning as a “mythical job.” He said, “You see him work cleaning toilets, and so you easily reduce his job to being a toilet cleaner. But slowly, you realize the richness of his life, and you realize that cleaning a toilet is a strangely metaphoric job.”
Komorebi as a Visual Metaphor
Another metaphor Wenders uses is the Japanese concept of “komorebi.” This term describes the dancing shadow patterns created by sunlight shining through rustling leaves and swaying branches. Every day, during his simple lunch in the park, Hirayama takes pictures of a particular tree with an old Olympus film camera.
“Black-and-white pictures might seem the same to an inattentive viewer. Yet, of course, they are all unique,” says CineD’s Deikova. “The whole concept behind komorebi is that it can exist only for a moment. So, this original passionate hobby is probably the most suitable visual symbol for the main character’s attitude about life.”
As the director explained in another video featurette, photographing komorebi was a way of honoring the light and the trees. “[Hirayama’s] light photography became an act of saying ‘thank you’ to the light. And the simpler his life got, the happier he was. He became a monk without knowing it. I don’t think in his own thinking he was living the life of a monk — he was just living the life he started to like.”
Wenders says he doesn’t like to feel manipulated in the cinema and struck upon the ideal way to make the sort of film he likes in Perfect Days. He explained his philosophy in a video featurette.
“I hate it when I see a movie and in everything that happens, I see this is constructed. Or I’m reminded of [something] 10 minutes later that will have [story] consequences. I hate it when story becomes so obvious that it’s a construction.”
He even finds this in some of his own films, like The End of Violence and The Million Dollar Hotel. “I made movies where the little bit of story ruined the entire flow of credibility for me,” he says. “Sometimes tiny elements of stories can become giant disruptions.”
But with Perfect Days he strove to create a documentary of a fiction, finding that one of the ways to do this was to film without blocking or rehearsing.
“We basically gave up rehearsing after a while,” he said. “Our documentary approach is a little risky for actors. A lot of actors don’t feel secure if they cannot rehearse and know exactly what they’re supposed to do.
“In my fictional films, I am always so happy if I manage just in one shot here or there [that captures] something that is utterly real and only happens at the moment. I’m ready for a documentary moment to appear in science fiction. On the other hand, in my documentary films, I’m often trying to bring in a little element of fiction because I think that’s how we live. We live in reaction to things that actually happen around us, and we live as a reaction to things we imagine.”
He continues, “The stories that happen are very different categories than the story we impose, let’s say, on an actress in a movie. So Perfect Days became a wonderful mélange of documentary and fiction.”
His hit documentary feature Buena Vista Social Club, about a troupe of musicians in Cuba, is a case in point. The very fact of his making a film helped the world “rediscover” their music and gave them a platform to stage their first ensemble performance together at New York’s Carnegie Hall.
“I thought I was making a music documentary when I was filming a real life miracle. I know how in fiction and documentary things cross each other and you can never really define what it is. And that’s the beauty of it. In Perfect Days, I didn’t have to make that effort. It just happened that the routine of that man and his daily work and his going to work in the morning and driving a car and listening to music and coming home and taking a bath in the public bath and going to bed and reading.
“I always had to make an effort in fiction to make it feel like documentary and in documentaries to make them feel like fiction. But here it suddenly happened just on its own. Maybe we had created the conditions that enable fiction and documentary to go so well together.”
Yakusho’s performance is largely silent but he is given a soundtrack to his mind. Often, the film finds him driving the streets of Tokyo listening to the Velvet Underground, the Rolling Stones, and Nina Simone — a soundtrack of Wenders’ own favorite music from the ‘70s and ‘80s.
“The songs are very much part of the storytelling process,” Wenders tells A.frame, “to the point that we put them into the script when we wrote it. For instance, the lyrics to Nina Simone’s ‘Feeling Good’ were on the first page of the script. They were not even intended to be used in the film.
“For me, they described how I imagined this character, his philosophy, and his way of living. They were my prologue. It was only in the end that I realized they were the best way to end the film.”
Posted April 16, 2024
“Civil War:” The Camerawork to Capture the Chaos
TL;DR
Writer-director Alex Garland describes dystopian action movie “Civil War” as “a war film in the ‘Apocalypse Now’ mode.”
He sheds light on his unconventional filming style and how he crafted the film’s unique political tone by depoliticizing the reasons behind the American Civil War.
The film’s particular focus is on what happens when journalists are silenced and there’s a loss of shared truth.
Perhaps only an outsider could take the American Civil War of the 1860s and imagine what would happen if similar events tore apart the United States today.
British writer-director Alex Garland didn’t have to look far for inspiration: The January 6, 2021 mob attack on the Capitol was a vivid insurrection filmed live on TV in broad daylight. While these events are a thinly disguised template for the finale of his film Civil War, Garland seems less interested in apportioning blame to the political right or left than in asking why we might end up there again.
You could see similar events play out in Britain or any other country, he told an audience at SXSW after the film’s premiere. “Any country can disintegrate into civil war whether there are guns floating around the country or not,” he suggested, adding that “civil wars have been carried out with machetes and still managed to kill a million people.”
As much a road movie as a war film, Civil War offers an alternate reality about what happens when nobody listens to the other point of view. Its particular focus is on what happens when journalists are silenced and there’s a loss of shared truth.
“I’ve known a lot of war correspondents because I grew up with them,” Garland said in the same on-stage interview. “My dad worked [as a political cartoonist] on the Daily Telegraph. So I was familiar with them.”
Garland showed cast and crew the documentary Under the Wire, about war correspondent Marie Colvin, who was murdered in Syria. His lead characters are news and war photographers played by Kirsten Dunst and Cailee Spaeny, whose characters’ names echo those of acclaimed photojournalists Don McCullin and Lee Miller. Murray Close, who took the jarringly moving photographs that appear in the film, studied the works of war photographers.
“There are at least two [types of war photographer],” said Garland. “One of them is very serious minded, often incredibly courageous, very, very clear eyed about the role of journalism. Other people who have served like veterans are having to deal with very deep levels of disturbance (with PTSD) and constantly questioning themselves about why they do this. Both [types] are being absorbed and repelled at the same time.”
He represents both types in the film. While it is important to get to the truth — in this case, the money shot of the execution of the US President — he questions if that goal should take priority over everything else they come across in their path. At what point, Garland asks, should the journalist stop being a witness and start being a participant?
“Honestly, it’s a nuanced question, nuanced answer,” he said. “I can’t say what is right or wrong. There’s been an argument for a long time about news footage. If a terrible event happens, how much do you show of dead bodies? Or pieces of bodies? Does that make people refuse to accept the news because they don’t want to access those images? Or worse, does it make them desensitized to those kinds of images? It’s a tricky balance to get right.”
In this particular case, one of the agendas was to make an anti-war movie if possible. He refers to the controversial Leni Riefenstahl-directed 1935 film Triumph of the Will, which is essentially Nazi propaganda.
Garland didn’t want to accidentally make his own Triumph of the Will, he said, by making war seem kind of glamorous and fun. “It’s something movies can do quite easily,” he said. “I thought about it very hard and in the end, I thought being unblinking about some of the horrors of war was the correct thing to do. Now, whether I was correct or not in that, that’s sort of not for me to judge, but I thought about it.”
Garland establishes the chaos early, as Dunst’s character covers a mob scene where civilians reduced to refugees in their own country clamor for water. Suddenly, a woman runs in waving an American flag, a backpack full of explosives strapped to her chest.
“Like the coffee-shop explosion in Alfonso Cuarón’s Children of Men, the vérité-style blast puts us on edge — though the wider world might never witness it, were it not for Lee, who picks up her camera and starts documenting the carnage,” writes Peter Debruge in his Variety review.
To achieve the visceral tone of the action, Garland decided to shoot largely chronologically as the hero photographers attempt to cross the war lines from California to the White House.
After two weeks of rehearsals to talk through motivations and scenes and characters, Garland and DP Rob Hardy then worked to figure out how they were going to shoot it. He wanted the drama to be orchestrated by the actors, he told SXSW. “The micro dramas, the little beats you’re seeing in the background, are part of how the cast have inhabited the space.”
Spaeny offers insight into Garland’s “immersive” filming technique: “The way that Alex shot it was really intelligent, because he didn’t do it in a traditional style,” she says. “The cameras were almost invisible to us. It felt immersive and incredibly real. It was chilling.”
A featurette for the movie sheds light on Garland’s unconventional filming style, in which he describes Civil War as “a war film in the Apocalypse Now mode.”
While the A-camera was a Sony VENICE, they made extensive use of the DJI Ronin 4D-6K, which gave the filmmakers a human-eye perception of the action in a way that traditional tracks, dollies and cranes could not. They also bolted eight small cameras to the protagonists’ press van.
Aiming to balance both characters’ impulses while giving the audience a visceral sense of the danger, Hardy needed camera systems that were as flexible as possible, he recounted to IndieWire’s Sarah Shachat.
He found that having six Ronin 4Ds (one became a casualty of the shoot) allowed the camera team to get as close as possible to the perspective of the journalist characters in action sequences without needing to truncate or interrupt the action, Shachat notes.
“Since Ex Machina, we’ve very much set the precedent that would create these immersive environments and the cameras almost become secondary; the actors and everybody can walk into that environment and make it feel as authentic as possible. Which sounds like, well, wouldn’t that be a standard thing to do in all aspects of filmmaking? But surprisingly, it’s not,” Hardy said.
The smaller Ronin cameras allowed the DP and his team to switch between handheld and Steadicam work, as well as more composed shots, employing the visual language of both road trip and combat films.
“I could sit back on wheels if I needed to and have another operator in amongst the action and see things from a distance a bit more globally and make decisions about framing,” Hardy said.
“We were always working towards the idea that every single shot could be a still image if you went through each and every frame and picked that singular moment, and so the framing needed to be very important.”
For the film’s harrowing crescendo, a 15-minute siege on Washington, DC, Hardy employed a highly kinetic, verité approach. Emphasizing the journalists’ perspective as caught in the middle of the attack, it is a high-decibel, chaotic sequence, showcasing an array of practical effects, including speeding Humvees, bulldozing tanks, and nonstop gunfire.
“We were, to use a technical term, blowing shit up,” the DP tells Jake Kring-Schreifels at The Ringer. “It had to be that way. Everything was about creating this authentic environment.”
During pre-production, Garland and Hardy worked closely with production designer Caty Maxey and other department heads to map out locations and choreography, as well as which shots would be built by a VFX team. They then built a 20-foot, three-dimensional scale model of the city, labeling where and how each section of Washington, DC, would be filmed.
“The aerial bombing of the Lincoln Memorial, for example, would contain real second-unit shots of the city that visual effects teams would overlay with fiery destruction. But as the sequence swooped down to ground level, production would move to Stone Mountain Park, where Maxey was responsible for furnishing the foreground of a massive assault,” Kring-Schreifels details.
One major benefit of the scale model was that every creative department could use it to pinpoint the locations of tactical explosions, as well as to grasp the physical dimensions of the space and the general flow of the action.
“We would have many, many meetings like, ‘OK, so this vehicle comes in here, they’re gonna approach here, but these guys are going to stop them, so then this is going to look like that,’” Hardy recalls.
“I remember walking onto that set, and by the time we got to it, it honestly did feel like the eve of the final battle,” Hardy says. “Everybody knew what they were going to do.”
To Matthew Jacobs at Time Magazine, Spaeny likened the road scenes to a play, adding, “unlike theater, or even a typical movie shoot, Civil War changed locations every few days as the characters’ trek progressed, introducing constant logistical puzzles for the producers and craftspeople to solve.”
Dunst’s husband Jesse Plemons makes a brief appearance in the film, but commands the scene as a menacingly inscrutable soldier carrying a rifle and wearing a distinct pair of red sunglasses.
“I can imagine that people might read some kind of strange bit of coding into Jesse Plemons’s red glasses,” Garland says in A24’s notes. “Actually, that was just Jesse saying, ‘I sort of think this guy should wear shades or glasses.’ And he went out himself and he bought six pairs, and we sat there as he tried them on, and when he got to the red ones, it just felt like, yep, them.”
Two decades of digital camera technology and software development have given more people than ever access to the tools for telling stories on film. The contention now is that such prosumer, even consumer, gear is of such high quality that even A-list filmmakers are using it.
The latest talking point is the extensive use of the $5,000-$6,000 DJI Ronin 4D, an integrated camera, lens and four-axis stabilized gimbal, to shoot $50 million sci-fi action thriller Civil War.
The inexpensive price of the camera was not the reason director Alex Garland wanted to use it. As he explained to Ben Travis at Empire Online, “It self-stabilizes, to a level that you control — from silky-smooth to verité shaky-cam. To me, that is revolutionary in the same way that Steadicam was once revolutionary. It’s a beautiful tool. Not right for every movie, but uniquely right for some.”
It enabled DP Rob Hardy to shoot and move the camera quickly without using dollies or tracks and yet without it feeling too handheld.
Instead, the DJI Ronin 4D offered a distinctly human perspective. It was, notes Garland, “the final part of the filmmaking puzzle — because the small size and self-stabilization means that the camera behaves weirdly like the human head. It sees ‘like’ us. That gave Rob and I the ability to capture action, combat, and drama in a way that, when needed, gave an extra quality of being there.”
Gareth Edwards’ $80 million sci-fi feature The Creator was shot on the $4,000 Sony FX3 by Oren Soffer, guided by Dune’s Oscar-winning cinematographer Greig Fraser, for reasons of compactness and low-light capability.
While neither camera is certified by IMAX as an IMAX camera, both The Creator and Civil War were presented on IMAX screens because they used IMAX post-production tools and a sound design suitable for the giant format. Neither film might look quite as good as Dune: Part Two, which was shot on IMAX-certified ARRI Alexas, but the quality is nearly there.
And this is the contention of Jake Ratcliffe, technical marketing manager at camera rental house CVP. The gap in image quality between low and high-end cameras is closing, he argues, and the compromises you would previously have had to make with cheaper cameras are diminishing.
With image quality less of a differentiating factor, filmmakers have more and more choice over the tool for the job. RED originally designed the smaller and relatively cheap Komodo as a crash camera, but its light weight, small form factor and image quality matching its bigger brother, the V-Raptor, have seen it increasingly used on shows like Amazon Prime Video’s Road House.
Ratcliffe thinks these stories are showing that the process of filmmaking is changing. “The democratization of filmmaking equipment is going to allow more and more people to tell the story in a more engaging way than what would have been possible in the past. I think the industry will go a step further in this regard with Unreal Engine in the future too.”
Has camera technology using glass optics and digital sensors reached its natural peak?
There are some interesting cameras being used for a couple of upcoming Hollywood films. Steven Soderbergh is using the Sony a9 III global shutter mirrorless for Presence. The film is shot like a POV, entirely on a 14mm Sony photo lens with a gimbal. The a9 III is the first full frame… pic.twitter.com/NdTzOP6fhV
NAB Show Amplified: Sean Evans Has a Spicy IP Recipe
Sean Evans, co-creator and host of internet talk show “Hot Ones,” will divulge how he’s “Heating Up the Zeitgeist” on the Main Stage of the 2024 NAB Show.
Evans will share insights into First We Feast‘s IP and creative choices, his interview philosophy, and more at 11 a.m. (PT) on Tuesday, April 16 during a conversation with NAB VP of Content Design & Development Josh Miely. The session is open to all attendees; you can register to attend for free with code AMP05.
Here, he answers questions from NAB Amplify’s Emily M. Reigart about how “Hot Ones” fits into the lineup of modern talk shows and what it takes to make everything sizzle on camera.
Eater calls “Hot Ones” “a talk show for the 21st century” and also references its origins in your “stunt journalism” phase. How do you think about the show’s place in the Western canon?

I think “Hot Ones” is the internet’s version of the classic late night talk show that I grew up watching. We have a unique format with spicy wings, but the DNA of the show is rooted in broadcast tradition. “Hot Ones” straddles the line between the familiar and the novel; it’s both mainstream and esoteric at the same time.
I think Ricky Gervais described it best when he called “Hot Ones” “a mix between ‘Charlie Rose’ and ‘Jackass.'”
I think of “Hot Ones” as a shooting star in the constellation of pop culture.
While you clearly don’t mind making your guests physically uncomfortable, you have said in the past that the secret to a good interview is finding what guests want to talk about. Do you think that approach is similar to how Howard Stern and David Letterman would conduct these interviews? I’ve heard they were part of the inspiration for the show.

The people who are remembered for truly mastering the art of the celebrity interview are those who understand that the audience needs to see a great show. That’s show business. Every talk show host of note understands that, so in that way we all have the same North Star.
I’m of the opinion that an interview is dependent on the generosity of your guest, but I wouldn’t say that’s necessarily true of the living legends you mention in your question. We all have unique styles, but the direction we’re pulling in is all the same.
Tell me about the kit and crew required to make the show. How many cameras, mics, etc.?

We use five cameras (Sony FS7s, I believe), which includes a wide shot, and a pair of cameras on me and the guest. For sound, we wire and use boom mics on both sides of the table. There’s a lighting grid as well, but otherwise, just two trays of wings and, of course, the hot sauce lineups.
How many people are involved in making an episode?

On a shoot day, there’s usually about nine to 10 people on set between the camera/sound crew, Dom and Victoria producing, and myself. But it takes a village, from booking the show, editing it, selling it, etc. The brand has gotten so big; there are many more hands involved now than when we started.
“Hot Ones” is now one of six shows First We Feast makes. What qualities make a show a good fit for the brand?

First We Feast is a brand at the intersection of food and pop culture. The best way I like to describe our ethos is “dumb stuff for smart people,” and then there has to be some sort of food angle. Naturally.
And if you could switch to a different FWF show, which would you choose?

Back in the day, we had a host named Mikey Chen who did a whole show about ice cream. After eating all these hot wings over the years, I think an ice cream show would be a nice pivot for me.
You’ve now made 23 seasons of “Hot Ones,” eating more than 3,000 chicken/vegan chicken wings in the process. How has the show evolved since 2015, and how have you changed as a host/interviewer?

We definitely take the interview a lot more seriously now than when we first started, but otherwise the show has mostly stayed the same. From the format to the set, and even the bald guy hosting it, we’re nothing if not consistent. The fundamentals of making the show have gone mostly unchanged for the nine years we’ve been churning out episodes.
Decentralized Pictures: Rethinking the Film Business Model, From Script to Screen
TL;DR
Leo Matchett, co-founder and CEO of Decentralized Pictures, says the blockchain has the potential to revolutionize the entertainment industry.
Decentralized Pictures is a blockchain-based platform where filmmakers can submit movie pitches and pay a submission fee in the project’s native token, FILMCredits. It was launched in 2021 by Matchett, American Zoetrope’s Michael Musante and Roman Coppola.
The writers’ and actors’ strikes last year offered clear evidence that the current operating model in Hollywood is not sustainable. Could blockchain technology address the challenges by opening access to independent filmmakers and sharing reward more equitably?
“The blockchain has the potential to revolutionize the entertainment industry,” says Leo Matchett, co-founder and CEO of Decentralized Pictures. “It can help create a more transparent and equitable industry, and it can also connect fans with their favorite artists in new and innovative ways.”
Decentralized Pictures, a blockchain-based platform where filmmakers can submit movie pitches and pay a submission fee in the project’s native token, FILMCredits, was launched in 2021 by Matchett, Michael Musante, VP of production and acquisitions at American Zoetrope, and Roman Coppola, with $50,000 in documentary funding from The Gotham Film & Media Institute. Since then, it’s been helping to transform the future of independent cinema.
“We are taking a novel approach to providing opportunities to filmmakers, and we’re relying on audiences to tell us which content and which filmmakers are most deserving,” explains Matchett of their Web3 finance platform.
Rather than the retrospective, data-driven approach studios use to decide which projects to greenlight, DCP polls its community of members in real time about which of the projects submitted to it should receive further finance or support. Members, or “reviewers,” give each project a ranking score against metrics like script, characters, plot lines and social impact.
“This is done on blockchain as a peer-to-peer payment between the submitter and the reviewer,” Matchett says. “So it is literally the opposite of Kickstarter. A person who is looking for financing pays a fee into a smart contract that dynamically pays reviewers. They are peer-to-peer paying for peer review, and the data derived from those payments and the incentivized behavior of review is what indicates the most deserving filmmaker to move forward with. All the data is preserved, auditable and immutable on the chain, making it a fair and transparent way of deciding who should get opportunities to tell stories.”
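Matchett’s description maps onto a simple escrow pattern. The sketch below models it off-chain in Python purely for illustration; the flat fee split, the reviewer names and the append-only ledger are assumptions, not DCP’s actual FILMCredits contract.

```python
from dataclasses import dataclass, field

@dataclass
class ReviewEscrow:
    """Toy model of a submission-fee contract that pays reviewers."""
    fee: float                                    # fee paid in by the submitter
    scores: dict = field(default_factory=dict)    # reviewer -> score
    ledger: list = field(default_factory=list)    # append-only event log

    def submit_review(self, reviewer: str, score: float) -> None:
        # each review is recorded and later compensated from the fee
        self.scores[reviewer] = score
        self.ledger.append(("review", reviewer, score))

    def settle(self) -> dict:
        # split the submission fee among reviewers (peer-to-peer pay);
        # a real contract could weight payouts by review quality
        share = self.fee / len(self.scores)
        payouts = {reviewer: share for reviewer in self.scores}
        self.ledger.append(("payout", payouts))
        return payouts

escrow = ReviewEscrow(fee=100.0)        # submitter funds the "contract"
escrow.submit_review("alice", 8.5)      # reviewers score script, plot, impact...
escrow.submit_review("bob", 7.0)
print(escrow.settle())                  # {'alice': 50.0, 'bob': 50.0}
```

The key property is the one Matchett emphasizes: both the reviews and the payments land in one auditable record, so the ranking that emerges can be inspected after the fact.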
He says the current system is limiting opportunities for filmmakers. “It’s a lot about who you know and who you get in the room with. It’s about access to money and your proximity to the industry or your past experience. However, Decentralized Pictures is taking unsolicited material, which we outsource to the world to help us with the decision making. It is audience-driven content in which the audience determines which content they should be presented with.”
DCP’s approach is paying off. They have already produced and financed several documentaries and a pair of narrative features, including Calladita, a satire directed by Miguel Faus, which utilized NFTs (digital collectibles) to finance production and increase audience participation.
The thriller Cold Wallet directed by Cutter Hodierne premiered at SXSW 2024 and was judged by director Steven Soderbergh to be a “smart, spiky, off-center take on the vigilante genre.”
Holy Smokes at NAB Show
Its most recent project is the short comedy film Holy Smokes, created by Fiszman and Ares and starring Suvari, all of whom will discuss their involvement in the project at NAB Show following a special screening of the film.
“Filmmaker Kevin Smith read the top scripts that were submitted to us and curated by the community, and with his producer [Liz Destro] they selected Holy Smokes, which was the number one rated script.”
At a time when studios are concentrating on very high-budget, franchise-led, merchandisable tentpoles “built on known IP,” they are locking out indie filmmakers, says Matchett.
“What has also contributed to the decline of indie cinema is that streamers have significant power in the industry,” he says. “With that comes no real need to make offers to purchase content that has any reflection of what the budget was. You can spend $10 million making a film and get a million dollars from a streamer for an exclusive two-year deal, but you are stuck with a tough decision. Your film could get shelved and never seen by anyone, or if you don’t take their money you risk a net loss. It’s a difficult distribution environment right now.”
He continues: “There used to be a secondary release window with DVD and home video sales, but that has pretty much been destroyed by streaming. Part of the reason there are larger budget tentpole films is to mitigate the risk of needing to have a significant portion of the revenue come through box office. You don’t have the secondary punch of revenue that you used to get. Instead, you are forced into a deal with a streamer.”
By highlighting their experience on the DCP platform, the panelists will shed light on the transformative shift towards greater democracy within the film and TV industry.
“As the film industry continues to evolve,” Matchett says, “discussions like these pave the way for a more accessible and inclusive future where the voices of audiences hold considerable sway in determining the success and recognition of films and filmmakers.”
“Star Trek” and the Strange New Worlds of Spatial Computing
TL;DR
Once confined to the realms of Star Trek and the visionary mind of its creator Gene Roddenberry, the holodeck is edging closer to reality, according to Jules Urbach, founder and CEO of cloud graphics company OTOY.
Growing up in Los Angeles with the Roddenberrys as neighbors, Urbach is best friends with Gene’s son Rod, the CEO of Roddenberry Entertainment. His cloud graphics company OTOY has teamed with the Roddenberry estate and Apple to deliver new 3D and interactive experiences for users of Apple Vision Pro.
On Tuesday, April 16 at 3:00 PM at NAB Show, Urbach will share the making of “The Archive,” a multi-decade collaboration between OTOY and Roddenberry Entertainment that aims to capture Gene Roddenberry’s lifetime of works with historical accuracy and holographic immersion.
Urbach will also reveal long-term plans to build the tools that will enable spatial content creation experienced through headgear and, eventually, as holograms in a holodeck.
Imagine stepping into the Holodeck, a concept once confined to the realms of Star Trek and the visionary mind of its creator, Gene Roddenberry. That future is edging closer to reality, according to Jules Urbach, founder and CEO of OTOY, who will showcase the latest developments in the technology at NAB Show.
These breakthroughs are “a major step towards realizing that goal,” Urbach says. “This is an exciting inflection point, and we are just at ground zero.”
Urbach’s cloud graphics company has teamed with the Roddenberry estate over many years and, more recently, with Apple, to deliver new 3D and interactive experiences for users of Apple Vision Pro, and will share their findings at the NAB Show session “Boldly Go: Star Trek’s Voyage in the Age of Apple Vision Pro.” Part of the Core Education Collection: Create Series, the session will be moderated by entertainment industry futurist Ted Schilowitz from 3:00-4:30 PM on Tuesday, April 16 in Rooms W210-W211.
Urbach grew up in Los Angeles with the Roddenberrys as neighbors, and he is best friends with Gene’s son Eugene “Rod” Roddenberry, the CEO of Roddenberry Entertainment. Urbach says it has been a long-term vision of his not only to recreate Star Trek experiences with full visual immersion, but to build the tools that will enable spatial content creation experienced through headgear and, eventually, as holograms in a holodeck.
“One of the fundamental messages I want to talk about at NAB is that there is a long term plan to find a way to create the tools that will eventually render a holographic display inside of a room — and we will get there,” Urbach says.
Urbach says he was inspired in starting OTOY by Jon Karafin, who has vast experience in holographic engineering, including at specialist camera developer Lytro and as CEO of Light Field Lab. Karafin — who will also appear on the panel at NAB Show — is already commercializing panels capable of holographic display. Urbach was so impressed that OTOY invested in Light Field Lab.
“You could cover a wall with these panels today as a step toward the holodeck. This is the future,” Urbach says. “Costs will come down and technology will advance.”
At NAB Show, Urbach and Roddenberry will present OTOY concept videos and documentary films from recent Roddenberry Archive releases, which have been remastered specifically for the Vision Pro. These include unique interviews with George Lucas and Stan Lee exploring Gene Roddenberry’s influence on Star Wars and Marvel.
“There is nothing else like it out there,” Urbach says of the Vision Pro. “The quality of ray tracing and resolution is incredible. And this is just Version 1. Just think of where we are now with the iPhone 15. There is so much potential.”
He points to the Universal Scene Description format as an important standard for building the ecosystem of spatial content creation. OTOY is a member of The Alliance for OpenUSD — a group formed by Pixar and Apple — which is working to standardize the USD format across the industry to help artists and developers create and deploy complex real-time 3D experiences at scale.
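For readers unfamiliar with USD: the format is scriptable today through Pixar’s open-source Python bindings, installable as the usd-core package. Here is a minimal sketch of authoring a simple spatial scene; the file name and geometry are invented for illustration, not drawn from OTOY’s pipeline.

```python
# pip install usd-core  (Pixar's open-source USD Python bindings)
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("scene.usda")      # human-readable USD layer
root = UsdGeom.Xform.Define(stage, "/Root")    # transformable group prim
stage.SetDefaultPrim(root.GetPrim())

sphere = UsdGeom.Sphere.Define(stage, "/Root/Sphere")
sphere.GetRadiusAttr().Set(0.5)                # hypothetical hero prop
UsdGeom.XformCommonAPI(sphere.GetPrim()).SetTranslate(Gf.Vec3d(0.0, 1.0, 0.0))

stage.Save()  # any USD-aware tool can now open and compose this layer
```

Because the same .usda layer composes identically across renderers and devices, it is the kind of neutral interchange the Alliance for OpenUSD is trying to standardize.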
Featured in the Roddenberry Archive’s native spatial content for Apple Vision Pro are 1-meter by 1-meter light field cubes displayed at 90 frames per second in 4K resolution per eye, pushing the boundary on visual fidelity.
“We can scan an actor in 3D, storing that spatial content in a mezzanine format then render at any resolution. This could be to the Vision Pro or a holographic display; the output is different, but the fundamentals are the same. You can move through the video seeing light and reflections with fully path traced real time lighting, including dynamic glossy reflections and shadows, bringing new levels of photorealism to immersive content. This is what we have cracked.”
Urbach recalls interviewing his friend’s father for the school newspaper at age 12. “It’s been a dream come true to digitize all of Gene Roddenberry’s inspirations, interviews and concept artwork into the archive,” he says.
“There are things in the archive that blew me away, such as the notes that Gene shared with [science fiction writer] Arthur C. Clarke.”
When the original television series was canceled in 1969, Roddenberry lobbied Paramount Pictures to continue the franchise through a feature film. He was encouraged to pursue this by Clarke, who had seen his own short story made into 2001: A Space Odyssey by Stanley Kubrick in 1968.
“Without the inspiration and encouragement of Arthur C. Clarke, the continuation of Star Trek might not have happened,” he explains.
Urbach, Roddenberry and Karafin will be joined by Richard Kerris, NVIDIA’s head of Media & Entertainment, to highlight how diverse technological advances — ranging from light field displays and virtual production to generative AI and decentralized computing — are transforming the media and entertainment industries.
When considering where we are today with spatial entertainment experiences and discussions about the holograms — which are no longer hypothetical — it is worth recalling Arthur C. Clarke’s Third Law: “Any sufficiently advanced technology is indistinguishable from magic.”
Caleb Ward: The Possibility and Probability of AI Filmmaking, Part 1
TL;DR
Curious Refuge CEO Caleb Ward is a strong believer in generative AI’s potential to democratize creativity. He and his wife/COO Shelby founded the platform to offer education and community to AI storytellers and aspiring AI storytellers.
Read the first part of Ward’s interview online here; check out his thoughts on genAI controversies in part two; or watch the full conversation, below.
NAB Show “has had a very important role in my career,” says Curious Refuge CEO and cofounder Caleb Ward.
Ward says he always looks forward to walking the show floor and “having conversations with people that work for all of these incredible brands” because the folks in “the booths are really knowledgeable, and they actually really want to help you.
“So I have countless conversations that I’ve started up with people at the NAB Show, and that’s echoed into either business opportunities or just creative questions that get answered.”
At this year’s NAB Show, Ward says, “I want to have conversations with a lot of these brands about how they’re thinking about AI, and how that integrates into everything from the types of features that they’re implementing in cameras and broadcast equipment all the way to, of course, the software manufacturers.”
He adds, “But I would be lying if I didn’t say that another big part of NAB that I’m really excited about is just the side, split-off parties that happen. So funny to just like, you know, be in a place where everybody understands just that nerdy niche that you live in.”
This year, Curious Refuge will host a party for the AI filmmaking community at the HyperX Arena on April 15. NAB Amplify readers can purchase their tickets at 50% off using code NAB50.
At NAB Show, Ward says, “It feels like your guard is down. And you’re like talking in, like, this language that you don’t normally get to talk with other people.”
“I can’t imagine not trying to make it out every year because it really is an essential conference for us,” he explains. “If you guys run into us at the show floor, please say ‘hello.’”
Caleb Ward: The Possibility and Probability of AI Filmmaking, Part 2
TL;DR
Curious Refuge CEO Caleb Ward is a strong believer in generative AI’s potential to democratize creativity. He and his wife/COO Shelby founded the platform to offer education and community to AI storytellers and aspiring AI storytellers.
Read the first part of Ward’s interview online here, or watch the full conversation, below.
In addition to creating educational curricula for AI users, Curious Refuge offers a weekly newsletter and web series, hosted by CEO Caleb Ward, to help followers keep up with the latest and most important developments in this space. It’s interesting to hear his thoughts on some of the biggest conversations about generative AI.
First, it should surprise no one that Ward isn’t overly worried about AI taking over Hollywood.
“I don’t think it’s a zero sum game,” Ward says. “I think that we’re going to see completely new AI assisted stories, and I think we’re going to have the Hollywood stories be elevated in quality because of the use of artificial intelligence.”
However, at Curious Refuge, “We believe that that transition time can be, you know, challenging,” Ward says, even while people embrace new tools and opportunities.
It’s true that “storytellers and creative people now have creative potential and the ability to tell more stories, bigger stories, and really, to take the power away from just studios commissioning you.”
For Ward, “It’s been really interesting to think about how these tools can transform each aspect of the pipeline.”
Ethics and Industry Impacts
But he acknowledges that “it’s very important for people to be having conversations about the impacts of AI tools” on the industries and workers they are aimed at.
In addition to recognizing the law of unintended consequences, Ward says, “It is very important to use these tools in an ethical way.”
He says, “We have a digital language in the ethics of using tools like Photoshop right now. And that’s because we’ve had a lot of experience to know the pros and the cons, and to know, as a society, what legislation and laws have popped up as it relates to using those tools. We’ll see the same thing pop up with AI.”
While the dust may not have settled yet, Ward says, “We think you need to start with ethics before you begin creating. And you need to find out where you personally live as it relates to using these tools. Some people — they won’t even use tools like ChatGPT because of the ethical concerns that they have. And I totally understand their perspective on that. But I think it’s really important for everyone to arrive at what that means for them.”
Ward knows that the lines he draws won’t be the same ones others draw. But he isn’t starting from scratch when drafting the Curious Refuge framework. “We use some visual, I guess, ethical cues from society, some cues that we see in other places, as kind of a litmus test for how we approach our AI tools.”
For example, they’ve released some parody trailers and looked to SNL for guidance as to how to ethically play in this space of representing actors and IP.
And on the flip side, Ward says, “As it relates to my work being ingested by an AI and someone being able to, you know, spit out [AI-generated] work that seems like my work, I’m fully expecting that to happen. And it did happen” when the Wes Anderson-esque video went viral.
His feeling was “Oh, that’s cool. I got to inspire somebody to create on their own.”
After all, “It’s always a back and forth with AI… You have to treat it like a creative assistant. It’s not a creative replacement. It doesn’t have taste,” Ward notes.
However, Ward says, “I know that there are a lot of IP holders in Hollywood that don’t share that same vision of a future in which storytelling is collaborative, and not just owned by certain entities. But I think that for me personally…that’s how I’m viewing AI for myself.”
Setting up the Guardrails
In addition to personal responsibility, Ward is adamant that AI toolmakers need to do their best to get it right and to understand how their actions are likely to impact the marketplace.
“I think really, the responsibility needs to go to these companies,” Ward says. “And they need to be mandated to do their ethical research around how they release these tools.”
He believes this should be formalized, as well. “These tool companies need to have ethics teams that are thinking about the bad ways in which people can use their tools and the negative impacts that it could have on society. But even then, they’re not going to get it right.
“Google released their image generator and totally botched it. Google has a huge ethics team, they really think about their steps before they take them. If they can’t get it right, I don’t really see a scenario where the companies are going to get it right before they release.”
But that doesn’t excuse them from making an effort. And some companies certainly are.
Ward points out that OpenAI likely has a Sora-equivalent AI video generator waiting in the wings. He explains, “They probably have the ability to release that right now. But they’re intentionally not doing that because they want to think about the long term impacts and how they can roll it out in a way that supports people.”
For those keeping an eye on AI regulations, Ward says, “I do not think that it is even reasonable to think that legislators would have any idea as to what is happening. I think they need to be talking with creators and with the people developing these tools, and they need to be informed in the process. I do like that we have some legislation and mandates that are requiring pitfalls and red flags to be talked about and addressed before tools are released. I think that’s very, very important.”
Nonetheless, he says, “I think there are some proactive things you can do, like the EU’s recent laws related to AI. But the government is really in a Catch-22 because if you start pumping the brakes on the development of AI tools, other countries that are less inclined to slow down the development are going to take over, and we know that a week in AI is like a year in previous industries. And so the economic cost that it could play out to your society could be just super detrimental.”
Recalling the infamous original red flag legislation, Ward says, “You don’t want to end up like the car manufacturing industry in the UK. But you also don’t want AI tools to just come through and decimate existing industries and change the way that we do things.”
How Hollywood Is Handling All This
Utilizing generative AI, Ward says, involves “a restructuring of the way in which you understand the process of storytelling, but ultimately, I think it’s going to result in bigger stories, more stories, more diverse stories, and I think that is a good thing. But there are going to be growing pains.”
Specifically for the Hollywood studios, the obstacles are primarily “legal challenges and copyright challenges. So if a studio creates a film, using artificial intelligence, most studios are under the impression with the legal precedent that’s been set up to this point that they don’t actually own that film.”
However, Ward says, “I really don’t think the precedent is really going to continue for you to not truly own content that’s created with the assistance of artificial intelligence.” He notes that ruling was related to the creation of a wholly AI-generated image library, rather than utilizing AI tools and the back-and-forth creative prompting process.
Currently, Ward points out, all of the big studios have R&D teams dedicated to learning how to use AI, but the majority of the creative teams are banned from using it, with the exception of the studios that have created custom models. (“They’re not great at this point,” Ward says.)
He thinks, “The industry is going to have to answer those questions, and there’s gonna have to be legal precedents that make them feel more comfortable — or studios, just probably a smaller studio is going to be brave and release their own thing, and it’s going to be a big commercial success. And the big studios are going to be like, ‘OK, here we go!’”
In the interim, Ward says, “Studios in the larger industry do themselves a disservice by not exploring these tools.”
And once studios decide to dive in, new challenges will have to be tackled.
As we all know, “Workflows with AI, they change very frequently. And the quality of the AI generations is really improving day after day,” Ward says. “So how do you build a pipeline whenever you have these innovations popping up like this? And the answer, usually, in Hollywood, is you lock in your tech.”
But will that be feasible or even practical with genAI tools? Unclear.
“I think there’s huge opportunities for those that embrace it,” Ward says. “But if you’re sitting back and you’re just too scared to touch it, or you know that that paradigm shift causes paralysis, then I would say, really think about these tools and just go into it with a little bit of an open mind to seek out what creative opportunities there might be there for you.”
The Method to the Madness in Guy Ritchie’s “The Gentlemen”
TL;DR
DP Callan Green talks to NAB Amplify about lensing “The Gentlemen,” Guy Ritchie’s action-comedy crime series for Netflix.
A boxing scene presented a major lighting challenge, requiring a 360-degree rig and precise control as the actors moved around the ring.
Green recounts how he got his start climbing the cinematography ladder by cleaning Peter Jackson’s glasses on “Lord of the Rings.”
Guy Ritchie’s signature crime caper genre wouldn’t be complete without some boxing, and scenes in Episode 6 of his Netflix series The Gentlemen presented one of the biggest challenges for the camera team on the show.
The location was a venue called The Magazine, which overlooks Canary Wharf in London. “It looks awesome but it is also just an event space with massive windows that we had to turn into a big boxing arena using a 360° lighting rig,” says DP Callan Green, ACS, NZCS, who shot this and three other episodes.
The New Zealand-born filmmaker is now established as a main unit DP, having begun his career as a clapper loader and assistant camera on Peter Jackson’s Lord of the Rings trilogy.
“I like to backlight or sidelight boxing scenes as much as possible but what made it tricky here was that we wanted to get our fight camera operator right in amongst the action on a 21mm wide lens,” he tells NAB Amplify.
“With the actors dancing around inside the ring it seemed almost impossible to keep the lighting looking good and consistent unless there was some way of rotating the percentage of the values of the LED backlight as the boxers move around in relation to where the camera is.”
The idea struck Green just a day before the lights were to be rigged, but gaffer Jack Powell and desk operator Charlie Stallard weren’t fazed.
They pixel-mapped the moving lights around the ring and established the best lighting levels for the fighters, then created a greyscale blend in Photoshop to overlay the pixels.

“This Photoshop image would then rotate over the pixel map itself, which created the smoothest possible dimming whilst rotating the light levels during the fight,” says Green.
There were nearly 2,500 instances and pixels within the lamps reacting to the change in position of the fighters. “This gave us the ability to continuously rotate as the fighters did for several rotations.”
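The grandMA-style pixel map and Photoshop blend can’t be reproduced here, but the underlying idea (rotating backlight levels around a ring so the fixtures behind the subject, relative to camera, stay brightest) can be sketched in a few lines. Fixture count, falloff curve and angles below are invented for illustration, not taken from the show’s rig.

```python
import math

def ring_levels(n_fixtures, camera_angle_deg, softness=2.0):
    """Dim each fixture on a circular rig by its angle from the
    backlight direction (directly opposite the camera).

    Returns per-fixture intensity in 0..1; raising `softness`
    narrows the bright zone for a harder back/side-light look.
    (A toy model of the rotating pixel-map blend, not the show rig.)
    """
    back = math.radians(camera_angle_deg + 180.0)   # backlight axis
    levels = []
    for i in range(n_fixtures):
        a = 2 * math.pi * i / n_fixtures            # fixture position
        # cosine falloff, clamped so fixtures facing the camera go to 0
        x = max(0.0, math.cos(a - back))
        levels.append(x ** softness)
    return levels

# As the camera tracks around the ring, the bright zone rotates with
# it: smooth continuous dimming, no visible steps.
for cam in (0, 45, 90):
    print(cam, [round(v, 2) for v in ring_levels(8, cam)])
```

Scale that per-fixture value across thousands of mapped pixels and you get the continuous rotation Green describes.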
Virtual masks were added within the levels around the ring to counteract any camera shadow. The opposing sides of the venue were lit symmetrically for aesthetics, and all the lights had full-color control.
Around 500 lights were rigged in total, including a truss rig directly above fighters that was the same size as the boxing ring and had three separate fixtures totaling 140 lamps.
“Impression X4 Bars were used to give a general ring backlight and ambience as the boxers moved around,” Powell explains. “Robe MegaPointes were deployed for the cage feel for the ring entrance and Fusion X-Par 12 to push lights out to the crowd during the fight.”
The trusses above the crowd had a mixture of P5 wash lights to augment the ambience. Ayrton Perseo wash lights enabled background interactive lighting.
“For the ring walk we went with Chauvet Strike 4 for the background and Impression X4 on the floor, then we programmed different effects and colors to suit individual ring walks,” Powell adds.
A set of Robe Spiiders backlit the boxers in the ring from around the stage edges. They created a program to track the backlight and kill the frontlight as the fighters moved around the ring.
“We also added MegaPointes on the perimeter to allow us to flare the lens on the ring walks whenever it suited,” Green says.
“That was our biggest set to manage lighting wise as we only had a short window to rig the location. We had four days total in and out, using two rigging crews over two days, and then a pre-light day with 20 sparks and 12 riggers. Plus a derig. This was a military operation led by Farrow and installed in 24 hours.”
Inspired by the director’s 2019 movie of the same name, featuring a new cast of characters, The Gentlemen is set in the same heightened and often hilarious world of aristocrats and gangsters; one with the breeding and the birthright, the other with the brawn and the belligerence.
Although mostly real-life backdrops, the production also used Alperton Studios to create some interiors — including a council flat in Croydon, South London where Eddie (Theo James) gets physical with a goon.
Lead DP Ed Wild had created a four-page PDF detailing the show’s look and feel. Green also got to watch early cuts of the first two episodes. “I was pretty scared, watching those, since the bar was set high,” he acknowledges.
He watched Ritchie’s original film as well as Snatch and Lock Stock and Two Smoking Barrels, pulling out a few shots to “nod to” in the series. Sexy Beast, Rocky, Creed and Amelie were other cinematic reference points.
“We were given quite a lot of scope to do what we wanted as long as we didn’t get too crazy,” he says. “I collected as many stills for reference as I could find that I felt resonated with the color and tone of what we were about to do.”
The series is shot at 6K using a Sony VENICE camera equipped with Tokina Vista Primes, typically with a quarter-black satin filter, “which takes the edge off [the resolution sharpness] a little bit and gives the highlights a bit of halation.”
Sony FX3s were also used to cut into the A camera and were placed on props like guns, whisky bottles, pigeon cages, and a traveler’s caravan.
All episodes worked from one show LUT, which was versatile enough to work for night exteriors and day interiors. “When I first started working with it, it freaked me out a little because it was quite heavy and deep in the blacks and darker areas. That paid off in the long run because you had so much more information in post.”
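For readers curious what “working from one show LUT” means mechanically: a LUT is a lookup table sampled per pixel, mapping the camera’s log image toward the graded look. A minimal sketch of applying a 3D .cube LUT with trilinear interpolation follows; the file path is a placeholder, and real grading tools do this on the GPU rather than in numpy.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def load_cube(path):
    """Minimal .cube parser: returns (size, size x size x size x 3 table)."""
    size, rows = 0, []
    for line in open(path):
        parts = line.split()
        if not parts or parts[0].startswith("#"):
            continue
        if parts[0] == "LUT_3D_SIZE":
            size = int(parts[1])
        elif parts[0][0] in "0123456789-.":
            rows.append([float(v) for v in parts[:3]])
    # red varies fastest in .cube files, so reshaped axes come out (b, g, r)
    return size, np.asarray(rows).reshape(size, size, size, 3)

def apply_lut(image_rgb, size, table):
    """Trilinearly sample the LUT for a float RGB image in [0, 1]."""
    grid = np.linspace(0.0, 1.0, size)
    interp = RegularGridInterpolator((grid, grid, grid), table)
    coords = image_rgb[..., ::-1]          # reorder RGB -> (b, g, r)
    return interp(coords).astype(np.float32)

# size, table = load_cube("show_lut.cube")   # hypothetical path
# graded = apply_lut(log_frame, size, table)
```

Keeping the LUT “heavy and deep in the blacks,” as Green notes, is a grading choice the table encodes; the sampling math stays the same.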
Following his experience shooting two episodes for director Eran Creevy, Green jumped at the chance to continue shooting Episodes 7 & 8 for director David Caffrey.
“Having just come from Masters of the Air and gone on to work on Gangs of London Season 3, I feel very lucky to have had three awesome jobs in a row,” he says.
Green grew up in a suburb outside of Wellington, NZ, and began taking stills when his mom bought him a camera. When his brother got into acting, his interest in filmmaking was sparked.
In 1993 he helped shoot a commercial for a peanut butter brand “voted the year’s worst ad in New Zealand,” he smiles, but on set he encountered an ARRI film camera for the first time. When he asked the key grip how he could break into the industry, the advice that came back was “A lot of hard work, mate.”
Green studied photography at high school and following graduation got a job as a video split operator (now known as a VTR op). Shortly afterward he found himself part of the rapidly growing local filmmaking scene jumpstarted by Weta and LOTR.
“Peter Jackson used to get me to clean his glasses for him. He was really lovely to me. He’s one of many people I’ve met along the way who took me under their wing.”
He won a place at Sydney’s prestigious national film school, leaving in 2003 with a master’s in arts and cinema, and never looked back. Based in London since 2015, Green’s work has included second unit work on Christopher Robin, The Witcher, F9 and Fast X, as well as on all nine episodes of Masters of the Air. He also recently served as DP on four episodes of the latest season of BBC crime drama Guilt.
Shadow and Light: Cinematographer Robert Elswit’s New Noir for “Ripley”
All eight episodes of Netflix’s limited series, adapted from Patricia Highsmith’s novel The Talented Mr. Ripley, are directed by writer and creator Steven Zaillian, and they’re all lensed by cinematographer Robert Elswit.
“This singular vision gives Ripley both an impressive aesthetic cohesion and a radical kind of ambition,” says Vanity Fair’s David Canfield, who has the filmmakers dissect a dozen shots from the psychological thriller.
“When we had a chance, we tried to create a kind of chiaroscuro, a feeling of very strong shadows and very strong highlights,” Elswit explains. “I kept thinking while we were shooting, ‘I’ll fix all this later.’ And we didn’t fix any of it.”
Ripley leans into the noir of the original story in a more literal way than the 1999 feature adaptation starring Matt Damon. “What follows is a dizzying saga of lust, murder, impersonation, and deception, all captured in radiant black-and-white,” says Canfield.
The story is set in Italy, including in Naples and the Amalfi Coast, and the locations were a key part of the look of the 1999 movie.
“I knew from the beginning that I wanted to have this high contrast film-noir style,” Zaillian says. “We didn’t want to do anything that was familiar to us… I didn’t want to make a pretty travelogue.”
Elswit says that lead actor Andrew Scott has such an expressive face, “it dominates the series in a way. In all the different lighting setups where we did medium close-ups and tight close-ups, it was always fun to find an interesting way of creating contrast and shadows on his face.”
For lighting co-star Dakota Fanning, photographs of Grace Kelly served as inspiration. “It’s this hot, bright contrast between light and dark,” Elswit explains of one shot of her in a police station.
A lot of the show’s action takes place in an elevator. “It’s a symbol of dread for Tom Ripley when people come up in this elevator,” Zaillian says. “It’s a very important location for us. We shot it, basically, every way you could: from inside, from outside, from low, from high. But I had something very specific in my mind. We reached a point where I started to see ways of shooting this location in a way that could be really fascinating, with this open staircase.”
Other classic noir lighting shots included looming giant shadows cast onto a wall, recalling Orson Welles’ entrance in The Third Man.
They used half lighting to evoke the idea that Ripley is two people almost all the time. In other shots Ripley is in total silhouette: “You know exactly what he’s thinking without seeing his eyes,” Zaillian says.
They also used the texture of buildings and cobblestones in ways that have been done since the 1920s. Zaillian added, “It doesn’t look nearly as interesting, by the way, in color, it just doesn’t.”
AR, VR and MR Storytelling: It’s a Whole Different Visual Language
TL;DR
Spatial storytelling has yet to take off, but advances in hardware and trailblazing storytellers like Emmy Award-winning immersive director Michaela Ternasky Holland are changing that.
“Nobody has it figured out,” Ternasky Holland says, noting that the industry is still in its early days and that people are still making mistakes as well as good work.
She says the “beauty” of XR storytelling is that it can approximate the embodiment of day-to-day life in ways that traditional mediums are unable to replicate.
The rise of virtual reality storytelling is inevitable because human beings are innately tuned to spatial, 360-degree, three-dimensional experiences. Slowly but surely, the hardware and the development ecosystem are catching up with the storytelling possibilities already being explored by trailblazers like Emmy Award-winning immersive director Michaela Ternasky Holland.
“Nobody has it figured out,” she says. “It’s early days for the medium and people make mistakes, as well as good work. But because the hardware keeps improving, we’re going to start to see a rise in spatial storytelling.”
She adds, “We all know what it’s like to move in space from the moment we are born. The beauty of XR storytelling is that it can approximate the embodiment of what you feel in your day-to-day life in ways that you would never be able to replicate with a traditional medium.”
Ternasky Holland and fellow directors, producers, and creators at the NAB Show session, “Creative Lens on Compelling Content: Artistic and Commercially Successful AR, VR, and Mixed Reality,” have been working with emerging technology since the early days. They have already shattered boundaries to deliver impactful, immersive, and commercially successful experiences in AR, VR, and mixed reality (collectively categorized as XR), and will share the secrets of crafting world-class experiences that captivate audiences while generating profit.
“A lot of people would love to do this type of work but perhaps feel that it doesn’t have a distribution platform like traditional film/TV, or that there’s not really a clear publishing platform like traditional journalism,” says Ternasky Holland. “It’s true that there is not yet a robust sales pipeline, but that doesn’t stop great work from being made.”
Elijah Allan-Blitz was the first VR director for Time magazine, and has since partnered with Van Jones to create The Messy Truth VR Experience, starring Marvel actors, and also directed the first AR short film for Disney+, called Remembering.
Named “The Godmother of Virtual Reality” by Engadget, The Guardian and others, Nonny de la Peña is now the founding director of Arizona State University’s Center for Emerging Media and Narrative. As founder and CEO of Emblematic Group, she uses cutting-edge technologies to tell stories — both fictional and news-based — that create intense, empathic engagement on the part of viewers via immersive virtual, mixed and augmented reality.
She has been on the cover of the Wall Street Journal magazine as a WSJ “Technology Innovator of the Year” and Fast Company named her “One of the People Who Made the World More Creative” for her pioneering work in immersive journalism, a field she is widely credited with establishing. She is also one of CNET en Español’s 20 most influential Latinos in tech, and a Wired Magazine #MakeTechHuman Agent of Change. A former correspondent for Newsweek, she has more than 20 years of award-winning experience in print, film and TV, and her virtual-reality work has been featured by the BBC, Mashable, Vice and Wired.
Krueger is head of production for the Metaverse Entertainment Team at Meta, and is constantly working on how to translate big commercial IP into the VR space, with projects ranging from Wallace & Gromit to Darth Vader.
Ternasky Holland herself is an Emmy winner for her work creating the first VR climb of Everest for Sports Illustrated. She has partnered with Meta to create multiple VR projects, examples of which she will present in the session. These include a reimagined series made in collaboration with writer-director Julie Cavaliere, which revisits lesser-known folk tales and animates them for VR.
“We don’t think of things the same way as a filmmaker or a journalist does. We almost think like a choreographer,” she says.
Questions around camera placement and location as a character will sound familiar to a filmmaker, but in the context of 360 video the answers will be very different.
“What is the activity around the camera?” poses Ternasky Holland. “We’re not just thinking about creating amazing environments, we’re also thinking about potential interactivity and how the camera moves through 3D 360 space. We’re thinking about how people are going to be depicted and whether they’re going to exhibit the ‘uncanny valley’ as avatars or whether we’re going to capture them in real time with volumetric 3D video,” she says.
“What we’re really trying to do is define a new media of creativity and a new visual language for storytelling.”
During that production, they found a stronger reaction among viewers to the 3D immersive version of the film than to a similar 2D animated version.
“VR is not necessarily an empathy machine, it just lends an immersion quality that heightens emotional responses. In both 2D and immersive 3D cases the participants felt connected to the story and connected to the characters, but just on that level of emotional impact we saw the difference VR can make.”
One goal of the panel is to act as a rallying cry for others to come and explore XR. “You don’t have to turn your back on traditional mediums in order to be a part of this industry. Nor do you need to have a technical background. From VR animation platforms to game engines, there are so many products helping to make XR experiences accessible to people without an engineering or coding development background,” she says.
“We want to see an industry that has true diversity and inclusion. We want artists, creatives, and storytellers and we want producers, we want good lawyers to build up an ecosystem for XR experiences.”
She describes XR as a sandbox environment where serendipitous events should be allowed to happen. “Traditional filmmakers come from a background where they control and edit every single frame and as a result they are constantly in control of the viewer. There is no denying there is tremendous power in that but if you want to get involved in this more immersive, interactive storytelling landscape then you have to let go of that control.”
She adds: “This medium is more like immersive theater where you create the rules of the world but you have to recognize that your audience has agency and will make decisions based on what interests them.
“That is both the secret sauce and the powerful part of this process. If traditional mediums are a passive experience, XR allows people to explore the embodiment of being inside the story.”
The latest advancements in VR/AR headsets such as the Apple Vision Pro are unlocking a new era of immersive experiences, but Ternasky Holland is cautious about jumping too soon.
“I always hesitate to say the word ‘explosion’ because it seems we’ve been on the cusp of that for the last 10 years. Every new headset that comes out makes a slight improvement whether that’s in the weight distribution of the headset or in the pixel count of the display. We now have external cameras streaming live video into your vision for better mixed reality and with the latest headsets we can take advantage of eye-tracking technology,” she says.
“I do think though that one day we will all have some sort of mixed reality headset, whether that’s for work, similar to the way we all have laptops, or whether it’s an entertainment device,” she continues.
“I don’t think we’re going to get there any time soon, but that’s just fine because I’d prefer us to slowly build and grow the hardware and the content in parallel to be able to manage expectations. For me it’s a little less about an overnight change and more of a gradual transition.”
How AI Will Advance Personalized Video Content
TL;DR
Viaccess Orca CTO Alain Nochimowski shares his perspective on how generative AI is changing video personalization and offers advice for working with GenAI.
He is bullish on generative AI, but does not shy away from discussing some of the growing pains and challenges of leveraging the technology. He encourages everyone to get hands-on with GenAI tools and learn how to utilize ones that are specific to their needs.
NAB Senior Vice President of Emerging Technology John Clark sat down virtually with Viaccess Orca Chief Technology Officer Alain Nochimowski to discuss how generative AI is likely to affect content personalization. Watch their full conversation (above) or read on for some of the highlights.
First, Nochimowski says, generative AI “will definitely impact UGC.” In the near future, he predicts, much of what we today think of as user-generated content “will be automated and created through LLMs.”
Additionally, “premium content” creation processes will be aided by generative AI and should “definitely have a big impact.”
“You’re going to witness,” Nochimowski says, a “change of paradigm in terms of how video gets personalized.”
Currently, he explains, “You segment your audience and then you basically send the right video, or the right video advertising, to the right audience.” But with generative AI tools, we will be able to “go one step further. We’ll be able to craft the message specifically and possibly, even sometimes online, you know, to this specific segment that you’re going to address.”
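As a concrete illustration of crafting a message per segment rather than merely routing one creative to it, here is a hedged sketch using the OpenAI Python client. The model name, segment definitions and prompt are all assumptions for demonstration, not Viaccess Orca’s stack.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical audience segments derived from viewership data
segments = {
    "lapsed_sports_fans": "watched live football weekly, inactive for 60 days",
    "late_night_binge": "watches serialized drama after 11pm, high completion",
}

def promo_for(segment, traits):
    # one tailored promo line per audience segment, generated on demand
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Write one short, brand-safe promo line for a video service."},
            {"role": "user",
             "content": f"Audience segment: {segment}. Traits: {traits}."},
        ],
    )
    return resp.choices[0].message.content

for seg, traits in segments.items():
    print(seg, "->", promo_for(seg, traits))
```

The brand-safety constraint lives in the system prompt here; in production it would be part of the guardrails Nochimowski describes below.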
Nochimowski says, “The other opportunity, I think, relates to the knowledge that a broadcaster or video service provider will get from its own audience.” He explains that companies will be able to “extract even more insights” from viewership data, helping to foster “a new level of engagement with the audience.”
Despite all of this, Nochimowski admits, “The challenge is manifold” in working with generative AI. He emphasizes the importance of crafting guardrails, pointing to brand safety and brand identity, and says it’s important that the guardrails evolve “in sync” with the technology.
GenAI has created “a lower entry into technology” for many individuals, and Nochimowski suggests people start by experimenting with these tools. After all, the technology is progressing incredibly quickly, but it still takes human effort and back-and-forth to get the best results from an LLM or other tool.
VR Production Isn’t an Experiment… It’s a Business
TL;DR
VR Production Workshop instructor Nick Harauz will guide attendees through a two-day on- and off-site workshop during the upcoming NAB Show.
A longtime educator in the post-production space and director of marketing for Boris FX’s Continuum products, Harauz will walk workshop attendees through some of the most cutting-edge tools and techniques in the VR and 360 space.
With Apple’s Vision Pro headsets, new iPhones with the ability to record spatial video, and newer cameras from Insta360 (including the X3, which will be available at the workshop), Harauz says this unique type of filmmaking is becoming more of a business and less of an experiment.
VR Productions Workshop Day 1 on Friday, April 12 will be held off-site but meets at the Las Vegas Convention Center in Room S226 at 9:00 AM sharp. VR Productions Workshop Day 2 will be held on-site in Room S226 from 9:00 AM – 5:00 PM on Saturday, April 13.
VR Production Workshop instructor Nick Harauz will guide attendees through a two-day on- and off-site workshop during the upcoming NAB Show where attendees will gain hands-on training in both the production and post aspects of this rapidly growing field.
A longtime educator in the post-production space and director of marketing for Boris FX’s Continuum products, Harauz will walk people through some of the most cutting-edge tools and techniques in the VR and 360 space, which includes capturing 360 imagery for VFX work in projects designed to be shown flat.
VR Productions Workshop Day 1 takes place 9:00 AM – 5:00 PM on Friday, April 12; it will be held off-site but meets at the Las Vegas Convention Center in Room S226. VR Productions Workshop Day 2 will be held on-site in Room S226 from 9:00 AM – 5:00 PM on Saturday, April 13.
In a video interview (above), he recounts his own, and the industry’s, evolution since the watershed moment when GoPro introduced Odyssey, a 16-camera 360 rig the sports camera company developed with Google.
“That was one of the first workshops that we held,” he recalls, “and I’ve seen that industry shift from that time, which people originally referred to as the ‘Wild West,’ to where VR, AR and 360 productions are today.”
He points out that with Apple’s much-discussed Vision Pro headsets, new iPhones with the ability to record spatial-based video (a form of 360 video), and newer cameras from Insta360 (including the X3, which will be available at the workshop), this unique type of filmmaking is becoming more of a business and less of an experiment.
Likewise, the growth of post tools to stitch, manipulate and edit 360 material has also contributed to this evolution, and he plans to give attendees a solid lesson in many such features in NLEs including Adobe Premiere Pro, Blackmagic Design Fusion (within Resolve) and Apple’s Final Cut Pro, as well as Insta360 Studio, now in its V4.6.0 release, which offers its own proprietary iPhone app for Insta360 content.
“We’ll also look at some higher-end workflows,” he promises, such as using MochaVR from BorisFX within Adobe After Effects to perform roto-type effects within 360 content.
Harauz also discusses apps such as 3DVista’s for creating virtual tours, prepping 360 footage and marking certain portions of footage to allow for interactive video. The company makes 3DVista Stitcher 4, Virtual Tour Pro and hosting services optimized for this kind of content.
The use of 360 video for applications such as headset-based real estate tours and many other types of production, he says, “is becoming more and more prevalent.”
No longer the “Wild West,” this type of work has become much more mainstream, Harauz says, and as more people access newer and cooler headsets, the entire field promises to further expand significantly in the coming year.
What’s Advancing the Adoption of Virtual Production?
TL;DR
Andy Jarosz, virtual production supervisor at Smash Virtual Studios, compares VP to digital cinematography, which took a decade to fully become the norm.
As the technology continues to mature, “more and more productions can see themselves utilizing it,” Jarosz says, highlighting new tools such as ARRI’s Virtual Twins, which allow ARRI lights to be incorporated into Unreal Engine.
“If you’re going to be shooting in someone’s living room,” he says, it makes much more sense “to just go film in a living room. But if you want to shoot on Mars, it’s way easier to film on the LED volume.”
Virtual production has come a long way just this past year, and NAB Show 2024 attendees want more information about ways to do actual work in this exciting arena. Andy Jarosz, virtual production supervisor at Smash Virtual Studios, will be leading three sessions at the Post|Production World training conference designed to enhance attendees’ understanding of this multifaceted topic.
In a video teasing his presentations, Jarosz notes that Smash has become the largest LED studio in the Chicago area since its 2022 founding and that it’s providing services for a large number of projects, primarily TV shows.
The 18,000-square-foot building near downtown offers a variety of LED screen configurations and Unreal Engine playback services for high-end commercials, TV, and film, including Dark Matter for Apple TV+, Chicago PD (NBC) and a number of feature films. In his capacity at the company, he has become an expert in the technology and workflow of virtual production, and his sessions will impart much of this acquired knowledge.
People in the film industry “don’t always like new things,” he offers, observing that it takes a long time for new technologies to become adopted. Likening virtual production to digital cinematography, he says that novel technologies are frequently met with disbelief. “People just assumed it was a gimmick,” he says of the early examples of digital cinematography. “It took a decade to fully become the norm.
“As the technology matures,” Jarosz continues, “more and more productions can see themselves utilizing it.” He says that the industry is arriving at a point now where cinematographers “understand that there are certain considerations when shooting on an LED volume — that you have to treat it in a specific way. But those considerations are not the end of the world. That those concessions are just technique and just process.”
He points to new tools, such as ARRI’s Virtual Twins, digital versions of its lights that can be incorporated into Unreal Engine, “so now cinematographers are able to say, ‘I want a Sky Panel and I want it set to these settings inside the virtual space.’ And they’re going to get exactly what they expect. The tools are catching up.”
Further, he adds, “We’re going to be talking about things like exposure. We’re going to be talking about color science. We’re going to be diving into specifics about Unreal Engine because a lot of times, people just don’t have a good understanding as to what Unreal can do and what it can’t do.”
He’s also going to provide a whole section just on car process shots, which have been among the quickest type of work to be embraced for feature film and TV applications. “We’re going to be getting fairly technical into the process of virtual production and shooting against LED walls as a backdrop.”
The core of this approach starts with creating elements within Unreal Engine that can interact with camera attributes such as movement and optics. “There are specific nuances that you need to consider when you are working on an environment for virtual production specifically,” he cautions. “Unreal Engine is a gargantuan piece of software. It’s designed for massive teams of game developers to use. It’s not meant to make movies. And so, there are certain caveats to it — things that do and don’t work on an LED volume.”
His experience suggests it’s easier for people from the film industry to pick up Unreal Engine than the other way around. “The games industry works in a completely different way,” he notes. “They have completely different requirements. And they work to different standards. Games have all different kinds of art styles. They can be cartoony. They can be realistic. They can be anything in between. Film has one art style. It has reality. And anything beside that is just not acceptable.”
“Often, what we’re finding as a studio is that we’ll get environments and levels designed from outside companies that aren’t used to creating content for this specific use,” he explains. “Then we need to go in and redo a bunch of stuff, redo a bunch of settings. Or they’re just not constructed in a way that’s conducive to filming. This class is about communicating those requirements that we have as a stage and going through all of those little nuances and making sure that the levels that people are designing are up to scratch when it comes to a more cinematic workflow.”
Aimed squarely at producers, directors, UPMs, agency producers and people in similar capacities, this session is an overview that breaks down where this powerful technology actually benefits a production and when it’s just unnecessary.
These are the types of questions Jarosz fields as part of his job. “If you’re going to be shooting in someone’s living room,” he says, it makes much more sense “to just go film in a living room. But if you want to shoot on Mars, it’s way easier to film on the LED volume.” Between those two extremes lies the point at which utilizing the services of a company like Smash does or does not make financial sense.
Jarosz will provide some success stories exemplifying how the use of a virtual production environment (which obviously involves some relatively significant expenditure) has proven cost effective for specific projects, such as saving the cost of company moves when a show needs to shoot six locations in a day or making the most of a situation where a celebrity can only offer an hour or two to a production.
“These are all niche use cases that virtual production can solve and we’re going to dive into all of them,” he says in anticipation of his exciting presentations. “We’re also going to go over specific budget comparisons: shooting scenes practically versus on an LED volume to show cost breakdowns as to what people should expect when they book an LED stage.”
Anyone interested in getting into virtual production or understanding more about the nuts and bolts of the process should consider adding Jarosz’s sessions to their NAB Show calendar.
Streams and Screens: IMAX Is Delivering Something New
TL;DR
IMAX to debut a groundbreaking 15-perf/65mm film camera at the 2024 NAB Show, marking a significant innovation in large-format cinematography. The new camera stands out even as digital cinematography dominates the industry conversation.
IMAX’s distinctively taller aspect ratio — 1.90:1 or 1.43:1, depending on the venue — isn’t about cropping and blowing up a standard 2.39:1 anamorphic or 1.85:1 flat image.
In partnership with Disney and other content platforms, IMAX Enhanced Home format and streaming technology aims to bring the immersive IMAX experience into viewers’ homes.
Watch: SHOOTING “OPPENHEIMER” IN IMAX
IMAX Corporation, whose large-format cameras rely on motion picture film, has been seen as a significant driver of Best Picture winner Oppenheimer’s theatrical success, and is even unveiling a long-anticipated new 15-perf/65mm IMAX-format film camera at the NAB Show.
Paul Masi, Academy Award-winning sound mixer, will speak about how he optimizes his mixes for IMAX’s proprietary sound format. Evan Jacobs, head of finishing for Marvel Studios, will discuss the multi-phased process of creating IMAX deliverables for their films. Large format still photographer Tyler Shields is set to speak about his preferences for large format film and its advantages over other photographic media. Vanessa Bendetti, head of motion picture & entertainment for Eastman Kodak, will be on hand offering her perspective on the importance of IMAX to the motion picture film business. IMAX’s Greg Ciaccio, VP of post production for original content & image capture, will moderate.
While digital cinematography cameras from heavy hitters such as ARRI, Sony and RED (recently acquired by Nikon) will certainly be at the forefront of many conversations at NAB Show, IMAX’s much-anticipated new 15/65mm film camera will also attract a lot of attention.
“We’re planning to have a camera there,” Markoe says. “And we’re getting very close to having the prototype out in the hands of DPs very shortly. So we thought it was a good opportunity to give a little more info about what the new cameras are going to be.”
Along with the camera, the company will also be unveiling some new digital tools specifically built for finishing IMAX movies.
“People may not realize that basically every movie you see in an IMAX theater is being completely remastered for IMAX,” Markoe notes. “We’re not just using the DCP that’s showing in regular theaters.” Whether shot in 15/65 or digitally, films released for IMAX theaters go through unique post processes, up to and including the film-out stage for exhibition.
IMAX’s distinctively taller aspect ratio — 1.90:1 or 1.43:1, depending on the venue — isn’t about cropping and blowing up a standard 2.39:1 anamorphic or 1.85:1 flat image.
“We are applying proprietary technology to enhance that movie, to look as good as it can on our bigger screens that are brighter and have higher contrast. So, we are making a unique master, both for picture and for sound, because our sound format is not Dolby,” he explains.
“It’s our own proprietary sound format. Done with the filmmakers in complete control of picture and sound. When it’s shot at the [taller] aspect ratio, that’s what you see. So you are really seeing a unique version of the movie in our theaters.”
Markoe, who has an extensive background in post, observes that the increase in resolution between traditional and IMAX can mean that aspects of the image that look great in a standard film or digital presentation might become obvious problems in a large format screening.
“Often,” he says, colorists or VFX artists will add digital film grain in post, “which on most theater screens will look good, but then they see it in IMAX, and it’s too much! We are able with the DMR [proprietary digital remastering tool] to feather that amount back to the director’s taste, but we also do that both for our laser and our xenon projector theaters separately to make sure it looks exactly the way they expect it to look in all of our theaters.”
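The DMR pipeline itself is proprietary, but the feathering Markoe describes can be pictured as a weighted blend between the delivered frame and a degrained version of it, with a different weight stored per projector type. Here is a toy Python sketch of that idea, assuming the degrained frame already exists; it illustrates the concept, not IMAX’s actual process.

```python
import numpy as np

def feather_grain(original: np.ndarray, degrained: np.ndarray,
                  strength: float) -> np.ndarray:
    """Blend toward the degrained frame: strength=0 keeps full grain,
    strength=1 removes it. A per-venue strength (one value for laser,
    another for xenon) lets the same master be tuned for each projector."""
    s = float(np.clip(strength, 0.0, 1.0))
    return (1.0 - s) * original + s * degrained

# Illustrative use: pull the grain back harder for the brighter laser houses.
# laser_frame = feather_grain(frame, frame_degrained, strength=0.6)
# xenon_frame = feather_grain(frame, frame_degrained, strength=0.3)
```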
Markoe also touts IMAX’s 5.0 and 12.0 sound systems as providing a unique audio experience. “It is sub-bass managed so it doesn’t use a discrete subwoofer channel,” meaning that there is no audio channel specifically dedicated to the lowest frequencies as there is in “.1” systems. Instead, the entire spectrum of frequencies is contained in all channels, and everything below the cutoff frequency of 70Hz is sent by the playback system to the subwoofers.
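Bass management of this kind is a standard crossover operation: each full-range channel keeps its content above the cutoff, while everything below the cutoff from every channel is summed and routed to the subwoofers. A minimal sketch, assuming 48kHz audio and a fourth-order Butterworth crossover at the 70Hz figure Markoe cites (the actual filter design IMAX uses is not public):

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 48_000     # sample rate in Hz (assumed)
CUTOFF = 70.0   # crossover frequency Markoe cites

# Fourth-order Butterworth high-pass (speakers) and low-pass (subs) sections.
HP = butter(4, CUTOFF, btype="highpass", fs=FS, output="sos")
LP = butter(4, CUTOFF, btype="lowpass", fs=FS, output="sos")

def bass_manage(channels: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """channels: array of shape (n_channels, n_samples), full-range feeds.
    Each speaker feed keeps only content above the cutoff; everything
    below the cutoff from every channel is summed into one subwoofer feed."""
    speaker_feeds = np.stack([sosfilt(HP, ch) for ch in channels])
    subwoofer_feed = np.sum([sosfilt(LP, ch) for ch in channels], axis=0)
    return speaker_feeds, subwoofer_feed
```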
“We actually have more powerful subwoofers than any Dolby Theater out there. We have bigger subwoofers and more of them, with more amplification, and the whole system allows the low end in our theaters to perform in a way that other theaters can’t, which is another thing that Chris Nolan loves about [IMAX].”
While IMAX has been in the news recently because of feature films released in the format — such as Oppenheimer, shot in 15/65, and Dune, shot digitally but remastered and released for IMAX theaters — the company is still committed to overseeing the kind of documentary specialty film it built its reputation on.
“We have documentaries, original documentaries, [a] film for IMAX program,” Markoe explains. “Our Blue Angels documentary is coming out this year. We went through and tested eight different cameras to make sure that the camera in the cockpit looks best in IMAX.”
Ciaccio adds, “Documentaries are something that we’ve made from the very beginning. There were a few years recently where we made very few. We’re now making more documentaries and we’ve started producing more content and acquiring and distributing a lot, too.”
While IMAX’s primary claim to fame is based on super-sized theatrical experiences, the company has also been developing digital tools for enhanced home viewing. In addition to its focus on large neg and prints, IMAX will be at the show with developments for home entertainment, including IMAX Enhanced content for optimized 4K HDR presentation at home and its IMAX Streaming technology, designed to optimize streaming of audio and video and used by Disney and other content delivery companies.
Studios are increasingly mining videogames for characters and stories to bring to life in TV shows and films, particularly as audiences grow tired of story lines based on comic books. A key part of success is appealing to die-hard fans and new audiences without betraying the original game.
An article by Sarah Krouse in the Wall Street Journal explores this trend with comments from several executives in gaming and film. Gamers are a highly engaged bunch, Helene Juguet, managing director of Ubisoft Film & Television, tells the WSJ. “If they don’t like something, they will tell you.”
Hollywood is commissioning more video game adaptations, and the reasons are clear: Movies based on video games and released broadly in theaters grossed $712.2 million at the domestic box office last year, more than double what they brought in the prior year, according to ComScore. Superhero film adaptations, meanwhile, brought in about $1 billion domestically, down 42% from the prior year.
Video games offer fresh characters and new worlds that can appeal to young children and their parents — like Sonic and Mario’s adventures — or teenagers and adults seeking mature story lines, such as those in HBO’s hit series, The Last of Us.
In addition, a generation of gamers are now in creative positions across Hollywood. Yet failure to get the translation to screen right does no one any favors. Paramount’s first stab at bringing Sonic the Hedgehog to the screen faced a backlash over the trailer, prompting the studio to quickly conduct focus groups and hire a new animator to alter the character’s appearance so that it appealed to die-hard fans.
“Every design now is vetted within an inch of its life,” Marc Weinstock, president of worldwide marketing and distribution at Paramount, tells the WSJ.
It’s why game developers and fans are often now deeply involved in adaptations, ensuring that the end product honors the source material.
But it’s not exactly a two-way street. Film and TV producers want games more than game developers want Hollywood. While few films make $1 billion, hit video games can generate several billions of dollars in sales over their lifetimes, which means that for some game makers, film and TV adaptations are more trouble than they are worth.
“In failure, we run the risk of compromising the underlying intellectual property. So, it’s a high bar,” said Strauss Zelnick, head of Grand Theft Auto developer Take-Two Interactive Software.
Co-Opting AI to Streamline Costs and Supercharge Creativity
TL;DR
Pinar Seyhan Demirdag, co-founder of creative AI production company Seyhan Lee, is one of the leading figures at the forefront of using AI as a filmmaking tool.
Seyhan Lee has developed and released Cuebric, which uses generative AI to generate photoreal environments in an instant and far more affordably than current methods.
Seyhan Demirdag is adamant that AI tools like hers do not spell the end of physical soundstages or deep human involvement in the craft.
The excitement surrounding AI tends to mask the practical benefits that tools using it can have on film and TV production today. A session at NAB Show, “Ask Me Anything: AI Post Production Workflow Experts Tell All,” intends to separate fantasy from reality.
Among the speakers is Pinar Seyhan Demirdag, co-founder of creative AI production company Seyhan Lee and one of the leading figures at the forefront of using AI as a filmmaking tool. Seyhan Demirdag will be joined by moderator Michael Cioni, CEO and co-founder of Strada; Austin Case, Strada’s director of engineering; director Paul Trillo of Trillo Films; and Colourlab AI CEO Dado Valentic. The session will be held at 10:30 AM on Sunday, April 14 in the Create Zone Theater (SU4087).
“I would like to show a practicable application of GenAI which results in the production of meaningful and emotional storytelling and that can reduce budgets,” Seyhan Demirdag asserts.
Seyhan Demirdag will attempt to dispel the mystique and even fear that the industry has around the new technology. “I spend a great deal of time contemplating the power of humanity in the age of AI and for me, AI is a tool that benefits human workflows. That ethos is very much at the core of what we do.”
She continues, “AI did not invent itself. Humanity collectively decided that the time has come for us to adapt our workflows and the way we make meaning of life around us. The time has come for us to expand our capacity. Since AI works with parallel processing it processes information differently to linear computers. So, it has also come time for us to adapt our workflows, our creative thinking and creative execution with the same, multi-dimensional and multifaceted thought process.
“Inventions are parallel to the needs of their time. There’s nothing to fear about something whose time has come.”
Seyhan Lee has developed and released Cuebric, which uses generative AI to generate photoreal environments in an instant and far more affordably than current methods.
“AI is open source, so there are AI models on top of AI models that everybody is releasing but there are very few solutions that take the open-source research and make it cinematically appropriate or packaged to be usable by the end consumer. To my knowledge, there are even fewer tools that are geared toward professional use,” Seyhan Demirdag explains.
“There’s definitely a problem in the film industry which is the astronomical cost of environment building,” she continues. “When an actor changes direction in a Volume the virtual background needs to render and move correctly with them. The background cannot have a life of its own where the actor moves. But in order to do that current virtual production workflows essentially require the making of a game. That’s why the cost of creating background equates to the cost of building a game.”
And yet, Seyhan Demirdag says, if you analyze the background environments that are actually used in production, up to 75 percent of them are not required, or scenes can simply benefit from a simpler background.
“If productions were to adopt a solution like Cuebric they would not only have more options to use but they could also save the astronomical costs of virtual backgrounds.”
Seyhan Demirdag says production designers can use a tool like Cuebric to quickly upload sketches without needing to know how to use code or complicated tools.
“They can make a sketch, upload it, and visualize what they would like to build later on in a matter of minutes. The whole production can instantly understand the vision.”
Maybe the director can shoot a scene in Alaska that previously didn’t fit the budget. Or a writer on episodic TV needing to meet repeated tight deadlines could use the tool to quickly ideate and present what they have in mind.
She says several short films are using Cuebric, along with a promo for a well-known musician and several A-list studios.
Director Quinn H. and his team recently produced a pair of high-production-value shots, resembling Paramount’s Yellowstone or a scene from Dune, for a fraction of the cost by using the software’s background solution.
“I would dare to say that Cuebric makes the world’s most collaborative art form even more collaborative,” she contends.
Seyhan Demirdag is adamant that AI tools like hers do not spell the end of physical soundstages or deep human involvement in the craft.
“Just as I love to read tangible, physical books I believe that the advent of AI will always give us options,” she says. “There will be a possibility for people not to use any actors if they want. It will be a choice. But I know that I will always be watching movies that speak to my heartstrings. That’s why there will always be a world where an actual camera will track an actual actor and the background will be tracked as a relationship.”
Watch “Creator Economy Amplified: AI Tools for Creators.”
TL;DR
From automating mundane tasks to amplifying creative innovation, artificial intelligence is transforming the creator economy. Jim Louderback, editor & publisher of “Inside the Creator Economy,” joins veteran journalist Robin Raskin, The Prismatic Company’s Abe Feinberg, and Rebecca Xu from Opus Clip to unpack the profound influence of AI on content creation.
AI, says Raskin, will be seamlessly incorporated into “pretty much every part” of the lives of AI natives. AI tools are not replacing creators, Louderback says, but serving as co-pilots, enhancing their ability to produce content more efficiently and creatively.
Xu describes how tools like Opus Clip simplify the video editing process, transforming long-form content into engaging, short videos with a single click and catering especially to the preferences of Gen Z audiences.
Feinberg explains how creator-designed platforms such as Prismatic will enable creators to design and produce content that’s modular, composable, and remixable, significantly reducing the time and effort required to update and adapt content across various formats and platforms.
AI’s integration into content creation tools is not just a glimpse into the future; it’s a present reality that’s enhancing the way creators produce, manage, and distribute content. From automating mundane tasks to amplifying creative innovation, artificial intelligence is transforming the creator economy, and as AI continues to evolve, its integration into the creative process heralds a new era of efficiency, personalization and engagement.
As part of NAB Amplify’s “Creator Economy Amplified” series, we sat down with industry veterans Jim Louderback, editor and publisher of Inside the Creator Economy, veteran journalist Robin Raskin, The Prismatic Company’s Abe Feinberg, and Rebecca Xu from Opus Clip to unpack the profound influence of AI on content creation.
These industry pros shared their insights into leveraging AI tools to revolutionize content creation, enhance efficiency, and foster connections with audiences. Whether you’re intrigued by the prospects of AI-driven content generation or seeking strategies to refine your creative workflow, this discussion promises a forward-looking perspective on integrating AI into your creative toolkit. Watch the full conversation in the video at the top of the page.
This chat offers a preview of the all-new Creator Lab at NAB Show, a dynamic space dedicated to exploring the newest trends and technologies driving the creator economy. Led by Louderback and Raskin, the Creator Lab will host an extensive lineup of discussions and interactive workshops, with industry experts including Feinberg and Xu offering valuable insights and practical skills to attendees.
The AI Revolution in Content Creation
Louderback uses a simple yet profound analogy to describe the impact of AI on creators: the Slinky helical spring toy. Pre-AI, he says, creators were like a closed Slinky, limited by what they could do by themselves. With AI, possibilities expand and suddenly there’s a lot more ground they can cover.
“The nice thing that happens is now with AI tools, you’re not replacing the creator, but there’s so much more that they can do, cover a lot more ground in that same amount of time, because AI really helps them be a better creator,” he says.
This visualization captures the essence of AI in content creation — expanding the potential of creators without supplanting the human touch that lies at the heart of creativity.
Using AI to generate creative content, such as scripts and even videos, results in stale, generic content that disappears in the blink of an eye, Feinberg agrees.
“The more interesting part,” he asks, “is really how do you build a system where the AI is functioning as the supportive assistant to the creator, and the creator is still the person who’s leading that creative decision process, they are still the creative decision maker?”
As an example, Feinberg points to mobile editing apps. “Some of the motions are very tedious,” he explains. “You have to zoom in and, you know, make these little micro-edits. If an AI can be there as your assistant and be sort of learning the pattern and saying, ‘Hey, I think this might be the kind of edit you want to make, would you like to make it with a single tap,’ in the end, you’re doing exactly the same thing you would have done as the creator. But you’re turning out a lot more of those edited videos in the same amount of time.”
Productivity, Xu argues, is where AI offers the most value to creators. “Ultimately, the best AI tools help creators enhance creativity and efficiency through helping them better manage their time,” she says.
“For instance, coming from the AI video editing world, there’s a lot of AI video editing tools to really help creators streamline the editing process by, say, automatically identifying and correcting errors, enhancing visuals or audio quality.”
Raskin, who comes from a background in magazine publishing during the print era, provides a broader perspective, suggesting that AI’s integration into content creation is part of a generational shift.
“Digital natives grew up in a video-first environment where video became like the lingua franca that everybody speaks and communicates with,” she notes, “and the new generation will be an AI generation.”
AI, says Raskin, will be seamlessly incorporated into “pretty much every part” of the lives of AI natives. “Whether it’s the car that you drive, or whether it’s the events that you produce or whatever that is, AI will be infused and the hardest challenge will be to use it wisely and creatively.”
Personalization and Audience Engagement
Another critical aspect of AI in content creation is understanding and engaging with audiences. By analyzing feedback and preferences, Louderback says, AI tools can help creators identify their most engaged fans and tailor content to meet their interests. “It helps you understand your audience better,” he explains.
“If you’re a creator, and you’ve got lots of people commenting and saying things, whether it’s on your videos, or other places, it can go in and summarize all of those comments and help you find those people that are your biggest fans and figure out what they want, and help you figure out how to give your audience and your community even more of what binds them to you.”
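As a simplified picture of the kind of analysis Louderback describes, the sketch below ranks commenters by how often and how positively they engage. The comment format and the sentiment scorer are hypothetical placeholders; a real pipeline would plug in an actual sentiment model and an LLM pass to summarize what top fans want.

```python
from collections import defaultdict

def sentiment(text: str) -> float:
    """Placeholder scorer returning a value in [-1, 1]; swap in a real
    sentiment model or an LLM call."""
    raise NotImplementedError

def rank_fans(comments: list[dict], top_n: int = 10) -> list[tuple[str, float]]:
    """comments: [{'user': ..., 'text': ...}, ...]. Score each user by
    comment count weighted by average positivity, then rank descending."""
    totals: dict[str, float] = defaultdict(float)
    counts: dict[str, int] = defaultdict(int)
    for c in comments:
        totals[c["user"]] += sentiment(c["text"])
        counts[c["user"]] += 1
    scores = {u: counts[u] * (1.0 + totals[u] / counts[u]) for u in counts}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
```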
Historically, Feinberg notes, information has been presented in what he calls a one-size-fits-all approach. But AI is driving a shift to a more tailored content delivery.
“You know, you get a lecture for everyone, or everybody watches the same video in the same format, or everyone reads the same blog post,” he details. “But now, there’s a huge amount of potential for AI to help you take really the same core value that you’ve created, but present it in different ways — even on the fly — to an individual person to give them what they most want.”
Practical Applications and Future Prospects
Tools like Opus Clip, which uses AI to automatically edit long-form videos into short, high-quality sharable bites, and content creation platforms like Prismatic, aimed at helping creators design and build for scale, are the wave of the future.
“At its core, I believe AI is a tool that helps creators tell a better story,” says Xu. “A lot of people, especially Gen Zs, have changed their habit of consuming content from TVs, or long videos or articles into short videos. So short videos are increasingly becoming a main medium for creators to tell their stories as well as for a user or audiences to consume content.”
Opus Clip, she says, was designed to help creators. “And we’re not only helping individual creators, but also a lot of businesses who want to break into a bigger market through short videos.”
Creators, Feinberg recognizes, have a lot of value to share. “But it can be kind of locked up, because of the amount of time that it takes to transform your content.”
That’s why, he says, his team is building Prismatic, which uses AI to generate graphics, diagrams, videos, podcasts, blog posts and other content with customizable, reusable components.
“We’re working on a platform to help build content in a way that’s modular, composable, remixable,” he explains, “so you can more easily deploy things in different formats and different modalities. And where we’re going with that is the idea that we know that creating high-quality content takes a lot of effort. And we want to help creators, not circumvent that effort, but get more out of the effort that they’re putting in.”
For creators just beginning to dip a toe into AI, Raskin recommends AI aggregator There’s an AI for That, a community of AI founders and users boasting what the developers call the largest database of AI tools and tasks.
“I would not go in there thinking you’ve got the answer to the world,” she cautions. “But it’s so interesting, because it lays [everything] out in categories, law, contracts, events, scheduling, and you can look up an AI-specific thing. And the deeper secret is, most of them are free for a trial. So you can experiment, decide whether it’s for you or not.”
Roberto Schaefer, ASC, AIC: The Challenge of Cinematography Is “Figuring Out How to Make It Work”
TL;DR
The work of a cinematographer begins and ends with guiding a director’s vision to screen, but the path is rarely clear, says BAFTA-nominated Roberto Schaefer, ASC, AIC.
Successful interpretation of a director’s vision, he says, requires relationship management and persistence for a DP to be able to get their point of view across.
To Schaefer, the most interesting part of the job is the creation of images from scratch, but he believes there is a risk that this will be superseded by GenAI.
Schaefer will be leading a three-hour interactive workshop at NAB Show, “Script to Screen – The Cinematographer’s Process,” walking attendees through a cinematographer’s creative process of crafting iconic cinematic images.
Cinematography 101 is all about interpretation, Schaefer explains.
“Some directorial communication can be so obtuse you really need to work hard to discern what it is they wish to say.”
“Many issues are a matter of management such as liaising with VFX and keeping on top of post-production.”
Schaefer is a BAFTA-nominated cinematographer who began his film career in Europe working for Martin Scorsese, Joe Pytka and Nestor Almendros. He eventually moved to Los Angeles to shoot for director Chris Guest on films including Best In Show and Waiting for Guffman.
In an extensive collaboration with director Marc Forster they made Monster’s Ball, Finding Neverland, The Kite Runner and Quantum of Solace. Schaefer has also shot The Host, Red Sea Diving Resort and episodes of Amazon sci-fi The Peripheral.
During his three-hour workshop at NAB Show, Schaefer will show clips from The Paperboy (2012), explaining how he worked with director Lee Daniels to find the visual language of the movie.
“I like to get general ideas of the tone and feeling from a director and if there are any specific ideas they have in mind. There are typically references from stills or other movies to lean on but I like to be given a free hand to interpret the movie in a way that I think fits the visual context of the story. It’s not okay for me to copy a shot from another movie. It is fine to use these as an influence but let’s find a way to integrate it into our language.”
Schaefer will also show clips from Stay (2005) directed by Forster which involved some detailed visual effects sequences. He will share elements of the original script along with pre-production diagrams and previs material to show the progress of bringing a concept from page to screen.
The best laid plans are at risk of being undermined by budget, so compromises or alternative solutions may need to be found.
“You might have this great vision and you might want to translate it in a certain way but then the budget people come in and say ‘Sorry, you can’t do that.’ So, you have to figure out another direction.
“Even if you have $250 million they want $400 million worth on screen. If you have $100,000 they want a million dollars’ worth. That’s never going to change. You start with your wish list and then try to figure out how to make it work,” he says.
“Ultimately, you want to be a participant and a collaborator so finding a way of deciphering what the director says is really important — and some of them you really have to decipher.”
Schaefer’s warning when it comes to working with VFX will be familiar to many DPs. It’s all about who has control and authorship of the final image on screen.
“In my experience, sometimes VFX are not really ready to collaborate that much with you. It’s almost like, you’ve done your job photographing, now it’s our turn. What can happen when you get to the DI is that the front plates don’t match your original photography,” he explains.
“It’s a problem especially for many young DoPs who don’t know how to work with VFX at a time when the work is getting more complex. The bigger budget shows will often have five or more VFX vendors onboard. The key is to have a dialogue at the beginning in pre-production. You want to get talking with them to understand their process and for them to understand yours.”
Successful interpretation of a director’s vision is also about relationship management and how persistent the DP is in getting their point of view across.
“The director might tell you what they want and the DoP will go ahead and do it but then realize that the director really doesn’t have a clue visually. This is rare, but it happens, so you have to be prepared to take that burden on and even challenge the director. You’d be diplomatic saying something like: ‘I think this is what is best to bring your ideas across’; and ‘let me show you what we can do,’ but you might have to be persistent. Some young DoPs are afraid to question the director too much,” says Schaefer.
“Of course, it is the director’s movie in the end, especially if they are also the writer but if you’re hired as the DP you have an agreement with them. It’s like a contract and you want them to stand by it and honor your side of the contract which is essentially a respect for your craft in shaping the color and the composition and the light and the story.”
To Schaefer, the most interesting part of the job is the creation of images from scratch, but he believes there is a risk that this will be superseded by GenAI.
“There are already so many ways to affect the image, not only in the camera, but afterwards in any number of software programs. Even FilmLight’s grading system Baselight and DaVinci Resolve have incredible tools where you can make the image look as if it were shot with an anamorphic lens. You can shape the aberrations. Make distortions, add flares. There are so many things that can be done to the image after you’ve done your job — and you don’t even know they’re going to do it,” he says.
“I’ve seen editors change the color in the Avid and then present to the director as if that’s what it should be. As cinematographers we have to be aware of all of these things and try to keep them under our control if we’re to guarantee that the final image is what you had discussed with the director.”
He adds, “AI is definitely something that we have to be really wary of. With AI moving so fast I’m not going to predict how many years it will be before we lose control but it’s looking like it’s going to happen eventually.”
The Spice Splice: “Dune: Part 2” Editor Joe Walker, ACE on Worldbuilding and Workflows
TL;DR
For editor Joe Walker, ACE, “Dune: Part 2” was a delicate juggling act to balance the intimate love story with the spectacular VFX.
Walker talks about layering sound to create scenes such as the gladiatorial battle of the minds featuring Austin Butler’s villain.
Dropping clues to the simplicity at the heart of the storytelling, Walker compares parts of the film to James Bond movies and animated Road Runner cartoons by Chuck Jones.
The love story between Paul Atreides (Timothée Chalamet) and Chani (Zendaya) is at the heart of Dune: Part Two. The same set of filmmakers, led by Denis Villeneuve and including cinematographer Greig Fraser, composer Hans Zimmer and editor Joe Walker, ACE, continues to tell the complex saga with confidence and boldness.
“Denis once said to me, this film should be less like Lawrence of Arabia and more like Chuck Jones’ Road Runner,” Walker noted to Steve Hullfish in the Art of the Cut podcast. “And in fact, if you look at my cutting room, I’ve got a picture of Road Runner. It’s on the wall.”
Walker is speaking at the 2024 NAB Show at “American Cinema Editors Present: The Cut from Rough to Art,” on Sunday, April 14 at 2:00 PM. Moderated by ACE president Kevin Tent, this interactive workshop will explore the craft of editing, career paths, working with the director, and the editorial storytelling process.
In a film that covers so much narrative ground, flashing forwards and backwards, it’s remarkable how little exposition there is.
“What I think we did well in Part One was disguise how much setting up there was for this second film,” Walker told Daron James at the Motion Picture Association’s The Credits. “It meant many of the characters had limited screen time in the first film, but with Part Two, they are given a little bit more space for the drama to unfold.”
Walker won the Academy Award for the first film, which ended with Paul meeting Chani and going into hiding with the desert people, the Fremen.
“Their relationship was the most important thing to get right in the film. We are leaning into action adventure and dazzling sequences, but if the heart isn’t in the right place, then it’s not going to work. We spent a lot of time taking care of that relationship,” he says.
“It’s interesting to see how Paul must transform from a young adult who, when we first meet him, is a guy dreaming about a girl who doesn’t want to practice with his mother at the breakfast table. But through the course of it, he becomes, first of all, a man, and then this superpower in a way.”
Indiewire’s Bill Desowitz describes the transformation of Frank Herbert’s classic sci-fi novel as “a high-octane Lawrence of Arabia in space: an epic love story and political adventure.” It’s the action that takes center stage in Dune: Part Two, requiring a faster pace, more compression of time and less exposition over the two hours and 46 minutes of runtime.
“We wanted the film to shift as nimbly as possible between the ‘bignormous’ and the intimate, while still devoting time to craft Paul and Chani’s relationship,” Walker told Desowitz.
“In terms of our editing process,” he said, “the most significant focus was on scene positioning. Editing a big ensemble film is like making an Alexander Calder mobile. Lean too heavily on one aspect and the whole thing tilts; spread the storylines too evenly and it’ll damage the impact of the design. It’s a delicate juggling act.”
Villeneuve conceived of Part One as “the appetizer” with the sequel being the main course, “where everything’s set up, and you can just enjoy a damn good action-adventure story,” Walker explained to Hullfish.
“That’s not to say I don’t appreciate great dialogue, which I really do,” he said. “We spend hours and hours, finessing, improving, trying to get the best clarity. Generally, in this story, clarity is a hyper narrative. Dune is a vast ensemble piece with a complex story and complex backgrounds and Frank Herbert’s almost fractal approach to storytelling [so] we had to have utter clarity and delivery of ideas.”
The scene, which introduces the film’s villain, Feyd-Rautha (Austin Butler), is a standout and not just because it is shot in stark black and white using infrared photography. It’s a gladiatorial fight in a huge arena that is both a display of spectacular power and ambition and a duel between the minds of Baron Harkonnen (Stellan Skarsgård) and his ruthless nephew.
“What was fun cutting that sequence was creating a world not just visually, but in sound terms, something that doesn’t sound like a 21st Century sports event but has its own unique flavor,” Walker told The Credits. “We spent a long time developing layers upon layers of different Harkonnen sounds.”
These include sounds based on the [native New Zealander] Māori Haka chant. The sound team gave Walker stems of all sorts of audio that he could layer into his timeline.
“In the middle of the timeline was the pre-mixed dialogue and spot dialogue,” he told Hullfish. “They recorded at some point, a huge group of heavy metal rockers to kind of amplify the kind of psychotic nature of the crowd.”
The audio was so complex he would turn it off at times and cut the visuals first. “I’ve got 36 sound channels, all staggered and overlaid. If you are adjusting one image, you’re adjusting 36 cuts. So the most efficient way to adjust that kind of sequence is to turn sound off, cut the sequence and then work on the sound to kind of knit it back together again. Sometimes I just want to feel the rhythm of things in my head in silence, and then you can kind of complement it with the sound.”
The film’s most euphoric sequence is Paul’s triumphant ride of a colossal sandworm. It’s a scene that has been 40 years in the making since Villeneuve first drew storyboards of it as a teenager and required three months on location.
“The very first thing I saw of Dune: Part Two was previs for that scene,” Walker tells IBC365. “It was meticulously worked out and shot by a dedicated unit under the command of producer and second unit director Tanya Lapointe but in the cutting room it was like a jigsaw.
“Denis described the effect he wanted as being a kid on the back of a school bus, the axle bumping,” Walker continues. “There was to be the sense of there being no purchase on the worm. You can’t just lie there because it will throw you off. Then, Denis said, ‘it was like being on a skyscraper — a skyscraper turning.’
“When he used those words, Chris [Christos Voutsinas, additional editor] and I dug into our archive of sounds for girders grinding and massive ships moving to match with the huge strength of this worm coursing through the sand.”
The first cut of the sequence was so cacophonous that it dissipated the overall impact. Then they began to deconstruct it. “When everything is noise, the music’s pounding and people are screaming, there is no shape,” Walker says. “So, we turned off the music in the first part of the scene. As Paul sees the worm and begins to run towards it, we just play sound FX to build the anticipation and anxiety of this unpredictable unstoppable beast.”
To emphasize the major story point of this scene, we hear Dune’s signature tune. “It’s our Bond theme,” Walker says. “We’ve deliberately starved the film of that particular piece of music until the point that Paul stands up on the worm. There is something religious about that moment.”
The “True Detective” Editors Want You on the Case
TL;DR
Editors Mags Arnold, Brenna Rangott and Matt Chessé discuss the making of “True Detective: Night Country,” directed and written by showrunner Issa López and starring Jodie Foster and Kali Reis.
Speaking to Matt Feury on “The Rough Cut,” the editors discuss how “clumping” is better than cross-cutting, and how they leave the audience “breadcrumbs” of clues to be able to almost, but not quite, solve the puzzle at home.
One chief task was to unpack a story-within-a-story told over time — the backstory between the two lead detectives using flashbacks and audio/video of Matthew Broderick’s Ferris Bueller singing “Twist and Shout.”
A clue to the popularity of HBO anthology series True Detective is that viewers themselves enjoy turning sleuth. In the latest hit season, Night Country, directed and written by showrunner Issa López, the task was to solve the mysterious disappearance, and even more mysterious discovery, of eight men from a research station in the Alaskan ice. The law enforcement officers on the case are Detectives Liz Danvers (Jodie Foster) and Evangeline Navarro (Kali Reis). Mags Arnold, Brenna Rangott (who previously worked with López on the 2019 series Britannia) and Matt Chessé, ACE, all served as editors of the season.
A chief task was to unpack a story-within-a-story told over time — the backstory between Danvers and Navarro using flashbacks and audio/video of Matthew Broderick’s Ferris Bueller singing “Twist and Shout.”
“Some flashbacks organically made their way into the edit as the series progressed,” Rangott told The Rough Cut’s Matt Feury. “Those were places where we felt like it was helpful to nudge the audience. But as far as ‘Twist and Shout,’ that was all very much in the script. It wasn’t an afterthought but there were moments where we needed to add more of it to give the audience an idea of its link to everyone’s past traumas.”
She adds, “There’s a sort of cross-pollination of traumas coming into reality and ‘Twist and Shout’ is a good example of it. We wanted to hit on it to give the audience an idea that Danvers associates that song with her past trauma. There were a few more moments where we added it where it wasn’t scripted.”
The drama reaches its pivotal moment at the end of Episode 6. For Chessé this was “the control point that told us how much we needed, how much we knew, and when we knew it. So, we reverse-engineered it from Episode 6,” he said.
“It was a cool process. People would slip stuff under the door, and I’d have to craft it into my episode. I think that’s the way it has to be on something like this. It’s like you’re having a dialog throughout the show with these elements and you have to work it where it happens.”
The trio could access a pool of shared shots to help assemble their individual episodes.
“Sometimes you need a shot that acts like a palate cleanser,” says Chessé. “But it’s got to resonate with whoever you’re leaving or who you’re going to. And they would double up sometimes. The assistants had to go through the episodes and spot, ‘We’ve seen that snowman too many times. We can’t repeat ourselves.’”
Fans of the franchise enjoy playing detective, so the filmmakers pepper the series with clues and red herrings giving them enough rope to solve the puzzle but not too much as to be confusing.
“We’re not beating things over the head, leaving things for people to interpret and talk about after the episode,” Chessé says. “I think Issa had a great sense of playing with that. She knew what to hang on to, what to hold back, what to pay off, and what to let go. It seems like she had great taste because everybody loved the show. I didn’t have people afterward asking me to explain things to them. I think the takeaways were solid.”
On reading the script, Arnold cites a scene between Danvers and Navarro in Episode 1 where they talk about something that’s happened in the past. “What they said made me think that there was some sort of hinterland and that they’d had a prior relationship. I thought, ‘Not only is this Jodie Foster, True Detective, and Issa López, but now Jodie is going to play a gay person as well.’ I was super excited. Of course, it didn’t go that way. I didn’t misunderstand, but it could be interpreted either way.”
To help them assemble all the story elements in an order that ultimately made sense and kept audiences engaged, they employed a process they call “clumping.”
Arnold explains, “It’s when, instead of cross-cutting between characters, you go to two different characters playing their scene and then you come back, which is how it was written, just to keep it exciting. What we started doing was clumping things together so that you could get a sense of being with these characters a little bit more.”
Chessé details this further, “A lot of times you watch a show that has multiple characters and there are certain things you’re more into than others. But you don’t want the audience to have that feeling of, ‘I don’t want to be with this person right now. I want to stay with that person.’ You want to feel like you’re cutting away at a point that seems organic in terms of interest level and comprehension of the story,” he says.
“You’re on a journey collecting little clues so you’re going through the episode thinking, ‘I should remember that. That seemed important.’ You have to lay that breadcrumb trail out for people in terms of their interests and allegiances to the characters so that they’re bonding with this entire town that they have to meet. What is their relationship? We don’t say it overtly. You have to have those ‘reveal’ scenes close enough together that you can track them. If you put them too far apart then you’ll forget how everything’s connected.
“That’s the strength of being the editor, getting to be the first responder. You’re selling it to yourself first. So, whether the director feels a certain way or not, you can use that as an excuse to say, ‘As the audience member, I feel like I need to know this now. Can we try this over here?’ You have to make it make sense to you before it can make sense to other people.”
For those following along at home, that word again is “clumping.”
“I’m working on having it trademarked,” Chessé remarks to Feury, “Are we doing a clinic on clumping at the next ACE gathering?”
Raisani and Spates oversaw an enormous amount of effects work for the otherworldly series, and made extensive use of Ferstl to complete some of this work as part of the color grading process.
Ferstl and finishing editor Mike DeLegal employed Blackmagic Design’s DaVinci Resolve for this unusual workflow, which bypassed the standard VFX protocol of pulling each shot and sending it out to a dedicated effects house to do the work.
“Visually, the show remains loyal to its source material, impressively recreating the stunning landscapes and dynamic action that initially captured fans’ hearts,” says Karama Horne in her review of Netflix’s Avatar: The Last Airbender, a live-action adaptation of the immensely popular animated Nickelodeon series.
“The top-notch production design brings settings like Ozai’s eternally flaming throne room and the windswept vistas soaring over Aang’s sky-bison Aapa’s fur to life [see above]. Some characters, like the Kyoshi Warriors, appear lifted straight from the iconic frames of the cartoon.”
The series debuted at number one and was almost immediately renewed by the streamer for two more seasons.
Executive producer/VFX supervisor Jabbar Raisani and VFX supervisor Marion Spates oversaw an enormous amount of effects work for the otherworldly series and made extensive use of senior colorist/finishing artist Siggy Ferstl of Company 3 to complete some of this work as part of the color grading process, rather than going through the standard VFX protocol of pulling each shot and sending it out to a dedicated effects house to do the work.
This method, which considerably expanded what could traditionally be accomplished in color grading sessions, was possible because of Ferstl’s dedication to using the tools within his color corrector of choice, Blackmagic Design’s DaVinci Resolve, and because of that manufacturer’s pronounced efforts to constantly expand the toolsets within Resolve. Raisani, Spates and Ferstl will speak about this unusual workflow during the discussion, “Avatar: The Last Airbender – Expanding the Role of the Colorist into VFX Artist.” Moderated by colorist, image scientist and educator Cullen Kelly, the session is part of the Core Education Collection: Create Series, and will be held at 4:00 PM on Monday, April 15 in rooms W216 and W217.
The VFX supervisors and Ferstl will explain how this workflow came about, inspired in part by some shots Ferstl had done with Raisani on Netflix’s hit series Lost in Space; how the workflow provided the series’ creatives a significantly more immediate and interactive way of iterating many of the visual effects shots, sometimes saving days in the process; and how the combination of VFX and final color grading helped to facilitate discussion and bring a unified feel to some of the complex imagery that could normally involve days of back-and-forth notes.
Ferstl will also drill down into some of the most challenging aspects of this work, which he accomplished alongside finishing editor Mike DeLegal (sharing media and timelines within Resolve); which of Resolve’s many relatively new tools they used; and how his extensive preparation for the project allowed him to hit the ground running as scenes started coming in to Company 3. Effects he created, either wholly or in part, included altering foliage and surroundings to create certain fantasy environments, building and integrating digital lighting effects, creating digital “lens” and diffusion characteristics over the finished image, and enhancing a number of key time-travel-related transitions in the series.
Panelists will also share their thoughts about where this expansion of the color grading process into aspects of VFX creation could offer creative value for more shows in the future, and discuss what skills current and prospective colorists should consider developing in order to be prepared.
The one-hour event, complete with illustrative scenes from the series and a special breakdown reel prepared by Ferstl, promises to be fascinating and informative for anyone currently working in the fields of color, VFX, overall post or production and direction of VFX-heavy shows, as well as those who hope to be soon.
March 31, 2024
Research: Peak TV Has… Peaked. So Here’s What’s Coming Next.
TL;DR
2024 represents the dawn of an uncertain new era for the TV business. After many cycles of seemingly limitless growth, an unmistakable decline has begun, and where it goes from here is anybody’s guess.
Prestige drama output has contracted, with multiple factors behind its demise. But what comes next? Variety’s VIP+ analysis offers some clues.
Post-Peak TV streamers are expected to lean more heavily on international content, take fewer risks, focus on sports and unscripted shows, and release episodes weekly in a return to network TV format and scheduling.
Peak TV, born circa 2013 and dead by 2023 after 10 glorious years of rise and fall, is disinterred and examined in a new report, which also predicts what Hollywood and viewers can expect of the next phase of home entertainment.
In a nutshell: the rise and demise of Peak TV (roughly the period between the debut of House of Cards and the finale of Succession) was the result of a number of factors, not least an oversaturation of prestige drama and a content spend that jumped from $139 billion in 2014 to $243 billion by 2022.
One thing we can be sure of going forward: fewer premium series, and more content like the network and cable TV of old.
2022 was the year TV finally peaked. Luminate data shows that original series output across all major streaming platforms plunged from more than 2,000 titles in 2022 to just over 1,600 in 2023, a 20% year-over-year drop, following two decades of almost continuous growth.
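For readers who want to sanity-check the arithmetic behind these year-over-year figures, here is a minimal Python sketch; the rounded title counts come from the paragraph above, and the function is just the standard percentage-change formula:

```python
def yoy_change(previous: float, current: float) -> float:
    """Year-over-year change as a percentage of the earlier value."""
    return (current - previous) / previous * 100

# Rounded original-series counts cited above (Luminate data)
titles_2022, titles_2023 = 2000, 1600
print(f"{yoy_change(titles_2022, titles_2023):+.0f}%")  # prints -20%
```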
Arguably, as Aquilina outlines, the boom in TV began in the early 2000s, when cable networks realized they could make themselves more valuable to viewers, and therefore gain bigger audiences, by creating slates of original programming that viewers couldn’t find anywhere else. Think The Sopranos, Sex and the City and Mad Men.
But 2013 was when Netflix entered the equation, bringing originals like House of Cards and Orange Is the New Black.
“This was a key tipping point in the history of TV because 2013 was the first time the FCC measured an annual drop in US pay TV subscribers,” said Aquilina. “In other words, that’s when cord-cutting really got going and first started being whispered about in Hollywood circles.”
Skipping forward to COVID-era 2021 and 2022, the amount of original content on streaming “just balloons” — but early 2022 also marks the first quarter when Netflix began losing subscribers as macroeconomic conditions bite.
“It’s enough to make everybody freak out and pretty much changed the calculus around streaming overnight. All of a sudden, Wall Street investors are paying a lot more attention to how much money these companies are spending and how much they’re losing on the streaming platforms.”
Studios like Warner Bros. were pouring billions of dollars into original content in order to compete with Netflix, racking up massive deficits in the process. This led to cuts in spending, and that meant fewer TV shows.
VIP+ data shows that 2023 was the first year since 2013 when the number of original shows released on SVOD fell — from 1,000 in 2022 to fewer than 800 last year.
But that doesn’t quite paint a full picture. VIP+ and Luminate expect aggregate spend on TV content in Hollywood to go up this year. Only by 2% or $4 billion to around $247 billion, but a rise nonetheless, with Disney expected to be the biggest spender at around $33 billion.
“More significant than just how much these companies are spending is where this money is going. Because companies have shown they’re still willing to shell out for the right programming, which at this point mainly means sports broadcasts.”
Analysts and industry observers expect spending on general entertainment TV to stay flat over the next few years, meaning less money for scripted television and, consequently, fewer shows being produced.
“The industry is headed for a contraction; there’s just not going to be the level of output that we had over the past decade,” said Aquilina. Factor in, too, that time spent by younger audiences on TikTok and other social media eats into time spent with streaming TV.
So how is TV going to change in this post-Peak TV environment? Apart from fewer originals there are likely to be a lot more shows based on popular IPs (like HBO hit The Last of Us) — shows that cater to existing fan bases.
“In other words, TV is going to look a lot more like current film studio slates, with a lot of IP-based blockbuster content. There’ll be a handful of prestige awards-bait titles thrown in there to keep the Emmys coming in.”
Shows like Mad Men, Atlanta or The Bear will have a much tougher time getting greenlit in this environment, he suggests.
The days of a large number of expensive-to-produce or creatively risky shows are likely over. “You can get more for your dollar with unscripted content like reality shows,” he said, citing the Netflix series Love Is Blind.
In tandem with that, expect more reruns of existing content — a trend that already supports a lot of streamed video consumption (think Friends, Grey’s Anatomy, The Big Bang Theory).
There’s going to be a greater reliance on international, non-English-language TV, for one major reason: it’s cheaper.
“These shows can be produced for less money internationally than shows typically cost in the US. They can be acquired for much less than you’d spend to create a new drama or comedy series shot here.”
Korean drama is the logical place to look for a hit, but VIP+ points to shows emanating from Sub-Saharan Africa and India. With the European and US markets pretty saturated, major streamers like Netflix are targeting growth in other territories, and are doing so by investing in local content.
“I wager that an Indian series may very well become the next Squid Game,” Aquilina said. “I think that in the next few years an Indian series really becomes the next big breakout international hit.”
How else will post-Peak TV change? The rise of AVOD and FAST, plus new provisions in the new writers and actors guild contracts, will see ratings reported from streamers who were previously black boxes when it came to exposing how their content fared.
That might be taken as a plus, at least for media companies with broadcast divisions. VIP+ highlights other silver linings too, although they are fairly thin.
For example, if Peak TV was defined by supply far exceeding demand (“too many shows being produced and more options than anybody needed”), then a correction “could actually really benefit the health of the business in the long run,” Aquilina contends.
Because of that consumers may have an easier time finding something to watch — if there are fewer new options coming out every single week.
There may be fewer prestige dramas and comedies being produced, but this could benefit everybody in the industry. That’s because the post-Peak TV landscape will look a lot like the network TV of old: shows with broader appeal, shows with less extravagant budgets, shows with longer seasons, and shows with episodes released weekly.
“This is going to necessitate a less artisanal approach than the one many signature shows of the Peak TV era took, and it’s going to mean a return to the broader sensibilities that defined network television. The new era of TV may result in far fewer classic shows, far fewer experiments, less daring shows, but there is a chance it could put Hollywood on a path to getting its business model back on track.”
March 31, 2024
Cinema Audiences Want That Engagement and Emotion. Here’s How to Rethink Release Strategies.
TL;DR
The movie theater business has now entered the true digital age. That can and should mean more content supply, better rates, and more flexibility for target audiences, says Jackie Brenneman, founding partner of cinema industry consultancy The Fithian Group.
The reason so much discussion surrounds Taylor Swift’s concert film is that its marketing strategy was so effective, states Brenneman.
Since almost all movies exist to allow viewers an emotional experience, a movie theater is the best place to experience that catharsis of emotional release, she emphasizes.
Last year, audiences flocked back to theaters for movies like Barbie and Oppenheimer, and recent research from Omdia predicts that theatrical releases will generate close to $50 billion globally this year.
The panel, moderated by Carolyn Giardina, senior entertainment technology & crafts editor at Variety, will also feature Laurel Canyon Live president John Ross and independent director Sam Wrench.
“It’s still a very active and viable distribution platform for content owners and a place to experience something unique for consumers,” Brenneman says. “We just need to get a lot smarter at marketing it.”
“It’s not that music is specifically the future of cinema, but that the future of cinema is increased diversity of content appealing to all audiences,” says Brenneman.
It just so happens that Taylor Swift’s Eras tour concert film is tailor-made as a case study.
“It’s about marketing and awareness. So, the big differentiator and the reason we’re having this discussion surrounding Taylor Swift, was that the marketing strategy was so effective. In a single tweet she was able to alert hundreds of millions of people to her new movie, at a time when fans either couldn’t afford tickets or found her tour sold out.
“She used cinema as a means to go direct to fans. And it was just perfect timing,” says Brenneman.
“Of course, not everyone has such a great connection with fans as Taylor Swift but there are a lot of lessons to be learned. It showed that there is a way to really tap into fan desire and to do so affordably and effectively.
“With Barbie and Oppenheimer, the same idea applied. Fan-driven attendance and awareness is really key. It meant that marketing is both the challenge and the opportunity,” Brenneman elaborates.
It’s also important to understand that the movie theater industry has undergone a seismic change in the last couple of years.
For virtually the entire history of the movie theater business there has been a large fixed cost to getting any piece of content into a theater: it would cost a thousand dollars or more to make and deliver a film print.
Studios had to be selective about which theaters they picked so as not to play too many competing films in certain markets. With the switch to digital, the industry introduced the virtual print fee (VPF), through which studios subsidized theaters’ cost of transitioning to digital projection equipment. Now those VPFs have expired.
“To all intents and purposes, they are gone so all of a sudden, we are in a true digital age of cinema,” Brenneman says. “That can and should mean more supply, better rates, and more flexibility to target audiences.
“We are just at the dawn of this. What the Eras concert showed is that you can make an offer to the entire market and allow exhibitors greater control.”
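To make the fixed-cost argument concrete, here is a back-of-the-envelope sketch; the $1,000-per-print figure is the one cited above, while the screen count and per-title digital delivery cost are illustrative assumptions only:

```python
# Back-of-the-envelope: releasing one title to 3,000 screens.
screens = 3000                    # illustrative assumption
cost_per_film_print = 1_000       # per-print figure cited above
cost_per_digital_delivery = 50    # illustrative assumption

print(f"Film prints:      ${screens * cost_per_film_print:,}")        # $3,000,000
print(f"Digital delivery: ${screens * cost_per_digital_delivery:,}")  # $150,000
```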
Prior to forming The Fithian Group, Brenneman was EVP and General Counsel to the National Association of Theatre Owners. In those roles, she was a frequent speaker and panelist at global exhibition events on the importance of data and optimism in shaping the narrative and future of the industry.
Recent research has shown how important social media (especially influencers or creators) is in driving awareness of new film and TV shows. While this should be part of the media marketing mix, a reliance on social could exclude large parts of the population.
“We risk getting into this vicious cycle where we think only certain types of people go to the movies, because those are the only people that we are actually speaking to,” Brenneman says.
“When you break down the demographics of big blockbusters, it’s clear older audiences are coming in the same proportion that they were before the pandemic.”
Movie theaters, she will argue, have such a special place in the community. “Not all communities exist online. We know how to start to reach online communities but what about real communities of real people? A lot of influencing is done in the real world. Movie theaters are right there in the hearts of their communities. They can actually speak to real communities, offer promotions to local schools, or local senior centers, charities, and community groups.”
“If you’re able to tap into your local in-person influencer groups and market to them you can find the groups that would be most likely to see something like Taylor Swift or the Metropolitan Opera or whatever narrative feature or alternative content you are showing.”
A lot of recent discussion about the future of cinema has pitted it against streaming, which she says she never understood. “Streaming is an in-home option and far more of a competition to cable, but it got into consumers’ minds that home viewing was a replacement for cinema. Which is bizarre, because when people were going to Blockbuster every weekend, renting three movies and still going to the theaters, no one thought to question its viability.”
Brenneman continues, “Post-COVID, cinema has come back stronger. It’s clear they are not going to kill one another. Even though consumers’ streaming habits became entrenched while they were stuck at home, when movie theaters re-opened they started coming back in record numbers.”
That has to be because going to the movies is different than watching a movie at home; it is a completely different experience. This is not only because movies played back on IMAX-style large-format screens with properly calibrated surround sound are in fact a better and different experience than watching at home; there is also neurobiology research to back it up.
“Human beings didn’t evolve to actually regulate and feel and process emotions alone. We look to others to validate our response,” Brenneman says. “We don’t know how to feel when we are alone. We don’t laugh all by ourselves. We need to read the room and learn how to feel. The human emotional experience is a shared experience not an individual experience. We didn’t evolve that way.”
“Since almost all movies exist to allow you to have an emotional experience, a movie theater is the best place to experience that catharsis of emotional release,” asserts Brenneman.
“In addition, there’s still no cheaper way to get out of your house and do something chill, than going to the movies. People can talk about price all they want, but find me a cheaper, chill alternative to take a bunch of friends or family out of your house. There’s nothing better.”
March 31, 2024
Creator Economy Amplified: The New Studio Frontier
Watch “Creator Economy Amplified: The New Studio Frontier.”
TL;DR
The rapid evolution of production technologies is making advanced studio tools and techniques accessible to creators at all levels, enhancing the quality and impact of content across Media & Entertainment.
Video production wizard Ryan Grams from Studio Upgrade joins Jim Louderback, editor & publisher of “Inside the Creator Economy,” and veteran journalist Robin Raskin to share insights into building versatile, efficient, and technologically advanced studios that cater to a range of creative needs.
Backed by his “start with what you have” ethos, Grams founded Studio Upgrade to address the burgeoning need for professional-grade virtual communication during the pandemic, providing online production courses alongside everything from pre-built kits to bespoke high-end studios.
Louderback and Raskin point to the proliferation of virtual production tools in our phones and on social media platforms that are enabling solo creators to produce high-quality content from anywhere, breaking down traditional barriers in the creator economy.
The rapid evolution of production technologies is revolutionizing the creator economy, democratizing content creation by making advanced studio tools and techniques accessible to creators at all levels. This transformation is not only enhancing the quality and impact of content but also expanding the creative possibilities for storytellers, helping to shape the future of Media & Entertainment.
As part of NAB Amplify’s “Creator Economy Amplified” series, video production wizard Ryan Grams, CEO & Founder of Studio Upgrade, sat down with Jim Louderback, editor and publisher of Inside the Creator Economy, and veteran journalist Robin Raskin, founder and CEO of Virtual Events Group, to share their insights into building versatile, efficient, and technologically advanced studios that cater to a range of creative needs. From immersive environments to flexible, home-based setups, discover how the new studio frontier is expanding the horizons for creators everywhere. Watch the full discussion in the video at the top of the page.
The conversation offers a preview of the all-new Creator Lab at NAB Show, a dynamic space dedicated to exploring the newest trends and technologies driving the creator economy. Led by Louderback and Raskin, the Creator Lab will host an extensive lineup of discussions and interactive workshops, with industry experts including Grams offering valuable insights and practical skills to attendees.
The Transformation of Content Creation
For Grams, who has more than two decades of experience creating compelling content for powerhouse brands like Google, UPS and Walmart, the onset of the pandemic marked a pivotal moment.
“A big thing that I noticed, of course, was all the virtual meetings that we were having,” he says, describing his background up until then serving as a shooter, editor and producer.
“Quite frankly,” he shares, “the way that I was able to keep putting food on the table was to begin helping professionals and leaders and executives level up the way that they were showing up in their virtual content.”
Backed by his “start with what you have” ethos, Grams founded Studio Upgrade to address the burgeoning need for professional-grade virtual communication during the pandemic, providing a wealth of online production courses alongside everything from pre-built kits to bespoke high-end studios.
The traditional model of showing up to a location with heavy equipment gave way to a more democratized approach, where creators could build their own studios with guidance from experts like Grams. “Instead of what used to be me showing up to some place with a bunch of cameras and lights, I was teaching people how to start their own — sometimes small, sometimes large — personal studios, and that’s now become my livelihood,” he explained.
Cost-Effective Studio Technologies for Solo Creators
Grams’ transition to teaching others how to build virtual studio setups mirrors a wider shift in the creator economy, highlighting how accessible virtual production technologies are enabling creators to produce high-quality content from anywhere. This change is breaking down traditional barriers, making it easier for more creators to share their stories and connect with audiences globally.
Whether enhancing virtual meetings or creator-led productions, “there’s just such great technology out there,” says Louderback. “The ability to put up a green screen and do compositing and make it look really good, with a much better camera, and just a slightly faster computer, can really add so much to what you do.”
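As a concrete illustration of the compositing Louderback describes, here is a minimal green-screen sketch in Python using OpenCV; the file names and the HSV threshold range are placeholder assumptions you would tune for your own footage and lighting:

```python
import cv2
import numpy as np

# Placeholder file names; substitute your own frames.
foreground = cv2.imread("talent_on_greenscreen.png")
background = cv2.imread("virtual_set.png")
background = cv2.resize(background, (foreground.shape[1], foreground.shape[0]))

# Threshold the green backdrop in HSV space (range is an assumption; tune it).
hsv = cv2.cvtColor(foreground, cv2.COLOR_BGR2HSV)
green_mask = cv2.inRange(hsv, np.array([40, 60, 60]), np.array([85, 255, 255]))

# Composite: background where the backdrop was green, talent everywhere else.
composite = np.where(green_mask[..., None] > 0, background, foreground)
cv2.imwrite("composite.png", composite)
```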
Expectations around video quality and audio clarity have risen, Raskin notes, a trend accelerated by the pandemic. “It was enough that the dog could dance… and we were just happy seeing postage-sized stamps of people on a video call,” she recounts. “We kind of all fell into this together, I think, and came out learning simple techniques about lighting.”
Raskin doesn’t come from a production background, she says, but her experience during the pandemic helped her build a toolkit — an external camera and mic combined with an OBS camera system — that she now carries wherever she goes. “The stakes have gone higher, so much faster,” she comments, “and video has become something of a lingua franca in how we communicate.”
Grams stresses the foundational importance of high-quality audio. “The most important thing to start with is, of course, a better microphone,” he advises, suggesting that even modest investments in audio equipment can drastically improve content quality.
“Buy a $30, $40, $50 microphone, whether it’s a little clip-on lavalier or a podcaster-style microphone,” he urges. But while having a good mic is a great first step, he notes, it still needs to be properly employed. “Having it three feet away is not going to sound good. You gotta have it closer to you. It’s going to make such a world of a difference.”
Quality lighting has also become a lot more accessible to creators, Grams points out. “The whole world has changed with what you can do now with LED lighting,” he says, “and you don’t have to spend thousands of dollars, you can do it for just one or two hundred.”
The Rise of Virtual Production Tools
Virtual production has moved well beyond the pie-in-the-sky realm of The Mandalorian, Grams, Louderback and Raskin agree, with an expanding toolkit now available to creators at all levels.
“If you use TikTok…that’s basically virtual production,” Louderback points out, explaining how the platform’s built-in filters allow users to employ a variety of backgrounds for their content, including news stories. “It’s amazing,” he says of how widespread this type of virtual production technology has become. “And the creators using it on TikTok are doing amazing things with it.”
Our phones, says Raskin, have a range of built-in effects such as focus pullers and color correction that should be explored. “It’s worth thinking of your phone as a camera, and really immersing yourself in that first, and then thinking about a path that you want to grow,” she counsels. “I’m using an external camera now, not a webcam, because I find that picture clearer and just more vivid. And so you will start to see when a tool gets limited and move on to the next one.”
Grams shares his enthusiasm for multi-camera setups and AI-generated backgrounds, which have revolutionized the way content can be produced. “It’s changed the way that video can be produced, which I think is just so much fun,” he says, highlighting the creative possibilities these tools unlock.
“For me, having a wide shot and a tight shot is a really fun and interesting way to add a level of production to streams that I’m doing myself, and just being able to switch between them with some intention,” he details.
Using an overhead camera for product demos to enable picture-within-picture formats adds even more value, says Grams. Real-time workflows can also streamline collaboration and reduce the need for editing. “Treating it like a live production, even if you’re not streaming somewhere, to be able to edit in real time at the push of a button allows you to be more efficient.”
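What “editing at the push of a button” can look like in practice: a hedged sketch of scene switching in OBS Studio via the community obs-websocket-py client (which speaks the legacy 4.x websocket protocol; OBS 28+ ships the v5 protocol with different request names). The host, port, password and scene names are assumptions for your own setup:

```python
# Push-button camera switching for a live-style production in OBS.
# Assumes OBS is running with the legacy 4.x obs-websocket plugin enabled.
from obswebsocket import obsws, requests

ws = obsws(host="localhost", port=4444, password="secret")  # your settings
ws.connect()

# Scene names are assumptions; match them to your OBS scene list.
for scene in ["Wide Shot", "Tight Shot", "Overhead Demo"]:
    input(f"Press Enter to cut to '{scene}'...")
    ws.call(requests.SetCurrentScene(scene))

ws.disconnect()
```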
The Future of Studio Spaces
Studio spaces are evolving as the creator economy continues to expand. To keep up with trends, says Louderback, “follow some of the top creators who are out there, who are doing it, take a look at what they’re doing, and take their advice.”
Raskin highlights the rise of pop-up studios, which offer creators the flexibility to produce high-quality content without the need for significant upfront investment. “You are seeing studios pop up everywhere,” she observes, pointing to a trend that supports the growth of the creator economy by making professional-grade facilities more accessible.
As an example of the versatility of pop-up studios, Grams recounts a recent Willy Wonka parody created for an event using a pop-up facility in St. Paul, Minnesota. “It’s not quite a giant 360 at all, but it’s still a massive screen, and for a much, much more affordable price,” he says.
The technology has completely transformed traditional video production, he says, describing how his crew was able to complete shooting 10 scenes with different backdrops in just eight hours. “We were able to do that, very, very quickly… using AI-generated backgrounds and making changes in real time,” he says.
“That would have just been unheard of [before]. For the size project that it was, we were still able to accomplish a lot using that style of production. But even scaled way down, it’s still changing the way that video can be produced, which I think is just so much fun.”
Going beyond the gear, the most important thing to learn, says Grams, is how to show up and get comfortable, both behind and in front of the camera. “You can do that with your phone and no lighting,” he says. “You can start practicing with whatever you have. Don’t use not having the right gear as an excuse to stop you from starting.”
March 28, 2024
Navigating the New Era of Social Commerce: Creator, Organic and Paid Content Strategies
TL;DR
The social commerce landscape has evolved from prioritizing follower counts to valuing the intrinsic quality of content, necessitating a strategic overhaul in social media approaches.
Dash Hudson’s 2024 social media trends report, “The Next Phase of Creator, Organic and Paid,” categorizes content into Creator, Organic, and Paid, each excelling in specific areas — Creator for engagement, Organic for community building, and Paid for extending reach.
Strategically boosting content, especially Reels for impressions and static posts for engagement, significantly enhances brand visibility and audience engagement.
The integration of generative AI across platforms like TikTok, Meta, and YouTube is transforming brand engagement strategies, offering personalized and interactive user experiences.
TikTok Shop is redefining social commerce, combining sales potential with social engagement in a democratized marketplace, underscored by its rapid growth as a major e-commerce player.
A seismic shift has occurred in the ever-evolving landscape of social media, moving us from a world where follower counts reign supreme to one where the content’s intrinsic value dictates its success. This transformation is a complete overhaul of how brands, creators, and marketers approach social media strategy.
Social media management platform Dash Hudson examines this shift in its 2024 social media trends report, “The Next Phase of Creator, Organic and Paid.” The report dissects current trends and offers a roadmap for leveraging the unique strengths of Creator, Organic and Paid content to forge a path to increased engagement and brand growth.
“In recent years, a new set of rules have emerged,” the report notes. “Short-form video has taken over, and there’s been a significant shift from socially-driven to content-driven feeds, as platforms deemphasize follower counts in favor of the popularity of posts.”
At the same time, audiences are becoming increasingly niche in the pursuit of their personal interests. This means that the traditional one-size-fits-all approach to content creation is fading into obsolescence, replaced by content that speaks directly to highly specific demographics, maximizing engagement and ROI in the process.
Creator, Organic and Paid Each Excel
The report divides content into three distinct pillars — Creator, Organic, and Paid — and explains how each pillar excels in its domain. Creator content, with its authentic voice, drives engagement; Organic content builds community through genuine interaction; and Paid, or boosted, content extends reach beyond traditional boundaries, ensuring that messages penetrate the noise of crowded feeds.
Creators reach niche audiences through existing community relationships while generating more engagement than both paid and organic content. On average, brands had seven creator partnerships in 2023, with most creators posting roughly eight pieces of content for each brand partner, the report found. Most importantly, content posted by creators generated 16 times more engagement than content posted by brands themselves.
Organic content provides a regular cadence of fresh material to build brand loyalty and maintain an engaged community, serving as a good indicator of what resonates with a given audience. Nearly 40% of organic content is static, while 23% is carousels and 38% is Reels, and brands tend to post an average of 11 pieces of organic content each week.
Paid, or boosted, content enables brands to get already high-performing content in front of highly targeted audiences. Boosted posts earn significantly more impressions than creator and organic content, highlighting its vital role in building brand awareness. Reels are the most common format of boosted content, at 50%, followed by Static (32%) and Carousel (18%). Around 9% of brands boost an average of one in every five posts, the report found. Some brands are even boosting up to 70% of their posts, but the brands that boost the highest percentage of content tend to have smaller followings.
Each of these pillars, Dash Hudson argues, “is even more impactful when working in concert with the others as part of a holistic social media strategy.”
This cross-pollination, the report finds, can drive meaningful results. “IAB tracked over 1,000 consumer purchase journeys, finding that advertising alongside creator content can accelerate the purchase funnel, showing a greater impact on building brand loyalty and a 1.3x greater impact on inspiring brand advocacy.”
A Synergistic Approach
More than ever, a siloed approach to content strategy is a recipe for mediocrity. The Dash Hudson report advocates for a synergistic model where “the interplay between Creator, Organic, and Paid content is not just encouraged but essential.” This holistic strategy amplifies brand presence, weaving a narrative that resonates across all platforms and demographics.
Dash Hudson conducted an analysis on the performance of creator, organic, and paid content across a variety of metrics to discern each content type’s strengths and their most effective roles within the content lifecycle.
Creator content shines in engaging niche audiences and boosting interaction rates, achieving a +34% higher engagement rate than organic content and a staggering +316% higher rate than paid content. This underscores its potency in fostering deep connections and stimulating audience participation.
Organic content, on the other hand, is pivotal in cultivating brand loyalty and nurturing a vibrant community. It outperforms paid content with +358% more comments and +104% more likes, and also surpasses creator content with +53% more comments and +18% more likes, highlighting its value in sustaining active and engaged user interactions.
Paid content is instrumental in expanding brand visibility, delivering three times more impressions and six times more video views than organic content, irrespective of the investment size. When compared to creator content, paid content generates seven times more impressions and video views, showcasing its unparalleled capacity to broaden reach and attract new audiences.
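Taking the report’s multipliers at face value, a small Python sketch shows how the three pillars relate once you fix a baseline; the 2.0% organic engagement rate is an illustrative assumption, not a figure from the report:

```python
# Relative engagement per Dash Hudson's stated multipliers.
organic = 2.0              # assumed baseline engagement rate, in %
creator = organic * 1.34   # creator content is +34% vs organic
paid = creator / 4.16      # creator is +316% higher than paid

for pillar, rate in [("Creator", creator), ("Organic", organic), ("Paid", paid)]:
    print(f"{pillar:<8} {rate:.2f}%")  # Creator 2.68%, Organic 2.00%, Paid 0.64%
```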
The power of paid content cannot be overstated. The report highlights how “strategically boosting content — particularly Reels for impressions and static posts for engagement — can significantly elevate a brand’s visibility.” Moreover, “entertaining content, when boosted, sees exponential gains,” underscoring the importance of not just what you share, but how it captivates your audience.
Overall, boosting content builds brand awareness, “breaking through the algorithms to place your brand directly in front of the eyes that matter most.”
Boosting Reels grows impressions, and boosting static content grows engagement rate — “the metrics each format craves, tailored to maximize the inherent strengths of each content type.”
Boosting entertaining content drives much higher performance across the board, proving that “entertainment is not just king but the ace in the deck for social media strategy.”
Leveraging AI for Content Optimization
Dash Hudson outlines significant trends and developments in social media platforms themselves driven by new technologies and changing user behaviors. The report highlights three key shifts: the integration of generative AI into platform experiences, the resurgence of social commerce with a focus on TikTok, and the growth of direct messaging as a crucial engagement tool. It notes that 64% of marketers currently utilize AI, recognizing its value and planning continued investment.
Specific AI enhancements across platforms are transforming how brands engage with audiences, Dash Hudson finds. TikTok has introduced an AI-powered “Creative Assistant” designed to aid in campaign creation, complemented by custom AI Chatbot Creation tools developed by its owner, Bytedance.
Meanwhile, Meta is unveiling generative AI tools that facilitate video and photo creation/editing from text prompts, alongside expanding AI chat personas to include celebrities like Kendall Jenner and Tom Brady, and is exploring new chatbot creation tools. This is part of Meta’s broader strategy to integrate AI across its advertising products.
Instagram is in the process of testing generative AI features, including custom sticker creation and visual editing tools for uploaded content. YouTube has launched “Dream Track,” an experimental generative AI tool that enables users to create music in the style of various famous artists.
These advancements underscore a significant shift towards more interactive and personalized user experiences, offering brands novel ways to capture attention and stay ahead in the ever-evolving social media landscape.
TikTok Shop: Winner Takes All, and Anyone Can Win
In the dynamic realm of social commerce, TikTok Shop emerges as a groundbreaking platform, blending the immense potential for sales with the power of social engagement. The Dash Hudson report illuminates this platform as “a democratized marketplace where authenticity and entertainment are paramount for direct sales success.”
A standout revelation from the report is TikTok Shop’s meteoric rise in the e-commerce domain, “ranking as the 12th largest e-commerce retailer in the US market and the 5th largest in the UK market in 2023.” This rapid ascent highlights TikTok Shop’s substantial impact and its capability to captivate both users and brands, solidifying its position as a formidable force in e-commerce.
Insights into consumer behavior reveal how TikTok Shop’s unique integration of shoppable videos and livestreams significantly influences purchasing decisions. The platform’s innovative approach to social commerce sales, supported by detailed audience buying behaviors and sales metrics, offers brands a clear blueprint for engaging potential customers effectively.
Technological innovations within TikTok Shop, such as AI-driven personalization and AR features for virtual try-ons, are enhancing the shopping experience, making it more interactive and personalized. These advancements are pivotal in attracting and retaining users, offering them a seamless and engaging shopping journey.
Looking ahead, the report provides a glimpse into the future of TikTok Shop and the broader landscape of social commerce. As we navigate this new era of social media, the fusion of creativity, strategy, and technology becomes increasingly crucial. This report not only sheds light on the current trends and strategies but also offers a glimpse into the future of social media marketing — a future where engaging content, powered by sophisticated AI tools and platforms like TikTok, leads the way.
Generative AI uses very powerful machine learning methods, such as deep learning and transfer learning, on vast repositories of data to understand the relationships among those pieces of data — for instance, which words tend to follow other words. This allows generative AI to perform a broad range of tasks that can mimic cognition and reasoning.
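At its simplest, “which words tend to follow other words” is a next-token frequency table. This toy bigram counter in Python illustrates the idea; real generative models learn these relationships with deep neural networks over vastly more data, not lookup tables:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which: the crudest form of learning
# the relationships among pieces of data.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# The most likely continuation of "the" under this toy model:
print(follows["the"].most_common(1))  # [('cat', 2)]
```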
One problem is that output from an AI tool can be very similar to copyright-protected materials. Leaving aside how generative models are trained, the challenge that widespread use of generative AI poses is how individuals and companies could be held liable when generative AI outputs infringe on copyright protections.
When Prompts Result in Copyright Violations
Researchers and journalists have raised the possibility that through selective prompting strategies, people can end up creating text, images or video that violates copyright law. Typically, generative AI tools output an image, text or video but do not provide any warning about potential infringement. This raises the question of how to ensure that users of generative AI tools do not unknowingly end up infringing copyright protection.
The legal argument advanced by generative AI companies is that AI trained on copyrighted works is not an infringement of copyright since these models are not copying the training data; rather, they are designed to learn the associations between the elements of writings and images, like words and pixels. AI companies, including Stability AI, maker of the image generator Stable Diffusion, contend that an output image provided in response to a particular text prompt is not likely to be a close match for any specific image in the training data.
Establishing infringement requires detecting a close resemblance between expressive elements of a stylistically similar work and original expression in particular works by that artist. Researchers have shown that methods such as training data extraction attacks, which involve selective prompting strategies, and extractable memorization, which tricks generative AI systems into revealing training data, can recover individual training examples ranging from photographs of individuals to trademarked company logos.
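Researchers typically quantify that “close resemblance” with overlap metrics. A minimal sketch, using verbatim word 8-gram overlap as a crude proxy for extractable memorization (production systems use far fuzzier similarity measures, and the two sample strings here are invented for illustration):

```python
def shared_ngrams(a: str, b: str, n: int = 8) -> set[str]:
    """Word n-grams that appear verbatim in both texts."""
    def grams(text: str) -> set[str]:
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
    return grams(a) & grams(b)

model_output = "call me ishmael some years ago never mind how long precisely"
training_doc = ("call me ishmael some years ago never mind how long precisely "
                "having little money")

overlap = shared_ngrams(model_output, training_doc)
print(f"Shared 8-grams: {len(overlap)}")  # 4 -> likely memorization, not chance
```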
Legal scholars have dubbed the challenge of developing guardrails against copyright infringement in AI tools the “Snoopy problem.” The more a copyrighted work protects a likeness — for example, the cartoon character Snoopy — the more likely it is that a generative AI tool will copy it, compared to copying a specific image.
With respect to model training, AI researchers have suggested methods for making generative AI models unlearn copyrighted data. Some AI companies, such as Anthropic, have announced pledges not to use data produced by their customers to train advanced models such as Anthropic’s large language model Claude. Methods for AI safety such as red teaming — attempts to force AI tools to misbehave — or ensuring that the model training process reduces the similarity between the outputs of generative AI and copyrighted material may help as well.
Role for Regulation
Human creators know to decline requests to produce content that violates copyright. Can AI companies build similar guardrails into generative AI?
Given that naive users can’t be expected to learn and follow best practices to avoid infringing copyrighted material, there are roles for policymakers and regulation. It may take a combination of legal and regulatory guidelines to ensure best practices for copyright safety.
For example, companies that build generative AI models could use filtering or restrict model outputs to limit copyright infringement. Similarly, regulatory intervention may be necessary to ensure that builders of generative AI models assemble datasets and train models in ways that reduce the risk that the output of their products infringes creators’ copyrights.
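What such output filtering might look like mechanically, as a hedged sketch: compare a candidate generation against a corpus of protected works and withhold anything too similar. The threshold, the similarity measure (difflib’s ratio) and the example strings are all assumptions; real guardrails would combine many signals:

```python
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.6  # assumption; would be tuned per deployment

def release_or_block(generation: str, protected_docs: list[str]) -> str:
    """Crude guardrail: withhold output that closely matches protected text."""
    for doc in protected_docs:
        ratio = SequenceMatcher(None, generation.lower(), doc.lower()).ratio()
        if ratio > SIMILARITY_THRESHOLD:
            return "[withheld: output closely matches protected material]"
    return generation

print(release_or_block("a wholly original sentence",
                       ["some protected passage of text"]))  # passes through
```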
March 27, 2024
GenAI Is Good for Artists, So What’s the Problem?
Video created by sculptor/artist Alex Reben using OpenAI’s Sora
TL;DR
OpenAI executive Peter Deng explored the role of humans in the age of AI in a provocative talk at SXSW.
He defends the company’s accelerated release of AI tools and its ongoing development of AGI as being for the benefit of humanity.
Asked directly whether creators should be compensated for the use of their work as AI training data, Deng as good as says no.
OpenAI is voluble about its mission to deliver all the benefits of AI to humanity, but is non-committal at best on whether it should be paying creators for the work its machines are trained on.
Quizzed on this, Peter Deng, OpenAI’s VP of consumer product and head of ChatGPT, told SXSW, “I believe that artists need to be a part of that ecosystem, as much as possible. The exact mechanics I’m just not an expert in. But I also believe that if we can find a way to make that flywheel of creating art faster, I think we’ll have really helped the industry out a bit more.”
The implication: generative AI should be viewed as a definite plus by the creative community, who should all be thankful to it for speeding up their process, and quit moaning about being compensated.
Asked directly whether artists deserve compensation, Deng avoids a direct response.
“How would I feel if my art was used as inspiration [for an AI]? I don’t know,” he said. “I would have to ask more artists. I think that, in a sense, every artist has been inspired by some artists that have come before them. And I wonder how much of that will just be accelerated by AI.”
Nothing to see here then, creative community. Move along.
Deng’s main message in the provocative hour-long moderated debate was that AI and humanity are going to “co-evolve,” so get used to it.
“I actually believe AI fundamentally makes us more human,” Deng declared. “It’s a really powerful tool, it unlocks the ability for us to go deeper and explore some of the things that we’re wondering about,” he said.
“Fundamentally, our minds are curious and what AI does is lets us go deeper and ask those questions.”
In his example, someone learning about Shakespeare might struggle to get past the language or understand the play’s context. But they could boost their appreciation of the text by quizzing an AI.
In a similar way Deng imagines everyone having a personal AI that they could interact with for any number of reasons such as bouncing around ideas, problem solving or answering questions.
In this sense, AI is an evolution of the printed encyclopedia, of Wikipedia, or of the internet search engine.
“We are shifting in our role from being the answers and the creators to more of the questioners and the curators,” he said. “But I don’t think it’s a bad thing. If you take a step back, what’s really interesting about AI is that it gives us this tool, this new primitive that we can start to build on top of.”
The calculator is another analogy. Instead of spending time doing arithmetic, we can now think about higher-level mathematical problems. Instead of spending time recalling every single fact, we have Google or databases where knowledge resides, allowing us to ask higher-level questions.
“The level of skill that humanity has just keeps on getting pushed up and up and up with every sort of big technology. Since AI is such a foundational technology we’re going to be able to push our skill level up and up.”
Kids, he suggests, could use AI to program, learning how to code even before they learn how to write.
You can’t really argue with this sort of vague and optimistic approach to AI. It’s Deng’s job, after all, to promote OpenAI’s development.
He goes on to talk about how the mission of the company inspired him to join it from his previous role at Meta. He claims to want to help create “safe” artificial general intelligence, or AGI, the next level of the technology that OpenAI is working on, and to “distribute the benefits to all of humanity.”
Deng said, “I’ve never seen a technology in my lifetime that’s this powerful, that has this much promise. Just to be a part of something that’s going to be so beneficial to humanity if we get it right. And I just want to not mess it up.”
However, interviewer Josh Constine, the former editor at large of TechCrunch and now a venture partner at early-stage VC firm SignalFire, is no fool. He does ask probing questions of Deng, such as whether bias in training data sets is a concern and what OpenAI is going to do about it.
Deng essentially says it’s up to the user to decide, seemingly absolving OpenAI of responsibility.
“My ideal is that AI can take on the shape of the values of each individual that’s using it. I don’t think it should be prescriptive in any such way.”
Constine tries to get Deng to agree that giving AI a standard set of ethical values must be a good thing for all of mankind, not just an AI which is super intelligent but one which is “empathetic.”
Deng ducks the topic with more platitudes. “The beautiful part of humanity is that different parts of the world have different cultures and different people have different values. So it’s not about my values that I want to instill; I would just hope that we’re able to find some way to take the world’s values and instill them.”
Later in the interview he gives this revised approach: “How do we find ways to instill the values that we have and [impart that] learning to AI so that AI can kind of be a part of our coevolution?”
Would Deng trust an AI to defend him were he theoretically in court?
“[If] I were ever to be falsely accused of a crime I would absolutely like to have AI as a part of my legal team. One hundred percent.” AI would act as an assistant to the legal counsel, “just listening to the testimony and in real-time, cross-checking the facts and the timelines, being able to look at all the case law and the precedent, and to suggest a question to a human attorney. I think there’s absolutely human judgment involved. But that level of sort of super power assistant is going to be really powerful.”
That said, Deng wouldn’t yet trust AI for everything. Just as one might use the autonomous functions of a car, it will take time to build up trust in the machine. A key part of the evolution for Deng and OpenAI is real-world learning. OpenAI argues that the reason it releases ChatGPT and other large language models into the world is to test, trial, adapt and improve them with constant iteration outside of a lab. Deng argues this makes the AI better for humans in the long run.
“I think that the path of how we get there, the repeated exposures and experiencing of it, is a huge part of the coevolution. We’re not developing AI and keeping it in the lab. We’re trying to make it generally accessible to other people, so that people can try it out and can gain that literacy, and can get a feeling for what this technology can do for you.”
Literacy or education about how to use and work with AI and its potential threats, weaknesses and strengths is, he says, very important. He advocates education schemes that do this and says OpenAI and its investors at Microsoft are already paying for some of these programs.
One way to ensure AI remains a tool for mass use and mass literacy is to make it free. Deng commits to the idea that a version of OpenAI will always be free.
“There should always be a free version. Absolutely. That’s part of our mission — to distribute the benefits to all of humanity. It just so happens that it costs a lot to serve right now.”
He says enterprise users are paying to use OpenAI tools at a price “commensurate with their use,” but some of that value is able to trickle down.
OpenAI wants to push the boundaries of the tech, “but also make sure that we’re developing it in a very safe way,” he claims. “And the way that we build product on the inside is very much a combination of multiple people with multiple different perspectives on what could be.”
Pushed on whether there is a threat from deepfakes and other AI-generated information in this election year, Deng agrees that it does matter. He points to OpenAI’s support of content credential initiatives like C2PA. But will this matter in the longer term? He is not so sure.
“In the future, I don’t know if people will care,” he said. “Walking down the street here in Austin, I’m not sure how much we care that a billboard ad was created using Photoshop or not. Or indeed what tools were used to create that content. I don’t know how people will care [about AI generated content] in future but I do know that if people will care, then it will be corrected for.”
In other words, let the market decide.
Having warmed his subject up with some easy lobs, Constine gets down to the meat of the questioning. Where does Deng stand on how fast AI development from OpenAI and others should move? Should AI development be slowed in order for all its implications for society and industry — and regulatory guardrails — to catch up?
“I’m somewhere in the middle. With any new technology, there’s going to be really positive use cases and there’s some things that we need to really consider. My personal viewpoint is the way that we actually figure out what those challenges are and how we actually solve them is to release at a responsible rate in a way that gives society a chance to absorb and make sure we have the right safeguards in place.”
He adds, “I don’t think that AI will be safely developed in a lab by itself without access to the outside world. Companies are not going to be able to learn how people want to use it, where all the good is, and also what are all the areas that we need to be very cautious about [without release in the wild].”
Constine probes: if an AI makes a mistake, who is responsible? Should that AI model be changed or pulled back? Should the engineer be held liable? Should the company?
Deng reiterates that releasing product is the best way of seeing the good and the bad.
“AI will make mistakes, but it’s important that we release it so that the mistakes that are made are ones for which we’ve already baked in some of the mitigations [safety features]. That iterative deployment is my best bet for how we can advance this technology safely.”
Jameela Jamil will address the importance of authenticity for content creators at 3 p.m. (PT) on Tuesday, April 16 during a conversation with Really Famous host Kara Mayer. You can register to attend for free with code AMP05.
Here, she answers questions from NAB Amplify’s Emily M. Reigart about showing up on a plethora of platforms, breaking through in the Creator Economy, the impact of AI on self-image and her work in podcasting.
Tell us about your social media philosophy. How do you approach creating content and balance the need to “show up” on so many different platforms?
I make and post content that I would want to watch. That way I create a community online of people who have similar interests to me. This makes it far less draining to be consistent. When you’re not being yourself, it is far more taxing.
Regarding showing up across all the platforms, I mostly am able to use the same content, so it’s a low lift. These apps all have such different users and audiences that it has not drawn fatigue from my followers yet!
What advice would you give to someone who wants to work in the Creator Economy?
Find something sustainable and honest. People are sick of over-produced, empty content. This is constant work; it requires planning, thought and vulnerability. Do not fake it, or it will feel like swimming upstream all the time.
Also ask your audience what they like and don’t like. Doing so has helped me grow my podcast and my online following, because it makes my page feel like a community, rather than a stage just for me. I thrive on honest feedback.
You’ve long been outspoken about the impact of edited photos and Facetuned social media. Now, with the popularization of generative AI tools, we’re facing new conversations around content authentication and personal authenticity. How are you thinking about this?
I’m just terrified of it all. Terrified that now people won’t know that they’re comparing themselves to literal AI digital perfection. This is a crisis beyond our understanding. It also opens the doors for so much hostility online, rumors, deep fake incriminating videos, revenge pornography. It’s a nightmare.
You have recorded more than 200 episodes of your podcast, “I Weigh.” What have you learned from working on this project consistently for four years?
I have learned that I have so much still to learn!!!
The arrogance of our current society when we look down on others who are openly learning is something that really concerns me. There are so many fascinating subjects and outlooks on this planet. We cannot possibly expect to know it all. Learning is so much fun and can be such a bonding communal experience.
This podcast has repeatedly blown my mind and expanded my horizons. I’m a more tolerant, humane, humble and thoughtful person for it.
Does your background as a presenter and actor impact how you function as a podcast host?
Absolutely, because my job is to zoom out and think about the show at large. Not just myself. I am constantly producing in my head while I’m interviewing. I’m carving a full picture of someone with my questions, which makes for hopefully a multi-dimensional and thorough conversation.
London Calling: The Complex Production on “Criminal Record”
TL;DR
French cinematographer Laurent Barès discusses the lure of a London shoot and the challenges of bringing a complex narrative to the small screen.
One challenge was to realistically portray working-class neighborhoods and depict a London off the beaten track.
Barès explains why he dislikes formulaic camera work, saying it feels like you’re not telling a story but “shopping for the edit.”
Like any good DP, Barès devised a visual grammar to fit the story, and shot the series on the ARRI Alexa Mini LF equipped with Zeiss Supreme FF lenses.
Peter Capaldi and Cush Jumbo star as detectives drawn together by an anonymous phone call to right an old miscarriage of justice in Apple TV+’s Criminal Record.
Written by BAFTA nominee Paul Rutman and directed by Jim Loach, the series touches on issues of race, institutional failure and the quest to find common ground in a polarized Britain.
“I loved the complexity of this narrative,” French DP Laurent Barès (Gangs of London) told British Cinematographer. “It’s a real challenge to convey this to the audience without them feeling lost. Too many shows today are simplistic, obvious. Life isn’t like that. Criminal Record is a good reflection of the complexity of our lives.”
The Frenchman says he loves London, and that this helped him portray a side of the city beyond the tourist clichés.
“There’s a significant character in Criminal Record that irresistibly attracted me – London. A multicultural, immense city. I love London. I’ve been fortunate to spend several months there because of my profession.”
During research, Barès discovered the work of British photographer Ray Knox, whose color photos of London seemed close to the universe of Criminal Record.
“He captured a modest London, far from tourist spots. The light guides his graphic composition. I also [draw on] photos from each of our location scouts. It was important to choose locations that, in some way, offered a perspective on the city.”
For instance, a lengthy discussion between Hegarty (Capaldi) and DS Cardwell (Shaun Dooley), is set in a bar with large windows. “Behind them, you constantly feel the hustle and bustle of the street, adding an extra dimension to their conversation.
“When DS Lenker (Jumbo) talks with a phone seller, Hasad (Sia Alipour), we moved his stand a meter onto the pavement. This way, for the two-shot, you can see the perspective of Kingsland Road.
A related challenge of Criminal Record was to realistically portray working-class neighborhoods.
“I dislike miserablism,” Barès says. “We strived to maintain a balance between reality and poetry. I drew inspiration from Don McCullin’s photos of Liverpool in the late 1960s — beautiful, realistic, moving, and respectful. The framing is slightly distanced enough to understand where we are but not so much as to ignore the drama of those who live there.”
He shot the show on the ARRI Alexa Mini LF equipped with Zeiss Supreme FF lenses and, as any good DP will do, devised a visual grammar to fit the story.
“Filming an investigation is capturing a thought in motion,” he says. “In every investigation, there is progress, mistakes, setbacks, dead-ends, and successes — all of which evoke camera movements. The approach shouldn’t be illustrative but attentive.”
He says he didn’t want a didactic approach to camera such as opening a scene with a wide and a forward tracking shot, then shot/reverse-shot during dialogue, and a few inserts for editing convenience.
“When you do that, it feels like you’re not telling a story but shopping for the edit. It’s not creative; it’s purely technical. Paul Rutman’s text deserved much better. It alternates action and investigation scenes with their consequences on the characters’ daily lives. There was no way to film them the same way.”
There is camera movement in introspective scenes — such as slow tracking shots accompanying the characters’ contemplation. This helps create an intimacy between the viewer and the characters.
“Filming this show required a lot of sensitivity. There’s no replicable model. Each actor, each scene is different.”
Barès follows the French filmmaking tradition in declaring a hatred for aesthetics for the sake of aesthetics. “Framing, composition only exist if they tell the story,” he declares. “This doesn’t exclude elegance and beauty, but there must be an alignment. Each project dictates its own aesthetics.
“I keep an eye on the second units. I don’t want a Terry Gilliam shot in the middle of a Michael Mann film, or vice versa. Each in its own style. What matters is the coherence from the first to the last shot.”
This consistency of image across the story applies to his work in the grade too. In this case, the colorist is Anthony Daniel (All Quiet on the Western Front).
He talks about his work on this project and approach to colorist collaboration in general during the Frame & Reference podcast, hosted by Kenny McMillan.
“Memories from the shoot help me explain what I want,” he said. “Weather conditions, the sun’s position and so on. I always remind my colorist of the shooting conditions. I don’t understand why sometimes DoPs are asked to work on grading remotely via video from their homes. Physical presence seems indispensable [to create the best work]. Thanks to my producers for respecting that.”
In the podcast, Barès discusses his experience attending a prestigious film school in France, highlights the challenges of entering the industry and the need to keep learning, and expresses frustration with film students’ lack of attention to storytelling and photography.
Posted March 24, 2024
Don’t Treat AI Like Pandora’s Box, Warns Jaron Lanier
TL;DR
Treat AI like an “entity” that is already actively “intelligent” and you risk the world actually descending into “The Matrix,” says tech guru and Microsoft adviser Jaron Lanier.
Because we have large language models that seem to work in the same way that natural biological neurons do, we have erroneously assigned machine and human to the same category.
There’s no magic in the black box of LLMs, Lanier says. We are in charge and can shape AI any way we want.
If you believe Jaron Lanier, there’s no intelligence in our current AI but we should be scared nonetheless. The renowned computer scientist and virtual reality pioneer is a humanist and says he speaks his own mind even while on the Microsoft payroll.
“The way I interpret it is there’s no AI there. There’s no entity. From my perspective, the right way to think about the LLMs like ChatGPT is, as a collaboration between people. You take in what a bunch of people have done and you combine it in a new way, which is very good at finding correlations. What comes out is a collaboration of those people that is in many ways more useful than previous collaborations.”
Lanier was speaking with Brian Greene as part of “The Big Ideas” series, supported in part by the John Templeton Foundation. He argued that treating AI as “intelligent” gives it an agency it technically does not have while absolving us of our own responsibility to manage it.
“There’s no AI, there’s just the people collaborating in this new way,” he reiterated. “When I think about it that way, I find it much easier to come up with useful applications that will really help society.”
He acknowledges that anthropomorphizing AI is natural when confronted with something we can’t quite comprehend.
At present, because we have large language models that seem to work in the same way that natural biological neurons do, we have assigned both machine and human to the same category. Erroneously in Lanier’s view.
“Perceiving an entity is a matter of faith. If you want to believe your plant is talking to you, you can, you know. I’m not going to judge you. But this is similar to that.”
The risk of not treating AI as a human-driven tool is that the dystopian fiction of Terminator will become a self-fulfilling prophecy.
“I have to really emphasize that it’s all about the people. It’s all about humans. And the right question is to assess could humans use this stuff in such a way as to bring about a species-threatening calamity? And I think the clear answer is yes,” he says.
“Now, I should say that I think that’s also true of other technologies, and has been true for a while. The truth is that the better we get with technologies, the more responsible we have to be and the less we are beholden to fate,” he continues.
“The power to support a large population means the power to transform the Earth, which means the power to transform the climate, which means the responsibility to take charge of the climate when we didn’t before.
“And there’s no way out of that chain that [doesn’t] lead to greater responsibility.”
Ultimately, the way to prevent The Matrix from ever happening is to frame AI as human responsibility.
“The more we hypothesize that we’re creating aliens who will come and invade, the less we’re taking responsibility for our own stuff.”
Lanier adds, “There are plenty of individuals at Microsoft who wouldn’t accept everything I say. So this is just me. But at any rate, what I can say is that Microsoft and OpenAI and the broader community do serious work on guardrails to keep it from being terrible. That’s the reason why nothing terrible has happened so far in the first year and a half of AI.”
How XR and AI Can Deliver True Transmedia Storytelling
TL;DR
Rachel Joy Victor, co-founder of fbrc.ai, emphasizes the shift towards immersive, interactive narratives enabled by AI and XR technologies, offering new opportunities for audience engagement beyond traditional formats.
Drawing on a diverse academic and business background with a focus on computational neuroscience and “Spatial Economics,” Victor has designed multiplatform narratives, immersive experiences, tools and platforms for a broad range of household-name clients.
Victor highlights the role of AI in optimizing asset movement across platforms, with companies like ModTech using machine learning for asset optimization and Playbook XR enabling cross-format creation.
Rachel Joy Victor aims to explore how AI can revolutionize storytelling in the digital age, particularly in terms of content creation efficiency.
“Traditional formats will always have their place, but immersive storytelling offers unique opportunities for audience engagement,” she says. “We’re witnessing a shift towards interactive narratives and spatial experiences, where viewers have agency in shaping the story.”
Victor is a designer, strategist and worldbuilder, working with emergent technologies and mediums (XR, AI and Web3) to create cohesive narrative, brand, and product experiences. At NAB Show, she will be moderating a panel discussion, “Harnessing AI-Driven Storytelling For Efficiencies in Content Creation,” on Monday, April 15 at 4:00 PM in the Capitalize Zone Theater (W2149). The session, which includes Jean-Daniel LeRoy, co-founder and CEO at Playbook XR, Mod Tech Labs CEO Alex Porter, and Emmy-winning immersive director Michaela Ternasky-Holland, will focus on generative video workflows and procedural content creation.
She’ll also be conducting show floor tours at NAB Show focused on AI technologies and innovations. These tours will offer attendees a primer on the technical aspects of AI, emerging production workflows, and new content formats focused on the backbone of tooling for new content production pipelines (see below.)
Victor draws on a diverse academic and business background with a focus on computational neuroscience and “Spatial Economics.” Her designs range from multiplatform narratives and immersive experiences to tools and platforms and spaces and cities for clients including Disney, HBO, Vans, Ford, Havas, Meow Wolf, Niantic, and more.
“I’ve always been passionate about understanding human behavior and how it interacts with technology,” she says. “Over the years, I’ve worked on various projects, from creative direction for events like the Dubai World Expo to consulting for major brands like Nike and Crocs. Now, as a co-founder of fbrc.ai, my focus is on developing AI-enabled tools for content production.”
She says AI plays a crucial role in optimizing asset movement across different platforms and points to the work of ModTech, a company that utilizes machine learning to optimize assets, ensuring they’re in the right place, at the right time, and in the right format.
“Additionally, tools like Playbook XR facilitate cross-format creation by embedding behaviors into spatial design engines, allowing for seamless adaptation across various mediums,” she says.
“We’re developing a vocabulary for immersive storytelling, leaning into immersion while keeping entry barriers low. For example, this session also welcomes the insight of artist Michaela Ternasky-Holland, who is pushing the boundaries of immersive storytelling, combining 2D and 3D elements to create captivating experiences.”
Spatial Economics is an increasingly important field which dovetails media with science and entails understanding how spatial factors influence decision-making. For Victor, this is about leveraging real-time spatial data to personalize experiences. “For example, using data from IoT devices at a theme park to guide visitors towards water stations based on their location and environmental conditions,” she says.
With the rise of XR headgear like Apple Vision Pro, a new battleground is developing for advertising and data collection around the real estate and sensory signals of a wearer’s face, such as data collected from eye-tracking.
“XR devices offer unprecedented access to personal data, raising concerns about privacy and data ownership,” she says. “It’s crucial to establish robust data policies to protect individuals’ privacy while still enabling immersive experiences.”
A little further out, some commentators predict a merging of our own biology — our neural pathways — for controlling AI-driven computers and experiences.
“It’s a complex topic,” she agrees. “While there’s potential for incredible advancements in brain-computer interfaces, we are also still grappling with fundamental challenges, such as capturing and interpreting neural signals accurately. The portrayal of brain-computer interfaces in the public imagination is often oversimplified. It’s essential to approach these developments cautiously and prioritize ethical considerations.”
Posted March 24, 2024
Don’t Put Profit Before Ethics, SMPTE’s Renard Jenkins Warns AI Developers
TL;DR
Renard T. Jenkins, president of the Society of Motion Picture and Television Engineers, is concerned that the proliferation and sophistication of large language models are being embedded with bias, unconscious or otherwise.
That said, bias is not inherently a bad thing, says Jenkins. Erasing misogyny from LLMs would be a good thing, for example.
He calls on companies to ethically source data, to employ a diverse group of decision makers and developers and to educate, educate, educate.
SMPTE president Renard T. Jenkins has flagged concerns about bias in development of AI tools and says diversity among decision makers is a practical means to prevent it.
“We should be fighting against bias in everything that we build and the best way I believe for us to do that is through inclusive innovation,” he told the Curious Refuge Podcast. “If your team looks like the world that you intend to serve, and to develop these tools for, then they’re going to notice when something is biased towards them or towards others.”
Jenkins expressed concern that increasingly widespread and sophisticated large language models are having bias, unconscious or otherwise, built into them.
He suggests that bias is not inherently a bad thing because certain forms of bias are there for our protection.
“As a child you learned very, very early not to put your hand on the stove because it’s hot. You develop a bias towards putting your hand on hot things. That’s good. That helps protect us as human beings. So there is that innate ‘bias’ that is born in us to protect us. The problem is when that bias is led by fear and inaccurate understanding of individuals or cultures. That’s what leads to the bad side of things.”
That goes for AI as well. Fear or misunderstanding of others can actually make its way into the development of a tool, he said, and once it makes its way in, it’s very hard to get it out.
He advocates a system of “good bias” that is not going to be xenophobic, misogynistic, racist, or homophobic. “I want all systems to be that way,” he said. “But I also believe that it can’t go into hyper overdrive because then it’s going to harm us. That’s why we have to understand bias and we have to remove bad bias from these algorithms.”
Aside from inclusive innovation, removing bias requires “sourcing ethically, cleaning [data] and monitoring your data,” Jenkins said. “That’s how we get to the point where we can hopefully one day not have to worry about the bad bias because it’s sort of been wiped out. That’s my goal.”
The problem, as moderator Caleb Ward points out, is the competing pressure to make money from AI products, which risks ethically sourced models being relegated behind the drive to monetize. Even new AI regulation in Europe or the US might not be sufficient to stop it.
“It’s an arms race right now,” Jenkins agreed. “There’s a lot of money being thrown around and that sometimes drives product that is not ready for primetime, without being fully vetted for what its impact will be. It’s not just about the tool in itself in the sense of helping the creative; it really is about the impact that it has on the user and on our society as a whole. That should be one of the primary things that all of these companies take into account when they’re doing this.”
Jenkins says he is saying to executives that there’s a way for them to continue to make “all of the wonderful financial gains that you’re making and for you to continue to create phenomenal tools, but there’s also a way for you to protect users.
“Because in truth, if you’re doing something that’s harming your users, that’s bad business. You know it’s bad business because over time you’re going to run out of users.”
Everybody in media, and in business generally, from the C-suite on down needs to be educated about AI risk and reward.
“The more that you’re educated about it, the more that you’ll understand. When you see something that could actually go in the wrong direction, then you have the responsibility to say ‘let’s slow this down’ and try and make sure that we’re helping,” he says.
“You got to protect the people and truly you shouldn’t be creating anything that’s going to cause harm.”
Posted March 24, 2024
Superfan Connections are Key to the New Creator Economy, Contends Patreon’s Jack Conte
TL;DR
Patreon CEO Jack Conte explains how the current internet algorithms are killing the traditional “follower” for creators, threatening their creative freedom and livelihoods.
Conte advocates for the new spaces on the internet (hey, like Patreon) where, he says, creators can always connect with their communities, create what they want, and control their own destinies.
The hallmark strategy of these businesses is the focus on deeper connections, as opposed to just more connections.
The internet may have started as a platform that democratized creative distribution for creators who could build legions of followers, but Patreon CEO Jack Conte says that model is broken. Rather than stand by witnessing the demise of the follower, platforms like his are offering a new way for creators and fans to connect in deeper, more fulfilling online communities.
“The next decade of professional creativity on the internet will be organized around the concept of the true follower in an effort to build a better way that art can exist on the Web,” Conte said in a presentation at SXSW.
Once upon a time creators could upload their work to platforms like YouTube and immediately have it accessible to millions of people. After that came the “subscribe” button, which enabled creators to go beyond reach. Now they could build a following and find their true fans who would support them to build a creative business.
But with the rise of platform-focused algorithms (Facebook’s ranking, TikTok’s “for you” curation), creators cannot reach their following and true fans. This shift has had a devastating impact on creators’ creativity and ability to support themselves doing what they love.
“Ranking was great for Facebook’s business, and people started spending even more time on the platform, so the other platforms had to compete. Now I think of the 2010s as the decade when the original promise of the creator-led community, the true follower, was broken,” Conte said.
“What it meant for creators was that your followers might not necessarily see your posts. It’s not really a direct true connection between a creator and their fans if the channel of distribution is broken.”
TikTok’s arrival shifted eyeballs from Facebook but didn’t fundamentally alter the broken fan-creator contract, in Conte’s view.
“TikTok’s algorithm ‘chose’ what videos to serve you in your feed and completely abandoned the concept of the follower,” he said.
But it worked, and TikTok hit a billion users by 2021. As traffic started flowing away from legacy social companies and toward TikTok, Facebook, YouTube and Twitter were forced to launch their own versions of shorts, reels and feeds to compete.
The result, said Conte, is that “the whole system of organization for the internet, the creator-led community, started to fade into the past.”
Conte started out as a creator himself. The result is that “my fans don’t see as much of my stuff anymore. It’s harder to sell tickets to a show. It’s harder to reach people with my new work. It’s harder to build community. It’s harder to build a business. It’s harder to energize my fans,” he said.
“The single most important problem that faces creative people today is the weakening of creator-led communities, of our distribution channels to our fans. This is the hardest, most challenging and most painful issue threatening the present and the future of creativity on the internet.”
Conte doesn’t actually believe that the “death of the follower” will happen because there is a new breed of creator-led social platforms coming to the rescue. These include Discord, Kajabi, Fourthwall and Gumroad, but it should come as no surprise that he positions Patreon as the leader of the pack.
Conte said the hallmark strategy of these businesses is the focus on deeper connections, as opposed to just more connections.
“The follower is too important, too valuable to ignore, so the next wave of internet and media technology companies are going to try to solve this problem. The incumbent social platforms are not gonna be able to fight it because their revenue relies on maximizing attention to drive their businesses. They are being forced towards discovery, towards reach, personalization and algorithmic curation. These are the levers that drive attention and therefore drive their strategies.”
He argued that real value for creators is to be found in the real fan, or superfan; just 5% of these fans drive 90% of the community. “This is a direct-to-fan business, not an ads business. This is about depth of connection, not about maximizing attention. This is about deeper fans,” he said.
“Creators just need a thousand true fans who really connect with you and believe in you. This is different than just reaching people. It’s even deeper than followers. These are super fans, true fans, real fans,” he continued.
“The idea is that this group of people is your core. If ‘reach’ means people see it, and ‘follower’ means people want to see more, then ‘true fans’ are the people who go to the shows and buy the merch and download the record and pay for the course and get the live stream tickets. This idea really resonated with me.”
To that end, Conte said the next decade of creative and media technology companies will focus on building direct to fan connections and community strength.
“As creators, we still need the social platforms for discovery and reach. But those companies will be one component of the many tools that we have as creative people to help us run our communities and businesses.”
Patreon was founded in 2013 and now employs 400 people and supports more than 250,000 creators, who have made over $3.5 billion on the platform, according to Conte.
He says he no longer thinks of Patreon as a membership platform but more of a “true fan company, a creator company, where we’re building a better way for art and community to exist on the internet.”
Perhaps it is subscription fatigue or financial squeeze, but he says that many fans no longer want to pay to subscribe to content on his site.
“Rather than having those true fans leave the creator we want to give creators a way to start forming deeper connections with those fans to build businesses.”
It now offers a way for creators to sell digital products like videos, podcast episodes, images, and other files directly to customers, whether they’re a member or not.
“Fans can now participate in the creator’s business and community while the creator can build an awesome business along the way. The logic is very similar for free membership.”
Creator Economy Amplified: Building the Creative Stack
Watch “Creator Economy Amplified: Building the Creative Stack.”
TL;DR
Blackmagic Design’s Bob Caniglia joins Jim Louderback, editor & publisher of “Inside the Creator Economy,” and veteran journalist Robin Raskin to discuss the essential hardware that powers today’s creator economy, offering a roadmap for assembling a creative stack that aligns with both vision and budget.
The pandemic has made production equipment more accessible, enabling home-based, professional-grade content creation and lowering entry barriers in the creator economy.
Advances in LED lighting and microphones have transformed production, significantly improving content quality through better lighting and audio.
AI integration and virtual studios are reshaping content creation, with metadata and shoppable videos poised to enhance engagement and monetization.
Choosing scalable and modular equipment for new studios is essential, supporting future growth and avoiding the need for frequent, expensive upgrades.
Navigating the creator economy is akin to exploring a vast digital ecosystem, where content is the currency and creativity knows no bounds. Advancements in technology have leveled the playing field, equipping creators with the tools to turn visions into visuals and ideas into impact like never before.
This shift towards democratization has opened doors for creators at all levels, breaking down traditional barriers and offering new opportunities to engage and captivate audiences. However, the path to creating content that truly resonates with audiences is not without its challenges. Beyond creativity and storytelling, it’s crucial to use the right tools. They not only enhance a creator’s vision but also ensure that the final product stands out in a crowded digital space, and in today’s fast-paced creator economy, leveraging these tools effectively can significantly influence a creator’s ability to connect with their audience.
As part of NAB Amplify’s “Creator Economy Amplified” series, Bob Caniglia, director of sales operations at Blackmagic Design, sat down with Jim Louderback, editor and publisher of Inside the Creator Economy, and veteran journalist Robin Raskin, founder and CEO of Virtual Events Group, to share insights on the essential hardware that powers today’s creator economy. These industry pros offer a roadmap for assembling a creative stack that aligns with both vision and budget — watch the full conversation in the video at the top of the page.
The Affordable Production Revolution
The pandemic has sparked significant changes in content creation, shifting how and where creators bring their ideas to life.
“What has happened over the last couple of years since the pandemic is a lot of people have been able to purchase equipment that allows them to do productions at home that they may not have considered in the past,” Caniglia, who has worked in the film and television industry since 1985, recounts, reflecting on this evolution and highlighting a broader trend.
“At Blackmagic,” he continues, “we’ve been able to create some of those products in a price range that makes it very affordable. So people are starting to buy little switchers and some cameras, and the next thing you know, they’ve set up an entire studio.”
This trend towards democratization has significantly impacted the creator community. It has leveled the playing field, allowing emerging creators to produce content that can stand alongside that of more established names.
The result is a more vibrant and diverse ecosystem, enriched by a wider array of voices and perspectives. By making professional-grade production more accessible, the industry isn’t just changing the tools creators use; it’s transforming who gets to tell stories and how those stories reach audiences around the globe.
Lighting and Audio: The Pillars of Quality Content
When it comes to producing quality content, lighting and audio are non-negotiable, Louderback and Raskin agreed. The duo is behind the all-new Creator Lab at NAB Show, a hub for exploring the latest trends and technologies with a full schedule of talks and hands-on workshops featuring industry pros.
“Lighting is just so important,” Louderback emphasizes. “There are all sorts of inexpensive LED lights out now that can do amazing things,” he says, noting that the advent of affordable LED lighting solutions has revolutionized the way creators approach production, allowing for professional-quality lighting setups on a budget.
“You can have lights that plug into your smart home so that you can say ‘I want to turn on the studio lights,’” he adds as an example, “So look into lights again, because they get cheaper and cheaper and better and better. And there’s so much more you can do with them.”
Audio, says Raskin, is another element that is often overlooked. The pandemic underscored the necessity of high-quality audio as creators sought to improve their production values. Upgrading from built-in laptop speakers and webcams to superior microphones can dramatically enhance the clarity and fidelity of audio, she advises, elevating the overall quality of the content.
“People don’t realize how important it is,” she says, sharing that while she employs a range of solutions she’s currently using a podcasting mic from Shure. “The sound is crisp and accurate, and so much better than my laptop.”
Connectivity and Mobility: The New Frontiers
In today’s creator economy, the ability to produce content remotely and on the go has become invaluable. “Think about your home networks,” Louderback urges, stressing the importance of robust Wi-Fi connectivity for seamless content creation and live streaming.
“If you’ve got an old wireless setup, and you can’t run a wire from your router to your desktop or your notebook, upgrade your internet, think about some of the newer versions of Wi-Fi, think about running multiple different base stations, think about any way that you can go out and do a better job with Wi-Fi.”
To boost connectivity, Louderback recommends Wi-Fi 6-supported products in particular, along with adding multiple base stations to your home network. “All of these things can make sure that you don’t get dropouts, and that everything that’s going into your computer gets up into the cloud without losing lots of fidelity.”
Wireless microphone technology has also improved drastically, he says. “Anker and DJI and Rode, and a couple of others, all have these really cool wireless mic kits, where you get a really small little pod that you can stick on yourself or the person you’re interviewing.”
Using wireless audio in the field “sounds really good,” and makes production “so much easier,” Louderback says. “That little microphone you stick on the lapel, it actually will record the audio and save a version of it on that as well. So if something goes wrong in the field (it always does), you’re going to have an ISO of that audio so that you can fix it in post.”
Caniglia touted the Blackmagic Camera app for iPhone as a game-changer for mobile content creation, offering professional camera settings and cloud integration for easy file transfer and editing.
“[The Blackmagic Camera app] creates a better version of your iPhone. In terms of recording for video, it has a lot more of the settings that you would see in a Blackmagic camera,” he says, explaining that the app is also tied to Blackmagic Cloud, which allows users to send files directly to the cloud so they can be edited from the field. The app also allows creators to use their phones as a second camera and still achieve high quality results. “We’re going to see a lot of people on the uptake of doing that.”
Beyond hardware, software also plays a crucial role in the creative stack. Caniglia discussed how DaVinci Resolve, the company’s free editing and color grading software that has become a cornerstone for post-production, is enabling creators to collaborate and expedite the finishing process without compromising on quality.
“Almost 40 years ago, [when] I went to editing school, I would have needed $500,000 to set up an edit system in my house. And now it’s free.”
Looking Ahead: The Future of Video Production
As the creator economy evolves, so too do the tools and technologies at creators’ disposal. Louderback expresses excitement about the integration of AI in video production, particularly in audio enhancement, foreseeing a future where AI acts as a “copilot” for creators.
“I want AI in my camera and AI on my phone, AI in my audio, so I’m all up and ready to do a shoot,” he enthuses. “I can’t wait to see that happen.”
Studios, says Raskin, will become a thing of the past, “at least the ones with nails and hammers and wood.” Instead, virtual studios will become the norm. “So your studio can be as big or small or anywhere in the world, and that’s going to change all sorts of events and all sorts of content.”
Metadata will turbocharge video’s utility, Raskin predicts. “Video, I used to call it unstructured data, it’s just there, nobody knew what to do with it,” she explains. “Well, now, through meta tagging… all of a sudden video becomes something that’s structured, and you can learn what keeps people’s engagement, what gets them involved.”
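What meta tagging buys you, concretely, is the ability to treat a video library like a queryable dataset. A minimal sketch of the idea (the titles, tags and watch-through numbers below are invented for illustration):

```python
from collections import defaultdict

# Once videos carry structured metadata, engagement becomes queryable.
videos = [
    {"title": "Studio tour",   "tags": ["behind-the-scenes"],  "avg_watch_pct": 72},
    {"title": "Gear review",   "tags": ["hardware", "review"], "avg_watch_pct": 55},
    {"title": "Live Q&A clip", "tags": ["community"],          "avg_watch_pct": 81},
]

# Which kinds of content keep people watching? Aggregate engagement by tag.
by_tag = defaultdict(list)
for video in videos:
    for tag in video["tags"]:
        by_tag[tag].append(video["avg_watch_pct"])

for tag, scores in sorted(by_tag.items(), key=lambda kv: -sum(kv[1]) / len(kv[1])):
    print(f"{tag}: {sum(scores) / len(scores):.0f}% average watch-through")
```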
Shoppable videos are another big trend to watch out for, she says. “You can see Amazon and some of the big players, Etsy, Pinterest, all doing shoppable videos, and it’s going to be even more so this year; we’re all going to be selling things to each other.”
“When you think about buying your initial setup, when you’re just getting started, make sure that it’s stuff that you can grow with,” Louderback advises, noting that many camera setups, and other equipment, allow users to build systems on top of them. “You can stack, you can grow with your equipment. Make sure you look for modularity and upgradability as well,” he says, “rather than buying something, and then be like, ‘I’m gonna throw it out, and I’m gonna upgrade to something new.’”
Posted March 19, 2024
Hey, Sam! Tell Me About Audience Measurement
TL;DR
Consistent measurement practices are more important than ever in the age of “build your own bundle” TV.
Nielsen’s Paul LeFort shared insights into how his company handles audience measurement in this increasingly fragmented era, when it’s common for viewers to watch OTA, streamed and pay TV content in one week (or even one sitting).
“Audience measurement has long been at the heart of the media business,” says NAB EVP/CTO Sam Matheny.
That continues to be true in the age of what Matheny calls “BYOB TV,” referring to those who have “stitched together” a modern version of a cable bundle, often combining a variety of streaming and over-the-air offerings.
Matheny discussed the importance of consistent measurement practices in a fragmented market with Paul LeFort, managing director for Nielsen’s local TV business, during the latest installment of NAB Amplify’s “Hey Sam!” interview series. Watch the full conversation (above) or read on to learn more about LeFort’s perspective on how viewing has — and hasn’t — changed.
Understanding Video Consumption in 2024
First of all, LeFort points out, “The piece that really stands out is the consistency of over-the-air television. Ten years ago, [it] was about 12% of all households in the US. Now, about 15% of households in the US receive over-the-air television. So while there’s this tremendous churn in the streaming landscape, in the pay TV landscape, the resilience of over-the-air television remains consistent.”
He concludes, “And clearly, that is a main connection point between viewers and their communities. The local news that broadcasters provide, [they] do a stellar job of covering their local communities.”
One reason for the churn elsewhere in the marketplace is that customers have come to expect a pattern in which “you sign up, you binge it, you blow it out in a couple of weeks, and then you terminate the service,” LeFort says.
As a result, “All of these streaming services, they’re making it easier for folks to come in and out of that ecosystem. When there’s a big event, March Madness, Super Bowl, you know, golf tournaments, etc., if there are reasons that you want to sign up, you’ll find those reasons.”
But that convenience-driven fluidity makes understanding metrics more challenging for advertisers (and industry watchers, natch).
“The evolution, the investment, the complexity of what it takes to measure local television in the US, it’s never been greater,” LeFort admits.
He says, “We take that very seriously, the responsibility that we have to measure these audiences, report them in a way that’s comparable, give our clients data that lets them make decisions, not just about their content, but about their revenue and their advertising.”
“Advertisers,” LeFort says, “don’t really care what the platform is. Whether it’s a stream, or whether it’s broadcast or satellite, they just want to be able to reach their consumers. And conversely, those content creators, whether they’re creating local news, premium scripted content or a reality show, they want to understand: is my impression count bigger or smaller than those of the other content providers out there?
“And so while impressions are the great leveler, it’s being able to look at those different viewing sources in a comparable way that allows advertisers and content creators to understand their worth relative to their competitors.”
But in 2024, LeFort says, “There is no one single solution anymore. There’s no one cable lineup, there’s no one broadcast channel; it’s always going to be a blend of technologies and platforms. And all of those datasets have their own inherent value in and amongst themselves. How do you harmonize them? How do you correct them? How do you make them comparable across content owners and advertisers and broadcasters? And that’s the role that we play — to bring those things together and do it in a way that is transparent, that can be audited, that fully follows MRC accreditation policies and procedures.”
The fragmentation of the video market “almost amplifies the need for that measurement, even more so because everybody has to understand where they sit in this ecosystem, and do so in a comparable way,” he says.
“You’ve got to have similar metrics,” LeFort says. “And they have to mean the same things, whether it’s broadcast television, or a stream, or cable or satellite. And so listen, monthly average users, that’s important, because that shows how many people are finding you; click once or 100 times and you get counted as a monthly average user. So it does have value in my opinion.”
He explains, “It’s fundamentally important to understand how many people touch my content, how many people tune in, how many people launch a stream… However, [MAUs] miss the other really important part of any kind of audience measurement, and you said it as well: how much time do they spend. At the end of the day, every number you see from Nielsen or any legitimate measurement company is going to involve those two things: how many people tune in and how much time they spent. And from those two measurements you get the GRPs (gross rating points), you get share, you get impressions, you get reach, you get all of those things, but you have to start out with those two fundamentals.”
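Those two fundamentals really are enough to derive every metric LeFort names. As a rough, deliberately simplified sketch (this is not Nielsen’s methodology; the universe size, audience counts and unique-viewer figure below are invented for illustration):

```python
# How "people tuned in" and "time spent" roll up into ratings, GRPs,
# impressions and reach. All numbers are illustrative assumptions.

UNIVERSE = 120_000_000   # TV households in the measured market (assumed)
PROGRAM_MINUTES = 30.0   # length of the program being measured

# Per-airing data: (households that tuned in, average minutes viewed)
airings = [
    (6_000_000, 22.0),
    (4_500_000, 15.0),
    (7_200_000, 28.0),
]

def rating(households: int, minutes_viewed: float) -> float:
    """Rating = average audience as a percent of the universe,
    where average audience weights tune-in by time actually spent."""
    avg_audience = households * (minutes_viewed / PROGRAM_MINUTES)
    return 100.0 * avg_audience / UNIVERSE

ratings = [rating(h, m) for h, m in airings]
grps = sum(ratings)                    # gross rating points for the schedule
impressions = grps / 100.0 * UNIVERSE  # gross impressions implied by the GRPs

unique_households = 9_000_000          # deduplicated audience (assumed)
reach = 100.0 * unique_households / UNIVERSE
frequency = grps / reach               # GRPs = reach x average frequency

print(f"GRPs {grps:.1f}, impressions {impressions:,.0f}, "
      f"reach {reach:.1f}%, frequency {frequency:.2f}")
```

Share, the other metric he mentions, is the same rating computed against only the households watching TV at that moment rather than the whole universe.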
Making Sense of the Mess
So how does Nielsen do it?
First, LeFort explains, “Big data has a ton of value and a lot of benefits. And that’s why we’ve gone full tilt at incorporating those sets of big data.”
“One source of big data is directly from the streaming platforms themselves,” he says. However, “the census data” from streamers and pay TV companies about user habits “exists in most cases behind a wall. So Netflix knows a ton of information about what you look at on Netflix, what I watch on Netflix, and how we’re all different, and serves us up things that they think we want to watch.”
Streaming video “platforms have Nielsen technology built into them,” LeFort explains. “So when you sign up for Disney+, or sign up for TV, or Peacock, go into your user settings, everybody clicks right through …you’ll see Nielsen measurements specified in those settings.”
Also, “We’re working with Comcast, and Charter and Rush, DirecTV and Dish, obviously to collect their big set top box data. But the Dish customer looks very different than the DirecTV customer does, right?”
That’s where Nielsen’s audience panels come in. He says, “Panels have such a fundamental role to provide full market coverage and representation across ethnicities, geographies, different households, kids, household language, all of those things that are so relevant to measuring video. Spanish-speaking households view very differently than non-Spanish-speaking households; older households view very differently than younger households with kids. And so those panels provide really the foundational truth set that lets us correct for the biases that we see in big data.”
LeFort says, “It’s the combination of those big datasets, whether it’s from the streamers, whether it’s from pay TV set top boxes, or from Nielsen’s panel, that allows us to harmonize and provide things that are representative and comparable between those different sources.”
Nielsen then “harmonizes all of that big data from census providers [streamers], like you mentioned, with the fully representative panels that we create.”
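The correction LeFort describes is, at heart, a reweighting: the panel tells you what the population actually looks like, and the big data gets scaled until it matches. A toy sketch of that idea (the groups, shares and viewing hours are invented, and real methodology is far more granular):

```python
# Post-stratification sketch: use a representative panel to estimate how
# over- or under-represented each household group is in a large but biased
# set-top-box dataset, then reweight that dataset's viewing accordingly.

# Group shares in the full population, known from the panel design (assumed).
population_share = {"spanish_speaking": 0.12, "other": 0.88}

# Group shares actually observed in the set-top-box data (assumed, biased).
bigdata_share = {"spanish_speaking": 0.05, "other": 0.95}

# Raw viewing hours observed in the big dataset, per group (assumed).
bigdata_hours = {"spanish_speaking": 1_000_000, "other": 30_000_000}

# Weight = how much each group must be scaled so the big data
# mirrors the population the panel represents.
weights = {g: population_share[g] / bigdata_share[g] for g in population_share}
corrected = {g: round(bigdata_hours[g] * weights[g]) for g in bigdata_hours}

print(weights)    # spanish_speaking viewing gets upweighted 2.4x
print(corrected)  # bias-corrected viewing hours per group
```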
NextGen TV to Change OTA Measurement
Additionally, Matheny points out, “Seventy-five percent of the U.S. television households are now in a market where a NextGen TV signal is being broadcast. NextGen TV was designed, you know, from a blank sheet of paper to be both an over-the-air broadcast transmission as well as a connected television solution that leverages the internet.”
“NextGen TV, and its ability to collect privacy-protected information from those households, from those viewers, is tremendously exciting,” LeFort says, “because it’s going to give broadcasters, for the first time ever, first-party data at scale that can be used for a whole bunch of different applications, right? Whether it’s behavioral targeting, digital targeting, making television — linear, live, local linear television — a lot more like digital for advertisers to buy. That’s incredibly exciting. A ton of potential there.
“And then finally, from a measurement perspective, one of the really exciting things about that first-party big data is it will for the first time ever provide a big data source for over-the-air households, this important, resilient, sizeable section of the marketplace. And so as we look at always adding more big data sources to our measurement, as NextGen grows and scales, we think there’s an opportunity to build a big data source for over-the-air based on that.”
How “Nolly” Recaptures An Entertainer’s Brilliance/Resilience (and the Best of 1970s Broadcasting)
TL;DR
A new British period drama is a biopic of the star of a minor tea time soap opera called “Crossroads,” Noele Gordon, affectionately known to her many adoring fans as “Nolly.”
Cinematographer Sam Care and director Peter Hoar wanted to pay tribute to Nolly and recognize her achievements before and after “Crossroads,” but couldn’t ignore the broadcasting history lesson that a 1970s low-budget soap opera would give us circa 2024.
Combining a modern digital camera with vintage anamorphic lenses, the production employed a Sony VENICE camera with Meru lenses, which are rehoused vintage Leica glass.
The first scene of each episode was shot on 16mm film as an homage to the prominence of the format in television production over the decades.
To recreate the broadcast quality of the show, the production team also used Ikegami HK323 1980s broadcast cameras, which were sourced from a company called Golden Age TV.
Costume or period dramas are standard fare in the UK’s film and TV output, but Nolly is the first drama that puts old technology above the importance of the frocks. The limited series is a biopic of the star of a minor tea time soap opera called Crossroads, Noele Gordon, affectionately known to her many adoring fans as “Nolly.”
Cinematographer Sam Care and his director Peter Hoar wanted to pay tribute to Nolly and recognize her achievements before and after Crossroads. Still, they couldn’t ignore the broadcasting history lesson that a 1970s low-budget soap opera would give us circa 2024.
Care also wanted to pay homage to the broadcasting tech of the time and decided to go the extra mile to make it as genuine and realistic as possible.
“We decided to go for a mixed media approach. The idea was partly to pay homage to some of the formats they had shot within the period. So we combined three formats: a modern digital camera with vintage anamorphic lenses — a Sony VENICE camera with Meru lenses.” The Merus are rehoused vintage Leica glass.
“That was our main format, which still had a vintage edge because of the lenses, and we framed that for a 2:1 aspect ratio. We also decided to shoot the first scene of every episode on 16mm film; this was our homage to 16mm, a prominent format in television production over the decades. But these early scenes are related to specific periods earlier than the main story in Nolly’s life. Some occurred in the 1930s and 1950s, whereas most of the action happened in the 1970s and 1980s; it also helped with our mixed media approach. You’ll see the 16mm’s softness and grain structure, especially at 500 ASA, using Kodak Vision 3 stock,” Care said.
“We framed those opening scenes in a 1.66:1 ratio, slightly different from the main one,” he continued.
“Then, the third format recreated the broadcast quality of the show. We used these Ikegami HK323 1980s broadcast cameras, which we sourced from a company called Golden Age TV, which maintains them. They’ve supplied The Crown and various period shows. We had to re-light the set for them as they’re not as light-sensitive as modern cameras; we framed them in a 1.33:1 aspect ratio or 4:3, which is the ratio that they would have shot in back in the day.”
The show’s homage to technology also helped separate the on-camera and off-camera scenes from each other, since the drama entwines the two, rather than relying on a color-grading trick to tell them apart.
Exploring different ratios with different image textures within the one show was exciting for Care. “It’s all very well doing this, but you need a reason in the script, so it’s for storytelling reasons. It was also fun for the camera team and me. We got to use 16mm cameras again and revisit labs – checking the gate was new for some people.”
The Kodak stock was rated at 500 ASA, but the Ikegamis were working at only 50 ASA, so a new lighting design was needed for each (a gap quantified in the sketch after this quote). “Originally, the director wondered if we could shoot the Venice simultaneously with the Ikegamis, but ultimately, we had to shoot them differently.
“Even the actors noticed the studio heating up when we used the Ikegami cameras with the lighting to expose them properly. We realised that back then, that kind of heat was typical.
“So the lighting was the other element of being ‘period correct’ on the show. We initially looked for an old film studio in which we could build the sets that would have the old vintage walls and some of the rigging in the ceiling. But there was nothing available. We created them in Space Studios in Manchester, a very modern stage. We ended up building eight sets in there.
“Our big challenge was that we knew that one of our visual approaches was long Steadicam ‘oners’, which would show a large part of the sets. Also, we would start with a closeup of the cast, so the lighting had to be flattering. We also wanted to spin around and see some of the rigging in the ceiling with all the lights on show. We would then finish the shot back in a closeup.
“My challenge was that every lighting source we saw in the shot was also everything that lit the set and had to be period correct. So, my gaffer, Steed Barrett, and I got Panalux, who provided all the lighting for the show, to bring us one of every period light they had in their UK branches. We laid them all out in a room and tested them for a day. We ended up with 2K Zap lights and 2K Bambinos. So that ended up being over 200 period-correct tungsten sources in this enormous stage over the eight sets. They were all in-shot and wired back to our desk op.
“The lights had to be softer than the ones used at the time and flattering for our actors, so we tended to use the 2K Bambinos, harder Fresnel lights, as backlights or three-quarter backlights to get a nice edge light on the actors, and the Zap lights we would put more on the side and in front of the set, as they were an early sort of soft box, I suppose.
“They would have a bulb inside that would bounce into a silver reflective inner lining with an early form of an egg crate that would give the light some direction. They were soft enough to light Helena (Bonham Carter) in a closeup and still look flattering. We did a lot of testing to figure it out and eliminate any shadows.”
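The ASA gap Care describes is worth pausing on. As a back-of-the-envelope calculation (assuming matched T-stops and shutter angles, which may not reflect the production’s actual settings), the difference between the 500 ASA film stock and the 50 ASA tube cameras works out to a little over three stops:

```latex
\Delta\,\text{stops} = \log_2\!\left(\frac{500\ \text{ASA}}{50\ \text{ASA}}\right) = \log_2 10 \approx 3.3
```

In other words, the Ikegamis needed roughly ten times the light, which helps explain both the separate lighting designs and why the actors could feel the stage heat up whenever the tube cameras rolled.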
Representing Crossroads as realistically as the team did makes Nolly a far better show than anything half-hearted would have been. Notably, Care and his team needed no digital set extensions, and virtual production was limited to a night bus scene. There were scenes from an actual excursion to Venice, but the Thailand scenes were shot on a Bolton stage in the north of England.
If you’re planning to revisit old eras of television broadcasting for any narrative reason, then Nolly might be the new standard to live up to.
Series director Gus Van Sant travels back to 1966 for Truman Capote’s High Society New York ball, recreated as a Maysles-style documentary.
March 15, 2024
Creator Economy Amplified: The State of the Creator Economy
Watch “Creator Economy Amplified: The State of the Creator Economy.”
TL;DR
Jim Louderback, editor & publisher of “Inside the Creator Economy,” joins veteran journalist Robin Raskin and Tyler Chou, founder & CEO of Tyler Chou Law for Creators, for an exclusive Q&A on the evolving challenges and opportunities within the creator economy.
The creator economy is booming, projected to grow from $250 billion to $500 billion worldwide. However, platforms like YouTube and TikTok are cutting back on creator support, leaving many “accidental entrepreneurs” to navigate the industry’s complexities alone.
Creators are diversifying their income streams beyond traditional platforms. Chou emphasizes the importance of sustainable monetization strategies, including launching products, courses, and consumer goods.
Artificial intelligence offers significant benefits for content creation, such as efficient editing tools. However, the rise of deepfakes and synthetic media poses new challenges for protecting digital identity and intellectual property.
Understanding and engaging with the audience is crucial for growth. New tools like the Superfan app are emerging to help creators connect with their most dedicated fans and develop content that resonates with their community.
In an era where digital content is king, the creator economy has emerged as a vibrant and essential ecosystem, empowering individuals to turn their creativity into careers. Yet, as this burgeoning economy continues to expand at an unprecedented pace, creators find themselves navigating a sea of opportunities fraught with challenges.
As part of NAB Amplify’s “Creator Economy Amplified” series, we sat down with Jim Louderback, editor and publisher of Inside the Creator Economy; veteran journalist Robin Raskin, founder and CEO of Virtual Events Group; and Tyler Chou, founder and CEO of Tyler Chou Law for Creators, to discuss the current state and evolving dynamics of the creator economy, including the challenges and opportunities facing today’s digital creators. Watch the full conversation in the video at the top of the page.
The Growth and Challenges of the Creator Economy
In addition to running a thriving YouTube channel as “The Creators’ Attorney,” veteran Hollywood lawyer Tyler Chou represents dozens of creators as part of her “accidental” agency and advises countless others on strategies for growing their businesses.
Creators are increasingly on their own, Chou says, describing a climate where “these accidental entrepreneurs” are having their IP stolen left and right and struggle with securing payments from partnerships. “I have two creators right now who have six-figure brand deals that have not been paid. So they’re sort of adrift — on nice yachts — but kind of adrift at sea.”
Louderback, who alongside Raskin is leading the Creator Lab at the 2024 NAB Show, says this lack of support for creators is the biggest shift he’s seen over the past year. “Goldman Sachs says [the creator economy is worth] $250 billion worldwide, growing to $500 billion worldwide. Yet, if you look at the platforms — YouTube, TikTok, Facebook, Snap — they’re all laying off their creator support teams.”
This lack of support means more burnout for creators. “Creating right now is a hamster wheel of effort,” Louderback says. “And creators are getting burned out and they’re being pushed in so many different directions.”
A Rapid Maturation
Despite this shift, the creative economy is rapidly maturing, Raskin contends, pointing to the astronomical number of people who self-identify as creators, as documented by The Washington Post series, “The Creator Economy,” as well as the proliferation of college courses on the subject. “When colleges have after-school clubs on being a creator, you’re seeing a maturing industry, and it’s maturing very quickly.”
This evolution is driving creators to diversify, Raskin continues. “They’re realizing that a platform like YouTube or TikTok will only take you so far.”
Creators need to know how to make money in a more sustainable way, Chou urges. “Not just heads down, making videos, depending on AdSense and brand deals,” she says. “I help them launch products, courses and actual consumer goods. They’re starting to think about, ‘Okay, I’ve been on YouTube for a few years now. I have a million plus subs. Now what?’”
Embracing AI and New Technologies
Artificial intelligence is also helping to shape the creator economy, both as a production assistant and as a possible foe in the form of deepfakes.
Will AI kill the creator? “If you asked me a year ago, I would say maybe, but now I’m actually very optimistic about how AI can augment and supplement creators,” Chou says, pointing to editing tools like Opus Clip, which uses AI to break long videos up into shorter segments in less than a minute. “With my editor? That would take him a week-and-a-half.”
Chou compares generative AI to streaming in terms of disruption. “I think we all say now that streaming is better, right? Like, the death grip that the studios had on the distribution of content has been expanded with streaming,” she explains. “And I think that’s what AI will do.”
But concerns about the challenges of protecting digital identity and intellectual property in an era dominated by deepfakes and synthetic media continue to proliferate, Raskin notes. “Digital identity, protecting of your digital IP is a huge problem,” she says, underscoring the need for robust solutions to protect creators’ work and identity.
Building and Monetizing Audience
Figuring out the best places to cultivate community to help reduce the stress of constantly needing to post across multiple platforms remains one of the biggest tasks creators face, says Louderback. “I think it’s a real challenge. And I don’t see it getting any better right now, unfortunately.”
He emphasizes the importance of understanding and engaging with one’s audience for growth and monetization.
“If you want to grow your audience and monetize, you need to know your audience,” he advises. “So think about your community. Think about your audience. Think about the people who tune in every day and every week to look at your videos. How do you get to know them?”
There are a number of new tools coming out that are designed to help creators understand their audience, Louderback says, including the Superfan app, which helps creators develop a creative network of their most ardent supporters.
These tools, he says, “will help you really understand what your biggest fans want, will help you move those fans in their own communities, and help you figure out how to create things specifically for them that they’ll want to pay for.”
Chou advocates for creators to establish their platforms, emphasizing the importance of maintaining independence from external platforms. “I’m actually in the process of moving my community [from YouTube] on to Mighty Networks, so that I can have more one-on-one connection with them,” she details, describing a newly-launched product from the company called People Magic. “They match people of similar interests together to talk — I think it’s fantastic,” she says.
“I think creators should create on their own land, not on rented land,” Chou advises, encouraging creators to view themselves as businesses capable of generating diverse revenue streams beyond content creation. “Because we have to realize all the platforms are rented. If you have your own community, your own website, your own email list, that’s something that’s yours, and they can’t be taken from you.”
Influencer marketing strategist and creator economy advisor Lindsey Gamble discusses key trends and his predictions for the future.
March 22, 2024
March 14, 2024
NAB Show Amplified: Actionable Insights for Brands Navigating the Creator Economy
TL;DR
Modern advertising strategies often require brands to dip their toes into the Creator Economy, however shallowly. But they must do so in a way that understands both how and why content is consumed and in which format.
Also, as the lines blur between legacy media formats and user-generated content, brands cannot afford to ignore the significance of the Creator Economy.
Do you know “what’s the right strategy to hit who you’re actually targeting?” Deloitte Principal Dennis Ortiz says understanding consumer habits and the underpinnings of the Creator Economy are crucial to brand success.
“Different generations use different platforms, whether it’s legacy media or social media, for different reasons, and coming up with the right strategy to engage cross platform is likely going to hit optimal outcomes for advertisers,” Ortiz explains in an interview with NAB Amplify. Watch the full video (below) or read on for more insights from the conversation.
It’s important to understand the “why” behind certain types of media consumption.
Ortiz says, “Legacy media tends to be about immersion and experience.” For “Gen X and the older generations, they actually look at legacy media — TV shows, movies, etc. — as the most immersive form of entertainment.”
“Social media, on the other hand, is really more about connection and convenience,” he says. “Social media is driven by creators. And we actually have studies that suggest that, indeed, consumers are very attracted to creators that have similar interests, or shared ideas and values, and spend most of their time on social media following those types of creators that have those similar interests and values.”
One such study is Deloitte’s recently published The Creator Economy in 3D, which explores how social media, content creators and influencers are changing the way consumers interact with brands, media and advertising.
“A big part of this is not just the creators, but also the ability of the creators to create another forum for brands to connect with consumers,” Ortiz explains. “We actually have some research that would suggest that three out of five consumers are more likely to engage with a brand if one of their favorite creators actually recommend that brand.”
Deloitte’s study found the average consumer has five favored content creators, which they said “are the social media equivalent of a favorite TV show, perhaps with less regular schedules.” And for Gen Z, that number increases to 10.
This increase in social media time means the average person is engaging with many different voices, from leading studios and micronetworks to amateur user-generated content and full-time content creators.
To be the most effective in driving followers’ purchasing decisions, the Deloitte study says “relatability” is key for content creators, meaning that they share a similar socioeconomic background and/or have hobbies/interests in common.
Also notable: “77% of consumer-creator relationships [Deloitte] surveyed could be traced to either a shared interest or desire to learn from that creator.” And for Gen Z specifically, aspiration is key; 45% indicated they follow “creators out of admiration for their lifestyle.”
“Online communities,” Ortiz says, “are going to become increasingly important. That is why individuals are gravitating towards these platforms, because they’re able to find individuals or creators that are very similar to them or that they aspire to be.”
Understand Who Wants What
“Social media and user generated content will increasingly capture consumer attention,” Ortiz explains. In fact, “UGC is the primary form of entertainment for the younger generations.”
The study notes that “Gen Z and millennials spend 26% to 37% more time on social media than previous generations,” reinforcing those platforms as a good way to reach those target audiences.
And remember that the oldest members of the Millennial cohort are now 43 years old, and the youngest Gen Z are now 12 years old, according to Beresford Research, making both demos extremely desirable for many brands.
“I think that says a lot about where their consumption habits are, and where advertisers should be spending more of their dollars to capture that specific audience,” Ortiz says.
And why are Millennials and Gen Zers gravitating to social media? One explanation, per Ortiz, is that “social media platforms are now bringing a broader set of entertainment options to the table.”
Blurred Lines and New Opportunities
Take YouTube, for example. Ortiz says, “Now they have YouTube TV. They have the largest music library out there. They’re in podcasts, as well as gaming. YouTube now, as per a Nielsen report that came out in January, is the largest streaming service out there. And they’re … the fourth largest cable provider out there.”
With new generative AI tools entering the market all the time, UGC is also set to grow exponentially, and “that could continue to drive more eyeballs to the platform[s],” Ortiz predicts.
“The lines are blurring between legacy media, and what will be considered social media,” Ortiz emphasizes. “If you ask someone in the Gen Z age group… what movie they watch, they may be talking about a Mr. Beast movie that they saw on YouTube. Whereas if you ask some of the older generations, they might be talking about ‘Oppenheimer,’ which they saw in the theater or saw on streaming.”
Social media companies are no longer just competing with each other. He says, “What started as a creator-driven platform is now becoming an all encompassing entertainment platform. And so the legacy media companies should be thinking about ‘how do we play in that space, of being able to capture audience that might be diverting to other forms of media?’”
And companies should remember that “there is a symbiotic discovery relationship between social media and legacy media,” Ortiz says.
For example, “We have some research that suggests that consumers are driven towards specific television shows, as well as movies, based on what they’ve heard on social media.”
Both Consumers and Creators Need to Take Responsibility for AI Content
TL;DR
Adobe CEO Shantanu Narayen talks about how the company is incorporating AI and its work to tackle misinformation, urging creators to either use AI or miss out.
Narayen acknowledges the responsibility of companies like Adobe to mark the provenance of content generated by AI, but puts the onus on consumers to be more aware of the media they consume.
He doesn’t believe artists will be overtaken by AI, only that Adobe will work with AI to build new tools for creators — but then what else is he going to say?
“If you don’t learn to use AI you’re going to be in trouble,” declared Adobe chair and CEO Shantanu Narayen, who also put the onus on the general public to learn more about AI and to question the veracity of the content they are served up as fact-based news.
He was speaking to Washington Post tech columnist Geoffrey Fowler in an illuminating exchange about how the tech vendor is seeking to balance its commercial aims with tackling misinformation.
There’s also an existential threat to Adobe itself. Won’t generative AI simply erode any business the vendor has to market its own content creation tools?
Narayen responds. “I think [AI] is actually going to make people much more productive, and it’s going to bring so many more marketers in small or medium businesses into the fold [to be able to use Adobe’s tools even more easily to create content],” he says.
“AI really is an accelerant. It’s about more affordability. And it’s about more accessibility. And Adobe always wins when we solve problems and allow more people into the field.”
He maintains that GenAI is a good thing, on the whole, both for creators and for Adobe itself:
“It is going to be disruptive if we don’t embrace it and we don’t use it to enable us to build better products and attract a whole new set of customers. But I’m completely convinced that this will actually be an accelerant to further democratize technology, rather than just a disruption.”
Fowler asks how Adobe can convince the creatives who buy its tools that these tools — and Adobe’s AI Firefly — are not in the process of replacing them with AI.
“I’m convinced that the creatives who use AI to take whatever idea they have in their brain are going to replace people who don’t use AI,” Narayen replies.
“If people don’t learn to use it, they’re in trouble. I would tell young creators today that if you want to be in the creative field, why not equip yourself with everything that’s out there that enables you to be a better creative. Why not understand the breadth of what you can do with technology? Why not understand new mediums? Why not understand the different places where people are going to consume your content? A knowledge of what’s out there can only be helpful rather than ignoring it.”
Keeping the creator community at the center of its brand, Adobe has opted to differentiate itself from other AI tools developers, like Stable Diffusion or OpenAI, in training Firefly on data that it owns or that creators have given permission for use.
“I think we got it right, in terms of thinking about data and in terms of creating our own foundation models and learning from it,” he says. “But most important in creating the interfaces that people have loved. I think we’ve been really appreciated by the creative community for having this differentiated approach.”
The conversation shifts to the dangers of AI, and how much of a threat AI poses to truth. Fowler notes that people have long been able to use Photoshop “to try to lie or trick” people into believing misinformation, so what’s different with GenAI?
Narayen says technology has always had unintended consequences. “It’s an age-old problem, [but where] generative AI is different is the ease at which people can create content. The pace at which it can be created is going to dramatically expand,” he says.
“So it’s incumbent on all of us who are creating tools, and those distributing that content, including the Post, to actually specify how that piece of art was created, to give it a history of what happened.”
“The challenge — and the opportunity — that we have, is that this is not just a technology issue. Adobe and our partners have worked to implement credentials that identify, definitively, who created a piece of content, whether AI was involved, and how it was altered along the way. The question is, how do we as a company, an industry and a society, train consumers to want to look at that piece of content before determining whether that was real or not real,” Narayen says.
“We’re going to be flooded with more information. So it’s the training of the consumer, to want to interrogate a piece of content, and then ask Who created it? When was it created? That is the next step in that journey.”
Fowler pushes back on this, quizzing just how much onus should be on the user, or viewer, and how much responsibility publishers or AI vendors should share. He points out that Adobe was selling AI-generated images of the Israel-Gaza war and that Adobe said the images were released because they were labeled as made by AI. “But is that just proof that the general public is not adequately prepared to identify generative AI images versus originals?” he said.
“The consumer is not solely responsible for all obligations associated with trying to determine whether it’s AI or not,” Narayen said.
“Certainly, distributors of that content and the creator of the content also has a role to play [but] the consumer has a role to play as well because they’re the ones who are at the end of the day consuming the content.”
He emphasizes the need for consumer education and insists that consumers take some, not all, but some responsibility for how they interpret the content they view, hear or read.
“The more a company like the Washington Post promotes this notion of content credentials, [the] education process will increase.”
Narayen also defends Adobe by saying it is not a source for news. “Adobe only offers creative content, we do not offer editorial content. And what people were doing was trying to pass off what was editorial content or actual events as creative. So we have to work and moderate [or] remove that content.”
Fowler counters that while content credentials are welcome to those who view them as a good idea, they still leave open the misuse of AI in content generation by bad actors. What can be done about them?
Narayen doesn’t really have an answer other than widening the education of the public. “The good guys are going to want to put content credentials in to identify their sources or identify what’s authentic. I think if we can continue to train other consumers to beware in terms of content [provenance] that’s one step in terms of the evolution of how we can educate people.”
He is optimistic about winning the battle. “We will get through this in a responsible way and it will both make people more productive and will make them more creative. We will respect IP, perhaps in a different way than it was done when it was just a picture, but it will happen — I’m confident of that. Companies and governments will work together to have the right thing happen.”
AI has a visual plagiarism problem, raising legal challenges and the urgent need for industry collaboration in ethical AI development.
March 13, 2024
Could AI Deconstruct Hollywood, Then Build It Anew for Everyone?
TL;DR
The radical production efficiency of AI is anticipated to have resounding creative implications.
On the one hand, AI will collapse the traditional content creation industry and conventional creative and technical roles dependent on it.
On the other, AI will be in the hands of literally anybody, opening new and unforeseen storytelling possibilities that could benefit diverse communities. Who could argue with that sentiment?
Filmmakers and artists are grappling with what AI means and no one can quite decide if it’s a good thing or a bad thing.
There are many apocalyptic scenarios for the film and TV industry, the most extreme of which sees the entire studio system (including even broadcast) collapsing, replaced by AI tools that can perform every function.
Yet this is also depicted as a double-edged sword we should welcome as the ultimate in democratization and infinite storytelling possibility.
This optimistic view appears just that — optimistic verging on the fingers-crossed — as experts look for a silver lining in the inevitable technology change sweeping the industry.
Perhaps we should even be drawing a dividing line in human history: Before AI and After AI.
As photoreal video and finessed prompt-to-text generation advance, it won’t be long before any movie or TV show, still image, painting, or novel created in the centuries of B-AI history is viewed as an outdated artifact.
More than that, the ability of AI to simulate anything could call into question how any and every work of art to date was crafted.
Even “behind the scenes” footage of humans actually crafting a film on set could be called into question. It could be faked, right?
That’s a pretty soul-destroying thought, but let’s have faith that we record and hand down the history of creation so that future generations appreciate the sweat, skill, inspiration and collaboration it took to make, say, Singin’ In The Rain or Raiders of the Lost Ark.
Hamper, however, points out that our trust in what we see on screen has always depended on suspending disbelief. If someone is shot dead in a TV drama, we already know the actor wasn’t killed in real life.
“From reality TV narratives, to film lighting to special effects snow, you accept it. It’s all just been sorcery happening behind a screen. We have become fully locked into this fake reality,” he says.
“But at least it is human fakery,” Hamper adds, concerned that now even the skills with which humans used tools to “fake” things on screen will be completely taken over by machines.
Then he flips his own argument on its head. He believes (hopes?) that humans will still be essential for the creative process, “at least for now.”
“The one thing I encourage all creatives to think about is not how to cut ahead of the curve of AI, not how to monetize it or clamber on the bandwagon, but to stop and think how these tools can help tell stories that have not been possible.”
The death of trust “may not be a bad thing,” he says, if we can use AI to conjure stories that help humanity connect with one another and the world around us in ways that have not been possible before.
Before we call time on the content creation ecosystem, let’s take another perspective. The stock footage industry, for instance, is reckoned by almost every pundit to be virtually wiped out, and soon.
This sector of the industry was predicted to be valued at $7 billion by 2027, according to research firm Arrington. That was in 2022. Since then, market leader Shutterstock has partnered with an AI developer to grow its image library with AI stills and video.
“The underlying business model of an industry that was supposed to near $8 billion in just a few years is essentially wiped out in the medium term,” says a review of AI’s impact by Synapse.
Think again.
“The idea of going to these sites and purchasing 10 seconds of footage will fade. But high quality data is the only way OpenAI or any competitor will be able to create a usable model. It essentially shifts every B2C stock site to a B2B video supplier. OpenAI may also enlist an army of stock filmmakers to collect certain scenarios that are missing from the model.”
What about VFX? Surely another industry that will be upended by AI. Won’t the $400 billion animation industry dominated by the likes of Disney and Netflix “see massive disruption as the technological moat drops significantly?”
Maybe. Or maybe the money that went to a few (studios) will now be shifted. It stands to reason that one group to gain will be those supplying the underlying tech, thinks Synapse. Not necessarily the AI tool developers, but the makers of computer processors required to power the data crunch. (Could NVIDIA CEO Jen-Hsun Huang become the richest man on the planet?)
The rest of the pie could go to creators hitherto largely cut out of the greatest rewards.
“The industry risks being over-reliant on AI video models to serve their customers by making [content] more similar to wrappers than the foundations that help builders create,” says Synapse.
“Think of it this way. Rather than an entire team of animators, VFX artists, lighting specialists and more, an individual with a story to share will be able to create and distribute a story at high speed and efficient cost. Creation of new worlds in the gaming and VR space will be streamlined and available to the individual creator.”
Others also see this upside in the evisceration of the traditional content creation industry model.
Chris Wells is a content marketer, but his words appear on behalf of Lightworks, the editing system favored by Thelma Schoonmaker, among others.
In an essay written for the Lightworks blog, he endorses the optimistic outcome of AI even as it destroys jobs. Think of it as a phoenix from the flames.
“Aspiring filmmakers will no longer need expensive equipment and large teams to bring their ideas to life. Instead, all that will be needed is an internet connection and an idea to manifest all the rich, cinematic scenes one’s future auteur heart could desire.”
It’s a good thing, if you follow this line of thinking.
“Directors will be able to rapidly turn their visions into footage, learning from results and refining iterative drafts in a fast feedback loop previously impossible in such a visual medium. Entire short films could be brainstormed, drafted, revised, and finalized in days rather than months or years,” Wells continues.
“Filmmakers will also gain the flexibility to experiment with a wide range of styles and narrative directions, unencumbered by the practical constraints of traditional filmmaking. By streamlining the technical aspects of production through AI, Sora will liberate creators to focus purely on their directorial craft.”
What’s more, he contends, with a tool as powerful as AI in the hands of anybody, previous barriers for women, people of color, or people with disabilities will fall away. Who could possibly argue with that utopia?
“These instant video creation capabilities could place indie artists and major studios on equal footing like never before,” Wells writes. “Aspiring directors might no longer need to struggle to raise funds or await permission for the ‘right’ location. Their visions could spring to life at their fingertips. Lowering the barriers of entry through technology may lead to an exponential growth of new filmmaking talent from underrepresented communities.
“By making professional filmmaking radically accessible, Sora has the potential to promote empowerment and self-actualization for all.”
You can’t argue with its statement: “Whether we like it or not, we are forcibly standing on the precipice of a new era in technological innovation,” but you might take issue with the hope — for that’s what it is — that humans remain at the center of the creative process.
Lightworks wants to preserve “the human element in the AI Age,” says Wells.
“While Sora promises creators radical new capabilities for magical instantaneous video generation, the essence of videomaking remains profoundly human.”
Perhaps resistance is futile. While AI pushes the boundaries for experimenting with stylistic techniques once deemed practically impossible, “filmmakers must lead in establishing best practices for AI tools to expand creative possibility without overtaking human artistry or ethics.”
Attention is turning to how generative AI will be not just used in production but how it will transform every aspect of storytelling.
March 11, 2024
Navigating the Creator Economy: AI Video Generators for Social Content
TL;DR
AI tools designed to improve and speed your marketing communications are legion. Here are a few of the latest video generators powered by AI.
Users can create videos for a wide variety of use cases with these tools. This includes generating educational videos, explainers, product demos and social media content.
These tools all work in similar ways, so it’s a question of try before you buy (or before you publish).
Video content is a must-have for businesses and content creators wanting to compete. At the same time, it has never been more accessible to create professional-looking video content by using AI to do most of the work. This article lists some of the more popular AI video generators targeting marketers or anyone in business looking to create everything from HR training videos to YouTube clips, highlight reels, voiceovers or targeted marketing content to be published online and on social networks, as YouTube videos, TikToks, Reels or video ads.
They pretty much all share some common denominators. They don’t require much, if any, prior experience of editing or design. Many are browser-based, meaning they can be accessed from anywhere. Most work in just a few clicks: upload some raw content (a blog post, for example), choose a voice and an “avatar” to personalize the video, and the tool outputs short-form content complete with background music, graphics and templates in a few minutes.
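In code terms, the shared workflow reduces to a simple request-and-poll loop. The sketch below is purely illustrative: the endpoint URL, field names, and voice/avatar IDs are hypothetical placeholders, not any specific vendor’s API.

```python
# Hypothetical sketch of the common blog-post-to-video workflow these
# tools share. The endpoint, field names, and voice/avatar IDs are
# invented for illustration -- this is not any specific vendor's API.
import time
import requests

API = "https://api.example-videogen.com/v1"  # placeholder vendor

# 1. Submit raw content (here, a blog post) plus voice/avatar choices.
job = requests.post(f"{API}/videos", json={
    "source_text": open("blog_post.txt").read(),
    "voice": "en-US-female-1",   # hypothetical voice ID
    "avatar": "presenter-03",    # hypothetical avatar ID
    "format": "vertical",        # e.g., for Reels or Shorts
    "captions": True,
}).json()

# 2. Poll until the short-form video (music, graphics, template) renders.
while (status := requests.get(f"{API}/videos/{job['id']}").json())["state"] != "done":
    time.sleep(10)

print("Download:", status["download_url"])
```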
However, as SproutVideo’s Conner Carey points out, the videos they generate leave significant room for improvement. “These shortcomings make them most effective at creating faceless videos with voiceovers, such as for FAQs and blog post summaries,” he notes.
Currently, all of these tools produce about the same quality of AI-generated video. What differentiates the good from the bad (and the ugly) is how easily the platform lets you edit the video, adding your own footage, scenes, music, and more.
Most reviewers advise trying a few (most offer free trials) to ascertain ease of use and results.
Here’s a pick of 10 AI video generators for marketers available to use today, leaning on the selections of Alex McFarland at Unite.ai.
With Pictory you start by providing a script or article, which will serve as the base for video content. For example, turn your blog post into a video to be used for social media or your website.
“This is a great feature for personal bloggers and companies looking to increase engagement and quality,” says McFarland. “It’s simple to use and takes just minutes before delivering professional results that help you grow your audience and build your brand.”
Another feature of Pictory, for those looking to create trailers or share short clips on social media, is that you can create shareable highlight reels. And you can also automatically caption and summarize videos.
Synthesys relies on text-to-video technology to transform scripts into dynamic media presentations. Creators and companies can use Synthesys to create videos with lip-syncing AI video technology. All you have to do is choose an avatar and type your script in one of 140+ languages, and the tool will do the rest.
The software offers 69 real “Humatars,” and a voicebank of 254 unique styles. It also offers full customization, an “easy-to-use” interface for editing and rendering, and high-resolution output. Again, it is aimed at marketers or creators wanting to generate explainer videos and product tutorials in minutes.
But it’s not to be confused with Synthesia, another platform targeting brands that also enables users to quickly create videos with one of 70 AI avatars. Besides the preset avatars, you can also create your own. Synthesia claims to be used by some of the world’s biggest names like Google, Nike, Reuters and BBC.
McFarland notes that Synthesia’s AI voice generation platform “makes it easy to get consistent and professional voiceovers, which can be easily edited with the click of a button.” These voiceovers also include closed captions. Once you have an avatar and voiceover, you can produce quality videos in minutes with more than 50 pre-designed templates.
If you’re looking for a more powerful AI to generate marketing and explainer videos, InVideo might be the one. It doesn’t require any background in video creation or video editing, either. All you have to do is input your text, select the best template or customize your own, and download the finished video. The video content can then be shared directly to social media. InVideo says its users develop promo videos, presentations, video testimonials and slideshows.
HeyGen claims to make video creation “as easy as making PowerPoints.” Once again, the process is to record and upload your real voice to create a personalized avatar, or simply type in the text that you want. There’s a wide range of voices, with more than 300 to choose from. There are multiple customizations available, including combining several scenes into one video and, of course, adding music that matches the theme of the video.
The Deepbrain AI tool offers the ability to create AI-generated videos using basic text. Simply prepare your script and use the text-to-speech feature to receive your first AI video in five minutes or less.
VEED also makes it easy to transcribe your video files in one click. All you have to do is upload your video, click “Auto Transcribe,” and download the transcript. With its free video editing app, you can work on creating content right in your browser.
Fliki apparently makes creating videos as simple as writing with its script-based editor. To McFarland, Fliki stands out from other tools because it combines text-to-video AI and text-to-speech AI capabilities to give an all-in-one platform for content creation. It features more than 2,000 text-to-speech voices across 75+ languages.
The Colossyan video generator enables users to choose from a diverse range of avatars and provide the avatar with a script. After your first video is generated, you can then target different regions by auto-translating your whole video with the touch of a button. You can easily change accents and clothing and choose from upwards of 120 languages.
Elai.io users generate video from the link to an article or a blogpost in just three clicks. You first copy and paste a blog post URL or HTML text before choosing one of the templates from the library. All that’s left to do is review the video, make any changes, and render and download it. There are over 60 languages available and more than 25 avatars to choose from. Besides selecting a presenter from the library, you can also request a personal avatar.
Biteable is an AI video assistant and in-browser editing suite that helps create simple, templated videos from script to edit with just one prompt. You choose the video type (explainer, promo, how-to, etc.), the format (landscape, vertical or square), and the visual style from a variety of options and, of course, enter a descriptive prompt. It generates a slideshow-style video complete with AI-generated script, stock video, images, and royalty-free background music.
While it may not win any awards for cinematography, “it’s incredibly useful for creating quick social videos or promoting product updates,” rates Vidyard.
Vidyo.ai uses AI to create Reels, TikToks, and YouTube Shorts from long-form video content. Once you upload a video or insert a URL, the platform takes a few minutes to produce a handful of potential short videos with captions. Munch is a comparable tool spotted by Vidyard, “albeit with slightly less impressive results.” However, some users may find the customization functionalities easier to use.
A room-size computer equipped with a new type of circuitry, the Perceptron, was introduced to the world in 1958 in a brief news story buried deep in The New York Times. The story cited the US Navy as saying that the Perceptron would lead to machines that “will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”
More than six decades later, similar claims are being made about current artificial intelligence. So, what’s changed in the intervening years? In some ways, not much.
The field of artificial intelligence has been running through a boom-and-bust cycle since its early days. Now, as the field is in yet another boom, many proponents of the technology seem to have forgotten the failures of the past — and the reasons for them. While optimism drives progress, it’s worth paying attention to the history.
The Perceptron, invented by Frank Rosenblatt, arguably laid the foundations for AI. The electronic analog computer was a learning machine designed to predict whether an image belonged in one of two categories. This revolutionary machine was filled with wires that physically connected different components together. Modern day artificial neural networks that underpin familiar AI like ChatGPT and DALL-E are software versions of the Perceptron, except with substantially more layers, nodes and connections.
Much like modern-day machine learning, if the Perceptron returned the wrong answer, it would alter its connections so that it could make a better prediction of what comes next the next time around. Familiar modern AI systems work in much the same way. Using a prediction-based format, large language models, or LLMs, are able to produce impressive long-form text-based responses and associate images with text to produce new images based on prompts. These systems get better and better as they interact more with users.
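To make that update rule concrete, here is a minimal sketch of a Rosenblatt-style perceptron in Python/NumPy. The toy AND dataset and learning rate are illustrative choices, not details of the original 1958 hardware:

```python
# Minimal sketch of the perceptron learning rule: connections change
# only when the prediction is wrong. Data and learning rate are toys.
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Learn weights for a binary (0/1) classifier."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            error = target - pred       # nonzero only on a mistake
            w += lr * error * xi        # "alter its connections"
            b += lr * error
    return w, b

# Toy example: learn logical AND, which is linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if xi @ w + b > 0 else 0 for xi in X])  # -> [0, 0, 0, 1]
```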
It quickly became apparent that these early AI systems knew nothing about their subject matter. Without the appropriate background and contextual knowledge, it’s nearly impossible to accurately resolve ambiguities present in everyday language — a task humans perform effortlessly. The first AI “winter,” or period of disillusionment, hit in 1974 following the perceived failure of the Perceptron.
But it wasn’t long before the same problems stifled excitement once again. A 1980s boom built around expert systems — programs that encoded specialists’ knowledge as rules — faltered, and in 1987 the second AI winter hit. Expert systems were failing because they couldn’t handle novel information.
The 1990s changed the way experts approached problems in AI. Although the eventual thaw of the second winter didn’t lead to an official boom, AI underwent substantial changes. Researchers were tackling the problem of knowledge acquisition with data-driven approaches to machine learning that changed how AI acquired knowledge.
Fast forward to today and confidence in AI progress has begun once again to echo promises made nearly 60 years ago. The term “artificial general intelligence” is used to describe the activities of LLMs like those powering AI chatbots like ChatGPT. Artificial general intelligence, or AGI, describes a machine that has intelligence equal to humans, meaning the machine would be self-aware, able to solve problems, learn, plan for the future and possibly be conscious.
Just as Rosenblatt thought his Perceptron was a foundation for a conscious, humanlike machine, so do some contemporary AI theorists about today’s artificial neural networks. In 2023, Microsoft published a paper saying that “GPT-4’s performance is strikingly close to human-level performance.”
But before claiming that LLMs are exhibiting human-level intelligence, it might help to reflect on the cyclical nature of AI progress. Many of the same problems that haunted earlier iterations of AI are still present today. The difference is how those problems manifest.
For example, the knowledge problem persists to this day. ChatGPT continually struggles to respond to idioms, metaphors, rhetorical questions and sarcasm — unique forms of language that go beyond grammatical connections and instead require inferring the meaning of the words based on context.
Artificial neural networks can, with impressive accuracy, pick out objects in complex scenes. But give an AI a picture of a school bus lying on its side and it will very confidently say it’s a snowplow 97% of the time.
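Anyone curious can probe this kind of brittleness with an off-the-shelf classifier. The sketch below feeds a pretrained ImageNet model an ordinary photo and the same photo rotated onto its side; the image path is a placeholder, and the exact labels and confidences will vary by model and photo:

```python
# Probe a pretrained classifier with an upright vs. rotated image.
# "school_bus.jpg" is a placeholder path; results vary by model/photo.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

img = Image.open("school_bus.jpg")
variants = [("upright", img), ("on its side", img.rotate(90, expand=True))]

with torch.no_grad():
    for name, pic in variants:
        probs = model(preprocess(pic).unsqueeze(0)).softmax(dim=1)
        conf, idx = probs.max(dim=1)
        print(f"{name}: {labels[idx.item()]} ({conf.item():.0%})")
```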
Lessons to Heed
In fact, it turns out that AI is quite easy to fool in ways that humans would immediately identify. I think it’s a consideration worth taking seriously in light of how things have gone in the past.
The AI of today looks quite different than AI once did, but the problems of the past remain. As the saying goes: History may not repeat itself, but it often rhymes.
How the Sun Serves as the Color-Killer in “Dune: Part Two”
TL;DR
Playing with physics and light for “Dune: Part Two” gave Giedi Prime, homeworld of the villainous Harkonnens, a startling black-and-white look.
For director Denis Villeneuve, an environment that would breed the Harkonnens’ fascist culture is a planet where the sun is so bright and blinding that all color is washed out.
DP Greig Fraser shot the scene with infrared imagery, although this caused complications for costume design.
The film is one of a series of recent projects that have shot using the technique, including “True Detective: Night Country” and “The Zone of Interest.”
In a film awash with frames of retina-burning golden intensity, the striking monochromatic scene of the gladiator fight introducing the psychopathic Feyd-Rautha (Austin Butler) stands out.
Dune: Part Two director Denis Villeneuve wanted the aesthetic of the evil Harkonnen to signify the polar opposite of the sunlit faith of the desert-dwelling Fremen.
Dune author Frank Herbert had never established much information about the Harkonnen homeworld, called Giedi Prime, other than that it had been industrialized into an almost complete wasteland.
“I love how Frank Herbert shows how the psyche of the tribes of the people are influenced by the landscape,” Villeneuve told Susana Polo at Polygon. “If you want to learn about the Fremen, you just have to learn more about the desert and it will give you insight about their way of thinking, their way of seeing their world, about their culture, about their beliefs, about their religion.”
But with far fewer Harkonnen details to work with, Villeneuve was forced to improvise, and like any filmmaker, he settled on using light to tell the story — specifically the light from Giedi Prime’s sun.
“I wanted to find something that had the same evocative power and the same cinematic power for the Harkonnens,” he said. “I wanted to be generous with their world and make sure that it will be singular, and it will inform us about where their political system is coming from; where their sensitivity, their aesthetic, their relationship with nature is coming from.”
In an interview with Hoai-Tran Bui for Inverse, he added, “The idea that the sunlight, instead of revealing colors, will kill colors; that their own world will be seen in a daylight as a bleak black-and-white world, will tell us a lot about their psychology.”
He took the idea to Australian cinematographer Greig Fraser, who won an Oscar for his work on Part One, and Fraser suggested filming the scenes using infrared photography.
The DP had used the technique on 2012’s Zero Dark Thirty and 2016’s Rogue One: A Star Wars Story. “It’s the same light the security camera uses, and you don’t see it. So, my fascination with infrared started because our eyes can’t see it, but the camera can,” Fraser told Jazz Tangcay at Variety.
Fraser shot the Giedi Prime scenes on an Alexa LF, modified so it could only see infrared and not any visible light. Since the sun emits infrared (the very energy that sustains life on this planet), it felt like a suitable creative solution for depicting the life-sucking, environment-destroying Harkonnen.
The result is an eerie, translucent quality to human skin, aided by the fact that the planet’s population is bald. But by creating the in-universe rule that the sun was washing out the colors, the filmmakers created other challenges. One was, what happens when characters step from the shadows into the sun?
“We needed to come up with rules for what the sun does,” Fraser told Inverse. “Our rules were effectively everything that the sun hits is washed out. So it’s direct sun and it’s bounced sun.”
When inside or in the shade, characters are lit by artificial light, Fraser explained. To achieve transitions, such as when Léa Seydoux’s Lady Margot Fenring emerges from the shade into the sun during the gladiator fight, Fraser had to shoot on a 3D stereo rig. One camera filmed as normal to a full-color sensor; the other, aligned on the rig, shot infrared imagery.
“We made sure we had lights that put out infrared for the infrared camera, and we had lights that the infrared camera couldn’t see, which were LEDs that put out visible light but don’t have infrared light. We had to have two different types of light sources on set that each camera could see separately and see differently.”
Another challenge emerged when they started to shoot: the photography showed up some of the costume fabrics, which were black in daylight but appeared white under infrared.
Fraser says he didn’t know why certain fabrics worked, telling Variety, “I’m sure there’s a rhyme and reason, from a material standpoint. I just know we had to do a lot of camera testing to make sure everyone was dressed in black.”
The scene, which is a birthday celebration-cum-Nazi rally, also features strange ink-blot fireworks. Villeneuve told Fraser, “They’re like anti-fireworks. They suck the light out as opposed to putting the light in.” Speaking to Inverse, Fraser adds, “We worked pretty hard at trying to achieve that goal, this kind of anti-explosion type of light.”
Fraser elaborated on the decision to shoot infrared in an interview for the ARRI Rental website.
“We’d been on this planet for night interiors in part one, but we’d never been outside, so we were discussing what it would look like. I did a test for Denis where the inhabitants have very pale white skin, based on the notion that there’s no visible light from the sun on Giedi Prime, only infrared light. When the characters go from inside to outside, they effectively go from normal light to infrared light,” he detailed.
“On Rogue One, ARRI Rental modified some ALEXA 65s to do exactly the same thing, and we used them as VFX cameras, lighting parts of the set with IR light that didn’t affect the main image,” he added. “We just took that a step further and used them as our main cameras for Giedi Prime. They literally only record the infrared that bounces off skin or clothes, so colors are rendered as different tones and something that looks black to the eye might look white to the camera. It meant that we had to have exterior and interior versions of the same costume for some characters.”
It’s worth noting that infrared shooting techniques are in vogue just now. Hoyte van Hoytema used the 3D rig technique to capture eerie sequences for Jordan Peele’s Nope, and this inspired Florian Hoffmeister to go further and shoot extensive night exteriors for the unsettling Alaska-set murder mystery True Detective: Night Country (also on paired Alexas, with one camera modified without a color filter and with infrared lights).
Most notably, Łukasz Żal shot infrared sequences for The Zone of Interest, although here the rest of the picture is so bleak that these scenes represent hope amid the darkness.
Cinematographer Greig Fraser employs the ARRI Alexa LF large-format digital camera for his collaboration with Denis Villeneuve on “Dune.”
March 6, 2024
“Feud: Capote vs. The Swans” Goes “Behind the Scenes” of the Black and White Ball With an Imagined Documentary
TL;DR
The third episode in Season 2 of FX anthology “Feud: Capote vs. The Swans” travels back to 1966 for Truman Capote’s High Society New York ball, which is recreated in the style of a documentary that was never actually shot.
Director Gus Van Sant shoots in the style of Albert and David Maysles and other mid-1960s documentarians who practiced the Direct Cinema aesthetic: black-and-white, handheld, favoring immediacy and reportage over gloss and precision.
The Maysles did spend time with Capote in 1966, filming documentary short “With Love From Truman,” but it had nothing to do with the ball.
Creating a faux-documentary gave Van Sant the freedom to run around with a handheld camera, and allowed viewers to see the Swans’ many layers of masks.
The third episode of Season 2 of FX anthology series Feud: Capote vs. The Swans travels back to 1966 for Truman Capote’s “best party ever.”
“Masquerade 1966” relives the legendary Black and White Ball hosted by the infamous writer at New York City’s Plaza Hotel — a lavish event boasting a guest list that included everyone from Frank Sinatra and Andy Warhol to Lauren Bacall, Ben Bradlee, the Kennedys, the Agnellis, the Vanderbilts, and the Astors. “As spectacular a group as has ever been assembled for a private party in New York,” according to The New York Times.
Director Gus Van Sant and showrunner Ryan Murphy present the hour-long episode as a black-and-white documentary of the party and Capote’s (Tom Hollander) weeks of preparations for his big night.
At its heart, it’s a flashback episode, with the Swans — Babe Paley (Naomi Watts), Slim Keith (Diane Lane), and Lee Radziwill (Calista Flockhart) — seen in various states of anxious planning. Creating even more drama, two of the high-society Swans are each under the impression that they will be the event’s “guest of honor.”
The documentarians catching this all, though rarely glimpsed, are depicted as real-life filmmakers Albert and David Maysles. But no such Maysles documentary was ever shot, let alone released. “It was an invention of Ryan [Murphy’s] to pretend like Truman hired somebody to shoot the ball, and then decide not to go through with it at the end,” Van Sant tells The Hollywood Reporter. “So that was our concept, and our footage that we shot was supposedly their unused footage.”
As THR’s Mikey O’Connell points out, there is a seed of truth here. The Maysles did spend time with Capote in 1966, filming documentary short With Love From Truman. It just had nothing to do with the ball.
“That was an invention,” Van Sant confirms to Joy Press at Vanity Fair. He did watch footage from the short film the Maysles shot of Capote when he was younger, but creating a faux-documentary gave Van Sant the freedom to run around with a handheld camera, and allowed viewers to see the Swans’ many layers of masks.
But though this peek behind the scenes is imagined, “it feels oddly real—like watching never-before-seen footage unearthed from an archive,” according to Coleman Spilde of The Daily Beast. “The episode is a fine example of how to meld past and present, fiction and reality, for something unique.”
Van Sant explains the aesthetic he deployed, saying that in the ‘60s, cinematographers were freeing themselves of the tripod.
“It’s been emulated to the point that now it’s our standard movie style, which is handheld. And handheld today means, like, jerk it around on your shoulder and move it. The people in the ‘60s were trying to hold it really still. They were also trying to get the action, so that was one little aspect of emulating their style. They weren’t trying to make it bumpy, they just…didn’t have a tripod!”
Matt Zoller Seitz at Vulture calls it “the stylistic peak of the series” and talks to Van Sant about creating it with DP Jason McCormick.
“I’ve watched the work of a lot of documentarians, particularly ones who were part of the same movement as Albert and David Maysles,” Van Sant relays. “There was also D.A. Pennebaker, and Frederick Wiseman and Richard Leacock. The films they made were always fascinating to me. They were informing the French New Wave, partly, and by the 1980s, their work influenced MTV videos, as well as films like Oliver Stone’s JFK, which utilized MTV-style camerawork that was emulating the work of documentary filmmakers from that period.”
The director adds that if you construct reality properly, it really doesn’t matter where you put the camera. “If it’s a reality that makes sense, you could shoot it from the corner of the room with your phone. That’s what those documentarians were doing: They went to a location and put themselves someplace, and it was usually the wrong place in relation to where the action was going to be, so they’d have to zoom in to get to the shot they needed. Or they’d try to run over there. A lot of times they got a bad shot. But it was the action you were looking at anyway. You can kind of force yourself into their situation.”
Van Sant does in fact sneak in a few shots of the actual event, captured by newscasters filming the arrival of some of the guests. And there was no shortage of film of Truman Capote to help recreate his character.
The director is no stranger to experimenting with form, often in stories that meld reality and drama, whether giving William S. Burroughs a supporting role in Drugstore Cowboy, or interpreting the life of Harvey Milk (Milk), or shooting Elephant and Last Days, which are reactions to Columbine and the death of Kurt Cobain but not conventional docudramas. His most formal exercise was remaking Hitchcock’s Psycho shot for shot.
“I always try to make a story conform to the reality as I know it,” he told The Daily Beast. “When I first started out with Drugstore Cowboy, I was putting so much emphasis on blocking. I ended up doing it in the way I understood Stanley Kubrick did his filmmaking: He would work on a scene first and would figure out the shots afterwards. After that, I started working in that manner,” he said.
“As I got more familiar with my cinema, the blocking started to become more and more complicated, because I realized that anything that happens in reality defies the logic of how you would block it in visual fiction. Even with something that happens in a simple, given space, like a convenience store, the way people move and what they do is very surprising. If you were to shoot a basic interaction between two people in a convenience store with your phone and then watch it a couple of times, you’d realize the blocking of reality is quite unexpected. People might enter and exit before they even do anything! Odd things happen all the time. If you can capture those moments and use them in your fiction, you can represent reality in an almost spooky way.”
He adds in the same interview, “Emulating different forms to show different things has always been something to work on, like having a recipe to make. We were doing the same kind of thing on the third episode of Feud but with the films of the Maysles and D.A. Pennebaker. We were trying to approximate a documentary of the Black and White Ball so we could see what it would have been like to capture the black-and-white ball, as opposed to explaining it cinematically. It was an experiment. We were emulating films that existed. Their chaos was inspirational.”
In “El Conde,” Chilean director Pablo Larraín turns the story of General Augusto Pinochet into a stomach-turning tragicomic melodrama-horror movie.
March 4, 2024
Creator Economy Amplified: Lindsey Gamble on the Increasingly Blurred Lines of Influencer Marketing
Watch “Creator Economy Amplified: The Increasingly Blurred Lines of Influencer Marketing.”
TL;DR
Influencer marketing strategist and creator economy advisor Lindsey Gamble shares his expert insights, discussing key trends and his predictions for the future.
Social media platforms are expanding their focus to include a broader array of content producers, Gamble says, blurring the lines between individual creators and traditional publishers.
He highlights the transformative role of generative AI in social media, enhancing creative processes and enabling creators to reach global audiences with innovative tools like AI dubbing.
Brands are increasingly recognizing creators not just as content producers but as influential consumer segments, partnering with them to promote products and engage audiences on platforms like TikTok.
Gamble predicts TikTok will launch its own shopping day, aiming to compete with major retail events like Amazon Prime Day. He ultimately envisions a shift away from traditional social media platforms towards a more integrated digital ecosystem where creators and brands collaborate to build audiences and monetize content more effectively.
Influencer marketing, a concept that has evolved significantly since its inception, is on the brink of reaching unprecedented heights. With the global influencer market projected to grow to $143 billion by 2030, per Statista, the industry stands at a pivotal juncture. Lindsey Gamble, an influencer marketing strategist and creator economy advisor, offers his insights into the transformative trends shaping this space as we head into 2024.
As part of NAB Amplify’s “Creator Economy Amplified” series, we sat down with influencer marketing strategist and creator economy advisor Lindsey Gamble to take a deep dive into key trends from over the past year and unpack his predictions for 2024.
As associate director of influencer innovation at social media management platform Later, Gamble has extensive experience and insights that have not only shaped brands’ influencer marketing strategies but have also provided a roadmap for navigating the evolving landscape of the creator economy. He touches on the increasingly blurred lines between creators and publishers, the integration of AI in social media, and the strategic moves made by various platforms to cater to creators and audiences alike.
Watch the full conversation in the video at the top of the page.
The Evolving Roles of Creators and Publishers
The social media landscape is undergoing a transformative shift, with the roles of creators and publishers increasingly converging, Gamble notes. Social media platforms, traditionally the realm of individual content creators, are expanding their embrace to include a broader spectrum of content producers, including publishers, digital magazines, and content collectives.
The evolving roles of creators and publishers signal a maturing digital landscape, he says, one where the lines between content production and distribution are becoming increasingly blurred. “It’s one of the most fascinating things that I’ve been keeping an eye on over the last year and change.”
The blurring lines between creators and publishers are evident as platforms like Pinterest and LinkedIn adjust their strategies to cater to a wider array of content producers. “Social media platforms have really over-indexed on creators, helping them with new tools, ways to monetize,” says Gamble. “More recently, that’s changed; we’ve seen a lot of these social media platforms go back to the traditional playbook where they’re also turning their attention to publishers.”
Pinterest, for example, has broadened its definition of creators to encompass magazines and digital collectives, opening up opportunities for these entities to participate in its Creator Inclusion Fund. Similarly, LinkedIn, which has heavily invested in supporting creators, is subtly shifting its focus to appeal to a broader user base, including professionals and businesses.
Amid new opportunities, this shift is not without its challenges. “Creators and publishers are kind of in competition today,” Gamble explains. He describes a landscape where publishers are adopting creator-like strategies to produce content that resonates on a personal level, while creators are exploring monetization avenues traditionally associated with publishers. This competitive yet symbiotic relationship underscores the complexity of the evolving creator economy.
GenAI’s Role in Social Media and Content Creation
The integration of generative AI into social media platforms is revolutionizing the way content is created, discovered, and consumed, says Gamble. “AI is here, it’s not going anywhere.”
Snapchat was the first social media platform to jump into the fray, in early 2023, he recalls, “which was really a surprise.” This marked the onset of a trend that major platforms like LinkedIn, Meta, and YouTube soon followed.
“Now we’re [really] starting to see tools and features that are beneficial to creators,” he says. “That can be something as simple as being able to remove the background out of your existing photo and put yourself in a totally different setting.”
However, the integration of AI is not without its challenges, particularly for creators. As brands begin to leverage AI tools to produce their own content, the space for traditional creator-led content could shrink, Gamble suggests.
But the benefits of generative AI are undeniable when it comes to growing your business as a creator, he adds. “YouTube launched a lot of features last year, and one of the standout ones is an AI dubbing tool,” he says, detailing how the ability to release a single video in a variety of languages not only reduces barriers to reaching a wider audience, but also provides new opportunities for monetization. “It’s a great addition to creators in terms of their businesses.”
Social Becomes Search
The evolution of social media platforms into more search engine-like entities is another significant development Gamble highlights. He notes the importance of AI in this transformation, enabling platforms to offer personalized content recommendations and insights. This shift, however, demands that creators optimize their content for algorithms to increase discoverability, a task that AI tools are making more manageable and effective.
“Because social media platforms are becoming very similar to search engines like Google,” he says, creators not only “have to create great content that people are going to resonate with, and content that feeds into the algorithm, but you also have to take an approach similar to SEO for websites.”
Depending on your perspective, this is either an opportunity to increase discoverability or just another task to do as a creator, Gamble remarks. “In addition to posting that content, you also have to figure out ‘how do I write the right captions… that are going to get me in front of people when they’re using these platforms,’ like a search engine, in addition to those discovery mechanisms.”
TikTok’s E-commerce Ambitions
TikTok is rapidly emerging as a powerhouse in the influencer marketing arena, captivating audiences and creators alike with its dynamic content and interactive features, and now making strategic moves into the e-commerce space.
Instagram remains at the core of influencer marketing, Gamble says, noting that the Meta-owned platform still holds the most ROI for advertisers. “If you look at some of the data, most brands are starting on Instagram, but TikTok is definitely picking up, especially for new brands.”
Anticipating TikTok’s next big move, Gamble predicts, “I think we’re going to eventually see TikTok launch a TikTok Shop Day that pretty much is what Amazon Prime Day is.”
He also points out an emerging trend where “creators are becoming a consumer segment,” with TikTok at the forefront of this shift. Brands are increasingly recognizing the value of partnering with creators not just for their content creation skills but also as influential consumers who can authentically promote products. This approach is particularly evident on TikTok, where the platform’s unique ecosystem fosters a close-knit community of creators and viewers, making it an ideal venue for targeted e-commerce initiatives.
The Future of the Creator Economy
As we look toward the horizon of the creator economy, Gamble emphasizes a pivotal shift in how brands and creators collaborate. Understanding the nuanced needs and preferences of creators is becoming increasingly essential for brands aiming to develop effective products and marketing strategies.
“Brands that look at creators as a segment, not necessarily just launching products, but figure out a way to position their brand and products as a benefit to creators can really tap into the 300 million or whatever the number is today of creators out there that they’re missing out on,” he explains.
Gamble’s insights suggest a future where partnerships between brands and creators evolve beyond traditional sponsored content. By genuinely understanding and integrating into the creator lifestyle, brands can uncover innovative ways to support creators, whether it’s through products that enhance their creative process or services that address their unique challenges. “Talk to creators, consult with them, and look beyond sponsored content to gain valuable insights,” he advises.
Looking ahead, Gamble shares his vision for the creator economy: “Essentially, I think we’re going to move away from social media platforms,” he predicts, “and everything’s just going to be a platform for creators and brands to create and build audiences and monetize.”
Weird is Wonderful: The Adventure of Editing “Poor Things”
TL;DR
Editor Yorgos Mavropsaridis, ACE discusses his longtime collaboration with director Yorgos Lanthimos on the multi-Oscar-nominated “Poor Things.”
Mavropsaridis explains how the dance scene on the cruiser became a microcosm for the whole film.
He shares how he had to rip up the rule book for editing when they first met — and continues to do it on Lanthimos’s films to this day.
Editor Yorgos Mavropsaridis has collaborated with director Yorgos Lanthimos for more than 20 years and knew from the first moment they met that he had to ditch all the rules he had learned.
“The first question is ‘what is reality?’” he told Hayden Hillier-Smith in an extensive interview on The Editing Podcast about the making of awards season favorite Poor Things.
“From the first collaboration I discerned that this is a guy who wants to say things in a different way, not the usual way we approach themes or character. For Poor Things I discovered many themes that, existentially if you like, are about how easy it is to be in a society, which puts some rules on you.”
For Lanthimos, storytelling is not a didactic experience. “I want you to feel no, it’s more loose, it’s more open to interpretations and feelings,” says Mavropsaridis, who is Oscar-nominated again following his work on the previous Lanthimos drama The Favourite.
“All Lanthimos’ films desire a new kind of reality, which has certain rules for how an individual can behave, and questions whether this behavior is dictated by the character’s needs or by some external force. And of course, it’s the same with Bella Baxter.”
The lead character is played by Emma Stone in what has already been a BAFTA and SAG Award-winning performance.
Mavropsaridis says he still has to go against his instinctive approach to editing. “And I have to surprise myself as well, to create something new and not to repeat the same situations all the time.”
In all their previous films they had mostly used classical music, but the director commissioned Jerskin Fendrix to compose the music for Poor Things months before shooting started. It was not the exact music as heard in the final film, but the general themes, so they could have them in editorial after the first cut.
Lanthimos also used a lot of this music on set, having done this previously on The Killing of a Sacred Deer (2017). “Different music was played back for [Stone] to somehow get inspired by the music — to have this surprise of — for the first time — seeing something. There was also music to set the inner rhythm or their external movements because Yorgos likes the choreography of the actors — not only the facial expressions — and this way, the movement, internal or external, is influenced.”
Almost every scene uses an extremely wide-angle fisheye lens. Mavropsaridis explains there was no discussion with the director about when to use them.
“The usual pattern was a fisheye lens, or the 4mm lens with the iris mask, then a long take with movement combination, zoom in or out with tracking shot. Usually, my editing brain needs a reason to use them.
“For example, the first time we used this 4mm lens was when Godwin Baxter went down the stairs, heard the piano playing, and then we cut to him. He looks at her and smiles. At that moment, I thought, ‘Okay, that 4mm lens would be a nice point of view from this strange man.’ Then the next time was when Max comes in, Bella runs and embraces Godwin Baxter like a baby. I thought it was funny: a grown-up woman being like a baby, maybe seeing it through Max’s eyes for the first time — this strange situation. There are always small reasons. Subliminally they might say something to a viewer.”
Another example is when they are on the cruiser and Bella Baxter says to Duncan Wedderburn, “You’re in my sun!” so Mavropsaridis cuts to the 4mm lens when he throws the books away, “just to punctuate the situation. Different reasons all the time.”
It was the director’s idea from the beginning to have the first part of the film be a kind of homage to the old Gothic films shot in black and white. They then break that convention by introducing color early in the film.
“It was broken in an interesting way when Godwin Baxter recites the story of Victoria Blessington: how he found her, being pregnant with the baby, was shot in color,” Mavropsaridis says.
“There was a good juxtaposition between black and white in the office narration and the color of her suicide and the discovery of her body, which also breaks interestingly the time continuum between the two situations that are kept continuous with his narrating tale. Then the rest of the film, after her leaving London, was in extreme color and also in different hues of color. For example, the first part in Lisbon was shot with color negative.”
The scene where Bella dances without a care in the world was edited “incorrectly” by Mavropsaridis initially. He felt the choreography should remain intact when in fact it had to be awkward. The creative idea was that the dance was “a microcosm of the big world of the film.”
“Of course, it was very nice to see her in a situation with other dancers, and I thought it was nice to keep this situation with the other people dancing around her that was so funny. But this was not what it was supposed to be,” he says.
“Bella is about 16 years old at that time. She sees people dancing for the first time, and the particular music excites her and she wants to dance, but she hasn’t danced before, her movements are rough and awkward, but she doesn’t care about what other people would think. And we didn’t have to care if her movements were choreographed or ‘correct.’ It had to be spontaneous,” he continues.
“Everybody wants to control her, so the main part of the choreography we had to keep were these movements: When Duncan puts his arm around her, trying to manipulate it, and she reacts, trying to free herself. This dance scene is a microcosm of the whole life situation.”
Once they had reached this point where everything was in place, the cut was three-and-a-half hours. Then they had to deconstruct the whole thing.
“We have constructed it. Now let’s take it apart and see what we can do to try this or that. He’s very precise in what he wants, but usually, the edit has to improvise on how to achieve it,” Mavropsaridis says.
“He doesn’t say much, but since we’ve edited together for almost 25 years now, I know what he means, and I know which way I have to tackle it. I have a lot of freedom from him to try things, even if they were not discussed. If I have an inspiration in the middle of the night, I will do it,” he continues.
“Maybe it works, maybe it doesn’t. After many trials and errors, many hours, and many films together, we have reached a very understanding way of working. I believe that Poor Things was an easier film to edit.”
A discussion at the dinner table about marrying Bella includes flash forwards and flashbacks. This was composed in the edit to cut length and keep the story moving, Mavropsaridis told Steve Hullfish on the Art of the Cut podcast.
“It is a method that we developed on the film we did together, Dogtooth, because Yorgos likes to shoot his films in continuity. He doesn’t edit during the shoot, so in editorial we felt that this big scene with a lot of discussion going on needed to be compressed.”
Typically, editor and director will have a few issues that can only be resolved in the edit, but there is now a telepathic connection between the pair that is only the result of like minds working together for so long.
“There was a problem about a scene on the cruise ship,” he told CinemaEditor magazine. “While Yorgos was emailing me I sent over my solution and he said, ‘That is exactly what I have in mind.’ I have reached a point of being able to understand his thoughts without talking to him. After so many years I know what the small things are that bother him and what he tries to achieve. At the same time, he has helped me to overcome my laziness of the mind, so it is now easy to me to throw a scene out and do it a different way.
“I always have in my mind Lanthimos’ own phrase — ‘Is that all we can do?’ So I have to prove each time we can do more and better.”
Cinema auteur Yorgos Lanthimos’s “Poor Things” combines cutting-edge virtual production methods with old-school filmmaking techniques.
February 28, 2024
Spies Like Us: The Collaborative Post on “Mr. & Mrs. Smith”
TL;DR
“Mr. & Mrs. Smith” editor and co-producer Greg O’Bryant talks about crafting the new TV series from the makers of “Atlanta.”
There were multiple editors across the eight episodes, including Kate Brokaw, Kyle Reiter and Isaac Hagy, but O’Bryant cut the pilot, and served as the conduit through which picture, VFX and sound passed.
He says the show’s reshoots were “no big deal,” and actually helped the final show come together.
The premise for new Amazon Prime Video series Mr. & Mrs. Smith is like online dating. “You show up and meet somebody and you see how far it goes,” says the show’s lead editor Greg O’Bryant.
It is of course a riff on the 2005 hit feature starring Angelina Jolie and Brad Pitt as a married couple who are also spies, working sometimes for and sometimes against each other to ‘hilarious’ effect.
The Amazon redo is from the storytellers behind critically acclaimed series Atlanta: Donald Glover and writer Francesca Sloane. Glover and Maya Erskine star as the two title characters (John and Jane), who operate as an undercover married couple while working for a mysterious spy agency.
“Technically, it exists in the same cinematic universe, which there are some hints about in episode four. But it was everyone’s intention to tell a different story with a different tone,” O’Bryant explains in an interview with Matt Feury on The Rough Cut.
Showrunner Francesca Sloane used the analogy of the “storytelling sandwich.”
“That means the characters’ relationship is the bread and the spy stuff is the meat of the sandwich. It’s there, but the ratio should be skewed towards the relationship parts. We wanted to tell a story about modern relationships, why we get into them, and why we stay in them.”
He adds, “If you think about it, John and Jane had a pretty dire and immediate reason to stay together, but I think it was more about the heart than it was about action. Hopefully we achieved both.”
There were multiple editors across the eight episodes, including Kate Brokaw, Kyle Reiter and Isaac Hagy, but O’Bryant cut the pilot, has a hand in each episode, and is also a co-producer on the show. “I tell everybody that I know just enough to be dangerous,” he joked.
“I think it’s helpful to have an editor’s perspective on everything, whether it’s reshoots, color, sounds, visual effects, or music. It’s different on a film. Film is a director’s medium. TV is a writer’s medium. Writers usually appreciate having a technical head in the game.”
He was heavily involved in creating the score with Sloane, composer David Fleming, and Donald Glover, approving the VFX and working with Harbor Sound on the audio mix.
“The idea is, once we start editing, everything comes through my room before it’s done, whether it’s another editor’s cut, a VFX shot, a music cue, or a score cue.
“What makes TV special is when it feels like it’s all done by the same hand. It’s not just me. Francesca might be in my room. The other editors might be in my room. But it’s all going to come through the same tiny pipeline and, hopefully, that adds a level of consistency to it all.”
There were some pretty extensive reshoots, which O’Bryant says were helpful. “I know the larger world tends to think, ‘Oh, reshooting? Something must have gone wrong,’ [but] I think it almost always helps to go in and pick up a couple of shots for this or that episode, or even add in scenes,” he says.
“We redid whole scenes for the Mr. and Mrs. Smith pilot. They reshot two scenes entirely in the same location and everything. The team looked at what we had and said, ‘Hey, we’ve learned some things about this show. Let’s go back and do it a little differently.’”
In one episode, John and Jane are seeing a therapist about their marriage troubles. Within that we see vignettes of different missions they’ve been on.
O’Bryant says this episode had the most straightforward comedy and the most improv. “There’s a fair amount of improv in the show. Donald and Maya are both comedians, so there’s a lot, but that one had the most improv.”
Episode four was originally planned with an extensive action sequence in the Mexican jungle, but the trick was making it work with comedy as well as thrills in the edit.
“We tried it a bunch of different ways. We spent weeks tweaking just that little sequence. We really beat the bushes before coming up with that fast, vibe-y, out-of-control thing we ended up with. I think we found a good middle ground. But more importantly, that’s the tone of the show. The tone of the show is about how to get that hard cut to be funny. We need to remember that these guys aren’t that good at being spies.”
Keeping the audience guessing without tilting bias was key for “Anatomy of a Fall” director Justine Triet and editor Laurent Sénéchal.
February 27, 2024
Navigating the Creator Economy: Leveraging AI for Influencer Marketing
TL;DR
Artificial intelligence is significantly transforming influencer marketing by enhancing content creation, improving discovery processes, and providing advanced metrics for campaign analysis.
According to Ogilvy’s “2024 Influence Trends You Should Care About” report, hyperpersonalization, where AI tailors content to individual users, is a key trend for 2024, leading to more personalized and engaging influencer interactions.
The introduction of AI-generated virtual influencers, such as Meta’s AI Personas, marks a shift towards personalized, one-to-one interactions between influencers and fans, using digital replicas of celebrities.
Despite the benefits, there’s a growing concern about maintaining authenticity in influencer marketing as AI becomes more prevalent. The challenge lies in leveraging AI without losing the genuine connection that audiences value.
Experts argue for a balanced approach to using AI in influencer marketing, emphasizing the importance of not letting AI overshadow the creativity and authenticity that are central to successful influencer campaigns.
In the ever-evolving landscape of influencer marketing, one of the biggest trends to watch in 2024 is the increasing use of artificial intelligence. From content creation to enhanced metrics and personalization, AI’s integration into influencer marketing is reshaping how content is created, how audiences are engaged, and how campaigns are measured for success.
Over the past year, discovery has gotten a boost with the development of AI-powered tools that analyze vast amounts of social media data to identify potential influencers who best match a brand’s values and target audience. AI can also predict the potential success of an influencer marketing campaign by analyzing historical data. In addition, AI-driven platforms can assist influencers in generating content by suggesting captions, hashtags, and even optimizing image and video quality.
AI tools can segment an influencer’s audience based on various criteria, enabling brands to effectively tailor their messaging to specific demographics. AI can additionally provide real-time analytics and performance insights, allowing brands to track the success of their influencer campaigns. Finally, AI algorithms can also help identify fake followers and engagement, a critical concern that goes well beyond the influencer marketing industry.
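To make the discovery step above concrete, here is a minimal sketch of the kind of matching such tools might perform, scoring creator bios against a brand brief by text similarity. Production platforms use learned embeddings and engagement data rather than word counts; the creator handles, bios, and scoring here are all illustrative assumptions.

```python
# Illustrative sketch only: ranks hypothetical influencers for a brand by
# bio/content similarity. Real discovery tools use learned embeddings and
# engagement signals; bag-of-words cosine similarity just shows the idea.
from collections import Counter
from math import sqrt

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

brand = "sustainable outdoor gear for hikers and climbers"
influencers = {  # hypothetical creators and bios
    "trailrunner_jo": "hiking trail reviews sustainable gear outdoor adventures",
    "citystyle_max": "streetwear fashion hauls sneaker culture",
    "peak_climbs": "climbing tutorials outdoor gear tests mountain expeditions",
}

brand_vec = vectorize(brand)
ranked = sorted(influencers.items(),
                key=lambda kv: cosine(brand_vec, vectorize(kv[1])),
                reverse=True)
for name, bio in ranked:
    print(f"{name}: {cosine(brand_vec, vectorize(bio)):.2f}")
```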
“None of us can be sure of all the ways AI will impact the influencer marketing industry, but it’s a safe bet to say more influencers will be using AI next year,” Danielle Wiley, founder and CEO of influencer marketing agency Sway Group, writes at Forbes.
“AI can help with all sorts of content creation tasks, from writing captions to editing photos and videos,” she notes. “AI can also help influencers (and brands) figure out what their followers like by analyzing comments and likes to suggest what kind of content should be posted next. Tools that utilize AI will make it easier for influencers to quickly create high-quality content, keeping their followers engaged.”
In 2024, the influencer marketing industry is poised at an exciting intersection of technology and human creativity. The brands and influencers who navigate this intersection skillfully, using AI as an ally to enhance their authentic voice, are likely to emerge as the frontrunners in a rapidly evolving digital landscape.
Hyperpersonalization is the Next Big Thing
AI will extend its reach even further into influencer marketing in 2024, industry experts agree. Hyperpersonalization, which tailors content to each individual follower’s specific preferences and behaviors, is one of the biggest AI trends to watch out for this year, according to a report from Ogilvy, “2024 Influence Trends You Should Care About.”
“In 2024, expect to see a more hyperpersonalized form of engagement with influencers as the interplay between influence and AI enters a new era,” the powerhouse agency predicts.
John Harding-Easson, Ogilvy’s head of influence for EMEA, discussed the increasing use of AI in influencer marketing during a video presentation of the report. “It’s been a really big year for AI, particularly in influence,” he said. “It’s expanding at a rapid rate.”
While AI was already firmly on Ogilvy’s radar in its 2023 report, Harding-Easson said, “I think we were quite surprised at just how much the space has advanced.”
Looking at next year, he continued, “What we’re expecting in 2024 is AI influence to enter a new chapter, one that will see a pivot to more hyperpersonalized engagement with influencers.”
As influence continues down this hyperpersonalized path, “we’re seeing that personalization with an influencer isn’t just the feature,” he added. “It can be the foundation of a campaign.”
Virtual Celebrity Influencers
Companies already tapping the power of virtual influencers for hyperpersonalization include Meta, which is poised to capitalize on the trend, per Ogilvy’s report.
“Meta’s AI Personas, introduced in late 2023 and fully deployed in 2024, signal a significant shift from broad-reaching influence to personalized, one-to-one interactions that maintain a sense of authenticity,” it reads.
First introduced at Meta Connect 2023 as AI chatbots that have “personality, opinions, and interests, and are a bit more fun to interact with,” Meta Personas employs AI replicas of celebrity influencers to attract engagement, as Fortune’s Alexandra Sternlicht reports.
“Developed in partnerships with stars such as Charli D’Amelio, Tom Brady, and Kendall Jenner, the bots use the magic of generative AI to create animated digital replicas of the celebrities. Users of Meta’s WhatsApp, Instagram, and Messenger can have one-on-one interactions with the bots, asking them questions, confiding in them, and laughing together at their jokes.”
“Meta Personas really does signal a shift in the dynamic between influencers and their fans,” Harding-Easson emphasizes. “The ability to chat with your favorite influencer, your favorite celebrity is an experience that was previously unimaginable to fans and it’s really just going to change what we expect from our influencers as well.”
As with any technology still in its infancy, however, “some of the experiences are quite clunky at the moment,” he acknowledges. “Some of the conversations feel forced, but this is only round one.”
Virtual celebrity influencers are “just the first new form of personalized communication and content discovery,” Harding-Easson adds, noting that Meta has also announced plans to allow users to create their own AI replicas later this year. “So scaling interactions with AI influencers is really ripe with opportunities.”
Alex Dahan, founder and CEO of global creator marketing company Open Influence, discussed the implications of AI influencers for advertisers with Marketing Dive. “These virtual personalities, created using advanced AI technologies, can offer brands new and innovative ways to engage with audiences, especially in the realms of fashion, technology, and entertainment,” he said.
However, the reliance on AI influencers comes with its own set of challenges. One key concern is maintaining the authenticity that is the hallmark of successful influencer marketing. There’s a delicate balance to be struck: leveraging AI for enhancing creativity and efficiency, while ensuring that the content remains genuine and relatable. Overuse of AI risks creating a disconnect, as audiences tend to value content that resonates with real-life experiences.
“With AI, it’s all about finding the right balance — using advanced tools to enhance creativity and efficiency, while keeping authenticity and ethical considerations at the forefront,” says Wiley.
Perhaps anticipating the advent of AI in influencer marketing, last year Ogilvy called for an industry-wide AI Accountability Act that would mandate disclosure around the use of AI influencers, Harding-Easson said. Meta, he says, is already moving in this direction, deploying watermarks on its AI Personas to help users clearly distinguish the virtual from the real.
According to Rafa Titus, global head of influence at Ogilvy, “32% of people already can’t tell a human face from an AI face, and that’s going to go up.” The Accountability Act, Titus says, is aimed at “really making sure that marketers are disclosing their use of AI influencers so that that trust that we have in the space doesn’t go away.”
“As the industry tries to get ahead of the next phase of the creator economy, naturally many are looking to artificial intelligence. They’re not necessarily creating AI influencers a la Lil Miquela, but offering product imagery that can supplant influencer-styled photo shoots and more personalized product recommendations via chat bots,” Tassin observes.
The promise of generative AI does have some overlapping value with what influencers provide, she says. “And yet, while GAI tools like ChatGPT took the world by storm this year and captured the imagination of retail industry leaders, consumer trust in AI didn’t necessarily follow the hype.”
The exciting potential of customer-facing generative AI for e-commerce brands won’t be realized in 2024, Tassin predicts. “Retailers should resist chasing shiny objects and look to consumer adoption of other types of retail tech for signals of the adoption curve,” she warns. “If GAI tools follow patterns similar to that of AR product visualization and size measurement tools, widespread adoption of customer-facing GAI is a long way away from replacing the human faces and expertise that influencers provide. Instead, retailers should ensure their influencer relationships are in sync with best practices based on what actually drives purchases on social media.”
Wiley agrees that influencers and their partnered brands should be cautious about an over-reliance on AI. “While AI tools are great for streamlining tasks, using them too much can make posts seem artificial or overly polished, risking the loss of that genuine, real-life connection with followers,” she says. “There’s a risk of influencers becoming too dependent on AI for content ideas, which might lead to a disconnect with what their audience truly values. Lastly, there are ethical concerns. For instance, if an AI tool creates too artificial of a scene, it could mislead followers, causing trust issues.”
Adweek’s Adam Rossow also argues that influencers won’t necessarily look to AI for content creation. “While brands and agencies will tap into AI to help guide their choice of influencers, analyze campaigns and brainstorm on their creative briefs, we will see far less reliance on AI from the influencers themselves,” he writes.
“Leaning on AI takes away from their pride of creative ownership and has some fearing plagiaristic repercussions. However, what will ultimately drive influencers to eschew AI in formulating creative is the one thing they hang their hats on: authenticity.”
“Without authenticity, the relationship between an influencer and their followers is tarnished, if not completely broken. The lure of saving precious time and creative energy is not great enough for influencers to risk augmenting their voice or coming off as even remotely synthetic. AI will make influencer marketing more approachable, streamlined and measurable, but it won’t be embraced as an easy button for influencers and creators.”
The Creator Economy is reshaping digital advertising, offering brands authentic engagement and accelerating the consumer purchase journey.
February 27, 2024
How to (Comprehensively) Compare Cameras: HBO’s CAS at NAB Show
TL;DR
The HBO Camera Assessment Series is a feature-length movie that demonstrates various camera systems. In Las Vegas this year, NAB Show will screen the most recent assessment.
Additionally, CAS co-creators Stephen Beres and Suny Behar will discuss the test methodology, the technology changes and the type of analysis deployed.
Then, on Tuesday, April 16, CAS co-creators Stephen Beres and Suny Behar will discuss the test methodology, the technology changes impacting the style as well as the type of analysis deployed.
For the most recent assessment, Behar and Beres used Zeiss Cinematography lenses to test camera systems including the ARRI Alexa Mini, SONY Venice 2, RED V-Raptor, BLACKMAGIC Ursa 12K, ARRI Alexa 35 and KODAK 5219 Film.
“When we started 10 years ago,” recalls Behar, “a lot of the questions weren’t about comparing the performance and the quality of the cameras as much as it was comparing whether or not some cameras even could perform.
“There was a vast difference between a camera that could record even 10-bit 4:4:4 versus a [Canon] 5D that was 8-bit 4:2:0, so you couldn’t do green screen work; there was significant motion artifacting; and it was difficult to focus. Those larger differences aren’t what we’re looking at now because all the cameras can do at least 10-bit 4:2:2 minimum. They at least have 2K, Super 35-sized sensors.”
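The arithmetic behind that gap is easy to sketch. Assuming uncompressed 1080p frames, the two recording formats Behar contrasts differ by a factor of 2.5 in data per frame; the figures below are back-of-the-envelope, not measurements from any specific camera:

```python
# Back-of-the-envelope comparison of uncompressed frame sizes for the two
# recording formats contrasted above. Chroma subsampling factors: 4:4:4
# keeps 3 full-resolution samples per pixel, 4:2:0 averages 1.5.
def frame_bits(width, height, bit_depth, samples_per_pixel):
    return width * height * samples_per_pixel * bit_depth

w, h = 1920, 1080
hq = frame_bits(w, h, 10, 3.0)   # 10-bit 4:4:4
dslr = frame_bits(w, h, 8, 1.5)  # 8-bit 4:2:0 (the 5D era)

print(f"10-bit 4:4:4 : {hq / 8 / 1e6:.1f} MB/frame")
print(f" 8-bit 4:2:0 : {dslr / 8 / 1e6:.1f} MB/frame")
print(f"ratio: {hq / dslr:.1f}x more data per frame")
```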
There continue to be differences, some quite significant, among the tested cameras, Behar adds, “but it’s in different realms. The tests are no longer about [finding] where the picture just breaks, but as people expect more, there are other issues we investigate.”
There are circumstances people wouldn’t have tried to shoot a decade ago that are becoming standard expectations of a DP.
“You are going to care about signal to noise if you’re trying to shoot with available light, where some cameras will be significantly noisier than others. In the world of HDR, if you’re shooting fire or bright lights, you are going to care about extended dynamic range in the highlights, if you hope to not have to comp all your highlights in with the effects because [the highlights] broke.”
Stephen Beres explains that these tests, which have screened at various venues, serve as the start of discussions for his networks’ productions, not as any kind of dictate.
“We don’t have a spreadsheet of allowed and disallowed,” Beres explains. “What we have is projects like this, so when we sit down together — the studio and the creative team on the show — and we look at these kinds of things as a group, it can help us start the discussion about the visual language of the show. ‘What visual rules should be set up for that world that that show exists in?’
“And then we sort of back that into the conversation about ‘what technology are we going to use to make that happen?’. And that’s not just about cameras. It’s the lensing. It’s what we do in post, and it’s how we work with color. It’s how we work with texture. All those things go together to create the visual aesthetic of the show.”
Once they complete a new installment in the CAS, the company is delighted to share the results with all who are interested. Beres and Behar have both taught production and post at the university level, and they clearly enjoy sharing their knowledge.
The Assessments
A great deal of thought goes into designing these camera tests in order to display apples-to-apples comparisons, with elements such as color grading and color and gamma transforms all handled identically.
“I think all of the cameras we tested this time shot RAW,” Behar says, “so then you have to make decisions about how you’re going to get to an intermediate [format for grading].”
They decided to use the Academy Color Encoding System (ACES) as a working color space. While there are certainly some people in the cinematography and post realms who still have various issues with ACES, Behar says, it has been useful in some ways because ACES forced every manufacturer to declare an IDT whether they liked it or not.
The IDT, or Input Device Transform, along with the ODT (Output Device Transform), provides objective numerical data quantifying the exact responses of a given sensor so that it can be transformed perfectly into ACES space.
While some manufacturers were reluctant to subject their sensors to such scrutiny (where little tricks involving after-the-fact contrast and saturation, etc., can’t hide their flaws), all did come around because of the growing adoption of ACES and its support from the Academy of Motion Picture Arts and Sciences and the American Society of Cinematographers.
Because of this, the ACES imagery upstream of any color grading really does provide a look into a sensor’s dynamic range, color and detail rendering.
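In practice, an IDT usually amounts to decoding the camera’s log curve back to scene-linear light and then applying a 3×3 matrix from the camera’s native primaries into ACES AP0. The sketch below shows the shape of that pipeline only: the log constants are modeled on a published LogC-style curve, and the matrix is a placeholder, not any real camera’s published IDT.

```python
import numpy as np

# Shape of a typical IDT: decode the camera's log curve to scene-linear,
# then matrix the camera's native primaries into ACES AP0. The matrix
# below is a placeholder; the log constants follow a LogC-style curve.
def log_to_linear(code, a=5.555556, b=0.052272, c=0.247190, d=0.385537):
    # Generic log decode of the form used by LogC-type curves; real IDTs
    # use the manufacturer's published constants for each camera.
    return (10.0 ** ((code - d) / c) - b) / a

CAMERA_TO_AP0 = np.array([  # placeholder 3x3 primaries matrix
    [0.680, 0.236, 0.084],
    [0.085, 1.017, -0.102],
    [0.002, -0.063, 1.061],
])

def idt(log_rgb):
    linear = log_to_linear(np.asarray(log_rgb))
    return CAMERA_TO_AP0 @ linear

print(idt([0.45, 0.40, 0.38]))  # log code values -> ACES AP0 linear
```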
Then, the CAS did the same across-the-board grade (no secondaries, no Power Windows) and transform to deliver final Rec. 709 images for all the tested cameras, to test many of the different sensors’ attributes and liabilities. Next, to test in HDR, they derived a PQ curve from the same picture information and opened it up without any further adjustments.
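The PQ curve itself is standardized as SMPTE ST 2084, so mapping the same linear picture information to HDR code values is a fixed formula. A minimal encoder, with constants taken from the standard:

```python
# SMPTE ST 2084 (PQ) inverse EOTF: maps linear light, normalized to a
# 10,000-nit peak, to a PQ code value. Constants are from the standard.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits: float) -> float:
    y = max(nits, 0.0) / 10000.0  # normalize to PQ's 10,000-nit peak
    y_m1 = y ** M1
    return ((C1 + C2 * y_m1) / (1 + C3 * y_m1)) ** M2

for level in (0.1, 100, 1000, 10000):  # nits
    print(f"{level:>7} nits -> PQ {pq_encode(level):.4f}")
```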
“The only test that we did not go through that exact pipeline for,” says the cinematographer, “was the dynamic range test. I’ve always felt that the ACES-to-Rec. 709 transform is too contrasty, meaning it has a very steep curve and a very high gamma point, which tends to crush blacks and push up mids. It does give you a punchy image, but if we’re testing dynamic range, and especially in low light, the first questions the viewer would have would have been, ‘Is there more information in the blacks?’ or ‘How did you decide what to crush?’ and those are very valid points.”
For this, Behar shot a very large number of test charts, which gave the team the ability to map its own gamma transform. Shooting in log form at key exposure and at many steps over and under, the team was able to lock in an across-the-board standard for middle gray based on each camera system’s log profile. Once each camera is set up for perfectly exposed middle gray, the tests of over- and under-exposure can be objectively compared.
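A rough sketch of that chart-based workflow: linearize each camera’s readings through its own log decode, anchor middle gray, and express every chart exposure in stops relative to it, so different log profiles land on the same scale. The decode curves and chart readings below are invented placeholders, not measurements from the CAS.

```python
import math

# Sketch of the chart comparison described above: linearize each camera's
# gray-chart code values with its own log decode, anchor middle gray, and
# express every exposure step in stops relative to it. All numbers are
# hypothetical placeholders, not CAS data.
def decode_cam_a(code):  # hypothetical log curve for camera A
    return 0.18 * 2 ** ((code - 0.41) / 0.09)

def decode_cam_b(code):  # hypothetical log curve for camera B
    return 0.18 * 2 ** ((code - 0.38) / 0.08)

readings = {  # hypothetical chart readings: exposure label -> code value
    "camera A": {"-2": 0.23, "mid": 0.41, "+2": 0.59},
    "camera B": {"-2": 0.22, "mid": 0.38, "+2": 0.54},
}
decoders = {"camera A": decode_cam_a, "camera B": decode_cam_b}

for cam, codes in readings.items():
    mid_linear = decoders[cam](codes["mid"])
    for label, code in codes.items():
        stops = math.log2(decoders[cam](code) / mid_linear)
        print(f"{cam} {label:>4}: {stops:+.2f} stops from middle gray")
```

Because each camera is decoded through its own curve before the comparison, both report the same ±2 stops despite different code values, which is exactly the across-the-board standard the test needs.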
Given that a number of the cameras tested reached approximately 18 stops of dynamic range, I enquired whether such a capability is overkill. Circumstances where a cinematographer would actually use that much dynamic range are few and far between. More likely, they’ll want to use lighting and grip gear to limit such situations, as they always have.
“That’s right,” says Behar. “I think most DPs won’t need more than 12, maybe 13, stops of dynamic range to tell a story. You can’t hide a stinger in the shadows if you’re seeing 10 stops under. You can’t have a showcard in the window if you’re seeing 12 stops over.
“But then it stands to reason that the camera manufacturers should allow us to use that information to create soft knee rolloffs and toe rolloffs for lower dynamic range, but with beautiful rolloffs into the highlights and the shadows.
“You can’t create a look [digitally] that is like Ektachrome, with maybe four stops over and three and a half under, if you’re clipping at four stops. You need to burn and roll and bleed and have halation. With the dynamic range on some of these cameras we’ve tested, you can do more than just light for an 18-stop range.”
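One way to picture the kind of rolloff Behar is asking for is a response that stays linear up to a knee point and then compresses smoothly toward a ceiling instead of clipping. This is only a schematic illustration of the idea, with arbitrary knee and shoulder values, not a curve any manufacturer ships:

```python
import math

# Schematic highlight rolloff: linear up to a knee, then an exponential
# shoulder that rolls off asymptotically toward (knee + shoulder) rather
# than clipping, so extra sensor stops "bleed" into the highlights.
def soft_knee(x, knee=1.0, shoulder=3.0):
    """x is linear exposure with middle gray at 0.18. Knee and shoulder
    are in linear units and are arbitrary choices for illustration."""
    if x <= knee:
        return x
    return knee + shoulder * (1 - math.exp(-(x - knee) / shoulder))

for stops_over in range(0, 7):
    x = 0.18 * 2 ** stops_over
    print(f"+{stops_over} stops: in {x:7.2f} -> out {soft_knee(x):5.2f}")
```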
Behar and Beres both take great pride in these CAS films, which are shot and produced to feel like high-quality HBO-type programming, not just charts and models sitting in front of charts.
“This is real scenes with moving cameras, moving actors,” says Behar, promising the cinematography and production is of the highest caliber. “The number one feedback response we’ve gotten so far has been, ‘Holy crap! I thought this was going to be a camera test!’”
Shadow of a Doubt: The Very Deliberate Editing for “Anatomy of a Fall”
TL;DR
The tension between truth and doubt and what we see and hear on screen is at the heart of the story and the filmmaking in “Anatomy of a Fall.”
Editor Laurent Sénéchal talks about the challenges of maintaining ambiguity around the lead character for Justine Triet’s Oscar-nominated thriller.
Triet seems to be putting the nature of truth under layers of translation, explaining that the question of the language is at the core of their work.
There are no easy answers to the “did-she-do-it?” mystery of Anatomy of a Fall, but neither is this the switch-back sensationalism of Hollywood courtroom dramas like Presumed Innocent or Jagged Edge. Keeping the audience guessing without tilting their bias either toward or against the accused at the center of the drama was the key for director Justine Triet and editor Laurent Sénéchal.
“We didn’t want to play a game, having the audience feeling that she’s guilty for 15 minutes, then she’s not guilty,” the editor told Steve Hullfish’s Art of the Cut podcast. “We wanted the audience to keep their doubts about her, but to start to be endeared by her — to start to be with her in these intimate moments. It was a challenge for Justine to ask the audience to do both: keep doubts, but also love her.”
Anatomy of a Fall won the Palme d’Or at Cannes and is nominated for five Oscars, including Best Picture and Best Film Editing for Sénéchal, who is also nominated for an ACE Eddie award.
At the beginning of the film, a man dies and, as in many thrillers, there’s a pacing of the revelations — the things that are discovered about the death. Triet and Sénéchal, however, are constructing, or deconstructing, the courtroom drama genre.
Sénéchal says it was really important to be precise with these elements to maintain the ambiguity around Sandra, the accused (played by Sandra Hüller, who is nominated for an Oscar for Best Actress).
“The idea was to start like a thriller movie. We were aiming to use this genre movie to lead the audience as far as possible into the complexity with our characters. It’s a movie that is not straightforward.”
The film questions the nature of love and married relationships (what is it to be an individual in a couple?), asks what it is to be a father and what it is to be a son, and quizzes our memories and how we construct truth, in real life and in the movies.
“It’s really complex. So, you’re going to see a thriller, but an unusual one. We had to pay attention to this idea during the editing process.”
Sénéchal goes into more detail in discussion with Awards Radar, telling Maxance Vincent, “It was really challenging because as soon as we had scenes in a certain order or scenes showing some things between her and her husband Samuel [Theis], you could have a total derailment. We could derail the main contract between the audience and the movie because we’d edited scenes in a certain way, where we felt like Sandra was being manipulative towards Samuel.
“So we had to rethink, screen the movie, and redesign some scenes, to make sure that we find her endearing, even if we have doubts about her. It was really hard to build the path of the audience. You are free as an audience to make up your own mind about what you see. My job as editor is to build very wide roads for the audience to make their own journey into the movie.”
“Anatomy of a Fall” | Official Clip – “You Are Not A Victim”
Sénéchal spent 40 weeks in editorial to shape the picture. There’s a section in the trial where an audio recording of a fight is being played in court, and it starts with just the audio, then it jumps to flashbacks of the actual fight.
“I asked them to shoot it in a way that we have options,” he related to Hullfish. “We can stay long in the courtroom before going in the flashback if we want to because I knew that this moment was going to be tricky for me. They got very long shots on Sandra Hüller. Also they got the audience in the courtroom.”
He continued, “What worked was to be long enough for the audience to be a bit lazy; they start to get used to the audio, and that’s when I go into the flashback, and you are very soon taken by the fight itself. Then, coming back into the courtroom we wanted it to be at the highest climax of the fight. But the climax — the words — what she’s saying to her husband — is so harsh. It’s really violent. The words are like weapons.”
Sénéchal had previously collaborated with Triet for 2016’s In Bed with Victoria and 2019’s Sibyl. Director and editor discuss their relationship in an interview recorded for Deadline’s The Process, as well as filming scenes with the film’s canine character and the choice to use different languages.
In Anatomy of a Fall, the characters live in France, but since the main character, Sandra, is not herself French (nor does she speak it very well), most of her dialogue is spoken in English.
This includes her appearances in court where after attempting to give her evidence in French, she gives up and speaks in English for the rest of the case, resulting in a rather strange scenario of her being questioned in French, understanding perfectly, and responding in English.
Triet seems to be putting the nature of truth under layers of translation, telling The Process the question of the language is at the core of their work.
Sénéchal adds, “We also wanted the movie to be simple for the audience because the subject was so complex. There is a complexity in the empathy for the main character.”
Even when the verdict is reached in the case there’s still a lingering sense of ambiguity which bleeds into the moment that Sandra is reunited with her son.
“What we wanted to show is the arc of a boy who is growing up,” he explained to Awards Radar. “You still don’t know the mother, you are starting not to know how the boy is feeling. When they’re reconnecting in the house, everything is so complicated.”
He adds, “The movie shows how you must stop thinking of life as straight, simple, and compact. Becoming a grown-up for him is becoming opaque, too, because at the end, when he is doing his second testimony, we see him calling on memories, but it feels like an invention. We want the audience to feel that when we have these images, who do we ultimately suspect? There is a tension between truth and doubts and what is on screen. We don’t have access to everything he’s thinking, and he may become like his mother, someone we don’t know. But it’s our condition to listen to them and make up our minds about what is on screen.”
Elaborating on this to Kara Warner at Vanity Fair, Sénéchal said he identified that the flashback argument was “a very strange scene” in the script. “At the beginning, I was wondering if it was going to work because it’s nobody’s point of view at all. Then I saw the material and when we started, it was obvious that it has to be like that. That’s the power of cinema. It can seem weird when you read it, but when you are in front of the actors, the characters, it’s so vivid. It’s at the heart of the story.”
Speaking on The Rough Cut podcast, Sénéchal discussed turning the thriller into Kramer vs. Kramer, as the drama pivots on how we view the relationship between husband and wife while we learn more private details about them.
The nuances in their relationship stem from the script, but the writer-director and editor still had to extract the right balance from the coverage in editorial.
“It’s not a movie which was heavily recut in post so much as redesigned,” Sénéchal told Vanity Fair. “The main aspects of the movie were really well-scripted. We made deliberate choices like the fact that we didn’t use any score music. I think it was a good choice because if we had divided the argument in pieces, in sections, it would’ve been another movie.”
Sénéchal compared the delicate juggling act to playing Tetris. “If [we] changed some slight details in the beginning, you could really see another movie emerge. Sometimes we had some derailment of the ambiguity around Sandra. The movie was no longer very interesting when she was becoming too innocent or too guilty, or too manipulative. The main challenge for the editing was this arc of ambiguity for her, how to stay with her, how to be endeared by her with this ambiguity still around her. It was really hard to do.”
Sora prompt: Reflections in the window of a train traveling through the Tokyo suburbs.
Late last week, OpenAI announced a new generative AI system named Sora, which produces short videos from text prompts. While Sora is not yet available to the public, the high quality of the sample outputs published so far has provoked both excited and concerned reactions.
The sample videos published by OpenAI, which the company says were created directly by Sora without modification, show outputs from prompts like “photorealistic closeup video of two pirate ships battling each other as they sail inside a cup of coffee” and “historical footage of California during the gold rush.”
At first glance, it is often hard to tell they are generated by AI, due to the high quality of the videos, textures, dynamics of scenes, camera movements, and a good level of consistency.
OpenAI chief executive Sam Altman also posted some videos to X (formerly Twitter) generated in response to user-suggested prompts, to demonstrate Sora’s capabilities.
Sora combines features of text and image generating tools in what is called a “diffusion transformer model.”
Transformers are a type of neural network first introduced by Google in 2017. They are best known for their use in large language models such as ChatGPT and Google Gemini.
Diffusion models, on the other hand, are the foundation of many AI image generators. They work by starting with random noise and iterating towards a “clean” image that fits an input prompt.
A video can be made from a sequence of such images. However, in a video, coherence and consistency between frames are essential.
Sora uses the transformer architecture to handle how frames relate to one another. While transformers were initially designed to find patterns in tokens representing text, Sora instead uses tokens representing small patches of space and time.
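OpenAI has not published Sora’s code, but the two ingredients described above can be sketched schematically: a clip is cut into spacetime patch tokens, and a denoiser (a transformer, in Sora’s case) iteratively refines pure noise into a clean token sequence, operating on all patches at once so frames stay consistent with one another. The toy denoiser below is a stand-in for the trained model; nothing here reflects Sora’s actual architecture.

```python
import numpy as np

# Schematic of a diffusion transformer for video, NOT Sora's actual code:
# 1) chop a video into spacetime patches (the "tokens"),
# 2) iteratively denoise the whole token sequence at once, so the model
#    can keep frames coherent with each other.
def to_spacetime_patches(video, pt=2, ph=8, pw=8):
    """video: (frames, height, width, channels) -> (n_tokens, token_dim)."""
    f, h, w, c = video.shape
    return (video
            .reshape(f // pt, pt, h // ph, ph, w // pw, pw, c)
            .transpose(0, 2, 4, 1, 3, 5, 6)
            .reshape(-1, pt * ph * pw * c))

def toy_denoiser(tokens, t):
    # Stand-in for the transformer: a real model would attend across all
    # spacetime tokens and predict the noise to remove at step t.
    return tokens * 0.1

def sample(n_tokens, token_dim, steps=50):
    x = np.random.randn(n_tokens, token_dim)  # start from pure noise
    for t in reversed(range(steps)):
        x = x - toy_denoiser(x, t)  # iterate toward a "clean" sequence
    return x

video = np.random.rand(16, 64, 64, 3)   # dummy 16-frame clip
tokens = to_spacetime_patches(video)     # (512, 384) token grid
print(tokens.shape)
generated = sample(*tokens.shape)        # denoise from noise
```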
Leading the Pack
Sora is not the first text-to-video model. Earlier models include Emu by Meta, Gen-2 by Runway, Stable Video Diffusion by Stability AI, and recently Lumiere by Google.
Lumiere, released just a few weeks ago, claimed to produce better video than its predecessors. But Sora appears to be more powerful than Lumiere in at least some respects.
Sora can generate videos with a resolution of up to 1920×1080 pixels, and in a variety of aspect ratios, while Lumiere is limited to 512×512 pixels. Lumiere’s videos are around five seconds long, while Sora makes videos up to 60 seconds.
Lumiere cannot make videos composed of multiple shots, while Sora can. Sora, like other models, is also reportedly capable of video-editing tasks such as creating videos from images or other videos, combining elements from different videos, and extending videos in time.
Both models generate broadly realistic videos, but may suffer from hallucinations. Lumiere’s videos may be more easily recognized as AI-generated. Sora’s videos look more dynamic, having more interactions between elements.
However, in many of the example videos inconsistencies become apparent on close inspection.
Promising Applications
Video content is currently produced either by filming the real world or by using special effects, both of which can be costly and time consuming. If Sora becomes available at a reasonable price, people may start using it as prototyping software to visualize ideas at a much lower cost.
Based on what we know of Sora’s capabilities it could even be used to create short videos for some applications in entertainment, advertising and education.
OpenAI’s technical paper about Sora is titled “Video generation models as world simulators.” The paper argues that bigger versions of video generators like Sora may be “capable simulators of the physical and digital world, and the objects, animals and people that live within them.”
"a giant cathedral is completely filled with cats. there are cats everywhere you look. a man enters the cathedral and bows before the giant cat king sitting on a throne."
If this is correct, future versions may have scientific applications for physical, chemical, and even societal experiments. For example, one might be able to test the impact of tsunamis of different sizes on different kinds of infrastructure — and on the physical and mental health of the people nearby.
Achieving this level of simulation is highly challenging, and some experts say a system like Sora is fundamentally incapable of doing it.
A complete simulator would need to calculate physical and chemical reactions at the most detailed levels of the universe. However, simulating a rough approximation of the world and making realistic videos to human eyes might be within reach in the coming years.
Risks and Ethical Concerns
The main concerns around tools like Sora revolve around their societal and ethical impact. In a world already plagued by disinformation, tools like Sora may make things worse.
It’s easy to see how the ability to generate realistic video of any scene you can describe could be used to spread convincing fake news or throw doubt on real footage. It may endanger public health measures, be used to influence elections, or even burden the justice system with potential fake evidence.
Video generators may also enable direct threats to targeted individuals, via deepfakes — particularly pornographic ones. These may have terrible repercussions on the lives of the affected individuals and their families.
Beyond these concerns, there are also questions of copyright and intellectual property. Generative AI tools require vast amounts of data for training, and OpenAI has not revealed where Sora’s training data came from.
Large language models and image generators have also been criticized for this reason. In the United States, a group of famous authors have sued OpenAI over a potential misuse of their materials. The case argues that large language models and the companies who use them are stealing the authors’ work to create new content.
It is not the first time in recent memory that technology has run ahead of the law. For instance, the question of the obligations of social media platforms in moderating content has created heated debate in the past couple of years — much of it revolving around Section 230 of the US Code.
While these concerns are real, based on past experience we would not expect them to stop the development of video-generating technology. OpenAI says it is “taking several important safety steps” before making Sora available to the public, including working with experts in “misinformation, hateful content, and bias” and “building tools to help detect misleading content.”
AI Solutions for Climbing (Captioning) the Content Mountain
TL;DR
Artificial intelligence can do remarkable things, and its transcription capabilities are only expected to expand over time. Currently, however, AI transcription technology is not without its limitations.
Those limits, particularly in accuracy, and the lack of a legal framework around synthetic voices, means broadcasters are advised to experiment and use AI with care and human supervision.
Dubbing is likely to be transformed by AI, with human talent potentially morphing into “AI Dubbing Managers” or Creative Directors.
Transcription is one process that stands to be uniquely impacted by recent developments in AI. Thanks to ever-evolving language and learning models, transcribing audio to text has never been faster or easier. But there are also limitations to new AI-powered transcription solutions.
The global translation service market will exceed $47 billion by 2031, largely driven by media and entertainment. Yet the current cost to caption a title for distribution on streaming services ranges between $60 and $100 per program hour, and the work typically takes one to three days to complete “because of excessive manual intervention,” claims Cineverse CTO Tony Huidor.
“Captions, and localization more broadly, are generally major pain points for content owners seeking to monetize their assets across the many streaming services,” Huidor added.
That’s because content companies need to generate far more revenue by broadening their audiences at significantly reduced costs.
“Companies have been priced out of bringing their entire content catalogs to market due to the extremely high costs of captioning and localization,” Huidor said.
The traditional transcription process involves an individual transcriber listening to a piece of audio and manually converting every audio element they hear to text. It is clearly labor intensive, requiring trained specialists, and costly.
But it does produce accurate results.
AI transcription eliminates the need for a human transcriber and relies instead on automatic speech recognition (ASR) technology. ASR uses language and learning models to interpret human speech and convert specific sounds (or phonemes) into written language.
Some of the most popular speech-to-text software is provided by Google, Azure, IBM, and Dragon Professional.
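As a concrete illustration of ASR in practice, here is a minimal sketch using OpenAI’s open-source Whisper model (not one of the vendors named above) to turn a recording into a rough SRT-style caption file. The file names are hypothetical:

```python
# Minimal ASR-to-captions sketch using the open-source Whisper model
# (pip install openai-whisper; requires ffmpeg on the system).
import whisper

model = whisper.load_model("base")
result = model.transcribe("interview.mp3")  # hypothetical input file

def srt_time(seconds):
    """Format seconds as an SRT timestamp, HH:MM:SS,mmm."""
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    ms = int((seconds - int(seconds)) * 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

# Each Whisper segment carries start/end times and the recognized text.
with open("interview.srt", "w") as f:
    for i, seg in enumerate(result["segments"], start=1):
        f.write(f"{i}\n{srt_time(seg['start'])} --> {srt_time(seg['end'])}\n")
        f.write(seg["text"].strip() + "\n\n")
```

As the caveats below suggest, output like this is a draft: human review is still needed before captions are broadcast-ready.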
The upside of automated transcription is that companies can scale their output to keep pace with huge global demand while slashing the cost of the whole exercise.
The main downside, as outlined by Vitac, is inaccuracy. AI systems tend to deliver poor-quality results when the input recording is poor, when there is more than one speaker, and when the audio contains a substantial amount of overlapping speech. Diverse accents or dialects can also inhibit the AI’s ability.
“All these variables can substantially impact AI’s ability to interpret and represent the audio of a recording and result in a final transcript containing a substantial number of errors,” Vitac says.
Its prescription to achieve “exceptionally high rates of accuracy” is to match automation with human experts. Not coincidentally this is exactly the service it offers.
Broadcasters and publishers are somewhat reluctant to rely on AI transcription, given that tools to date have not proved foolproof. The BBC, for instance, values the trust that viewers put in the veracity of its output more than most broadcasters, yet it also faces increasing pressure to cut costs. It is exploring and evaluating AI tools, a route that it advises others to follow.
Vanessa Lecomte, localization operations manager at BBC Studios, told language information site Slator that for all the benefits AI has in localization, it “must match BBC’s quality standards at a minimum.”
She said, “The main question is whether AI can improve current processes, increase speed to market, and reduce costs.”
Lecomte advised balancing opportunities against the risk. “These technologies offer the potential to speed up the process, which in turn enables you to localize more content, reach new markets, but it shouldn’t be done to the detriment of quality or of a well-respected industry. So do the right thing and commit to a thoughtful localization strategy.”
The BBC is also addressing AI in dubbing using synthetic voices. Lecomte described the current dubbing process as “time-consuming and expensive involving many technical and creative talents.” She said her division is exploring the capabilities of AI dubbing technology to try to deliver more content, faster, while still meeting quality standards, adding that this should be done responsibly with regard to talent rights.
Anton Dvorkovich, CEO & Founder of Dubformer, also flagged the industry responsibility of establishing regulations around the ethical use of human voices.
He also believes AI dubbing is “poised to dramatically transform the media industry…with solutions that cut production costs by 30-50%.”
“For now, investors and the media are struggling with the challenge of evaluating new solutions. However, the focus is shifting to the potential costs of emerging tools and their impact on the media industry,” he wrote in an op-ed for Streaming Media.
Solutions range from those like Papercup and Deepdub where humans finalize the AI-powered dubbing to “DIY translation tools” aimed at enabling freelance content creators to translate their videos with AI. One such solution, from Heygen, relies on natural-sounding speech synthesis and text-to-speech software developed by Eleven Labs.
He predicts the introduction of an “AI Dubbing Manager,” or proof listener, tasked with fine-tuning AI dubbing systems for particular types of content. This role could include listening to the automatic voiceovers to grasp cultural nuances, refine voice modulation, and make corrections. Some actors and interpreters may transition into this profession as it evolves, he suggested.
There could be Creative Directors for AI-enhanced productions to guide creative content developed through AI dubbing while the market for actors to license their AI-generated voices will grow. “More tools will enter the market, enabling individuals to generate their voices with AI. Actors will be able to create new voices based on their own.”
Software developer Enco introduced AITrack and ENCO-GPT, which both use ChatGPT to generate language responses from text-based queries for automated TV and radio production workflows.
AITrack, for instance, integrates with Enco’s DAD radio automation system to generate and insert voice tracks between songs. It leverages synthetic voice engines to produce natural-sounding, engaging content between songs.
ENCO-GPT could be used to condense a lengthy written news article into a few sentences, inject breaking news updates within live ad breaks, or automatically create ad copy on behalf of sponsors.
Company president Ken Frommert sees an opportunity to go bigger with both solutions. “We see opportunities to convert a morning or afternoon drive radio show into a short-form podcast, or summarize an 11:00 p.m. local news program for the TV station’s website…. It offers a seamless way to publish content in diverse forms.”
LEXI Recorded, a VOD automated captioning solution from Australian firm AI Media, claims 98% accuracy, “comparable to human captioning,” and even higher with the use of custom dictionaries or topic models. Its use is priced from 20 cents per minute.
“We are not just meeting but exceeding the demands for high-volume, quick, and precise captioning of recorded content,” said AI-Media’s Chief Product Officer, Bill McLaughlin who will present the product at NAB Show in April.
Captions offers an AI-based video editing app and a solution for automatically generating subtitles. Both products are aimed at content creators and marketers.
It also offers an in-house voice cloning tool trained on licensed audio recordings to translate users’ audio into 28 other languages or use an AI voiceover to narrate the content from scratch.
Gaurav Misra, CEO and cofounder, says Captions’ approach to video editing software is different because its tools are designed specifically for editing talking videos. “Most video production editing is focused more on aesthetics like filters and colors, whereas our focus became more about conveying an idea or experience,” he told Rashi Shrivastava at Forbes.
Vitac’s claims its own AI captioning solution, Verbit Captivate, stands apart from “generic” ASR engines in being designed, developed and built, inhouse. “Whereas other AI captioning vendors either provide an engine or a service, Vitac is unique in that we own both. And because of that, we can change, update, upgrade, and customize customer offerings, tuning our solutions to individual customer needs, creating an offering that achieves accuracy and results on a personal level.”
Additionally, it pairs the tech with “human backup” — specialists who boost performance with prep, pre- and post-session research, and live-session monitoring.
Cineverse’s MatchCaption, targets bulk film, television and video libraries localization “at significant scale.” It claims its generated captions are “perfectly timed and formatted according to industry standards, then auto converted into multiple caption/subtitle formats, to meet the specifications of all streaming platforms.
It also claims its system can complete the same tasks which currently cost content owners $60-100 for less than $10 per program hour, “and a full feature film can be completed, and quality checked in less than one hour — an 85% reduction in cost and 90% reduction in time.
OpenAI’s Sora: It’s the Beginning or the End of Video and Either Way It’s a Big Deal
TL;DR
OpenAI shared a first glimpse at new tool Sora that instantly generates videos from just a line of text.
The apparent capabilities of Sora are deemed perfect for stock footage, presentations, and commercials with developments likely to lead to longer form films.
AI tools that can generate videos indistinguishable from footage shot with a real camera raise fresh concerns about content creation jobs and about misuse, such as the spread of deepfakes.
OpenAI seems to delight in pulling rabbits from a hat and was more than aware of what its latest research project would do when it alerted the internet.
Everyone’s gone wild for Sora, a new diffusion model being tested which can generate one minute video clips from just a single text input. To prove what it can do OpenAI dropped some videos online generated by Sora “without modification.” One clip highlighted a photorealistic woman walking down a rainy Tokyo street.
The Sora prompt for this video was: “A stylish woman walks down a Tokyo street filled with warm glowing neon and animated city signage. She wears a black leather jacket, a long red dress, and black boots, and carries a black purse. She wears sunglasses and red lipstick. She walks confidently and casually. The street is damp and reflective, creating a mirror effect of the colorful lights. Many pedestrians walk about.”
“Every single one of [them] is AI-generated, and if this doesn’t concern you at least a little bit, nothing will,” tweeted YouTube tech journalist Marques Brownlee. “This is simultaneously really impressive and really frightening at the same time,” he added on his YouTube channel.
A blog post on the website of nonlinear editing software Lightworks declared, “Sora’s almost magical powers represents yet another seismic shift in the possibilities of content creation.”
“Sora is a glimpse into a future where the lines between creation, imagination, and AI blur into something truly extraordinary,” says Conor Jewiss at Stuff.
Benj Edwards of Ars Technica thinks OpenAI is on track to deliver a “cultural singularity” — the moment when truth and fiction in media become indistinguishable.
“Technology like Sora pulls the rug out from under that kind of media frame of reference. Very soon, every photorealistic video you see online could be 100 percent false in every way. Moreover, every historical video you see could also be false.”
What has excited the AI and artistic community so much is the cinematic photorealism of the videos produced by OpenAI’s algorithm which seems “to understand how things like reflections, and textures, and materials, and physics, all interact with each other over time,” said Brownlee.
In its research paper, OpenAI states the model deeply understands language, enabling it to accurately interpret prompts and generate compelling characters that express vibrant emotions.
Sora can also create multiple shots within a single generated video that accurately persist characters and visual style.
OpenAI further states it is teaching the AI to understand and simulate the physical world in motion, with the goal of training models that help people solve problems that require real-world interaction.
Two videos in particular grabbed attention. “This is one of the most convincing AI generated videos I’ve ever seen,” says Brownlee of a video made with this text prompt: “A movie trailer featuring the adventures of the 30 year old space man wearing a red wool knitted motorcycle helmet, blue sky, salt desert, cinematic style, shot on 35mm film, vivid colors.”
“This looks like it could be an actual film trailer,” says Theoretically Media’s Tim Simmonds.” I mean that there’s nothing really in here to majorly indicate that this is AI generated.”
The other, featuring an aerial flyover, was spun-up from the prompt: “Historical footage of California during the gold rush.”
“The drone footage of an old California mining town looks really, really pretty great,” Simmonds says. “And even as the camera makes this turn here, the buildings stay intact, they don’t start to shift and warp and morph into weird things.”
Brownlee thinks it demonstrates “all sorts of implications for the drone pilot that no longer needs to be hired, and all the photographers and videographers whose footage no longer needs to be licensed to show up in the ad that’s being made.”
“It’s also very capable of historical themed footage,” he adds. “This is supposed to be California during the gold rush. It’s AI generated but it could totally pass for the opening scene in an old western.
Which raises the inevitable question: How long until an entire ad, with every single shot, is completely generated with AI? Or an entire YouTube video, or an entire movie?
Simmonds still thinks we are some way off from that “because [Sora] still has flaws and there’s no sound [no audio/dialogue sync] and there’s a long way to go with the prompt engineering to iron these things out,” he says.
Naso agrees that Sora “could change the game for stock footage,” adding that the next stage for AI prompt filmmaking is dialogue-based scenes. “So far, these examples are more like b-roll.”
Nonetheless, even at the pace of AI development it seems OpenAI has caught everyone napping.
Rachel Tobac, a member of the technical advisory council of the Cybersecurity and Infrastructure Security Agency (CISA), posted on X (formerly known as Twitter) that “we need to discuss the risks” of the AI model.
“My biggest concern is how this content could be used to trick, manipulate, phish, and confuse the general public,” she said.
OpenAI also says it is aware of defamation or misinformation problems arising from this technology and plans to apply the same content filters to Sora as the company does to DALL-E 3 that prevent “extreme violence, sexual content, hateful imagery, celebrity likeness, or the IP of others,” as Aminu Abdullahi reports at TechRepublic.
Others flagged concerns about copyright and privacy, with Ed Newton-Rex, CEO of non-profit AI certification company Fairly Trained, maintaining: “You simply cannot argue that these models don’t or won’t compete with the content they’re trained on, and the human creators behind that content.”
Anticipating these concerns, OpenAI plans to watermark content created with Sora with C2PA metadata. However, OpenAI doesn’t currently have anything in place to prevent users of its other image generator, DALL-E 3, from removing metadata.
OpenAI said it is engaging with artists, policymakers and others to ensure safety before releasing the new tool to the public. However, its get-out clause is that despite extensive research and testing, “we cannot predict all of the beneficial ways people will use our technology, nor all the ways people will abuse it.”
The Microsoft-backed company is valued at $80 billion after a recent injection of VC funds. “It will become impossible for human beings to detect AI-generated content,” Gartner analyst Arun Chandrasekaran warned TechRepublic. “VCs are making investments in startups building deepfake detection tools, however, there is a need for public-private partnerships to identify, often at the point of creation, machine-generated content.”
Sora joins a chorus of other text-to-video generators such as Runway and Fliki, Meta’s Make-A-Video generator, and the yet-to-be-released Google Lumiere.
Question: Has Apple taken its eye off the ball? Answer: Maybe not. Its researchers have just published a paper about Keyframer, a design tool for animating static images with natural language.
As Emilia David at The Verge points out, Keyframer is one of several generative AI innovations that Apple has announced in recent months. In December, the company introduced Human Gaussian Splats (HUGS), which can create animation-ready human avatars from video clips. Apple also released MGIE, an AI model that can edit images using text-based descriptions.
Generative AI Could Impact Hollywood Like… Well, Like the Invention of the Camera
TL;DR
Attention is turning to how AI will not just be used in production but how it will transform every aspect of storytelling.
As storytelling becomes more personalized and interactive, films will change and so will gaming, an industry where people can choose their own adventures more easily than moviegoers can. The amount of entertainment available will also balloon.
Just as the arrival of the internet led to an explosion of user-generated content posted to social media, generative AI will accelerate the creation of video content online.
The first claimed long form feature derived from a single text prompt has already been released — and made money.
The new frontier of artificial intelligence is text-to-video and, though it may be a few years before a blockbuster is produced entirely by AI, it seems incredible to be reading these words barely a year after the first text-to-image generative models were launched.
The first generative models to produce photorealistic images exploded into the mainstream in 2022 — and soon became commonplace. Tools like OpenAI’s DALL-E, Stability AI’s Stable Diffusion, and Adobe’s Firefly flooded the internet with jaw-dropping images.
Runway, a startup that makes generative video models (and the company that co-created Stable Diffusion), released its latest version, Gen-2, with a quality that is striking, says Will Douglas Heaven in a 2024 trends piece for the MIT Technology Review. “The best clips aren’t far off what Pixar might put out,” he gushed.
Just as the arrival of the internet led to an explosion of user-generated content posted to social media, generative AI will accelerate the creation of video content online. Some predict that as much as 90% of online content will be AI-generated by 2025.
“As storytelling becomes more personalized and interactive, films will change and so will gaming, an industry where people can choose their own adventures more easily than moviegoers can. The amount of entertainment available will also balloon.”
Film historian David Thomson has compared GenAI to the advent of sound. “When movies were no longer silent, it altered the way plot points were rendered and how deeply viewers could connect with characters,” notes Suich Bass. Meanwhile Cristóbal Valenzuela, who runs Runway, says AI is more like a “new kind of camera,” offering a fresh “opportunity to reimagine what stories are like.” Perhaps both are right.
There is already one filmmaker claiming to have made the first feature-length film from a single long-form prompt.
At the end of last year artist Dan Sickles released a new version of the classic black and white documentary Man With A Movie Camera. Made in 1929 by Dziga Vertov, it captured a day in the life of Russia’s citizens and used a number of groundbreaking techniques.
From “Man With AI Movie Camera”
Sickles has used AI to generate 480 unique iterations of Vertov’s original film in what he calls a homage to — and interrogation of — the original masterpiece, TV Tech’s Phil Kurz reports.
Each iteration of Man With AI Movie Camera was generated from a prompt created by the artist that describes Vertov’s original film shot-for-shot, with exact timing to match each frame. The generations draw on a data set curated by the artist to give each iteration a distinct aesthetic while retaining the length and essence of each shot to mirror the original film.
The series was created using Stability AI’s open-source models (DreamStudio, ClipDrop and Stable Audio) and is being sold online via the NFT marketplace SuperRare. It grossed more than $25,000 when the first sale went live mid-December. Each of the works in the series will be revealed individually throughout 2024.
Sickles said his project “serves as a model for how AI can function as an equitable public good for creative production.”
Outside of artworks and experiments, could these new video generating AI tools actually serve up a feature film that might give Marvel a run for its money?
It’s no surprise that top studios are taking notice. Paramount and Disney are both exploring the use of generative AI throughout their production pipeline.
In fact, AI presents bigger questions about the future of stories and the nature of collective storytelling. For example, poses Alexandra Suich Bass, will GenAI simply imitate previous hits, resulting in more derivative blockbuster films and copycat interpretations of pop songs that lack depth, rather than original stories and art forms?
And as entertainment becomes more personalized, will there still be stories that become part of humanity’s collective consciousness and move large numbers of people, who can talk about them together?
The Zone of Interest opens by introducing viewers to a husband, wife and their family as they picnic by a river. The scene seems idyllic. Shortly after, the family drives home. Everything seems normal — but on closer inspection, their number plate is adorned with the insignia of the SS, the elite guard of the Nazi regime.
This scene encapsulates why The Zone of Interest is so unsettling. It depicts the everyday life of Auschwitz commandant Rudolf Höss (Christian Friedel), his wife, Hedwig (Sandra Hüller) and their family — yet, industrialized, genocidal violence moves along, continuously, in the background and periphery.
Through meticulous camera setup and editing, writer and director Jonathan Glazer’s film depicts scene after scene of the daily routine of Rudolf, Hedwig and their children. Just as the viewer is becoming immersed in these scenes — perhaps even disturbingly aligning themselves with these characters — the film jarringly cuts away.
It cuts to abstracted, pure, bright red or white frames, to Mica Levi’s gut-wrenching, cacophonous soundtrack, or scenes disruptively shot with night vision cameras.
Creating the Höss Home
Production designer Chris Oddy and his team built the Höss house and planted the garden from scratch. The set was constructed on location in Poland, directly next to Auschwitz, using detailed, historical records and photographs.
While domestic scenes of Höss and his family are routine, Johnnie Burn’s sound design is abrasive and incessant.
A continuous, low-pitched, industrial humming sound plays throughout. Its source is not shown, but it could be the sound of the crematoria. As the family go about their daily lives, there are distant screams, shouting, dogs barking and gunshots. The Höss family don’t seem to register them, apparently desensitized.
The film is formally rigorous. It makes use of the hidden-camera approach Glazer previously deployed in his docu-fiction Under the Skin (2013). With director of photography Łukasz Żal, Glazer positioned 10 fixed cameras and 20 microphones around the Höss house.
Żal has described how he and Glazer endeavored to use exclusively natural lighting, to avoid aestheticizing the unthinkable.
The Problem with Making Films About the Holocaust
The problem with making films about the Holocaust is that formal decisions inevitably also become ethical ones. The choices filmmakers make about camera movements, angles, lighting and editing have as much ethical significance as what is in front of the camera.
Audiences are familiar with (perhaps anesthetized to) the typical iconography of the Holocaust in film, such as watchtowers, barbed wire and smoke. In her comprehensive study of Holocaust film, Indelible Shadows (1983), historian Annette Insdorf describes these images as a kind of figurative shorthand: “the visual part representing the unimaginable whole.”
In The Zone of Interest, these recognizable images are disturbingly estranged. For example, from the perspective of the garden, a watchtower is visible in the background, partially obscured by pristine white sheets hanging on a washing line.
Instead of showing the operation of the crematoria as a closeup, the pulsating orange-red light from their chimneys illuminates Hedwig’s mother’s bedroom, disturbing her sleep. Children play in a pool in Hedwig’s cherished garden, atop of which is a shower, symbolically associated with the extermination of innocents in the camp.
Representing Gas Chambers on Film
Mainstream cinema relies on narratives of conflict and resolution, and so tends towards a binary notion of “good” and “evil.” The Zone of Interest, however, in Friedel’s disquieting, cold portrayal of Höss, shows that genocide is perpetrated not by “evil,” but by administrators — concerned with numbers, timetables and blueprints.
Höss is shown meeting with several men in suits, matter-of-factly explaining detailed plans for more efficient gas chambers, immediately after a scene where Hedwig gossips with her friends. Such a contrast underlines the ordinariness of perpetration.
Critics have praised or condemned films about the Holocaust such as Son of Saul (2015), Schindler’s List (1993) or The Boy in the Striped Pyjamas (2008) for the techniques they use to represent gas chambers. Claude Lanzmann, director of Shoah (1985), once said that if direct footage of the act of genocide in the gas chamber were to exist, he would destroy it. Watching it would transgress the most profound moral taboo.
One of the starkest moments in The Zone of Interest is its representation of the gas chambers. Rudolf Höss is leaving a party for the Nazi leadership in Berlin. He looks at something off screen, down a dark corridor. The film cuts to a close-up of a small, white circle, surrounded by blackness — the spy hole on the door of a gas chamber.
This is followed by a disorienting jump to the present day. Cleaners dust the piles of victims’ suitcases, shoes and other personal belongings in the Auschwitz Memorial Museum.
Glazer was stunned to discover that one of the walls of the Höss family garden directly bordered the extermination camp. He has said that the concept of his film “became about that wall.”
Glazer’s film underscores how walls, borders or geographic distance “compartmentalize,” in producer James Wilson’s words, the suffering of others. Such visual barriers become the mechanism through which genocide has been — and continues to be — enabled.
The Zone of Interest is unsettling because of how it portrays perpetration of the Holocaust as normal, thereby leaving the viewer with the disquieting notion that anyone, in the right conditions, can be responsible for unspeakable things.
The Creator Economy: What’s Now and Next?
Dive into the trends shaping the creator economy in 2024 with insights from Inside the Creator Economy editor and publisher Jim Louderback. Watch the full conversation below and/or read on to discover what changes are ahead for creators.
Louderback is confident that “short form [content] is not going anywhere,” despite some articles proclaiming a return to longform media. “People will be consuming a lot of it in those gaps of time between when they’re living their real life,” he says.
On the subject of content formats, Louderback says video podcasts are also on the upswing: “We’re seeing that video podcasts, whether it’s on Spotify or on YouTube, are actually outperforming audio podcasts.”
Another trend that’s accelerating “is the move to live content,” per Louderback. He points to several reasons behind the trend.
“We see TikTok and others really leaning into live shopping, making live streaming super helpful. There are great ways to monetize yourself, if you can build a live audience on Twitch on TikTok, and on other platforms,” Louderback explains.
It’s also a reaction to one of 2024’s super trends: AI. “There are a lot of problems out there with AI deepfakes and not knowing if something is real or not real. That’s going to ratchet up this year,” Louderback says.
In contrast, live content is a reassurance to your audience that your content is real. Your mistakes prove that you’re “not an AI construct because… AI doesn’t really mess up like that.”
(In-person events, in addition to streaming content, are also predicted to benefit from this deepfake backlash.)
Monetization Strategies
“Creators are waking up to the fact that they cannot rely on platform-only revenue to make a living,” Louderback says.
Frankly, “A lot of the revenue that creators counted on in years past has gone away. So Facebook [is] not sharing as much revenue; TikTok [is] cutting back on funds and doing other strange things. And just a lot of revenue sources are going away.”
So in 2024, he says, “It’s all about building your own revenue mixes and doing yield management to figure out ‘How do I take people and think about the platforms as more top-of-funnel and awareness and move my biggest fans over to my own platforms?’ Whether it’s a community, a course, merchandise, or other ways for them to pay you directly.”
A common spin-off of this is creators “launching their own products,” which Louderback notes has come with its own set of headaches (not to mention unsold inventory). He emphasizes considering “What are you best at as a creator?” before moving into a different model.
Content Distro
Windowing has come to the creator economy!
Louderback points to the recent kerfuffle over MrBeast’s X content experiment (repurposing existing YouTube videos to the Elon Musk-owned social media site, FKA Twitter) as an example of windowing.
“You may not see the same success on X” by repeating MrBeast’s trial, he says. “But there are other ways to do this sort of windowing of content, including putting them all together in a single, like, 20-, 50-, 100- hour stream and creating a live stream platform on Roku on Pluto or others.”
Additionally, Louderback points out, “we’re starting to see new AI tools that let you take, let’s say, that 20-minute long form that you did, and cut it up into shorter form videos.”
Not only do these tools help you make a quick cut, some can even reformat your content so that it appears in the appropriate dimensions for the destination platform. AI can get creators quickly to “50, 60, 70% of the way” there in these expedited workflows.
The Rise of the Global Creator
Speaking of transforming one piece of content into many: Translation and dubbing services have also been revolutionized by generative AI, Louderback says. In 2024, he says, “If you’re creating great content, and you dub it the right way, and get it out there, you can reach a global audience.”
We’re going to see “the rise of the global creator,” he predicts.
“Using AI and other things, you now can take the content that you do in one language, dub it in another language, it will automatically use your voice speaking that language,” he explains. “And in some ways, it will actually change your face so that it makes it look like your face is speaking French instead of English.”
For example, he says, “YouTube now allows you to have a single channel with multiple voice versions, and it’ll serve [the appropriate] voice version to the native speakers.”
Understanding the Creator Life Cycle
Content creators face the same challenge that has dogged professional athletes forever: You’re not going to do this job forever.
“Creators have a life cycle,” Louderback says. “Short form creators, maybe one to three years; longer form creators, five to seven years. What do you do after that?”
Louderback predicts: “We’re going to hear a lot more about life after creating.”
No matter how long you stayed in the creator game, Louderback says, “You’ve just gone to the University of YouTube or the College of TikTok. And there are a lot of opportunities for creators who no longer have those big audiences, but can use that expertise, working in brands, working at events, doing other things.”
And by the way — creators are starting to determine their own course of study long before the senior thesis.
“Creators realize they are more and more in charge,” Louderback says. “And they’re the ones saying ‘no’ to brands that don’t fit their audience because they’ve realized their audience is more important than the brand dollars.”
Part of what’s driving this change is that “social video platforms have become a mature industry,” according to Louderback.
Of course, maturity doesn’t necessarily equate to stability for creators. These platforms, Louderback says, are “doing things that mature businesses do. They’re stealing creators from other platforms. They’re shoving a bunch of different features out there.” The companies are experimenting constantly to retain audiences or, ideally, to incrementally grow their market share.
Louderback points out that YouTube and TikTok’s forays into the world of television are great examples of this.
A Controversial Prediction About AI
Louderback is not convinced that we can rely on AI getting exponentially better, at least not forever. He wonders if 2024 is going to be the year of “garbage in, garbage out,” in programmer parlance.
He says, “AI is going to start feeding on itself” because “tier one sources” (ahem New York Times, Wall Street Journal, etc) will “have been shut off” via legal and societal pushback.
If this does indeed come to pass, Louderback says, “The quality of AI, particularly in writing and creating content, will go down because … it’s gonna start training on all those crap AI articles that are flooding the internet.”
Those attending this spring’s NAB Show will have the opportunity to dive into these trends and much more at Creator Lab. (Louderback serves as one of the content curators and program producers, along with Robin Raskin.)
Louderback emphasizes that this is not another meet-and-greet. Creator Lab is focused on developing “infrastructure for creators” and treats creators like “direct-to-consumer CEOs trying to build a business,” he says.
A big part of that infrastructure is literal. Louderback says Creator Lab will feature “the studio and the technology and the gear to actually create great stuff” and will show attendees how to “do it so that you can compete with other people or not waste a lot of money or time.”
But then there’s also the metaphysical system surrounding content creation. Louderback says they’ll cover: “How do you find the resources? How do you use new tools to upskill and uplevel what you do? How do you find and build the right support teams?”
Creators may be “the CEO of their small business or a large business, direct-to-consumer enterprise. But they need help. They can’t do it all. So how do you find that, too?” Louderback and Creator Lab will provide the resources creators need at NAB Show 2024.
The Visual Poetry in “All Dirt Roads Taste of Salt”
TL;DR
Shooting on celluloid and creating tactile imagery was key for cinematographer Jomo Fray with director Raven Jackson’s debut feature “All Dirt Roads Taste of Salt.”
Fray lensed a love letter to the last three generations of women in Jackson’s family, with the aim of creating evocative visuals in which the viewer would not only see the story but feel it as well.
Much like the “Dogme 95” manifesto, the DP and director wrote a visual manifesto they read to each other every day during production that included stripping back the paraphernalia of on-set distractions.
Director Raven Jackson and cinematographer Jomo Fray used the language and techniques of poetry to create Jackson’s debut feature All Dirt Roads Taste of Salt.
The meditative experimental drama explores the influential people, places, and events Mack (played by Kaylee Nichole and Zainab Jan as younger and older versions of the character) has encountered while growing up in Mississippi.
A mantra for Fray (whose credits include Tayarisha Poe’s 2019 feature Selah and the Spades and Jackson’s 2018 short Nettles) is to shoot the way a director dreams.
The film has no set narrative structure. “It was about the texture of that emotion rather than covering it more traditionally,” Fray said. “It’s about building a scene that has specific coverage knowing that those scenes could be put into a different order but that’s why you find motifs naturally as you’re shooting.”
Fray elaborated on finding the visual language for the film to Bale, saying that they wanted to make cinema more sensorial.
“The conversations we had early on were like, “Can we smell this image? Can I feel this image? In this image, I want to literally feel the salt on my brow from the sweat of being in the sun so long. How do we conjure that? how we can create more sensorial feelings and textures in every moment, every image, every gesture, every detail.”
To push themselves to adhere to this aesthetic they drew up a 12-point visual manifesto inspired by the work of director Terrence Malick, which they read to one another at the start of every single day.
One of the points on the manifesto was “to be present to the cinema on set.”
“Raven would direct the action and we would find the scene, and we would watch it together in rehearsal and start to see the small gestures that are built into the rehearsal that we just find our eyes drawn to,” Fray said when interviewed by Stephen Saito for The Moveable Fest.
Another manifesto point was to speak in “slant rhymes,” which was a phrase they used to remind themselves that it was okay to be inspired by the same thing multiple times in multiple scenes.
“It was our attempt to try to create these motifs in the film, but to create them in a naturalistic way, so we didn’t go in thinking, “We want to shoot a lot of hands,” but we would find ourselves being inspired by what a character’s hands were doing at a certain moment in the scene, and we didn’t try to stop ourselves to have visual diversity with the film.”
The manifesto also required them to be elemental, which meant being more emotionally open on set, day-to-day, scene-to-scene, moment by moment, than was strictly necessary.
“We chose tools so that the camera and the lighting can get out of the way a bit. Raven never really wanted a lot on set and didn’t want any artifice when we could avoid it, so my gaffer Jay Warrior and my key grip Forrest Penny Brown created a lot of our lighting from outside in, a lot of the time was using mirrors, to create that feeling.”
Inevitably, this meant the film was shot on 35mm. They did a lot of testing of different ways to process the film, as well as of different perforations mixed with different lenses.
“So much of this movie has to do with interiority, so we wanted a tool that could really make us be inside their emotions, and have it be incredibly textural, so that you feel the coarseness of the image between your fingers,” Fray told Bale.
They selected Kodak 500T 5219 stock, push-processing the entire movie.
“We only used one stock, even though this movie takes place in different time periods,” Fray explained, “For Raven and me, these are not flashbacks or flash-forwards. Every single moment, every single frame in this movie, is about Mack dealing with the present-tense stakes of her life at that given moment.”
The A24 release has received multiple nominations on the festival circuit and won several, including most recently for Best Cinematography at the Black Reel Awards.
He described the visual style of All Dirt Roads Taste of Salt as less about setting out to make poetic imagery, and more about “trying to create a process where poetry could find us.”
“I don’t necessarily want the image to be unlocked in its meaning. I want it to be a metaphor,” he told Cioffi. “So that the viewer is actually grafting their histories, their loves and fears onto the image and the image is welcoming them to engage with it. It is an invitation for the viewer to put their histories on it, and in that way, hopefully make the images more robust and also make the image one where a bunch of different people can all have different interpretations.
“All I really want is for the audience to be active while they’re watching the movie, not passive.”
Studio Tours Amplified: Inside Coffeezilla’s “$10 Million” Virtual Production Setup
TL;DR
Stephen Findeisen, aka Coffeezilla, is a YouTube content creator with more than three million subscribers, known for debunking online scams.
Coffeezilla’s content is set against a virtual backdrop of a cyberpunk, film noir-inspired city, created within a sophisticated virtual production environment humorously dubbed “the $10 million studio.”
Starting with no background in production or film, Coffeezilla was inspired by other YouTubers and a TED Talk by David Korins, leading Findeisen to explore virtual production to create original content.
Coffeezilla combines practical lighting solutions, like Aputure’s gobo cutouts, with post-production visual effects to enhance the realism of his virtual sets.
Coffeezilla’s approach offers insights into the future of virtual production for indie creators, highlighting the potential for innovation within resource constraints.
Stephen Findeisen, known online as Coffeezilla, is a figure of intrigue and respect on YouTube. With a subscriber count north of three million, he’s renowned for debunking online scams, primarily within the crypto space. Yet, it’s not just his investigative prowess that sets him apart — it’s the virtual world he’s crafted for his content. This cyberpunk, film noir-inspired city, complete with a detective’s office, is where Coffeezilla’s stories come to life.
Findeisen generally records his videos in front of a green screen; his backgrounds feature elaborate computer graphics, and he inserts animated graphics to illustrate his content, including a recurring character of a robot bartender. This elaborate world is crafted inside what he has jokingly dubbed “the $10 million studio,” a high-tech virtual production environment run by a lean three-person team.
Findeisen recently sat down with VP Land’s Joey Daoud to recount his journey from a basic bedroom setup to a sophisticated production powerhouse, revealing tips and tricks and breaking down the virtual production studio he built for the Coffeezilla Cinematic Universe.
Go Big or Go Home
Embarking on YouTube with no background in production or film, Findeisen was driven by a desire to create something genuinely original that also informed his audience. Watching other YouTube channels like Linus Tech Tips and a TED Talk on set design by David Korins, who has designed sets for artists such as Kanye West, he was struck by the idea that one’s environment can shape their identity and creativity.
A breakthrough moment arrived when Findeisen realized he could create an infinite number of virtual production sets in his own bedroom. He began scouring the internet for information on virtual production, including behind-the-scenes videos for The Mandalorian. But while he was able to find plenty of details for big-budget productions employing virtual production techniques, there was very little guidance for indie productions.
“I remember thinking, really early on, I want to be one of the people who really pushes this landscape of indie virtual production forward in a practical way,” he tells Daoud. “I want to show that not only can it compete, it’s actually, I think, in a lot of cases more competitive, and people just sort of haven’t figured it out yet.”
Early experiments riffed off of tropes popularized by the internet scammers Findeisen featured to show off their fake wealth, such as placing a virtual Lamborghini inside his studio. “I just basically stole what they were doing and made it like a satire, like pastiche.” The idea of the $10 million studio started off as a joke, he says, until it wasn’t a joke anymore. “Because eventually, if you push your production enough, it is almost plausible that it’s a $10 million studio.”
Lights, Camera, Action
In the expansive realm of YouTube content creation, where the scale of production can range from solo vloggers to full-fledged studio teams, Coffeezilla has carved out a niche that defies the norm. At the heart of Findeisen’s operation is a lean three-person team — Findeisen himself, a 3D artist/animator, and a video editor.
Production time for a full show can range from a month to two months, depending on the complexity of the project and the amount of work done at the desk versus in-depth investigations. The production process is meticulous. It begins with research and script compilation, followed by storyboarding and shooting. “We’ll research stuff, then I’ll start to compile it in a script,” he describes. Post-production involves editing, sound design, and final touches. “We start to shoot it, then we send that to my editor… then we’ll send it for sound design and final editing.”
For capture, Findeisen opted for the Sony FX3 and FX9 full-frame cameras, augmented with the ARRI Alexa. Practical lighting solutions, such as Aputure‘s gobo cutouts, are employed to achieve specific visual effects, like Coffeezilla’s signature noir window blind lighting.
Open-source 3D software Blender serves as the backbone of Coffeezilla’s virtual set design. “I’ve just kind of always had a bad experience with Unreal, it never really runs fully real time on my computer,” he explains. “So I’m like, if we’re going to just have to do it in post, let’s just do it in Blender, because I think the renders out of Blender are slightly better.”
ICVFX vs Post-Production
Visual effects, such as the steam coming from Coffeezilla’s coffee mug, are also added in post. “I thought of so many ways to do that practically,” Findeisen laughs, recounting his process of trial and error with everything from fog machines to a boiling pot of water placed underneath the desk. “If you like problem solving, virtual production is for you. Because it’s constant problem solving.”
All of Coffeezilla’s shots where he’s sitting at his desk speaking to the camera are final pixel, using Blender video renders of the room and composited in real time in Aximmetry, a node-based video editing solution for live workflows that allows the integration of live-action footage with virtual environments. “You basically pull in a camera input, but then you can apply a node layer,” he says, describing how camera correction, blur, color keys and more can be added to live footage in real time.
The more advanced VFX shots, such as conversations with his robot bartender or whenever the camera is moving, are composited later in post-production. Findeisen initially tried match moving, which tracks camera movement through a shot so it can be reproduced in 3D space, but found it to be tedious and repetitive.
“One thing with me is I’m always trying to cheat work,” he says with a laugh, describing how he’ll work all night just to avoid ever having to do the same thing twice. “Like, I’m always trying to make things faster. But then it ends up making me do more work.”
In the end, camera tracking, which uses sensor data to track the camera’s position in physical space, provided the answer. Findeisen tested a number of tools, ultimately deciding on Mo-Sys StarTracker, which he says delivers the closest thing to final pixel that he’s been able to find.
“It’s more about like figuring out how to solve for camera lens distortion on different lenses and like, you know, focus breathing, the things that you take for granted, or you don’t even realize are problems when you’re just trying to solve that tracking quality part.”
Findeisen also quickly learned the importance of practical elements in virtual sets. “Anything you touch on set should be practical. It’s so exhausting trying to manually track whatever stuff you touch, because you’re going to create all these shadows. It’s going to be a nightmare,” he says.
“Just do yourself a favor. Floors, try to make them practical. Things you touch, like chairs, or, like, if you’re touching a table, you know, please just make a practical cup, make it practical, these things are supposed to help you. And I think if you get too overly zealous about ‘I’m 100% virtual production,’ you’re going to be like me and trying to key out a green table where your arm’s touching it,” he cautions.
The Future of Indie Virtual Production
Findeisen also warns that indie virtual production for moving cameras may not be accessible to the layman YouTuber. “I’ve developed my pipeline as my resources have expanded,” he explains. “But I know when I started on YouTube, I could have never afforded the tools I needed for moving a camera. There’s just so much you need to get a working workflow for moving camera virtual production, that I think it’s worth being really honest about that,” he says.
It’s not just about the money, however; it’s also about time. “You can do a lot with a basic green screen, some basic lights and a camera. But as far as, like, trying to get into the moving camera shots, unless you’re willing to spend some time I think match moving is maybe the closest thing to a cheap workflow for that. But even then, it’s hard.”
Findeisen says that he’s come to appreciate green screen as a tool for simplifying workflows. “James Cameron, I think, is famous for saying, ‘on green screen, things that are hard in real life are easy. And things that are easy in real life are hard.’
“I think that’s really true. If you want some epic sweep of the Himalayan Mountains, you could build it on Fiverr with a CG artist for five bucks in, like, five minutes. Or you could hire a helicopter to go out there and shoot it for real on an ARRI Alexa. So they’re very two different things in scope. But if you want to have a person open a door and walk through it, fully in CG, it’s actually pretty hard. And if you want to do that, practically, it’s so easy,” he explains.
“Nobody comes to my videos thinking, ‘Oh, I’m gonna see virtual production today.’ It’s just this invisible tool that just helps things look a little bit better, helps the narrative flow a bit better. And I’m happy to have that in the background.”
Was It All a Dream? Developing the Visuals for “All of Us Strangers”
TL;DR
Cinematographer Jamie Ramsay calls his collaboration with gaffer Warren Ewing and Company 3 colorist Joseph Bicknell “the holy trinity” of his process for developing the look of “All of Us Strangers.”
Based on a novel by Taichi Yamada, the Golden Globe and BAFTA-nominated film is directed by Andrew Haigh and features beautifully appointed production design by Sarah Finlay.
Ramsay and his team created a single overarching look for the two worlds depicted in the film, separated only subtly by the differences between fabrics and dyes prevalent in each era.
Within that framework, Ramsay’s photography was motivated by the emotional content of each scene, not by opposing looks designed to delineate the different environments.
Although the movie was shot on film, it was still vitally important to develop a LUT to guide the dailies grade so the entire production team could see how materials, skin tones and lighting would ultimately look.
The highly praised Golden Globe and multiple BAFTA-nominated feature All of Us Strangers takes its lead character through some very odd situations that could have been presented in the form of a ghost story but are arguably far more impactful as the intimate drama it is. In the film, we meet Adam (Andrew Scott), a writer who has the opportunity to better understand his fears and the reasons for his loneliness when he visits his long-deceased parents (Claire Foy and Jamie Bell) and has a chance encounter with a handsome but mysterious stranger (Paul Mescal).
In addition to Andrew Haigh’s direction and script (based on a novel by Taichi Yamada), the fine acting and Sarah Finlay’s beautifully appointed production design, the film’s unusual approach succeeds in large part because of the work of cinematographer Jamie Ramsay — work that began long before shooting commenced.
A key to the film’s overall visual tone concerns the look of Adam’s contemporary London apartment, a somewhat luxurious space with a lovely view of the city but one that feels rather sterile and lonely, versus his parents’ suburban home, which evokes the era of the 1980s, when he’d last seen them, and feels a bit more inviting.
Early on, the question arose of whether the cinematography would offer a clear delineation between these two worlds, with the past perhaps looking warm or hazy and his modern world colder. No, Ramsay says. The overarching look would be similar between the two worlds, separated only subtly by the differences between fabrics and dyes prevalent in each era. Within that framework, Ramsay’s photography would be motivated by the emotional content of each scene, not by opposing looks designed to delineate the different environments.
“Once I’ve had these types of conversations with the director and production designer,” he says, he starts to build a look book of “color treatments, images, textures, tones and thoughts that encapsulate moments and anchor points in the story and how colors should shift and by how much.”
Then, Ramsay starts the process of developing an approach for the cinematography itself, and this is when discussions expand to include gaffer Warren Ewing, who will oversee the type, intensity and placement of lighting units based on Ramsay’s ideas, and colorist Joseph Bicknell of Company 3, who would work with the cinematographer to develop a show LUT that reflects the story’s concepts and gives everyone involved in shooting and editing the film a visual representation of how color and contrast will be rendered in the final film.
This threesome comprises what Ramsay refers to as “the holy trinity” of his process. “Color, light and texture and the mood and emotional and intellectual context created by color is so important to a film,” he explains, “and the relationship between my colorist, my gaffer and myself is key to how we bring all that to life.”
Though Ramsay shot All of Us Strangers on film, it was still vitally important to develop a LUT that would be used to guide the dailies grade so that everyone involved, from the director to the production and costume designers and, of course, Ewing, could all see how materials, skin tones and lighting would ultimately look.
“We wouldn’t see [the results of the LUT] on set as we would if we were shooting digitally,” Ramsay explains, “but it was important to do this work in advance so that everyone could get a good idea of what the LUT would do so we were all on the same page,” he says. “After the first week of shooting, one of the most important things we did was having a big-screen projection of some of the choice dailies.”
Creating the LUT involved shooting tests on various Eastman Kodak emulsions, which Ewing lit and Bicknell graded. “We could [test] production design and wardrobe design and color palettes and various materials and tonal boards and really see where the colors go,” he says. He and Bicknell worked with the film scans at Company 3 to design the show LUT. “And from that, we can see where we will need to push and pull colors. Do we need to wash a color down a little bit or keep it where it is?”
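For readers curious about the mechanics, a show LUT of this kind can be pictured as a 3D grid that maps each incoming RGB value from the film scan to a graded RGB value. The sketch below is a toy illustration only, assuming a hypothetical .cube file: it applies the table with a nearest-neighbor lookup, whereas real grading tools interpolate (trilinearly or tetrahedrally) between grid points.

```python
# Toy illustration of applying a 3D show LUT stored as a .cube file.
# "show_lut.cube" is hypothetical; production tools interpolate
# between grid points rather than snapping to the nearest one.
import numpy as np

def load_cube(path):
    """Parse a minimal .cube file: a LUT_3D_SIZE header plus size**3 RGB rows."""
    size, rows = 0, []
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "LUT_3D_SIZE":
                size = int(parts[1])
            elif len(parts) == 3:
                try:
                    rows.append([float(p) for p in parts])
                except ValueError:
                    continue  # skip non-numeric header lines
    # .cube rows list red varying fastest, blue slowest
    return np.asarray(rows).reshape(size, size, size, 3), size

def apply_lut(image, lut, size):
    """Map each normalized RGB pixel to its nearest LUT grid entry."""
    idx = np.clip((image * (size - 1)).round().astype(int), 0, size - 1)
    r, g, b = idx[..., 0], idx[..., 1], idx[..., 2]
    return lut[b, g, r]  # blue is the slowest-varying axis

lut, size = load_cube("show_lut.cube")
scan = np.random.rand(1080, 1920, 3)  # stand-in for a normalized film scan
graded = apply_lut(scan, lut, size)
```

The point of building this table months before the final grade is exactly what Ramsay describes: everyone sees the same transform, from dailies through to the DI.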
Communication among the DP, gaffer and colorist from this early stage, Ramsay says, helps to ensure control over the imagery as Ewing and Bicknell compare notes on how “the quality and intensity of the light on set interplays with the handling in the grade of the contrast and the placement of blacks and highlights within the frame.”
So much of the film is ultimately about the gravity of the loss that Adam suffered when his parents were killed in an accident, and much of what Ramsay endeavored to create was based on the idea of the character’s memories. Portions of the film were designed to feel “almost like a time capsule in the sense that Adam’s parents were locked in this point in time in the late ’80s.”
This didn’t mean that the show LUT should bring an overarching ’80s feel to everything, though. “We wanted to be able to evolve the color in such a way that there could be a growth to it,” the DP says. “So, for instance, in the current day, costume and production design choices would include intense primary reds, while the ’80s reds would be more like a burnt orange. A primary green would be more of a pistachio. And so on. So, the relationship of color and time was dealt with on the set.”
When Ramsay and Bicknell worked on the final grade, a great deal of the look of the film had already been worked out in the well-thought-out interplay between the lighting, design and the show LUT. Says Bicknell, “I love working with Jamie as we have a lot of the creative conversations and world building in preproduction, and that allows us to be inspired by the changing emotion and tone of the film in the final DI, helping to advance the story with color.”
Quite a bit of suspension of disbelief is demanded from the audience, the DP says, “but we all believed that if we gave the audience the respect — if we gave them a tableau that felt honest and truthful and real — that would serve us, because it would just put the responsibility in their hands to stay with the flow.”
Reading Between the Lines: Gen Z Really Loves Closed Captions
TL;DR
More than half of Gen Z and millennial media consumers prefer subtitles, according to new survey results from YPulse and Preply.
While subtitles haven’t always been seen as a first choice, they’ve grown in ubiquity — especially with the rise of online videos that include automatic captioning.
Captions help viewers keep up with murmured dialogue, distinguish thick accents and get a head start on a scene.
Closed captions aren’t just for the hearing impaired: the rise in their popularity is being driven by younger viewers, who are in fact making the use of subtitles while watching television the norm.
In a new “TV and Entertainment report,” YPulse found that more than half of 13-39-year-olds prefer to use subtitles.
And it’s not just because they need them; younger viewers read along while watching movies and TV to keep up with murmured dialogue, to distinguish less familiar accents and, some say, just to get a head start on a scene before going back to looking at their phone.
Per the report, 59% of Gen Z survey respondents and 52% of millennials said they use subtitles, while 61% of Gen Z males say they prefer to use them.
These are not outliers. A 2023 report by Preply found Gen Z overwhelmingly the generation most likely to be turning on subtitles (70% of Gen Z respondents said so, compared to 53% of millennials and just 35% of Baby Boomers).
As to why Gen Z likes to turn on text while watching their shows, part of it, according to Wilson Chapman at IndieWire, is that people in that generation grew up watching videos on social media, where subtitles are the algorithmically encouraged default.
Sara Fischer at Axios writes that TikTok helped normalize captions for young media consumers, who are now turning regularly to subtitles as part of their streaming habits.
“TikTok has an auto caption feature that a lot of content creators will use,” Axios reporter April Rubin told WGBH Morning Edition co-host Jeremy Siegel. “And so people are just a little bit more used to reading as they watch. Another factor that may play into this is that it has been a little tougher to maintain quality sound in the streaming era. So they could be watching subtitles just because they’re missing some of the dialogue with background noise or changing volumes.”
Younger kids actively need subtitles to enjoy the content they are watching, according to a Kids Industries survey of US and UK parents with kids 5-15 years old. In this case, subtitles add an increased dimension of understanding to viewing. Watching content with closed captions can reportedly improve literacy, vocabulary, and the speed of reading, the report said.
“For kids’ media brands, the widespread use of closed captions should be a sign to improve accuracy and make sure subtitles are available for all programs,” suggests YPulse.
But closed captions are being used more by all of us. A 2022 report by Netflix revealed that 40% of its global users have closed captions on all the time, while 80% switch them on at least once a month.
In its survey, Preply determined that half of Americans use closed captions, with the top reason (cited by 72% of respondents) being that subtitles make dialogue easier to understand.
As Chapman lays out in IndieWire, the causes behind muddled dialogue are many and might vary from person to person. For some, the problem is the design of modern televisions, the majority of which place internal speakers at the bottom of the set instead of facing them towards the audience, resulting in significantly worse audio quality. Other issues are caused by sound designs optimized for theatrical exhibition, which can result in compressed audio when translated to home viewing.
“A lot of people struggle to hear dialogue now, so turning on closed captioning to decipher what people are saying has become a no brainer move,” he says.
An article in British broadsheet The Guardian also focuses on hard-to-hear dialogue, a known problem in the industry, according to sound mixer Guntis Sics (Thor: Ragnarok), who is quoted in the piece.
Where once actors had to project loudly towards a fixed microphone on set, more portable mics have allowed a shift towards a more intimate and naturalistic style of performance, where actors can speak more softly — or, some might say, mumble.
“Anthony Hopkins on Thor spoke like a normal human being, whereas on a lot of other films, there’s a new style with young actors — it’s like they just talk to themselves. That might work in a cinema, but not necessarily when it gets into people’s lounge rooms,” Sics says.
The Guardian’s Katie Cunningham also suggests sound mixes have become more complicated — fine for the dozens of speakers in a Dolby Atmos theater but indistinct when played back through a TV’s tiny and tinny speakers.
“When sound is mixed with the best possible audio experience in mind, much of that detail can be lost when it’s folded down to laptop speakers, or even your television. It’s often the dialogue that suffers most.”
If you haven’t invested in an expensive speaker setup at home, then reliance on the TV’s speaker output alone “could leave you with a subpar experience.”
Of course, the volume of foreign-language shows and the phenomenal popularity of some of them — from Squid Game to Money Heist — demand subtitles, but even English-language shows seem too hard for many Americans to understand.
British comedies and dramas beyond familiar period fare like The Crown are often acted in authentic local accents: Peaky Blinders (Birmingham), Derry Girls (Northern Ireland) and even contestants on reality TV shows like Love Island are called out as hard to follow, as is the Oscar-winning Irish drama The Banshees of Inisherin.
“If people get used to using subtitles where it’s basically required, it becomes a matter of habit to keep them in use even when watching American productions,” says Chapman.
The captioning services market in the US was valued at nearly $170 million in 2022. Studios, however, often outsource the work to companies like Rev, which in turn has 75,000 international freelancers on its books for transcription work.
Some studios issue very specific subtitle requirements. Netflix’s style guide includes rules like a limit of 42 characters per line, a set reading speed of up to 20 characters-per-second for adult shows (up to 17 for children’s programs) and an emphasis that “dialogue must never be censored.”
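Rules that mechanical are straightforward to check automatically. The sketch below is illustrative only, not Netflix’s actual tooling; it validates a single cue against the two constraints quoted above, with the function name and cue format invented for the example.

```python
# Illustrative checker for the two style rules quoted above:
# max 42 characters per line, and reading speed capped at
# 20 characters per second (17 for children's programming).
MAX_LINE_CHARS = 42

def check_cue(text, start_s, end_s, kids=False):
    """Return a list of rule violations for one subtitle cue."""
    problems = []
    for line in text.splitlines():
        if len(line) > MAX_LINE_CHARS:
            problems.append(f"line too long ({len(line)} > {MAX_LINE_CHARS})")
    duration = end_s - start_s
    chars = len(text.replace("\n", ""))
    cps = chars / duration if duration > 0 else float("inf")
    limit = 17 if kids else 20
    if cps > limit:
        problems.append(f"reading speed {cps:.1f} cps exceeds {limit}")
    return problems

# A two-line cue displayed for only 1.5 seconds fails the speed check:
print(check_cue("I never said that.\nYou must have misheard me.", 10.0, 11.5))
```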
To prepare for live events like awards shows, captioners are given a script of everything on the teleprompter in advance — except for the names of the winners. When people ad-lib or give their acceptance speeches, the captioners are working from scratch.
“The person gets up and thanks someone with a very complicated name. We take a guess at it, but we’re going to spell it wrong. That’s bound to happen,” says Heather York, VP of marketing for captioning company Vitac.
Streamers often ask for subtitles in up to nine languages before their shows drop, creating a new challenge for service providers.
“We’ve got to pivot with our workflows, with our resources,” says Deluxe senior VP Magda Jagucka. “That process to bring non-English original content to global audiences requires multiple translation and adaptation steps.”
AI is already being used to give a first pass at transcription, with human editors then going through to make corrections — but there are current limitations.
“There’s a lot of nuance, and the audio-visual translation isn’t really just based on text,” says Jagucka. “When you’re thinking about AI, it goes through that textual base, but translators get our cues from the sound, from the visual, from the picture, from the tonality of the dialogue and the actors acting, as well.”
It is another instance where AI is a tool to assist rather than replace humans, at least at this stage.
Pat Krouse, VP of operations at Rev, tells THR, “AI is really helpful where it speeds up … moving from a pure typist to an editor and a proofreader, and eventually a summarizer. It makes humans focus on higher value things, as opposed to just pure typing work.”
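To picture that first-pass-then-edit workflow, here is a minimal sketch built on the open-source Whisper speech-recognition model (a stand-in for illustration, not Rev’s or Deluxe’s actual pipeline): the machine drafts timed segments and the human corrects them rather than typing from scratch. The audio filename is hypothetical, and the openai-whisper package must be installed.

```python
# Sketch of an "AI types, human edits" captioning pass using the
# open-source Whisper model. "interview.mp3" is a hypothetical file.
import whisper

model = whisper.load_model("base")
result = model.transcribe("interview.mp3")

# The machine produces timed draft segments...
draft = [(seg["start"], seg["end"], seg["text"].strip())
         for seg in result["segments"]]

# ...and the human reviews each one, fixing names, nuance and
# anything the model misheard.
for start, end, text in draft:
    fixed = input(f"[{start:7.1f}-{end:7.1f}] {text}\nEdit (Enter keeps it): ")
    print(fixed or text)
```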
Why Captions Are Now (Almost) Essential in Video Content Consumption
TL;DR
Captions and subtitles are an essential tool for individuals with hearing loss and, for a lot of people, they’re a constant on TV and video screens. A recent survey showed that 50% of Americans watch content with captions and subtitles most of the time.
There are many reasons for this increase in caption viewing, including advances in technology, societal expectations and changes in the way programs are broadcast, as this article explains.
If entertainment trends of dim lighting, loud background music, and muddled audio continue, it’s likely that the use of subtitles will only increase in popularity.
No longer considered an optional feature, captions and subtitles have become an essential part of content creation. Recent data has shown that younger generations overwhelmingly prefer to watch content with subtitles on. So popular have captions and subtitles become that a third of Americans think subtitles should be the default on streaming services and cable TV, while 26% think they should be the default at movie theaters.
A survey of 1,260 Americans conducted by online language learning platform Preply found that half of TV viewers watch content with subtitles most of the time, with younger (Gen Z) demos much more likely to be frequent users (70%). Millennials are also more likely to use the feature than the average respondent, at 53%. Older respondents, including Gen X and Baby Boomers, were the groups least likely to be frequent subtitle users.
One reason for the younger skew is that this is the generation that grew up with streaming and social media and has become accustomed to watching TikTok, Instagram, or YouTube videos where subtitles and captions can be automatically generated.
Social media also influences how Gen Z consumes movies, TV shows, and other video. According to the survey, 74% of Gen Zs watch content in public on their mobile devices, meaning that captions and subtitles are a prerequisite if you want to follow your favorite show in a noisy environment.
Another reason cited for turning on the captions is that it’s become harder to hear the dialogue in shows and movies than it used to be. One reason for this, explains language services company Vitac, is that in movie productions, professional sound mixers calibrate audio for traditional theaters with large speaker systems to deliver a wide range of sound. But when that same content is streamed on a TV, smartphone, or tablet, the audio gets compressed to carry the sounds through much smaller speakers. Adding to this, the thinner design of today’s flat-screen TVs forces manufacturers to locate speakers in less-than-ideal positions (the sides, the back) that direct sound away from the viewer and can muffle character dialogue and on-screen actions.
The issue may also be exacerbated by the productions themselves. Sasha Urban at Variety notes a recent phenomenon of film and TV releases such as The Batman and Euphoria using visuals so dark that viewers can barely tell what’s happening. Whether this is due to changing directorial taste or the limits of home entertainment systems, Preply confirmed that a huge 78% of Americans in its poll have difficulty hearing dialogue due to loud background music in films and TV shows, leading 55% of respondents to agree that it is harder to hear on-screen dialogue than it used to be.
As for lighting, 44% of Americans agree that recent productions are using darker visuals than past ones. Not only that, but 35% agree that actors and TV personalities are talking faster than they used to.
Appetite for Global Programs
The rise in popularity (and easy availability) of foreign language content on streaming platforms is another reason for the increase in caption and subtitle usage.
A 2021 report showed that non-US shows accounted for nearly 30% of the demand for TV in the US, with top content coming from the UK, Japan, Korea, and India. The trend has continued as shows like Squid Game and Money Heist have gained popularity in recent years.
And even when a show is in English, that doesn’t always mean that it’s easy to understand or follow along with what people are saying. Shows from the UK contain regional accents, slang, and expressions that are unfamiliar to some viewers. Preply’s list of “The Hardest to Understand TV Shows” includes a number of UK-based shows, with Peaky Blinders, Derry Girls, Downton Abbey, and Bridgerton among its ranks.
Topping its list of hard-to-understand actors is Tom Hardy (Venom), followed by Sofia Vergara (America’s Got Talent), Arnold Schwarzenegger, and Sean Connery, with Johnny Depp coming in fifth.
Pros and Cons
For viewers, using subtitles has clear pros and cons. Being able to follow along with the dialogue visually helps them understand the plot (74%), hold their attention on the screen (68%), and rewind less frequently after missing a line (55%), which overall enhances the viewing experience.
However, subtitles also come with some cons. Splitting their attention from the visuals of the content makes 40% of viewers worried that they’re missing things. In fact, more than one in five Americans find subtitles more distracting than helpful.
And which streamer gets ranked best for its subtitling feature? Per Preply, Netflix is in the number one spot, with Amazon Prime coming in second and Hulu taking bronze.
The Vision Pro Is One Step Removed From Reality — Is That a Bad Thing?
TL;DR
Mixed reality — or spatial computing — is still an experiment, but Apple has spent billions of dollars developing the Vision Pro using passthrough technology that allows the wearer to see the real world while wearing the goggles.
The Apple Vision Pro may be a marvel of modern industrial design, but what will the killer app for its mixed reality be, if there is one at all?
The psychological effects of experiencing virtual and mixed reality for long periods have not been properly examined, but the evidence to date doesn’t bode well, in part because of how socially isolating it can be.
As Apple releases its new mixed reality system, Vision Pro, the media tech industry is pondering what it is for. No one knows the answer, probably not even Apple, but given the company’s track record in defining new categories in consumer electronics, interest in its approach and capabilities is high.
Apple’s entrance into VR has symbolic weight, because the company has had so much influence on computers and phones, Microsoft exec and VR pioneer Jaron Lanier writes in The New Yorker.
Apparently Apple CEO Tim Cook “knows” that VR (or spatial computing) is the future of computing and entertainment and apps and memories, according to Nick Bilton at Vanity Fair.
VR has long been an established industrial technology, used to design cars and to train surgeons in new procedures, for example. It has also been used by artists to explore the nature of consciousness, relationships, bodies, and perception, writes Lanier.
In between the two extremes lies a mystery: “What role might VR play in everyday life? The question has lingered for generations, and is still open.”
Lanier considers the Vision Pro to be a virtual reality device, one that allows users to see the real world around them overlaid with 3D virtual objects. That’s because video of the user’s surroundings is streamed, almost live, onto a high-resolution screen in front of the eyes.
As Shira Ovide at the Washington Post explains, “When you strap on the Vision Pro, you can watch a movie through the screen on your face and see your living room around you. You can pull up a recipe app through Apple’s headset and position virtual cooking timers above your pots as you follow the instructions.”
She says, “But you’re not seeing the real world. You’re seeing a nearly live streaming video of your living room or kitchen with apps superimposed on there.”
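The distinction Ovide draws is easy to demonstrate on any laptop: in a passthrough system you never see the room directly, only camera frames with virtual elements composited on top before display. This toy OpenCV loop captures the idea, at nothing like the Vision Pro’s latency or resolution.

```python
# Toy passthrough demo: the "user" only ever sees camera frames
# with a virtual element (here, a labeled timer box) drawn on top.
# Requires the opencv-python package.
import time
import cv2

cap = cv2.VideoCapture(0)  # the outward-facing camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Composite the virtual object into the feed before display.
    cv2.rectangle(frame, (40, 40), (400, 120), (30, 30, 30), -1)
    cv2.putText(frame, time.strftime("Timer %H:%M:%S"), (55, 95),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (255, 255, 255), 2)
    cv2.imshow("passthrough", frame)  # the screen in front of your eyes
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```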
Director James Cameron explained to Bilton that the imagery in the Apple Vision Pro looks so real because it is writing a 4K image into users’ eyes. “That’s the equivalent of the resolution of a 75-inch TV into each of your eyeballs — 23 million pixels,” he said, later adding that he thinks the product is “revolutionary.”
Bilton listed a number of problems with the product — none of which were insurmountable. For instance, the unit’s $3,500 price tag could be subsidized by Apple if it wanted, with “as much financial impact as Cook losing a nickel between his couch cushions.”
It’s not the weight or the size, since v2.0 will improve on those, nor that Meta, Netflix, Spotify, and Google are currently withholding their apps from the device: “Content creators may come around once the consumers are there, and some, like Disney, are already embracing the device, making 150 movies available in 3D, including from mega-franchises like Star Wars and Marvel,” Bilton notes.
No, what bothers Bilton about the Vision Pro is just how good the experience is. Clearly wanting to keep getting invites to interview Apple bosses and attend behind-closed-doors previews, Bilton says that every other routine computing experience — and even the actual world around us — pales beside the hyper-real version of it viewed through Cupertino’s new goggles.
“In the same way that I can’t imagine not having a phone to communicate with people or take pictures of my children, in the same way I can’t imagine trying to work without a computer, I can see a day when we all can’t imagine living without an augmented reality.”
This is because with the Vision Pro you “actually feel like the person is in front of you and you can reach out and touch them,” he gushes. “I saw the world around me. I didn’t feel closed off or claustrophobic. I left the Apple offices… and when I opened my laptop, a relatively new computer, it felt like a relic pulled from the rubble of a Soviet-era power plant.”
Around 180,000 people were tempted to buy a Vision Pro in the opening weekend of online preorders, according to figures quoted by Vanity Fair. Morgan Stanley anticipates that sales will ramp up to two million to four million units a year over the next five years, making it a new product category for the company. But others, like Apple supply chain analyst Ming-Chi Kuo, think it’s going to remain a niche product for some time.
David Lindlbauer, a professor leading the Augmented Perception Lab at Carnegie Mellon University, doubts that we’ll see people talking to their friends while wearing Vision Pro headsets at coffee shops in the near future. It’s simply strange to talk to someone whose face you can’t fully see.
“Socially, we’re not used to it,” Lindlbauer told Vox. “And I don’t think we even know if we ever want to get used to this asymmetry in a communication where I can see that I’m me, aware of the device, can see your face, can see all your mimics, all your gestures, and you only see a fraction of it.”
Lanier notes that research by a Stanford-led team has found evidence of cognitive challenges with such camera-based mixed reality. They shared their findings in a new paper that reads like a cautionary tale for anyone considering wearing the Vision Pro anywhere but the privacy of their own home.
“Your hands are never quite in the right relationship with your eyes,” he says. “Given what is going on with deepfakes out on the 2D internet, we also need to start worrying about deception and abuse, because reality can be so easily altered as it’s virtualized.”
As explained by Vox reporter Adam Clark Estes, a big problem with passthrough video technology is that cameras — even ones as high-tech as those in the Vision Pro — don’t see the way human eyes see. The cameras introduce distortion and lack the remarkably high resolution at which our brains are capable of perceiving the world. What that means is that everything looks mostly real, but not quite.
When the headsets came off, it took time for the researchers’ brains to return to normal, leaving them misjudging distances in the meantime. Many also reported symptoms of simulator sickness — nausea, dizziness, headaches — that will sound familiar to anyone who’s spent much time using a VR headset.
Tech analyst Benedict Evans noticed something in the videos Apple released to developers last year to showcase what the Vision Pro can do: “Apple doesn’t show this being used outdoors at all, despite that apparently perfect pass-through. One Apple video clip ends with someone putting it down to go outside.”
Lanier’s concerns run deeper than user experience. He thinks virtual reality apps for the Vision Pro will come from all kinds of companies, and “could agitate and depress people even more than the little screens on smartphones.”
He is also worried about the engineering and support effort it will take to keep a system as complex as this always up to date.
More problematically, Lanier just doesn’t think users are going to want to be in virtual reality for anything more than specific experiences.
“Apple is marketing the Vision Pro as a device you might wear for everyday purposes — to write e-mails or code, to make video calls, to watch football games,” he says. “But I’ve always thought that VR sessions make the most sense either when they accomplish something specific and practical that doesn’t take very long, or when they are as weird as possible.”
“Venture capitalists and company-runners talk about how people will spend most of their time in VR, the same way they spend lots of time on their phones. The motivation for imagining this future is clear; who wouldn’t want to own the next iPhone-like platform? If people live their lives with headsets on, then whoever runs the VR platforms will control a gigantic, hyper-profitable empire.”
But Lanier doesn’t think customers want that future. He says, “People can sense the looming absurdity of it, and see how it will lead them to lose their groundedness and meaning.”
To Lanier, living in VR makes no sense to who we are as human beings. “Life within a construction is life without a frontier. It is closed, calculated, and pointless. Reality, real reality, the mysterious physical stuff, is open, unknown, and beyond us; we must not lose it.”
What Consumer Technologies Could (Will) Change Media and Entertainment?
TL;DR
Learn about the five key trends Lori H. Schwartz identified at this year’s CES in Las Vegas: health intelligence, autonomous intelligence, immersive intelligence, as-a-service intelligence and creative intelligence.
Schwartz is joined by Boaz Ashkenazy of Simply Augmented, who provides perspective on the way artificial intelligence advancements underlie all of the trends that are expected to shape M&E this year.
What does 2024 have in store for us? Lori H. Schwartz, StoryTech principal and NAB Amplify content partner, has some thoughts.
Here, Schwartz teams up with Simply Augmented CEO and founder Boaz Ashkenazy to share five trends she identified at CES 2024 and their implications for M&E.
Watch Schwartz and Ashkenazy’s full conversation below, in two parts, and read on to get their take on how technology will change the way we work and play.
The first trend may seem obvious, but it’s extremely significant, according to Schwartz: “There was really a horizontal wave of AI, across all the exhibitors and all the experiences” at CES. She considers it to be a super-trend because it undergirds each of the themes.
“We’re really talking about the impact of artificial intelligence on all of these things,” Schwartz explains.
Health Intelligence
“This really has to do with the impact that digital has had on healthcare in this last year,” Schwartz says.
For his part, Ashkenazy says, “There’s two parts of health that are super interesting to me: One is with input coming into the system, and one with input being generated from the system.”
“Using natural language to” control both the inputs and create the outputs “is the big breakthrough,” Ashkenazy predicts. Currently, “there’s a lot of manual effort to bring content into the system” but in the near future, “that’s going to be taken care of by the machine.”
How will this impact M&E? These new products and solutions “will need storytelling surrounding them so that people will actually use them and know what to do with them,” Schwartz explains. For example, she says, “What we’re starting to see is the rise of content studios inside of healthcare systems.”
Autonomous Intelligence
“We’ve all, of course, been hearing so much about autonomous vehicles, about robots, about all these… machines taking over,” Schwartz says. “But the truth is that a lot of this is going to be about automating repetitive and tedious tasks.”
Helper robots aren’t exactly new, but Ashkenazy notes that there’s a trend toward on-device AI for these machines, eliminating the cloud connection. He explains this enables the robots to have faster reactivity to external stimuli, whether using computer vision (integrated cameras) or other interactive elements.
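The reactivity argument is, at bottom, a latency budget. The back-of-envelope sketch below makes the point; every number in it is an illustrative assumption, not a measurement.

```python
# Back-of-envelope latency budget behind the on-device argument.
# All figures are illustrative assumptions, not measurements.
FRAME_RATE = 30                  # camera frames per second
budget_ms = 1000 / FRAME_RATE    # ~33 ms to react within one frame

cloud_round_trip_ms = 80         # assumed network round trip
cloud_inference_ms = 15          # assumed server-side model time
on_device_inference_ms = 12      # assumed local model time

cloud_total = cloud_round_trip_ms + cloud_inference_ms
print(f"per-frame budget: {budget_ms:.0f} ms")
print(f"cloud path: {cloud_total} ms, ~{cloud_total / budget_ms:.1f} frames behind")
print(f"on-device: {on_device_inference_ms} ms, within budget: "
      f"{on_device_inference_ms < budget_ms}")
```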
“The more that we personalize and create these solutions that ease tasks, the more there’ll be opportunities to free up for higher level spending of time,” Schwartz observes.
Immersive Intelligence
Sphere in Las Vegas is perhaps the most famous example of this as of 2024. Once you see it, Schwartz says, “You realize that the future of display has changed forever.”
Avatars also fall into the category of immersive intelligence. “Being able to actually communicate intelligently with an avatar is one of the most exciting things,” Ashkenazy says. “And I think the only way that that can happen is through natural language and AI being able to be fast enough so that you’re getting responses in real time.”
Introducing these avatars also prompts the need for real-time translation “so that anybody with any language can get into these environments and have interactions in these environments that feel real and that feel compelling,” Ashkenazy says. He adds, “That has real implications for content creators, as they think about ‘What kind of intelligence do we want to bring into these avatars?’ based on the conversations that we might be having.”
Schwartz agrees, noting “We’re really heading towards this immersive world, AI-driven but also really highly dependent on content creators.”
As-a-Service Intelligence
“Products are now going to be moving towards a product-as-a-service model, which means that instead of just buying a product, using it, and then throwing it out when you’re done, you’re actually going to be subscribing to services through that product manufacturer,” Schwartz explains.
For his part, Ashkenazy says, “It involves IoT devices that live in all the spaces and all the buildings that we occupy. And you’re going to be seeing product-as-a-service inside spaces as well, where you know, the monitoring of devices that understand where we are, what temperature we want to be in, what the sound quality is like for spaces. All of that is going to actually have AI on device and very, very small chips, making calculations and helping our environments react.”
This trend is “going to be a little bit invisible,” Ashkenazy predicts. He says, “I think AI is going to be in every product. And we’re not going to know it, but it’s going to be helping us in different ways.”
Schwartz agrees: “I think the data point we heard at CES was that there’ll be 200 billion devices that are going to be connected to the internet over the next few years. And that each of them are going to be able to make decisions.”
This also means “brands … are going to have to become publishers. They’re going to have to generate content,” she says. “They’re going to have to bring value to what their product is, besides the actual product, so that they can actually deliver on a service. And the service could be, you know, as simple as news.”
Creative Intelligence
The final trend, creative intelligence, is “changing the model for how content is created,” Schwartz says.
“Last year was really a year of Gen AI and text,” Ashkenazy says, “and I think this year and the year to follow is going to be the year of multimodal,” referring to images, video, and audio content creation.
“It’s something to celebrate because it’s going to allow us to have a lot more iterative ideas early. And that’s going to mean…better designs at the end,” he predicts.
“It will also launch newer businesses that will be able to really see Gen AI as a tool. And it requires talent; it still requires that human input,” Schwartz says.
“The bottom line for me when it comes to creators and these tools is that smaller teams and individuals can do so much more now than they could before. It really gives creators superpowers,” Ashkenazy says.
Want more? We’re excited to share this exclusive “5 Trends” document, meticulously curated by Schwartz’s team of professionals in advertising, technology, and media.
Spatial computing is still an experiment, but Apple has spent billions of dollars developing the Vision Pro using passthrough technology.
February 11, 2024
Posted
February 1, 2024
Cinematographer Cristina Dunlap’s Real/Surreal Approach for “American Fiction”
TL;DR
DP Cristina Dunlap talks about the critically acclaimed satire “American Fiction,” from the logistics of shooting a feature film in less than 30 days to working with first-time feature director Cord Jefferson.
Dunlap turned the tight shoot schedule into a signature style for the film by using longer Steadicam shots to generate coverage in a jazzy musical manner that suited the central character’s middle class affectation.
The film was shot on the ARRI Alexa Mini LF paired with Tribe7 Blackwing lenses in a widescreen 2.35 aspect ratio.
Dunlap created a custom LUT that aligned with her visual concept for the film in collaboration with Phil Beckner at FotoKem, which was then fine-tuned on set with DIT Mattie Hamer.
The pivotal narrative scene in American Fiction comes when disgruntled academic Monk (Jeffrey Wright) sits down to write a book that is deliberately far removed from his own reality.
It’s also when the film “suddenly kind of takes a left turn into surrealism that hadn’t been in the film before,” says director of photography Cristina Dunlap. Speaking to Patrick O’Sullivan on the Wandering DP podcast, she says, “So a lot of what I built the world around was that scene. It was how far are we going to push the surrealism because it comes back again at the end of the film. It’s a satire but it’s also a really grounded family drama as well. It’s a really moving story at times. And also funny. And then there’s the surrealist element. So it was kind of trying to find what the tone was going to be and how to weave that throughout the entire film without it feeling like a bunch of different movies mashed together.”
Cord Jefferson’s adaptation of Percival Everett’s novel Erasure has been nominated for Oscars in five categories, including Best Picture and Best Actor, having previously won the People’s Choice Award at the Toronto International Film Festival and earned two Golden Globe nominations and five nominations at the 29th Critics Choice Awards, including Best Picture.
It is Jefferson’s feature directorial debut, so Dunlap came prepared with a look book and suggestions, taking care not to overwhelm or steer him in the wrong direction.
“I want to hear what you’re thinking,” she told him, as she relates in an interview with fellow DP Lawrence Sher, ASC for the ShotDeck: Shot Talk podcast. “We talked a lot about different references and movies, and because it’s so subjective, what a funny shot looks like just to really get on the same page. Cord was such an open book, he asked a lot of questions. Like, why does that look funnier than being a long lens? I was able to pull up different shots from different movies and sort of show him what I meant.”
The central character’s nickname, Thelonious “Monk,” gave Dunlap a clue as to the style of camerawork, but her decision to use extensive Steadicam was also a practical response to a tight 26-day shooting schedule that included a ton of locations and scenes with many cast members that would usually require extensive coverage.
“I knew we didn’t have time to cover everybody the way you might with a longer schedule,” she told the Frame & Reference podcast. “So we orchestrated these shots [with Steadicam operator Xavier Thompson] that were pretty Steadicam heavy, where you’re flowing from one room into another room and coverage rotates around.
“Being that the main character’s name is Thelonious Monk, we knew we wanted there to be this like rhythm and jazziness to the way the camera was moving.”
She adds, “I didn’t want to shoot it like a comedy where you’re just in a wide and you see everything. I really tried to watch the rehearsals and then we’d have an idea of what we were going to do, so that the camera was always flowing and moving through people and panning to reveal. Having that sort of flow was really important to us; it felt almost musical moving through everybody because there’s such a rhythm to the editing and the acting. And tonally it’s such an all-over-the-place movie that I really wanted there to be some consistency.”
Elaborating on the style to Matt Mulcahey at Filmmaker Magazine, she said, “We never wanted it to feel chaotic or loose, because Monk is so composed and tightly wound. I wanted to always feel like there was a sense of control, except for in two moments, both times with his mother. She’s one of the only things that can make him actually lose composure and show his internal world to the outside.”
Dunlap shot on the ARRI Alexa Mini LF paired with Tribe7 Blackwing lenses in a widescreen 2.35 aspect ratio. Collaborating with Phil Beckner at FotoKem, she created a LUT that aligned with her visual concept, which was then fine-tuned on set with her DIT, Mattie Hamer. In an interview for the ARRI website, she recounted the challenges of deciding on the film’s aspect ratio.
“Since all of our locations were practical, I realized that I was often seeing the ceilings if I was as wide as I wanted to be in order to highlight Monk’s isolation or distance from his family members. While the film is a comedy, it’s also at its core a heartfelt family story, and there’s a lot of emotion going on in Jeffrey’s face. Every twitch of an eyebrow has meaning, and you feel that. So, I wanted to be close to him while having that space, which worked best in the 2.35 aspect ratio.”
Dunlap has been working as a DP for 20 years, starting out in music videos for the likes of Coldplay and Lizzo. It was a connection she developed on a music video set that led to her being one of a few cinematographers shortlisted to meet with Cord.
“It changed the course of my life, really,” she told Frame & Reference. “I mean, I started young, and it’s taken me 20 years to get where I am now. I knew [Cord] was interviewing a lot of DPs, some with credits that were a lot more impressive than [mine], and I wasn’t sure I was gonna get it, but I think I was so passionate about the script that this came through.”
One of the references Jefferson gave his DP was a GIF featuring retired basketball player David Robinson at a game when an elderly white woman stands up in front of him and completely blocks him out of the shot.
“Cord told me how it’s a metaphor for the entire film,” Dunlap explained to Filmmaker Magazine. “I don’t know that he intended for us to use it visually, but when we were blocking the scene where Issa Rae [reads an excerpt from her character’s novel at the Massachusetts Festival of Books] it was absolutely the perfect moment to use that shot. It’s Jeffrey’s character watching everything he’s up against and everything he finds irksome about the book world unfold before his eyes, then a white woman stands up and completely obscures him and takes over the frame with wild applause. I’ve never had a director give me a GIF as a reference before, but that shot ended up being one of the most talked about in the movie.”
Watch This: Refik Anadol, “AI Is an Extension of My Mind”
WHY YOU SHOULD WATCH
Hear directly from one of art’s most innovative and experimental creators, Refik Anadol.
His work combines data and aesthetics in fascinating ways and can be seen in museums and galleries as well as public art installations, such as Sphere in Las Vegas.
Anadol views generative AI as an artistic collaborator, not just another tool.
Anadol is the director of Los Angeles’ Refik Anadol Studio and also serves as a visiting researcher and lecturer in UCLA’s Department of Design Media Arts.
In January, Anadol revealed his latest work, Living Archive: Nature, in Davos, Switzerland, during the 2024 World Economic Forum, according to Artnet. It’s also the debut work from Refik Anadol Studio’s new Large Nature Model, which Artnet reports is an “open-source generative A.I. model dedicated to nature” and “trained on data from National Geographic, the Smithsonian Institution, CornellLab, the Natural History Museum in London, and the Conservation Research Foundation Museum, as well as data his team has personally collected.” The data was collected from ecosystems around the world using LiDAR, photogrammetry, and captured ambisonic audio and high-resolution visuals.
“Our vision for the Large Nature Model goes beyond being a repository or a creative research initiative. It is a tool for insight, education, and advocacy for the shared environment of humanity,” Anadol told Design Boom.
It’s the Remix: Sarah Sze and the Role of AI in Art
Artist Sarah Sze shares her perspective on the intersection of generative AI and the fine arts, as well as how we should think about artificial intelligence as creators, on the Possible podcast.
WHY YOU SHOULD LISTEN
Sarah Sze says, “As a visual artist… your whole purpose is not to be predictable.” She believes this gives humans an edge over LLMs and generative AI, which work by “predicting” the next right word (or combination of images).
However, that doesn’t mean Sze is averse to utilizing AI. Rather, she thinks it’s a new tool to showcase creativity. “The driver is your question and how interesting your question is.”
When it comes to generative AI tools, Sze says, “All of these things are [a] medium for fine arts. I don’t think they replace… the artwork itself. We’re the ones asking the questions. We have to ask the right questions. And that’s what makes it interesting. And if we ask the wrong questions, it’s really not interesting and potentially dangerous.”
How does she feel about her work being fed into AI training data sets? “My work is just a continuum of everyone else’s work anyways,” Sze says. “I hope that I’m like talking to — I mean, it sounds like hubris — but I hope I’m talking to Vermeer. I hope I’m talking to Rembrandt. I hope I’m talking to, you know, Murasaki. … That’s what all artists are doing.”
Sze is a Boston-born contemporary artist known for multimedia, genre-defying work.
In 2023, the Guggenheim hosted Sze’s complex and site-specific “Timelapse” installation, which combined the building’s interior and exterior architecture with projections of digital imagery and sculptural elements. The New York museum also showcased her “Untitled (Media Lab)” and “Timekeeper” pieces to round out the collection.
This Is Actually a Very Self-Aware AI-Generated Documentary
TL;DR
“The Wizard of AI” by British artist, animator and video essayist Alan Warburton is potentially the first AI-generated documentary, exploring the impact of generative AI on creativity and the arts.
The film navigates our current epoch of “Wonder-Panic,” reflecting the mixed emotions of awe and anxiety that AI evokes within the creative community.
Warburton employed a range of AI tools like Midjourney, Stable Diffusion, and DALL-E 3, alongside Adobe’s creative suite, to craft this visual essay.
Commissioned for the ODI Summit 2023 by the Data as Culture program, the short film challenges conventional documentary filmmaking and delves into ethical, aesthetic, and legal considerations of AI in art.
The project has sparked significant discussion on the future role of AI in art and design, emphasizing the need for a balanced approach to technology that considers both its potential and pitfalls.
Diving into the murky hopes and fears surrounding generative AI, The Wizard of AI by British artist, animator and video essayist Alan Warburton stands out as an exploration of how the burgeoning technology is reshaping creativity. This visual essay, largely crafted by artificial intelligence, ventures into what might be the first AI-generated documentary, offering a nuanced narrative that captures the complex emotions AI evokes within the creative community.
“Using creative workflows unthinkable before October 2023, [Warburton] takes us on a colorful journey behind the curtain of AI — through Oz, pink slime, Kanye’s ‘Futch’ and a deep sea dredge — to explain and critique the legal, aesthetic and ethical problems engendered by AI-automated platforms,” the ODI states about the project. “Most importantly, he focusses on the real impacts this disruptive wave of technology continues to have on artists and designers around the world.”
Through the lens of a hoodie-wearing faceless “AI Collaborator,” Warburton guides viewers through what he calls “our current epoch of Wonder-Panic.” The meticulously crafted narrative blends a variety of visual styles that pay homage to the rich histories of comic books, animation, visual effects, and cinema, embarking on a journey that oscillates between the marvels of AI’s new aesthetic possibilities and the sobering recognition of its implications for creative practices.
“I hope that I strike enough of a balance between wonder and panic that I can represent good actors on both sides,” he comments.
Commissioned as a reflection on the cultural impacts of generative AI, Warburton’s work is a critical and provocative examination of the current state of “wonder-panic” surging through creative communities. It challenges viewers to contemplate the dualities of AI — the excitement of uncharted artistic territories and the apprehension about the future of human creativity.
The AI Filmmaker’s Toolkit
Crucially, The Wizard of AI was produced exactly one year after the release of Midjourney v4, a tool Warburton views as a watershed moment for visual cultures and the creative economies.
“The animation was done over an intense three-week period where the updates to the tools I was using were significant — historic, even,” says Warburton. “Videos that I generated for inclusion [on October 20th] were generated again in early November (just days before the Summit), with improvements in quality analogous to the kinds of improvements we saw in digital cameras between 1995 and 2000. This meant that as I developed the film I was utilizing the most advanced generative AI tools, within hours or minutes of them becoming available.”
The film is a product of both creative vision and a practical application of AI tools. Central to its production was Runway Gen 2, which was used to generate the “AI Collaborator” video clips that guide viewers through the narrative.
For the visual content, Warburton employed Midjourney, Stable Diffusion, and DALL-E 3 to produce the wide array of still images that give the film its distinctive look. The Wizard of AI also features animated elements such as 3-second fish loops created with Pika. The narrative is further enriched by synthesized speech, with TikTok used for creating detective dialogues, and HeyGen for animating a talking detective head.
To achieve high-quality visuals, Warburton used Adobe Photoshop AI for image expansion and Topaz Gigapixel AI for upscaling, ensuring clarity and detail. Adobe After Effects played a crucial role in bringing together these diverse elements, stitching them into a coherent visual narrative.
The Cultural Ripple Effect
Warburton’s visual essay serves as a mirror, reflecting the complex emotions and debates surrounding AI’s role in art and creativity. At the ODI Summit 2023, where the film was first unveiled, it became a focal point for discussions on how AI is reshaping the landscape of visual cultures and creative economies.
Warburton’s narrative navigates through the exhilarating possibilities and the unsettling uncertainties that AI introduces to the creative process. “The film is deeply insightful and humorous — prepare to make the phrase ‘Wonder-Panic’ a part of your AI vocabulary,” the ODI notes. This duality encapsulates the awe inspired by AI’s potential to push artistic boundaries and the anxiety over its implications for traditional creative roles.
“Generative AI is something that will have a deep and permanent effect on the ‘culture industries’ — by which I mean curators, art institutions, art schools, design firms and so on. It’s not another trend, it’s a tectonic shift in the currency and culture of images that we can’t reduce to ‘deepfakes’ or ‘post-truth’ but to a relationship between humans and images. It’s an epistemological break!” says Warburton.
“Yet instead of boycotting, I’m playing in the sandbox and seeing what the tools tell me. I do this to demystify and educate, but also because no matter how succulent and seductive an AI image is, the real juice is in analysis, criticism and reflection.”
Steven Da Costa, chairman of Link Digital, praised Warburton’s work for highlighting the need for the creative community to come to terms with the changes AI brings. “Absolute genius and full of perspective we all need to become comfortable with, one way or another,” he remarked.
The Wizard of AI delves into the complex interplay between artificial intelligence and creative expression, highlighting the ethical and aesthetic questions that arise with AI-generated art. The project serves as a critical examination of the wonder and panic that permeates the creative community, and explores the potential for exploitation and the challenges of bias within AI systems, raising important questions about the authenticity and originality of AI-generated art.
The implications of “The Wizard of AI” extend beyond its immediate cultural impact, hinting at a future where AI plays a significant role in documentary filmmaking and creative processes. The film exemplifies how AI can offer new perspectives and storytelling techniques, potentially transforming the documentary genre. However, it also underscores the need for filmmakers to navigate the ethical terrain of AI use carefully, ensuring that the integration of AI into creative workflows enhances rather than diminishes the human element of storytelling.
Generative AI for Content Creation: The (First) NABiQ Live Challenge
TL;DR
NAB Show introduces “NABiQ Deep Dive,” a series of virtual challenges and live workshops leading up to the dynamic in-person innovation sprint and creative networking event in Las Vegas.
Facilitated by innovation consultant, certified design sprint master and startup coach Maria Duloquin, the January 25th Deep Dive featured a panel of industry experts and NAB Amplify senior editor Emily M. Reigart.
The free workshop will be followed by two more Deep Dives on February 29 and March 28, addressing AI in content delivery and audience engagement.
Join NAB Amplify for NABiQ Deep Dive with a series of virtual challenges as we gear up for NAB Show 2024, running April 13-17 in Las Vegas.
NABiQ stands out as an unparalleled education and networking opportunity to collaborate, share ideas and overcome industry challenges. Structured like a hackathon, in-person participants form small groups, tackling specific challenges around the three show pillars (Create, Connect and Capitalize) and presenting their innovative solutions.
This year, the dynamic innovation sprint and creative networking event is breaking new ground by introducing live challenges on NAB Amplify leading up to the main event. The digital, three-part series of Deep Dive topics revolves around the future of AI, including content creation, content delivery, and audience engagement and interaction.
Deep Dive: Unlock Gen AI in Content Creation
Targeting the Create community, the first — free — workshop was held January 25 at 1:00 p.m. EST. Innovation consultant, certified design sprint master and startup coach Maria Duloquin facilitated the event, joined by NAB Amplify senior editor Emily M. Reigart for a discussion on unlocking the potential of generative AI.
Participants identified their place in the M&E universe, with significant representation from the education sector and studio/production work.
Attendees discussed what is understood so far about AI in M&E, what we don’t know, and how to embrace the future of AI in the industry. Participants raised concerns about AI ethics, legality and regulation, and how to educate ourselves and others about generative AI.
Providing a space for participants to share their insights and experience, and the opportunity to meet and learn from other industry professionals, this workshop helped to map out challenges and opportunities for AI.
Next at Bat
The workshop will be followed by two more Deep Dives on February 29 and March 28, addressing AI in content delivery and audience engagement.
NAB Show itself will feature four daily NABiQ brainstorming sessions, each followed by a happy hour pitch showcase for further networking opportunities. Insights from the Deep Dive sessions will inform the on-site NABiQ brainstorm sessions at NAB Show in Las Vegas.
Is This New Era of Spatial Computing Really… New? Or Are We Just Remaking the Metaverse?
TL;DR
Apple is preparing to bring to market its new head-mounted display, Vision Pro, which it describes as a spatial computer.
Spatial computing is being presented as different from the metaverse, though the distinction is debatable. It is a paradigm in which virtual experiences and content will interact with the physical world in new ways through spatial interfaces, and that in turn will change both human-to-computer and human-to-human interactions.
New head-mounted and hands-free internet gateways could mean the end of the smartphone era.
Today’s phrase is spatial computing, a term adopted by Apple to describe its consumer electronics “wearable,” Vision Pro. But as much as companies like Apple, Sony and Siemens might claim that this initiates a new era, there are those wondering if this might be the metaverse by another name.
So scarred is the tech industry by the failure of the metaverse to take off (and so synonymous with Mark Zuckerberg's Meta has the name become) that the 3D internet — the successor to flat, text-heavy web pages — appears to have been essentially rebranded.
Futurist Cathy Hackl offers this subtle distinction: “Meta is on a mission to build the metaverse, and Quest 2 is their gateway. Apple seems to be more interested in building a personal-use device. One that doesn’t necessarily transport you to virtual worlds, but rather, enhances the world we’re in.”
The term spatial computing has been around at least as long as the term metaverse but is being given a new lease of life by the second coming of augmented reality (AR), virtual reality (VR) or mixed reality (MR) glasses or goggles; the collective term for those acronyms is XR or eXtended reality.
Snap, Sony and Siemens are just some of the companies with new XR wearables due to launch over the next few months. Undoubtedly, all will be a step up in comfort and tech specifications from the early round of such hardware, which was led by Google Glass, Meta's Oculus and Magic Leap.
Apple’s Magical Step Into the Metaverse
"The era of spatial computing has arrived," said Tim Cook, Apple's CEO, promoting the Apple Vision Pro. In the same breath, he described it as having a "magical user interface [which] will redefine how we connect, create, and explore."
Let's get beyond the smoke and mirrors. There's no "magic" in the Vision Pro beyond Apple's branding convention (think Magic Keyboard and Magic Trackpad).
The tech community has, however, been keenly looking toward Apple to bring such a product to market. Having defined and popularized categories for consumer tech, including the tablet and the smartphone, the best bet for XR wearables to go mainstream was always going to come from Cupertino.
Encounter Dinosaurs, a new app by Apple that ships with Apple Vision Pro, makes it possible for users to interact with giant, three-dimensional reptiles as if they are bursting through their own physical space.
One reason why Cook and others prefer the term spatial computing is because there is greater confidence that this iteration of the tech can better blend the actual and the digital world with seamless user interaction.
As Cathy Hackl put it, spatial computing is an evolving 3D-centric form of computing that blends our physical world and virtual experiences using a wide range of technologies, thus enabling humans to interact and communicate in new ways with each other and with machines, as well as giving machines the capabilities to navigate and understand our physical environment in new ways.
From a business perspective, says Hackl, it will allow people to create new content, products, experiences and services that have purpose in both physical and virtual environments, expanding computing into everything you can see, touch and know.
It is an interaction not based on a keyboard but on voice and on gesture. As Apple puts it, the Vision Pro operating system “features a brand-new three-dimensional user interface controlled entirely by a user’s eyes, hands, and voice.”
It’s not “Minority Report” just yet, but you can see where this is headed. Here’s Apple’s description: “The three-dimensional interface frees apps from the boundaries of a display so they can appear side by side at any scale, providing the ultimate workspace and creating an infinite canvas for multitasking and collaborating.”
Its screen uses micro-OLED technology to pack 23 million pixels into two displays. An eye-tracking system combines high-speed cameras with a ring of LEDs that "project invisible light patterns onto the user's eyes" to facilitate interaction with the digital world. No mention is made of having to sign away your right to privacy — a pretty invasive aspect of the technology. Do you want Apple to know exactly what you are looking at? If so, expect hyper-personalized adverts pinged across your Apple ecosystem.
Or as Hackl — a tech utopian — writes: “AR glasses will turn one marketing campaign into localized media in an instant.”
Apple’s Competition
Such features are not exclusive to Apple. A new head-mounted display from Sony, designed in collaboration with Siemens and due later this year, also has 4K OLED Microdisplays and an interface called a “ring controller” that allows users to “intuitively manipulate objects in virtual space”. It also comes with a “pointing controller” that enables “stable and accurate pointing in virtual spaces, with optimized shape and button layouts for efficient and precise operation.”
The device is powered by the latest XR processor by Qualcomm Technologies. Separately, Qualcomm has unveiled an XR reference design based on the same chip that features eye tracking technology. The idea is that this will provide a template for third party manufacturers to build their own XR glasses.
The Sony and Apple head-gear are aimed at different markets. Both are hardware gateways to the 3D internet — or the metaverse, even if Apple studiously avoids referencing this and Sony only does so when talking about industrial applications.
Apple Vision Pro is targeting consumers, even if early adopters will have to be relatively well-heeled to fork out the $3,500 ($150 more for special optical inserts if your eyesight isn't 20/20).
This Changes… Some Things
Chief applications include the ability to capture stills or video on your latest iPhone, which users will be able to play back in Spatial 3D (i.e., with depth) on their Vision Pro. Viewed on any other device, the same video and stills will appear as flat, two-dimensional images.
FaceTime calls will also gain a new 3D-style experience within the Vision Pro goggles. According to Apple, this "takes advantage of the space around the user so that everyone on a call appears life-size." To experience that, users choose their own "persona" (a term Apple adopted to differentiate from Meta's colonization of "avatar").
In addition, Apple has loaded Vision Pro with TV and film apps from rivals Disney+ and Warner Bros.' Max (but not Netflix) to be viewed "on a screen that feels 100 feet wide with support for HDR content." As a reminder, the screen is millimeters from your face.
Within the Apple TV app, users can access more than 150 3D titles, though details are not provided. It could be that these are experimental 3D showcase titles or stereoscopic conversions, in a revival of the fad a decade ago for stereo 3D content.
More significantly, Apple Immersive Video launched as a new entertainment format “that puts users inside the action with 180-degree, 3D 8K recordings captured with Spatial Audio.” Among the interactive experiences on offer in this format is Encounter Dinosaurs.
No details were given of how this content is created or at what production cost, but Sony’s new XR glasses are targeting the creative community.
Indeed, Sony is marketing its development as a Spatial Content Creation System and says it plans to collaborate with developers of a variety of 3D production software, including in the entertainment and industrial design fields. The device includes links to a mobile motion capture system with small and lightweight sensors and a dedicated smartphone app to enable full-body motion tracking.
In Sony speak, it “aims to further empower spatial content creators to transcend boundaries between the physical and virtual realms for more immersive creative experiences.”
Where Is This Headed?
Spatial computing unshackles the user’s hands and feet from a stationary block of hardware and connects their brains (heads first) more intimately with the internet.
Hackl thinks Vision Pro is the beginning of the end for the traditional PC and the phone.
"Eventually, we'll be living in a post-smartphone world where all of these technologies will converge in different interfaces. Whether it's glasses or humanoid robots that we engage with, we are going to find new ways to interact with technology. We're going to break free from those smartphone screens. And a lot of these devices will become spatial computers."
She thinks 2024 will be an inflection point for spatial computing.
“Eventually you’ll have a spatial computing device that you can’t leave the house without,” she predicts, “because it’s the only way that you can engage with the multiple data layers and the information layers and these virtual layers that will be surrounding the physical world.”
She admits that right now “there’s a bit of chaos” and that Apple Vision Pro may not be the breakthrough everyone expects in its first iteration.
“To me, the announcement of Apple offers a convergence of the idea of seamless interaction, breaking through the glass and a transformation from social media-driven AI to personal, human AI,” she says. “Will all that happen with the release of Apple’s first headset? No, and I wouldn’t expect it to. That’s a lot to put on one company’s shoulders. But Apple is different from other headset makers which gives us an opportunity to see a different evolution of AR.”
How “The Woman in the Wall” Merges Mystery, History and Memory
TL;DR
“The Woman in the Wall” stars Ruth Wilson as a woman grappling with the unresolved trauma of time spent in one of Ireland’s Magdalene laundries as an unwed mother, further complicated by the untimely death of two people connected to her incarceration in the local Mother and Baby Home.
The series intentionally weaves together genres (mystery, thriller, drama, comedy) as it tackles subject matter that would otherwise be too dark for many viewers.
The six-episode show was one of the first TV series to be shot using an ARRI Alexa 35 camera. DP Si Bell found its features to be extremely useful when shooting under challenging lighting conditions and a tight schedule.
Showrunner Joe Murtagh told the Washington Post, ‟[W]hen we’re talking about the Magdalene Laundries, we’re really talking mostly about the period between 1922 and 1996. And these were institutions for so-called ‘fallen women.’ Originally, that was meant to mean prostitutes, but over the course of several decades in Ireland, that came to include mothers who had children out of wedlock.”
The show centers on County Mayo resident and sleepwalker Lorna Brady (played by Ruth Wilson, who also served as an executive producer for the series), whose traumatic past is brought straight to her living room when she awakes to discover a corpse with a direct link to her time at the local Magdalene laundry.
British Cinematographer’s Robert Shepherd describes the series as “a six-part gothic thriller by Motive Pictures that combines history and in-depth research to create a sobering narrative.”
But that doesn't fully capture the tone that creator and head writer Joe Murtagh envisioned for the BBC One drama. For one thing, "gothic" calls to mind "Wuthering Heights"; the series is set in 2015 and centers on a national scandal that continued into the late 20th century in Ireland. It is not historical fiction.
For another, Murtagh imbues the writing with plenty of dark humor. He tells the BBC, “Maybe that’s an Irish thing in general. It’s definitely my natural way of writing, just to go at it with some comedy. I also find that the dark humor, it’s weirdly more realistic — in my life experience anyway — than just doing a straight drama.”
For those who’d find that odd, Murtagh says, “I find that in the most horrific experiences in life, there’s always weird moments of humor and things that don’t quite belong. Someone saying something the wrong way, or it not coming out quite right — that I think is just realistic.
“So, I would say it’s a natural instinct. But at the same time, if I stop and think about it, it’s probably the perfect way to tell a story like this.”
‟To the extent it works, the show is a testament to patient and precise world-building,” Angie Han writes for THR. ‟Kilkinure might be fictional, but the portrait creator Joe Murtagh paints of it has the texture of reality — individuals with prickly personalities or idiosyncratic senses of humor, intricate webs of friendships and grudges, rumors and secrets tracing back years.”
Of course, the series’ genre bending doesn’t sit well with every viewer. A review in the Independent concludes: “For some, this will make the story more watchable — less grueling — but for others the introduction of cliché and the imposition of a murder mystery will feel crass.”
KQED’s Rae Alexandra puts it another way: ‟It’s not always an easy watch, but ′The Woman in the Wall‘ is consistently impossible to look away from — a degree of attention Ireland’s real-life laundry survivors have long deserved.”
Cinematography
Even though the subject matter is Stygian, cinematographer Si Bell, BSC, says, “We shot it at the end of the summer in 2022 in the bright and colorful village of Portaferry and there’s a lot of color there naturally, so we were kind of going against the darkness really.”
In addition to the setting, Bell explains, “We created quite a saturated grade and we tried to push that with the colors we used in the lighting and production design. The red room is all red. We used primary green light sometimes and we had vibrant production design and colored houses to give the show even more vibrancy.”
The production relied on the "versatility and handling" of the ARRI Alexa 35, a 4.6K Super 35 camera, Bell says. For example, "It enabled us to use a pretty slow old Cooke zoom lens when we were doing night, high speed work, pushing the extended ISO using the enhanced sensitivity range. We were doing a lot of night scenes and we used ES ISO which was amazing. In terms of the sensitivity, it doesn't get noisy when you bump up the ISO, so the biggest difference is how clean it is compared to other cameras."
Bell found the camera’s viewfinder and flipout monitor to be “so clear, with high resolution and sharp.”
He was not the only crewmember who found the camera’s features to be useful. DIT Cel Bothwell-Fitzpatrick, for one, told British Cinematographer that the internal NDs and Enhanced Sensitivity EI options were crucial, and the remote access to the settings (via HI-5 focus handset or Camera Companion app) was extremely handy.
The Alexa 35 also offers 17 stops of dynamic range, which Bell says was crucial because "[t]here were several scenes where the light massively changed outside. I was worried that when we got to the grade it was going to be clipped, and we weren't going to be able to pull it back, but it was all there."
Although Ireland's natural light is famously moody and beautiful, the production often supplemented the natural light on its interiors to achieve a very specific look.
For example, a scene in episode one featured Lorna's home lit to simulate sodium-vapor street lights via "Tungsten 12K Fresnels with an urban sodium paper gel," courtesy of gaffer Seamus Lynch.
Bell says, “Seamus was amazing and the lighting setup that he rigged in the studio was very flexible. Everything was on motors and easily controllable and he also created different soft boxes, so we could flip between day and night super quick. We had large Tungsten lamps, plus lots of LED for the soft boxes and options of softer light.”
In another scene, Bell recalls, they "lit it with really big soft bounce on a machine outside the window where we basically put a 20 by 40 foot ultra bounce with two ARRI MAXs bounced into it. You could see the whole window from the inside as we positioned the bounce above it out of shot." This, Bell says, created "really soft, natural ambient coming in that balanced the exposure."
Additionally, every location was rigged to achieve as natural a lighting look as possible, a task made more challenging because the sets featured hard ceilings.
Northern Ireland's Yellowmoon was tapped for post work. Bell says, "We had a live grade set up on set and with Yellowmoon we created a LUT and tweaked the CDL shot by shot on set. We started [with] this information in the grade, making the process very streamlined."
Streamlining was important because they needed to create three versions of the show: an HDR Dolby Vision grade, SDR and HLG. In addition to Yellowmoon's skilled team, Bell says ARRI's Color Science and ARRI Look File 4 were important in facilitating workflows.
Bell explains, “ARRI has separated the Color Transform from the creative look file and it’s all LOG to LOG. Therefore, as it’s not baking in the Color Transform, the process of creating HDR and SDR versions is more streamlined.”
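In practice, keeping the creative look LOG to LOG means a single graded master can feed every deliverable, with the display-specific transform applied last. A minimal Python sketch of that ordering follows; the creative_look and display_transform functions are hypothetical stand-ins, not ARRI's actual color science.

```python
import numpy as np

def creative_look(log_img: np.ndarray) -> np.ndarray:
    """Hypothetical LOG-to-LOG creative look: a gentle S-curve plus
    a slight warm cast, applied while the image stays in log space."""
    looked = log_img + 0.15 * (log_img - 0.5) ** 3
    looked[..., 0] *= 1.02  # nudge the red channel for warmth
    return np.clip(looked, 0.0, 1.0)

def display_transform(log_img: np.ndarray, target: str) -> np.ndarray:
    """Stand-in output transforms from log to display-referred values;
    a real pipeline would use the vendor's transform for each target."""
    gamma = {"HDR_PQ": 1.8, "SDR": 2.2, "HLG": 2.0}[target]
    return np.power(log_img, gamma)

# Because the look lives upstream in log space, one graded master
# feeds all three deliverables without regrading:
log_master = np.random.rand(1080, 1920, 3).astype(np.float32)
graded = creative_look(log_master)
deliverables = {t: display_transform(graded, t) for t in ("HDR_PQ", "SDR", "HLG")}
```

The design point is simply that the look is applied once, upstream, while each delivery target gets its own output transform.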
Darkness and Light (But Mostly Darkness): Production on “True Detective: Night Country”
TL;DR
Showrunner, writer and director Issa López discusses the influence of Fincher’s “Seven” film on the new season of anthology series “True Detective.”
The new series has an undercurrent of the supernatural, but it also layers in the politics of environmental change, of marginalized communities and, most clearly, of male violence against women.
Shooting through the extended daylight and near-permanent darkness of the frozen north required DP Florian Hoffmeister to rethink his approach to lighting the show.
Mexican horror filmmaker Issa López may not have been an obvious pick to spearhead Season 4 of True Detective, the latest in HBO’s anthology series, but it’s actually a perfect pairing.
López, who created, wrote and directed all six episodes of True Detective: Night Country, is best known for her 2017 crime film, Tigers Are Not Afraid, which earned rave reviews from critics and gained a cult following after its relatively small opening in the US.
“If I can bring back the feeling of two characters that are carving entire worlds of secrets within them, and trying to solve a very sinister crime in a very strange, eerie environment that is America, but it also doesn’t feel like America completely, and then I sprinkled some supernatural in it — I think we’re going to capture the essence of True Detective,” López told TheWrap’s Loree Seitz.
Season 4 of True Detective introduces the franchise's first female detective duo: Liz Danvers (Jodie Foster) and Evangeline Navarro (Kali Reis), former partners who reunite to investigate the disappearance of six men working at the Tsalal Arctic Research Station in the small town of Ennis, Alaska.
During lockdown, López had been developing her own murder mystery scripts when HBO came calling. "I believe they saw Tigers Are Not Afraid, which is very gritty and ultra-real and violent, but at the same time has elements of the supernatural and [is] very atmospheric," she told Matt Grobar at Deadline. "So I think that [they saw] something in that movie, they were like, 'Oh, this could be an interesting point of view for True Detective.'"
On developing her idea for the format created by Nic Pizzolatto she revealed that David Fincher’s Seven was a big influence. Like True Detective it has two different odd-couple characters who come together to solve a mystery.
“I’m sure that was one of the references that informed Pizzolatto’s writing, at least unconsciously, so I was thinking of Seven. It was two detectives, a forgotten corner of America with its own system of culture and rituals, and it just clicked massively with True Detective. It didn’t take a lot of effort.”
The new series shares with the first season an undercurrent of the supernatural, but it also layers in the politics of environmental change, of marginalized communities and, most clearly, of male violence against women.
“The environmental theme came when I started to understand the inner workings of northwest Alaska and the industries and the conflicts in the area,” she said to Grobar.
“You just start to create this town, and the forces that pull energy inside it. Mining is a huge deal in that area of Alaska, and there’s constant conflict around the benefits of a burgeoning energy industry, but at the same time, the damage that it creates in an environment where people need the environment to survive. So, it’s just rich grounds to create the story.”
After focusing on women who had been taken and killed in two of the four films López had previously directed, spotlighting the story of a missing Iñupiaq woman felt like the "natural continuation" of her interests.
“It doesn’t matter if it’s Mexico, the US or Canada … this violence doesn’t care for borders,” López said.
When it came to casting, López knew she wanted at least one of the two main characters to come from a native community, and was impressed by Reis, a professional boxer who broke into acting with the 2021 film Catch the Fair One. Reis is of Cherokee, Nipmuc and Seaconke Wampanoag ancestry.
“I knew that one of the characters had to be indigenous because I don’t believe in bringing agents of justice to fix the situation in the native community. I simply don’t believe in that,” López said. “It was a challenge because there were not indigenous stars in the tradition of ‘True Detective,’ but that needs to change,” she said.
“Now we have a Lily Gladstone [Killers of the Flower Moon] and now we have a Kali Reis,” López said. “This is the year that changes.”
López worked closely with Germany-based DP Florian Hoffmeister (Tár, Pachinko, Great Expectations) on each episode. Hoffmeister told Mia Funk and Halia Reingold at the Creative Process podcast why the story’s wider scope appealed to him.
"It's about the transient nature of life up in the polar North," he said. "Permanent settlement is only possible since the Industrial Revolution, because normally the indigenous cultures were living and communicating with the land in a whole different way."
He describes his experience of filming in a region (Iceland stood in for Alaska) where for months on end daylight is restricted to just a few hours a day.
"Further north you get no sunlight at all. And it was an interesting creative decision to embrace how lighting has a whole different utility and necessity than just regular light. If I come home at night here in Berlin, I might switch on a few lights, but if you live in darkness, your relationship with lighting changes. If you live in darkness, you tend to over-light."
The locations in True Detective: Night Country included an ice rink and a police station, which he lit as if every light were on, “because you’re literally craving light, and you don’t want your workspace during the day to be moody and dark.”
Since the murder mystery genre tends to have moody lighting, this presented an interesting conundrum. “If you go to the supermarket, and it’s minus-20, you will keep your car running while you’re inside. Because if you switch the car off the engine might freeze,” he explained.
“So there’s a whole different way to deal with what we take very commonly as the achievements of our industrialized living environment. I wanted that to be reflected in the lighting. So [scenes] in the police station and ice rink are really bright, basically switched everything on.”
They started prepping the series in summer in Iceland, during which time it barely got dark because of the region’s “endless summer” but finished shooting in almost perma-darkness in the winter months.
“If you light at night in a snowfield, the first thing to burn and [overexpose] will be the snow. So the whole lighting chain outside had to be tackled differently,” he said. “I think there’s some really exciting footage where we shot right on the blink (of darkness) where we can still see and you get the scale of the landscape, but it’s almost disappearing into blackness.”
Hoffmeister also suggests that one of the themes of the show is this feeling of disconnection. "It feels like it's the end of the world because I think you have this disconnection between us and the environment. And the biggest contrast with the indigenous people that used to live there is that obviously they had to live connected with their surroundings, but we don't. Not only are we disconnected from our environment, but we [are] also disconnected from each other."
This season of True Detective was primarily shot using an ARRI Alexa 35 with Panavision Ultra and Super Speeds, Hoffmeister told IBC365.
They also chose to shoot some scenes using a stereo rig — pairing two Alexa Mini LFs using Sigma glass, one sans infrared filter to capture the landscape. In post, they blended both feeds to create more of a sense of depth for some outdoor scenes.
"ARRI has introduced a new feature called Textures allowing you to burn in parts of the look. So we built a LUT, and we took part of the LUT and built a texture, which was then burnt into the image," Hoffmeister explains. ("We" refers to his collaboration with ARRI head colorist Florian "Utsi" Martin.) "I feel this is the closest in terms of workflow you can come digitally to photochemical image manipulation."
In another interview, this time with Filmmaker's Matt Mulcahey, Hoffmeister recalls telling ARRI's Martin that he hoped to achieve "really strong blacks and highlights that [popped] but maintained texture so that the highlights, when you switched on the lights, would almost feel like an electric guitar [wailing] in rock music. I really wanted them to pop. That created quite a steep contrast and I thought in the mids it would have made my life very, very difficult, because obviously the mids are where our faces lie. So, you want to be a bit gentler there. I didn't want to force myself to constantly work with fill light to ease something that I've introduced to the picture. So, Florian built us a Texture that would influence exactly that middle part and would be slightly softer in the mids and also create a little bit of a noise that we would associate with grain. And, yeah, we burned it in."
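Conceptually, what Hoffmeister describes amounts to a luminance-weighted adjustment: ease contrast and add grain-like noise only in the midtones, leaving the strong blacks and popping highlights alone. The Python sketch below is a rough, hypothetical approximation of that idea, not ARRI's Textures implementation.

```python
import numpy as np

def midtone_weight(luma: np.ndarray, center: float = 0.45, width: float = 0.25) -> np.ndarray:
    """Smooth weight that peaks in the midtones and falls off
    toward deep shadows and bright highlights."""
    return np.exp(-((luma - center) ** 2) / (2.0 * width ** 2))

def apply_midtone_texture(img: np.ndarray, softness: float = 0.2, grain: float = 0.01) -> np.ndarray:
    """Ease contrast and add grain-like noise, weighted toward the mids,
    so the steep overall contrast elsewhere is preserved."""
    luma = img.mean(axis=-1, keepdims=True)
    w = midtone_weight(luma)
    softened = img + softness * (0.5 - img) * w  # pull midtone values toward gray
    noisy = softened + grain * w * np.random.randn(*img.shape)
    return np.clip(noisy, 0.0, 1.0)

frame = np.random.rand(1080, 1920, 3).astype(np.float32)  # stand-in for a frame
textured = apply_midtone_texture(frame)                    # "burn in" the texture
```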
Critics have generally hailed the new season as a return to form for the anthology series. “The plot is unforgettable, even if the ultimate, gob-smacking denouement may test your credulity,” Carol Midgley reflected at The Times.
“Confidently helmed, stunningly shot and richly performed, it is spellbinding, bone-chilling and does just about the last thing you’d expect from True Detective four series in: it makes you want more,” wrote Rachel Sigee at Inews.
“Night Country is a brilliant inversion of the men-heavy, heat-oppressed, narratively bloated series that have gone before,” The Guardian’s Lucy Mangan found.
“The Boy and the Heron:” Studio Ghibli Storytelling Goes in a New (Digital) Direction
TL;DR
The darkness and shadows in the latest Studio Ghibli animation are a departure for director Hayao Miyazaki and give a glimpse into the 82-year-old's personal story.
Longtime collaborators talk about working with the legendary anime creator and say that he is open to using digital technology in his animation process.
His eyesight may be fading, but rumors of this being his last work may be premature.
The legend of Japanese animator and manga artist Hayao Miyazaki grows greater with each year, not least because the 82-year-old creator of 2001 Oscar winner Spirited Away is shy about giving interviews.
At the release of his most personal work to date, The Boy and the Heron, some of Miyazaki’s longest serving collaborators consider both the man and his work.
Atsushi Okui, for instance, has been the director of photography for films at Miyazaki’s Studio Ghibli since 1992.
“I’ve worked on a lot of films with Miyazaki, and each time the most important job is creating something that matches what’s inside of his head,” Okui told Gemma Gracewood at Letterboxd.
“So I do what is called the ‘finishing work;’ by the time the material has come to me it already has the imagination of the artists and animators. I have to work out how to bring that all together.
“Whether we can recreate the images inside of Miyazaki’s head, or even if they’re different, as long as we can surpass his expectations then that’s okay. That’s what we’re aiming for.”
Working with Miyazaki for so long does come with an advantage: Okui has learned to anticipate what's on the director's mind.
Okui told Eli Friedberg at The Film Stage: “Because his storyboarding is so articulate, so detailed and meticulous, that — adding to the fact that I’ve been working with him for 30 years — I find it quite easy to tell what he’s aiming at just by reading his storyboards.
“I wouldn’t say I’m always 100% correct in answering to whatever he writes, but it’s not often that he comes back to me with any feedback other than ‘Okay.’ It’s usually quite a smooth process, in that respect.”
The Boy and the Heron draws from Miyazaki’s childhood, a source of inspiration he initially resisted while creating masterful anime like Ponyo and My Neighbor Totoro. It follows the story of a young boy named Mahito who recently lost his mother. Along with a cunning and deceptive grey heron, he journeys to a mysterious world outside of time where the dead and the living coexist.
To emulate the tenebrous aspects of the story, Okui suggested that they should darken the colors of the animation as well.
“With Studio Ghibli pictures, all of the backgrounds are hand drawn with poster color paints, and then we turn that into data,” he explains to Ryan Fleming of Deadline. “When we turn the handwritten art into data, we have the base be the black background that was painted.
“However, we never attempted to make that any darker in digital or any darker in data except for this one. That was the first time we took upon the challenge of dropping the black even blacker, because unless we did that, we felt that we wouldn’t be able to bring forth the darkness that the main boy in the film harbors.
“So that was kind of a departure from the other films that we had done up until then.”
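As a rough illustration of the kind of adjustment Okui describes, the hypothetical Python sketch below pushes scanned values below a pivot down a power curve, so shadows go "even blacker" while brighter tones are untouched. It is a generic shadow-deepening curve, not Studio Ghibli's actual pipeline.

```python
import numpy as np

def deepen_blacks(img: np.ndarray, pivot: float = 0.5, strength: float = 1.6) -> np.ndarray:
    """Push values below the pivot down a power curve (strength > 1
    darkens), leaving tones above the pivot unchanged."""
    out = img.copy()
    low = img < pivot
    out[low] = pivot * (img[low] / pivot) ** strength
    return out

scan = np.random.rand(2160, 3840, 3).astype(np.float32)  # stand-in for scanned background art
darker = deepen_blacks(scan)
```

The curve is continuous at the pivot, so the transition between the deepened shadows and the untouched tones stays invisible.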
The muted color palette at the beginning of the movie “matches and reflects Mahito’s interior and his repressed feelings,” according to the DP, as reported by Variety’s Jazz Tangcay.
The crew balanced the darkness of change and war — always implied, never seen — with a fantastical world filled with vibrant creatures and characters. The explosion of color “was intentional,” said Okui.
Okui has also acted as Studio Ghibli’s head of digital imaging for decades. Over time, he has encouraged the renowned animation house to adopt digital animation tools for a more immersive big-screen experience, bringing the CG team fully into the room for production meetings that had been long reserved for artists.
Incorporating Modern Tech
Despite this, Miyazaki has something of a reputation for being distrustful of digital animation and computer technology. Okui disputes this.
“I do think the image of Mr. Miyazaki’s [technophobia] might be a little exaggerated,” he tells Friedberg. “But his concerns do make some sense: Mr. Miyazaki is an animator, so whatever he can do manually he wants to do. Where we draw the line at Studio Ghibli is with certain things that you can’t do with hand-drawn skills.”
For camerawork capturing background scenery, for example, sometimes it’s easier to employ digital techniques, he adds. “In these parts, if it’s easier to do it digitally than hand-drawn, Mr. Miyazaki won’t have any problem conceding it — indeed, we’ve employed digital technology in The Boy and the Heron as well.”
A Very Personal Film
The new film draws both from Miyazaki's own childhood memories of being evacuated from bombed-out cities and of his tuberculosis-stricken mother often away in care, as well as from Genzaburo Yoshino's 1937 novel, "How Do You Live?"
“This is a film filled with a lot of Miyazaki’s own personal ideas,” Okui told Letterboxd. “Until now it was all about capturing the liveliness and freedom that came with the characters, whereas with this film it’s more about expressing their innermost thoughts.”
Producer Toshio Suzuki is a long-time colleague of Miyazaki, as well as a co-founder and the former president of Studio Ghibli. He relates to Carlos Aguilar of The New York Times that, growing up, Miyazaki had trouble communicating with people and expressed himself instead by drawing pictures.
“I noticed that with this film, where he portrayed himself as a protagonist, he included a lot of humorous moments in order to cover up that the boy, based on himself, is very sensitive and pessimistic,” Suzuki said. “That was interesting to see.”
If Miyazaki is the boy, Suzuki added, then he himself is the heron, a mischievous flying entity in the story that pushes the young hero to keep going.
In contrast, composer Joe Hisaishi, who first worked with Miyazaki on the 1984 feature Nausicaä of the Valley of the Wind, has a strictly professional relationship with him.
“We don’t see each other in private,” Hisaishi told the paper. “We don’t eat together. We don’t drink together. We only meet to discuss things for work.”
That emotional distance, he added, is what has made their partnership over 11 films so creatively fruitful.
“People think that if you really know a person’s full character then you can have a good working relationship, but that doesn’t necessarily hold true,” Hisaishi said.
“What is most important to me is to compose music. The most important thing in life to Mr. Miyazaki is to draw pictures. We are both focused on those most important things in our lives.”
Miyazaki often declares that “this is his last movie” whenever he’s made a new film, but there’s hope for fans yet.
“At first, I could sense that he wanted this to be his final project,” veteran animator Takeshi Honda, who worked as The Boy and the Heron’s animation director, tells BBC Culture. “But I could sense time and again that he’s not finished, that there are other things that he wants to do.”
Speaking through a translator, Honda cites Miyazaki’s penchant for suggesting stories to adapt. “Sometimes he would just come to me and say, ‘Listen, this novel is really interesting, you should read it!’ and I was like ‘What is this all about? What is he trying to make me do?’ Moments like that made me doubt his intention to retire.”
Studio Ghibli Vice President Junichi Nishioka is even more forthright, telling Gracewood: "He's still coming into the office every day and thinking of ideas for his next film."
He added to BBC Culture, "I don't think he's ever going to really let go. He will have a pencil in his hand until the very day that he dies."
Past and Present Intersect in Steve McQueen’s “Occupied City”
TL;DR
Academy Award-winning director Steve McQueen's film "Occupied City" is a four-hour documentary that provides two simultaneous portraits of Amsterdam: one current, the other a record of atrocities during the Nazi occupation of the Netherlands in the early 1940s.
“Occupied City” is the second recent feature film, following “The Zone of Interest,” to address the Holocaust without resorting to overused archival imagery.
McQueen based “Occupied City” on a book written by his wife, the Dutch historian and filmmaker Bianca Stigter. They describe it as a collision of the ghosts of Amsterdam’s past and the reality of the city’s present.
Occupied City is the second recent feature film, following The Zone of Interest, to address the Holocaust without resorting to overused imagery. This four-hour feature documentary by British director Steve McQueen concerns the Nazi occupation of Amsterdam during World War II, but it doesn't use archive footage or talking heads, nor does it dramatize any scenes.
The film is based on Atlas of an Occupied City: Amsterdam 1940-1945, a historical encyclopedia written by McQueen’s wife, the Dutch historian and filmmaker Bianca Stigter.
“Bianca had written this extraordinary book, and it’s all her research over the last 20 years or more,” the director explained to A.frame. “It’s not the first book you’d ever think we’d translate into a movie. It’s not an obvious choice.”
Using the text of Atlas as narration, McQueen (who won Best Picture with 2013’s 12 Years a Slave) juxtaposes the history of the city and explanatory narration by Melanie Hyams with footage of life in Amsterdam today, which he shot over the course of several years beginning in 2019 and through the pandemic lockdowns.
“What I wanted was, as you would do in a city, you get lost,” McQueen told IndieWire’s Filmmaker Toolkit podcast, adding that the film was a bit like an English garden. “Unlike a French garden, which is all about the avenues; it’s very symmetrical, very formal. An English garden [has] more to do with wandering and the contemplating and lots of ideas come from those places of wandering and pondering.”
Stigter describes the film as more of a free wandering through the city, while the book is set up more practically, like a guidebook.
In one scene, the elderly owner of an apartment where Occupied City filmed shows the crew her country line-dancing. Set against Hyams' narration of what happened there during the war, her joyful dancing hints that she, too, might have her own story of the Nazi occupation.
“There’s something excessive about the movie because — besides from what you see, you also think, ‘What do these people [we’re seeing] have in their heads [from that time]?’” Stigter told IndieWire.
McQueen, who lives in Amsterdam with his wife, found the experience of living in a city that had once been Nazi occupied an unsettling one.
“My daughter’s school was once an interrogation center. Where my son went to school was a Jewish school, so these things were in my every day,” he told A.frame. “When it’s sinking into your pores, you start thinking about it. Coming from London, not having grown up in an occupied city but being here now, it felt like I was living with ghosts. It’s almost like an archaeological dig. This is recent history within the last 85 or 90 years, and I thought this could be fascinating. It is two existences: My presence and another presence.”
Initially, McQueen thought he'd find some archive footage of Amsterdam in WWII to project on top of the present-day footage, but then decided to use narration based on Stigter's text and to merge the two things together.
“There’s optimism in [Hyams’] voice, even though there was a dispassionate sort of description of what was going on,” he told NPR’s Asma Khalid. “And that was because I didn’t want to manipulate the audience. It was about the audience bringing the information, receiving the information for the first time.”
He described the process of shooting on 35mm — his favored medium — as a ritual. "It's so precious, this footage, and it actually adds to the tension of being careful about how you approach the moment," he shared with the audience at the New York Film Festival.
“It was shooting without a tightrope, in a way,” he added to A.frame. “Young people today shoot digitally; they spray the whole area, shooting for 60 hours and cutting it down to half an hour. You can’t do that with film. The process of making a film and working with Lennert Hillege, the DP, the sound people, and others, it was a beautiful ritual every time we took the camera. I think that was extremely helpful in capturing things, because everyone was very focused.”
Addressing the length of the film, McQueen said it couldn’t be told in an hour and a half. “It needed that contemplation, needed meditations to sort of get into the psyche of the cinema experience, and that time was very important for us,” he told NPR.
Speaking again to A.frame, Stigter said, “It’s essential to have ways to bring history to the fore. We have documentaries, books, and feature films, and this is trying to tell you things about the past in a different way. That’s also why the length is important. It turns it more into a meditation or an experience than a history lesson.”
McQueen, who began his career making video installation art, is also preparing a “36-hour sculptural version” as an art piece. “There are 36 hours of edited footage,” he informed A.frame. “From that 36 hours of edited footage, we took out these four hours, because making a feature film is a very different experience than making the sculptural element of it. Certain things are repeated in that, but you don’t want to do that in a feature film. In some ways, after a particular moment, it condenses itself, and then you decide what you want to keep in and what you want to take out to make it a certain kind of journey.”
Occupied City ends with a bar mitzvah ceremony because it was important to McQueen and Stigter to show the persistence of Jewish life in Amsterdam.
Speaking at the New York Film Festival, Stigter said, "For me the last scene is also very important to show something of contemporary Jewish life in the city, and that was a very beautiful and hopeful conclusion for the movie.
“I often think watching a movie is like a religious experience,” McQueen added to A.frame. “You’re trying to create meaning in what you see. In this case, the more you know, the less you know.”
He continued this theme with NPR, saying, “When you go to the movies, people try to connect the dots and try to make sense of things. But the lessons learned from this situation is that nothing makes sense. How can you even fathom or sort of get to an understanding of how, for example during this war, six million people died. Try and make sense of that.”
Jaron Lanier: Is Data Dignity the Answer for Regaining “Control” of AI?
TL;DR
Jaron Lanier, an influential technologist who also works for Microsoft, explains what it means to apply data dignity ideas to artificial intelligence.
Lanier argues that large-model AI can be reconceived as a social collaboration by the people who provide data to the model in the form of text, images and other modalities.
In thinking of AI this way, he suggests new and different strategies for the long-term economics of AI, as well as approaches to key questions such as safety and fairness.
Jaron Lanier, an influential computer scientist who works for Microsoft, wants to calm down the increasingly polarized debate about how we should manage artificial intelligence.
In fact, he says, we shouldn’t use the term “AI” at all because doing so is misleading. He would rather we understand the tech “as an innovative form of social collaboration.”
He set out his ideas in a piece published in The New Yorker, “There Is No AI,” and elaborated on them further in a conversation recorded for University of California Television (UCTV), “Data Dignity and the Inversion of AI,” co-hosted by the UC Berkeley College of Computing, Data Science, and Society and the UC Berkeley Artificial Intelligence Research (BAIR) Lab.
Lanier is also an avowed Humanist and wants to put humans at the center of the debate. He calls on commentators and scientists not to “mythologize” a technology that is actually only a tool.
“My attitude doesn’t eliminate the possibility of peril: however we think about it, we can still design and operate our new tech badly, in ways that can hurt us or even lead to our extinction. Mythologizing the technology only makes it more likely that we’ll fail to operate it well — and this kind of thinking limits our imaginations,” he argues.
"We can work better under the assumption that there is no such thing as AI. The sooner we understand this, the sooner we'll start managing our new technology intelligently."
A Tool for Social Collaboration
So if the new tech isn’t true AI, then what is it? In Lanier’s view, the most accurate way to understand what we are building today is as an innovative form of social collaboration.
AI is just a computer program, albeit one that mashes up work done by human minds.
“What’s innovative is that the ‘mashup’ process has become guided and constrained, so that the results are usable and often striking,” he says.
"Seeing AI as a way of working together, rather than as a technology for creating independent, intelligent beings, may make it less mysterious — less like HAL 9000," he contends.
Delivering on “Data Dignity”
It is hard but not impossible to keep track of the input of humans into the data sets that an AI uses to create something new. Broadly speaking, this is the idea of "data dignity," a concept doing the rounds in the computer science community as a way out of the impasse over making AI work for us, not against us.
As Lanier explains, “At some point in the past, a real person created an illustration that was input as data into the model, and, in combination with contributions from other people, this was transformed into a fresh image. Big-model AI is made of people — and the way to open the black box is to reveal them.”
Such “data dignity” appeared long before the rise of big-model AI as an alternative to the familiar arrangement in which people give their data for free in exchange for free services, such as internet searches or social networking. It is sometimes known as “data as labor” or “plurality research.”
“In a world with data dignity, digital stuff would typically be connected with the humans who want to be known for having made it. In some versions of the idea, people could get paid for what they create, even when it is filtered and recombined through big models, and tech hubs would earn fees for facilitating things that people want to do.”
He acknowledges that some people will be horrified by the idea of capitalism online, but argues that his strategy would be a more honest capitalism.
Nor is he blind to the difficulties involved in implementing such a global strategy. It would require technical research and policy innovation.
Yet if there's a will, there will be a way, and the benefits of the data-dignity approach would be huge. Among them: the ability to trace the most unique and influential contributors to an AI model and to remunerate those individuals.
“The system wouldn’t necessarily account for the billions of people who have made ambient contributions to big models,” he caveats. “Over time, though, more people might be included, as intermediate rights organizations — unions, guilds, professional groups, and so on — start to play a role.”
People need collective-bargaining power to have value in an online world — a loophole Lanier doesn't address. Here his humanist side gets the better of him, as his imagination runs toward liberal idealism.
He continues, “When people share responsibility in a group, they self-police, reducing the need, or temptation, for governments and companies to censor or control from above. Acknowledging the human essence of big models might lead to a blossoming of new positive social institutions.”
There are also non-altruistic reasons for AI companies to embrace data dignity, he suggests. The models are only as good as their inputs.
“It’s only through a system like data dignity that we can expand the models into new frontiers,” he says.
So it is in Silicon Valley’s interest to remunerate humans whose data they collect, in order to then create better and bigger AI models that have an edge over competitors.
“Seeing AI as a form of social collaboration gives us access to the engine room, which is made of people,” says Lanier.
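As a thought experiment (Lanier's essay doesn't specify a mechanism), a data-dignity system might pair each model output with provenance records and split a royalty among the most influential traced contributors. Everything in the Python sketch below, from the names to the allocation rule, is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Contribution:
    contributor: str   # hypothetical person or rights org credited for source data
    influence: float   # hypothetical estimate of influence on one model output

def split_royalty(contributions: list[Contribution], royalty: float, top_n: int = 3) -> dict[str, float]:
    """Toy allocation rule: pay the most influential traced contributors
    in proportion to their estimated influence on a single output."""
    top = sorted(contributions, key=lambda c: c.influence, reverse=True)[:top_n]
    total = sum(c.influence for c in top) or 1.0
    return {c.contributor: royalty * c.influence / total for c in top}

provenance = [
    Contribution("illustrators_guild", 0.40),
    Contribution("jane_doe", 0.25),
    Contribution("stock_archive", 0.10),
    Contribution("ambient_web_text", 0.05),
]
print(split_royalty(provenance, royalty=1.00))  # top three split the full royalty
```

Note how the "ambient" contributor falls outside the top slots, echoing Lanier's caveat that billions of small contributions would not be accounted for at first.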
Context, Not Apocalypse
He doesn't deny there are risks with AI, but neither does he subscribe to the most apocalyptic end-of-species scenarios of some of his peers. Addressing the issue of deepfakes, the misuse of AI by a bad actor, he gives a stark example of how data dignity might come to the rescue.
Suppose, he says, that an evil person, perhaps working in an opposing government on a war footing, decides to stoke mass panic by sending all of us convincing videos of our loved ones being tortured or abducted from our homes. (The data necessary to create such videos are, in many cases, easy to obtain through social media or other channels.)
“Chaos would ensue, even if it soon became clear that the videos were faked. How could we prevent such a scenario? The answer is obvious: digital information must have context. Any collection of bits needs a history. When you lose context, you lose control.”
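One way to read "any collection of bits needs a history" is as a chain of provenance records attached to a piece of media, where each record commits to everything before it; lose or break the chain and the bits lose their context. The Python sketch below is purely illustrative and tied to no real standard, though content-credential efforts such as C2PA pursue a similar goal.

```python
import hashlib, json, time

def add_provenance(history: list[dict], action: str, actor: str) -> list[dict]:
    """Append a hash-chained record so tampering with any earlier
    record invalidates every later link (a toy provenance chain)."""
    prev_hash = history[-1]["hash"] if history else ""
    record = {"action": action, "actor": actor, "time": time.time(), "prev": prev_hash}
    record["hash"] = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return history + [record]

def verify(history: list[dict]) -> bool:
    """Recompute each link; the media 'loses context' the moment
    its chain no longer verifies."""
    prev = ""
    for rec in history:
        body = {k: v for k, v in rec.items() if k != "hash"}
        if rec["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

chain = add_provenance([], "captured", "camera_a")
chain = add_provenance(chain, "edited", "studio_b")
assert verify(chain)  # a faked video would arrive with no verifiable history
```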
“May December” and Its Many Mirrors: How the Cinematography Tilts and Shifts Perception
TL;DR
Netflix’s “May December,” the new film from director Todd Haynes, examines how little people understand about how they appear to others.
The story of real-life teacher Mary Kay Letourneau, who married the much-younger object of her desire after serving time for second-degree rape of a child, serves as a springboard for the film.
As with his other films, Haynes employs stylistic techniques to remind audiences that they are watching an artistic construct.
The film was shot using ARRI’s new Alexa 35 by DP Christopher Blauvelt rather than longtime Haynes collaborator Edward Lachman.
Blauvelt appreciated the increased latitude of the Alexa 35, which allowed him to shoot in bright daylight without overexposure.
Few people really know what they sound or look like or how they appear to others. That is among the essential ideas that director Todd Haynes’ new film on Netflix, May December, examines. The film concerns Gracie (Julianne Moore), a woman who has married and raised children with Joe (Charles Melton) — a pairing that began when she was already married and in her 30s and he was just 13. We quickly learn that their relationship landed her in prison and resulted in a national scandal similar to that of real-life teacher Mary Kay Letourneau, who married the much-younger object of her desire after serving time for second-degree rape of a child. May December is clearly meant to summon audiences’ memories of Letourneau’s situation as its springboard.
The story kicks off when Elizabeth (Natalie Portman), an actress set to portray Gracie in an indie film, spends time with Gracie, Joe and their children in an attempt to absorb useful information about the character. Gracie sees this as an opportunity to shape Elizabeth's performance into the sympathetic portrayal she feels is truthful, while Elizabeth is focused on gathering any detail, the more sordid the better, that she can use in her performance.
As is almost always the case with Haynes’ filmmaking, the director isn’t looking to make the audience feel like they’re watching something “real,” or to forget that they’re watching a movie. Instead, he searches for ways to remind audiences that they are watching an artistic construct, whether by using the stylistic techniques of a ‘50s melodrama in Far from Heaven, the approach of a sensationalist exposé, as in his well-known short Superstar, in which a Barbie doll stands in for singer Karen Carpenter, or a mid-century romance in Carol.
As Haynes told American Cinematographer back in December 2002, following the release of Far from Heaven, which was stylized to look like a Douglas Sirk “women’s picture” of the 1950s: “I think the best movies are the ones where the limitations of representation are acknowledged, where the filmmakers don’t pretend those limitations don’t exist. Films aren’t real; they’re completely constructed. All forms of film language are a choice, and none of it is the truth. … We’re not using today’s conventions to portray what’s ‘real.’ What’s real is our emotions when we’re in the theater. If we don’t have feeling for the movie, then the movie isn’t good for us. If we do, then it’s real and moving and alive.”
Among the techniques he uses in May December to achieve this type of aesthetic distancing: scenes framed by mirrors; coastal Georgia locations captured through large amounts of diffusion; grain laid in during post for additional texture; and a seemingly discordant music track, repurposed from 1971's torrid romance The Go-Between, from director Joseph Losey and writer Harold Pinter.
Distancing techniques notwithstanding, the film also avoids traditional filmmaking tropes that cue the viewer about how to feel about what they're watching. Could there be some truth in Gracie's notion that her relationship was borne of true love and evolved into normal family life? Is Elizabeth's attempt to cut through Gracie's public front a search for truth or just another form of exploitation? In that sense, May December could be said to be this year's Tár, with final judgment ultimately left to the viewer.
As Haynes summarizes in his director’s statement from Netflix, “All lives, all families, are the result of choices, and revisiting them, probing them, is a risky business. But it’s hard to think of more volatile romantic choices than these, and all the more so when so many defenses have been called upon to shut out such unanimous contempt and judgment from the world.”
He adds, “But as Elizabeth observes and studies Gracie and her world, and gets to know her husband Joe, her reliability as narrator begins to falter. The honest portrait she hopes to erect, her own investment in revealing truths, becomes clouded by her own ambitions and presumptions, her own denials.”
Haynes originally intended to shoot the film with frequent collaborator Edward Lachman, but those plans were disrupted when the cinematographer of his award-winning features, including Far from Heaven, Carol and I'm Not There, suffered an injury that prevented his participation. Cinematographer Christopher Blauvelt, who knew both Haynes and Lachman well, stepped in during the brief prep period and handled the cinematography for the rapid-paced 23-day shoot in and around Tybee Island, just outside of Savannah, Georgia.
In conversation with playwright Jeremy O. Harris at the New York Film Festival, Haynes recounted that he had no compunction about working with Blauvelt, who'd shot several films for respected indie director Kelly Reichardt, including First Cow, Showing Up and Certain Women. "Kelly Reichardt is a dear, dear friend and one of the great independent filmmakers working in the world today. She, her last, what, five, six films were all shot by Chris Blauvelt… I've known Chris for years because he worked under Harris Savides, who was one of the greats."
It had already been determined that May December would be shot with ARRI’s new Alexa 35, which offered some features that would be conducive to the project’s fast pace and limited lighting crew.
"I was immediately interested in this camera because [I understood] that the latitude was even more than the [previous Alexas], and I had never used it before," Blauvelt explains to Nick Newman at The Film Stage. "When I went to test it at Keslow Camera in Atlanta… we were in a warehouse with a giant door, and I had a person in there that I was shooting for my tests, with some string lights and a chart and the other things you have at a camera test. But I had this door open, so I had sunlight out of the back, and I kept opening and opening and opening that door and [the camera] maintained [definition in] the clouds, like, forever!" he says.
"It felt like I couldn't clip. I couldn't make it overexposed! So that was what I needed. And I was really happy to have that much latitude going into [this] film, knowing that I would be stuck in bright daylight without the tools to slow things down. It was a tremendous help."
As Blauvelt explained to Newman, “I think there’s a big interest in finding these older, beautiful [lenses] that we used to use because the digital can be super-clinical. You know high definition is not flattering if you shot everything clean and right to your sensor. You’re looking, now, at pores on skin and it doesn’t lean into a ‘cinematic look’ — like from the past — that we all are inspired by and love. So, there’s people that have been rehousing these old lenses to match all of our gears and make them more user-friendly.”
Of the 1930s-era Baltar glass, he says, "You can't crack them open because it's toxic — like poisonous gas — because they've been encapsulated for so long, and the materials they used [were], like, pine tar to make the gears work. And so what they do now is: they cover them. Like the rehousings are just built over the old lenses. So, you can look at a lens that's built to be this big, to be user-friendly with big marks and everything for the focus and aperture, and you look inside, and the lens is, like, this big. [Spreads hands]
“There’s a characteristic of each lens, right? Like, we tested Cooke Panchros; we tested Super Baltars, normal Baltars, Cooke S4s — which are the most contemporary ones I would use. But even still: I say that and it’s funny because those lenses were made 35 years ago. [Laughs] But those, to me, are as sharp as I’ll get because I’m always trying to find a way to sort of disarm the eye for perfection of digital.”
Blauvelt spoke to Vanity Fair’s David Canfield about Haynes’ desire to avoid crisp, clinically clear images for his brand of visual storytelling.
When he got to the location, he recalls, “Todd was showing me all these images and there was this inherent sea-worn glass, this sort of haziness on things because of the ocean air. I could tell that was just a natural occurrence. It reminds me of Todd. Todd has this old, really shitty phone, and he would take a photo of a set with it, and it would already look like that. [Laughs]
“So they were showing me images already discolored — it just became this throughline. This very texturized filmic look comes from a lot of the inspirations that Todd had already had. To me, we were all on the same page in regard to finding these places and these frames and the way we lit.”
Further elaborating to ICG Magazine, Blauvelt says, “We wanted it to feel texturized. We wanted to give the feeling of this place where the windows are covered in a marine layer, and there’s all this haze, and sunlight warming things, and leaving moisture between window panes. We embraced it and never cleaned a window. We were shooting through screen and brush, which helped to give a filmic look.”
ICG Magazine’s Valentini also reports that Blauvelt made use of heavy diffusion from Schneider Optics Radiant Soft filters in front of the lens, in strengths from 1/4 up to 5, sometimes stacking more than one for the right effect.
Another feature of May December is the use of mirrors to frame the action, which both underscores the characters’ limited ability to see themselves and others accurately and adds another layer of distance between the viewer and the characters.
At times, the camera takes the position of a bathroom mirror in which Elizabeth studies Gracie’s approach to applying makeup. In one scene, Elizabeth delivers an extended monologue into a mirror, again with the camera pointing at her. Shots like these simply treat the fourth wall as a mirror, with the actors performing directly to the lens. Other scenes that actually show mirrors within the shot were more complex to execute.
In a scene that has been widely referenced in articles and reviews, Gracie’s daughter Mary (Elizabeth Yu) tries on dresses for her high school graduation. Gracie and Elizabeth sit outside the store’s dressing rooms and, in an extended oner, the camera points directly at the two women sitting side by side, framed to show Elizabeth flanked by Gracie on one side and Gracie’s reflection on the other. The dramatic point of the scene is Gracie’s offhanded, crushing response to her daughter modeling the sleeveless dress she wants. But acquiring the shot as envisioned presented the problem of hiding a camera pointed directly at a mirror.
To accomplish this, Blauvelt, the camera and crew were placed behind a two-way mirror — one that is a typical mirror on one side and clear on the other. Haynes explains to Vanity Fair, “The challenge was how to hide the camera, and which angles the mirrors were going to be; when you have any mirror on any set, it’s difficult because you’re hiding lights and stands and everything. I always stare at the little vanity over Natalie’s shoulder because that’s where the camera is hidden. Also, it’s great conceptually. When I watch the film and see how it works and integrates into our multiplicity of what’s happening within the story, it makes so much sense. Your eye can go in any direction. We play it mostly as a oner, and so it relies a lot on their performances, which are just immaculate.”
Haynes elaborates to Adam Chitwood at TheWrap, saying his initial idea for the shot was much simpler, but it evolved from there. The performers are surrounded by mirrors and the camera had to be positioned just right so it wouldn’t catch any errant reflections of the set or crew. It was one of the most complicated scenes in the entire shoot, and Blauvelt said it was a true team effort to nail it.
“It’s not exclusive to me, or even the departments, it’s like a collective that goes all the way back to the genius of the writing, and the characters, and Julianne Moore and Natalie Portman and Elizabeth Yu,” he continues. “When that happens, and all the pistons are firing and you know that we got there from everybody really understanding the intent and building something like that, it’s the best feeling you can have as a filmmaker.”
Cinema auteur Yorgos Lanthimos’s “Poor Things” combines cutting-edge virtual production methods with old-school filmmaking techniques.
January 3, 2024
“The Zone of Interest”: Ways to Film the Unfilmable
TL;DR
Director Jonathan Glazer went to great lengths to pursue an immersive naturalism in his screen depiction of the Holocaust, “The Zone of Interest,” removing the artifice and conventions of filmmaking.
The filmmakers gave their actors freedom to improvise by rigging multiple cameras for long takes with the actors often unaware if the cameras were even rolling.
Cinematographer Łukasz Żal also deployed a thermal imaging camera to juxtapose black and white scenes of energy and hope with the bleak world of color.
Writer-director Jonathan Glazer refuses to be drawn into comparisons between his new film, The Zone of Interest, and other Holocaust screen depictions.
“I don’t like getting involved in a genocide-off,” he told Giles Harvey of The New York Times, commenting on repeated attempts by the press to get him to talk about why he felt his approach differs from the likes of Schindler’s List, Son of Saul or the documentary Shoah.
He went on to clarify that his decision to tackle this highly sensitive subject was rooted in his family history. Glazer’s grandparents were Eastern European Jews who fled the Russian Empire in the early 20th century. He also went to a Jewish state school in London.
The British director had not yet finished Under the Skin in 2013 when he told his longtime producer, James Wilson, about his idea for the next project.
“He did not want to do another, quote-unquote, ‘Holocaust movie.’ Jon has a very small filter when it comes to doing something that’s never been done before,” Wilson told Rolling Stone’s David Fear. “But neither of us knew what that something would be.”
Glazer’s idea was galvanized in 2014 by reading about the latest novel by the late Martin Amis, The Zone of Interest, a story told from the viewpoint of a fictional Nazi commandant who ran a concentration camp in Nazi Germany and lived next door to it.
In Amis’ book, the Dolls were loosely based on Rudolf Höss, the real-life commandant of Auschwitz, and his family. Glazer’s first big call was to revert to the originals. Before starting work on the script, he spent two years researching the Hösses, during which he came across a staggering data point: The garden of their villa shared a wall with the camp. What feats of denial, he wondered, would it have taken to live in such proximity to the damned?
“I wanted to dismantle the idea of them as anomalies, as almost supernatural. I wanted to show that these were crimes committed by Mr. and Mrs. Smith at No. 26,” he told the NYT.
“I looked at the darkening world around us, and had a feeling I had to do something about our similarities to the perpetrators rather than the victims,” Glazer elaborated to Rolling Stone. “When you say, ‘They were monsters,’ you’re also saying: ‘That could never be us.’ Which is a very dangerous mindset.”
Instead, he began to see the Hösses as “non-thinking, bourgeois, aspirational-careerist horrors” who’d simply normalized evil.
“There were two givens on the film. … That it would be in its native languages — German and some Polish, obviously — and that we would film it in Poland. And Jon really wanted to film it in the real house,” producer James “Jim” Wilson told Gold Derby’s Charles Bright.
Wilson describes visiting the Höss family home and experiencing its proximity to Auschwitz (which he says is “kind of holy”) as generating “one of the lightbulb moments.”
“The Höss house is still there, and the proximity to the camp is striking,” Glazer told The Hollywood Reporter’s Scott Roxborough. “I imagined myself at one point as a prisoner, imagining hearing the sounds of the Höss children splashing and laughing in their swimming pool on the other side of the wall. The idea of the film became about that wall, about how that wall is a direct manifestation of how we ourselves as human beings compartmentalize the things we were happy to indulge in and surround ourselves with and the things — sometimes horrible things — we want to disassociate ourselves from. That became the axiom of the whole endeavor.”
Because the Höss home is the heart of the film, Glazer tasked production designer Chris Oddy with building a fully functional set. Oddy deemed the actual building’s “80 years of decrepitude” insurmountable and instead opted to renovate a nearby building of a similar style, according to Roxborough’s reporting. Glazer mandated that the end result should look as if it had been “built yesterday.”
In fact, Oddy says, “The only scene shot in the original house comes late in the film, where Rudolf walks from his office through the real underground tunnel that connects the camp with his home.”
Another key set piece, the family garden, required a full-year head start to adequately mature before filming began, Oddy says.
The production goal was an immersive naturalism, and Glazer went to great lengths pursuing it, telling Vanity Fair’s David Canfield that he sought to “remove the artifice and conventions of filmmaking that lead you down a road which didn’t feel relevant here: screen psychology. The way that cinema fetishizes, glamorizes, empowers—in this context, none of those were appropriate.”
Instead, as Fear notes in Rolling Stone, The Zone of Interest uses suggestion and sound — what Glazer refers to as “ambient evil” — to conjure up how human beings could look upon the methodical killing of other human beings as background noise in their lives rather than a profound tragedy.
Oddy’s recreation of the Höss home at Auschwitz was rigged with 10 hidden cameras that would roll simultaneously.
“Cinema is at odds with atrocity,” Glazer explained to the NYT. “As soon as you put a camera on someone, as soon as you light them, or make a decision about what lens to use, you’re glamorizing them.”
Cinematographer Łukasz Żal (Ida and Cold War) made some initial studies of the house. Glazer told him they were “too beautiful.” He wanted the images to seem “authorless.”
The two-time Oscar-nominated Polish DP explained to John Boone at Aframe, “You have to forget completely all the tricks you’re carrying with you as a cinematographer and all your knowledge and everything you were taught in your career. The whole idea of this film was just to put the cameras in the places where you can see what is happening in the most objective way, and that’s it. Very simple.”
This method gave the actors — principally Christian Friedel (Babylon Berlin) as Rudolf and Sandra Hüller (Anatomy of a Fall) as his wife, Hedwig — the freedom to improvise; they were often unaware if the cameras were even rolling.
Hidden Cameras
“There was nobody on set [except] the actors,” Żal informed Aframe. He remotely monitored from a shipping container outside the house. “The actors were just living their lives, and because we had 10 setups at the same time and were shooting everything, after one or two hours we had all the setups we needed. We had continuous action, and the sun was changing, the light was changing, the clouds were coming and going, the dog was running through the house, and everything was captured on the 10 cameras.”
Sony VENICE cameras were chosen partly because their sensor blocks can be detached from the camera body (via Sony’s Rialto extension system), making it possible to hide the small camera heads all over the set, including in the garden, which is the Hösses’ pride and joy.
“The whole idea was to create a space for actors to just be there, be in the situation and for us to witness with no interruption,” Żal told fellow DP Mandy Walker in an interview for the ASC. “We were able to attach those cameras to the walls, hide them in cardboard, in the bushes, try and cover them with fabric. We were just placing them in different spaces in the garden, in the house. Everything was hardwired. The five focus pullers were in the basement of the house, and we were in the shipping container behind the wall. That was our mission control and we were just shooting the whole scene without interruption, continuously.”
The film intersperses the desaturated color of the Höss family life with stark black and white footage of a girl leaving apples in a work camp for the Jewish prisoners. This effect was captured at night using a special thermal imaging camera, as Żal explained to Vanity Fair.
“We spent a lot of time adjusting this for filming in terms of focus and image and also software, because it’s not so easy to use this camera and get the image we would like to have.”
Glazer explains that the camera is recording heat, not light, adding, “there’s something very beautiful and poetic about the fact that it is heat, and she does glow. It reinforces the idea of her as an energy.”
In post, they used an algorithm to upscale the footage from lowly 1K to 4K, matching it as closely as they could to the 6K footage from the VENICE.
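The article doesn’t specify which upscaler the post team used. As a rough illustration of the task, here is a minimal Python sketch (OpenCV, with a hypothetical file name) that pushes a 1K grayscale thermal frame to a 4K-class raster; a learned super-resolution model would recover more apparent detail than this plain interpolation:

```python
import cv2  # OpenCV: pip install opencv-python

def upscale_frame(frame, target_size=(4096, 2160)):
    """Upscale a low-resolution thermal frame toward a 4K raster.
    Bicubic interpolation stands in for whatever proprietary
    upscaler the production actually used."""
    return cv2.resize(frame, target_size, interpolation=cv2.INTER_CUBIC)

# Hypothetical usage: read a 1K frame, upscale it, write the result.
frame = cv2.imread("thermal_frame_1k.png", cv2.IMREAD_GRAYSCALE)
if frame is not None:
    cv2.imwrite("thermal_frame_4k.png", upscale_frame(frame))
```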
While the visuals deliberately refrain from showing the inside of the extermination camp, the audience is not spared the harrowing sounds emanating from behind the wall. Neither are the Hösses, even though for them life seems to continue as normal.
Sound designer Johnnie Burn recalls to Aframe the original directive from Glazer. “He said, ‘It is going to be mandatory that we don’t go in the camp, and we don’t see the atrocities. We’re going to just hear them. It will all be sound.’ I panicked. I started reeling, because I realized that would be a leap of faith. And also, where are we going to get the sounds from? Jon said, ‘Well, that’s what you’re going to figure out.’”
Burn compiled a 600-page research document and spent the year before filming began and throughout the shoot and into post-production building the sound library with his team. According to Aframe, he recorded the industrial rumble of textile workshops and incinerators, boots marching on gravel, period-accurate gunfire, and death itself.
“It’s the sound of murder, and it has to be credible but we didn’t want to be sensational,” Burn said. “Anything sensationalized in the sound wouldn’t work, so understanding the difference between someone acting pain and actually being in pain at point of death, that’s to do with literally the cadence of the way people scream.”
The idea was to create an immersive experience in which performances could dissolve into people simply going about their daily routines, with the cast free not only to explore the environments but to lean into boring, mundane everyday life — a contrast to the horror happening just beyond their backyard wall.
“Some takes were up to 45 minutes,” Sandra Hüller told Rolling Stone. “You didn’t know what was being filmed from what angle, or from where. The crew and monitors were in a separate building, so if they didn’t tell us to cut, we’d just restart a scene and it would end up being completely different.”
It was a concept that Glazer hoped to make explicit with the film’s ending, in which viewers are momentarily dropped into Auschwitz in the 21st century — a flash-forward that the director says came from his experience wandering around the grounds one morning and noticing the cleaning crew picking up litter and vacuuming in front of the exhibits.
“It was like they were tending graves,” Glazer says. “You know, Höss is long gone. He is ash. But the museum, and the importance of such museums, they are still there.”
Oscar-winning director Steve McQueen’s four-hour documentary feature, “Occupied City,” provides two simultaneous portraits of Amsterdam.
January 2, 2024
Jim Louderback: Want To Grow Your Creator Business? Get a COO
TL;DR
Creator COO will be the new big job category for the creator economy over the next decade.
Dedicating a part- or full-time team member to operations not only improves a creator’s short-term outcomes, but will be crucial in building a business that lasts.
Business mentor Jim Louderback shares how to determine whether to hire an operations expert and, if so, which tasks to offload.
He recommends considering what a creator could do with additional capacity, what the creator’s long-term vision is, and what structure needs to be in place for growth.
Want to know more? See Jim Louderback in person at NAB Show in the Creator Lab, a new show floor experience centered on the creator economy. Register now!
As the creator economy matures, the business structures of individual content creators are starting to look a lot like those of conventional startups, with formalized roles for positions like CEO, CFO and even HR.
The starting point is that every creator is building a media business and that by delegating responsibility to employees the creator can capitalize on the chance to generate more revenue than they could manage alone.
“I think it fundamentally comes down to how do you scale your business as a creator, right? You can take it only so far as an individual with maybe a couple of part-time people working with you,” Louderback says.
At the same time, he said, “The revenue options have multiplied. There are a lot of opportunities to do things outside of the platforms to create revenue, whether it’s books, or merchandise, or courses. And the number of people taking advantage of it have multiplied.”
Because more and more creators and influencers sit at the center of the marketing funnel, the question becomes how to take advantage of that and build a bigger business.
Louderback also points to another dynamic: the lifecycle of a creator. Like elite athletes, a creator may have a sweet spot of only a few years in which to maximize their potential with an audience.
“Five to seven years, maybe, for a YouTuber. For TikTok it might be more like one to three years. But whatever it is, as a creator with some success, you’re thinking, how long can I last? And what do I do afterwards? Can I build something that exists without me having to be at the center of it all the time?”
He argues that, if creators are not already doing so, they should be casting around for a COO to help structure their business and move it forward.
“When you realize that you need to scale and you can’t do it all yourself, you have to ask yourself the hard questions. What are the things that you like doing? And what are the things you don’t like doing? What are the things that you’re comfortable delegating and what are the things you’re not comfortable delegating? Can you find people that fit those roles that you trust?”
Louderback acknowledges that most creators are “lone wolves, outdoor lions in the middle of the savanna” who find it “really hard to open up and bring people in that they trust.”
Nonetheless, Louderback urges creators to think of themselves as CEOs of their own startup.
“As a CEO your job is to hire the right people to make sure you have enough funding, to keep the company going and to set the direction of the business. That’s the job as a CEO whether of a traditional tech startup, or as CEO of a creator organization.”
He advises creator-CEOs to look at all the aspects of a classic business and how they apply to them: marketing, sales, finance, production, HR, workspace infrastructure and technology infrastructure.
“Put them all on a board and think about how much time and percent of your work week [this takes] and think to yourself, if I had somebody else doing this, what more could I do? And how much more revenue could I bring in? And then think about your vision for where you want to go as a creator business? Where you want to be in three years and five years and 10 years? And think, do you think you’re gonna get there with what you have? And if the answers to all of those points to ‘I need help,’ then that’s when you probably need to bring in somebody in an operational role to manage those things for you.”
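To make Louderback’s whiteboard exercise concrete, here is a back-of-the-envelope Python sketch with entirely made-up numbers (the task list, percentages and 50-hour week are illustrative, not from the interview):

```python
# Tally the share of a hypothetical 50-hour week spent on each
# business function, then see how many hours delegating the
# operational ones would hand back to the creator.
WEEK_HOURS = 50

time_split = {            # percent of work week, invented numbers
    "content production": 40,
    "marketing": 15,
    "sales/brand deals": 15,
    "finance": 10,
    "HR/contractors": 10,
    "tech infrastructure": 10,
}

delegable = {"marketing", "finance", "HR/contractors", "tech infrastructure"}

freed_pct = sum(pct for task, pct in time_split.items() if task in delegable)
freed_hours = WEEK_HOURS * freed_pct / 100

print(f"Delegating operations frees {freed_pct}% of the week "
      f"({freed_hours:.0f} hours) for content and growth.")
```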
The operational leader of the company essentially needs to make sure that they are expanding the CEO’s ability to get the job done. The COO role can evolve with the business, growing from a fractional role to a full-time position or even multi-person team.
Of course, the best operators won’t come cheap, and hiring one will probably mean a creator giving away equity in their business. Louderback maintains this will be worth it when creators realize the increased revenue and value a COO brings that would otherwise be left on the table.
“It’s a bridge to get over because, in the end, it’s not equity in you, it’s equity in the business,” he said. “Having a strong operational partner for a creator is going to give you a much better likelihood of success [to millions of dollars in annual revenue]. It’s a very untapped opportunity and there is a lot of wealth being created.”
He predicts billion-dollar companies emerging in the creator economy, with creators as the leads or the focus of those companies.
“I think we will see financial structures and capital put to work to help build these billion-dollar companies.”
Show organizers say 2024 trend highlights include AI, “streaming universes,” virtual production, and the creator economy. The event will also emphasize live events and advertising.
New this year will be the Creator Lab destination, curated by Jim Louderback and Robin Raskin, and the Propel ME startup program.
Creator Lab is focused on the creator economy and features interactive workshops and networking events. Programming will cover topics including AI’s role in production, the use of video for corporate messaging and building a resilient infrastructure for the content ecosystem.
Propel ME will launch as a digital forum for startups (here on NAB Amplify!) in January. Then, during the April Show, Propel ME will be transformed into a live experience.
“Creator Lab and Propel ME epitomize the spirit of innovation we will be spotlighting at the 2024 NAB Show, providing a dynamic intersection where creativity meets technology,” said Chris Brown, executive vice president and managing director, NAB Global Connections and Events.
“Amid the swiftly transforming media landscape, it is important that the Show provide unique activations like these to act as a guide to the tech and companies that are redefining the horizons of the entertainment industry.”
Returning programming includes the Devoncroft Executive Summit, Post|Production World, Streaming Summit, TVNewsCheck’s Programming Everywhere and Core Education Collection, which features the Broadcast Engineering and Information Technology conference, Small and Medium Market Radio Forum and the Focus on Leadership Speaker Series.
Is It Technology Dread or Imminent Apocalypse? (Both?) Asking Sam Esmail
TL;DR
Writer-director Sam Esmail discusses his new film, “Leave the World Behind,” which sees Julia Roberts, Ethan Hawke and Mahershala Ali facing the end of the world.
The film makes a strong commentary on society’s dependence on technology, which is only going to grow as we continue to incorporate AI into our lives.
Esmail includes cheeky digs at Tesla and at Netflix, the studio that funded the film.
From the dystopian science fiction thriller Mr. Robot through Amazon’s military mystery Homecoming to his new film, Leave the World Behind, writer-director Sam Esmail’s thematic obsession is the impact of technology on society.
The film examines the reliance we have on technology as an apocalyptic series of events cuts off all communication.
“I’m not a technophobe,” Esmail insists in a Google Talk moderated by Josh Lanzet. “I think technology is agnostic, it has no morality to it. It’s the human side that I’m more fascinated with. I really do think that it’s our sort of complicity, or how we use tech that will, in the case of the film, kind of offer a cautionary tale of what could happen to our world if we go one way or the other with it.”
Can we still have a functioning community without technology? he asks.
“Ultimately, technology is a double-edged sword,” he said during an interview on the ReelBlend podcast produced by CinemaBlend. “When I think about… the positives, it gives us access to information, to people, to media, to content that we want to explore. I think it’s a tool like anything else [and] it’s what we do with it.”
Based on Rumaan Alam’s 2020 novel, Leave the World Behind is set mainly in a country house outside of New York City, where a couple played by Julia Roberts and Ethan Hawke travel with their children for a weekend getaway. On their first night there, two strangers, played by Mahershala Ali and Myha’la, arrive at the door, declaring they are the owners of the house and asking to be let in, citing a blackout in the city. Distrust and paranoia ensue as Esmail uses the tropes of the disaster movie to explore relationships of race and class.
The Towering Inferno, Earthquake and The Day After Tomorrow were among the influences, but the idea that touched a nerve was how people can lose sight of their common humanity in the face of a crisis.
“It’s pretty relevant today given what’s going on in the world,” Esmail told Matt Zoller Seitz at Vulture. “The other thing that interested me is that this book does the inverse of what a typical disaster film does. The disaster elements tend to be the center of the story in disaster films. The characters tend to be secondary. Here, I could invert that process and be with the characters and have the disaster element exist more in the distance. That instantly felt more authentic to how humans would experience a crisis like that.”
Esmail read the book during lockdown when the idea that people can easily lose sight of their common humanity in the face of their own danger was all too real.
“But prior to reading the book I had this idea percolating in the back of my head about trying to construct a sort of disaster thriller centered around a cyberattack,” he told Brenna Ehrlich at Rolling Stone. “Because I think cyberattacks — even though they’re out in the public consciousness — there’s something ominous but equally mystifying about them.”
Classic paranoia thrillers like The Parallax View and North by Northwest were other touchstones, the latter providing inspiration for a scene in which Mahershala Ali’s character runs from a crashing plane.
“It’s not very subtle,” Esmail admits to Rolling Stone. “In all honesty, I don’t think there’s a movie made in contemporary times that doesn’t show some influence by Hitchcock. I think he’s essentially invented modern-day film grammar, but clearly, his work was looming large over the film.”
We also learn from Vulture that Esmail cast Ali in part because he thinks of the actor as a modern-day Hitchcockian leading man. “The prototype was Cary Grant or Jimmy Stewart in Hitchcock’s films. They are an Everyman. They’re not five steps ahead, like a superhero, but they’re half a step ahead. They’re savvy enough to size up any situation. Mahershala has that.”
The director also discusses the cinematography of Leave the World Behind, in particular the camera movement that seems to glide through the architecture, reminiscent of The Shining, as well as of an iconic Hitchcock film.
“That was a huge influence,” he admits, talking about Kubrick’s psychological horror film on the ReelBlend podcast. “I love big camera moves, especially when it’s relaying something the audience doesn’t know. It’s like what you’re saying: It’s almost as if the movie’s a little possessed, and you’re the demon looking down at those people.
“It’s that great shot in Rear Window: Jimmy Stewart’s asleep and the camera’s moving, and then you’re looking across the street seeing the thing he’s not seeing, and then you realize, ‘Wait a minute — who am I? What’s happening? Who’s seeing it?’ It’s very unsettling. Ever since I saw that film as a kid, I’ve always loved the idea of a camera being its own sort of person.”
Esmail’s script exhibits an eerie synchronicity with current events. For instance, he made the movie before conflict escalated in the Middle East, yet there’s a startling scene in which Ethan Hawke’s character is pursued by a drone that drops leaflets written in Arabic that say “Death to America” — and later, another character hears about similar messages, this time in Korean.
“Honestly, I tried to follow the guidelines out of the playbook of how coup d’etats actually work, especially when it’s a foreign actor,” he told Rolling Stone. “Propaganda misinformation is an old tactic. I just took that and magnified it and heightened it to this situation. It plays on your own biases and your own beliefs about who our enemies are, and I always love it when you can remove the barrier between the audience and your protagonist.”
Turning on Tech
Another scene features a number of Teslas that turn on their self-driving functions to block the roadways. Esmail says he didn’t seek permission from Elon Musk for that.
“Look, I wrote it in the script. I asked my amazing props guy, Bobby, to bring a bunch of Teslas out on the street. We shot the scene. I edited it in post, I showed it to Netflix, I crossed my fingers. And to this day, no one has said anything to me. So yeah, I’m hoping the movie comes out and no one will say anything.”
What doesn’t get lost in a digital attack are physical media like vinyl, DVDs and VHS (though you’d still need a source of electricity to play them). These become a source of comfort and nostalgia towards the end of the picture. But how did that sit when making the movie for a streaming service?
Esmail wasn’t afraid to poke at the hand that feeds him. He claims to be a “great proponent” of physical media, but also notes that one of the advantages of streaming services like Netflix “is that you really have access to any movie from across history at your fingertips.”
“So there’s always a conflict, because I’m a proponent of theatrical. I’m a proponent of DVDs and Blu-rays. But I’m also not mad at a streaming service that lets me see all the classics at a moment’s notice.”
Nonetheless he includes a cheeky shot that he doesn’t think “the Netflix folks” have noticed: “In the very end, you see Rose’s thumb hovering over the remote, and it goes past the Netflix button to hit ‘play’ on the DVD player.”
Notes From a Former President
The film’s exec producers are Netflix stablemates Barack and Michelle Obama, who were more involved in production than merely lending the cachet of their name.
“He’s a huge movie lover and a huge fan of the book,” Esmail confides to ReelBlend. “He really was committed to making this into a great movie. And he was giving me notes at the script stage, multiple drafts, including post rough cuts. It’s kind of surreal, because I do think he is one of the most brilliant minds on the planet, and to get his insight on the disaster element, characters, the theme. It was the highlight of my career.”
It turns out Barack Obama is a fan of Mr. Robot, to the extent that Esmail got a call from the White House when Obama was president.
“We were in the middle of the second season, and it hadn’t aired yet. And we were cutting the episodes. And someone from the White House contacted our office and said he’d love to get rough cuts of the episodes. Imagine that.”
“Oppenheimer” director Christopher Nolan says his new film offers lessons on the “unintended consequences” of technology.
December 20, 2023
Posted
December 19, 2023
Evan Shapiro Amplified: What to Expect for M&E in 2024
Watch “Evan Shapiro Amplified: What to Expect for M&E in 2024.”
TL;DR
“Evan Shapiro Amplified: What to Expect for M&E in 2024” examines the pivotal trends disrupting traditional business models in the new user-centric era, and provides actionable strategies for industry players to adapt and thrive in a rapidly changing media ecosystem.
The advertising landscape within the Media & Entertainment industry is poised for transformation as we venture into 2024, says Shapiro, with digital media and connected TV at the forefront.
User-generated content is increasingly being considered on par with premium content in terms of quality and effectiveness for advertisers, Shapiro says.
Mergers and acquisitions will continue to be a major trend, with Big Tech companies using these strategies to solidify their hold on the media ecosystem and reduce competition.
The streaming wars have morphed into the “battle of the bundles,” as companies like Amazon and Alphabet create comprehensive service packages that cater to the evolving hierarchy of consumer needs.
As 2024 approaches, the Media & Entertainment industry stands at the dawn of a new user-centric era, shaped by relentless innovation and shifting consumer dynamics. In this ever-evolving landscape, media universe cartographer Evan Shapiro dissects the pivotal trends disrupting traditional business models and provides actionable strategies for industry players to adapt and thrive in the rapidly changing media ecosystem.
Our latest installment of Evan Shapiro Amplified, “What to Expect for M&E in 2024,” delves into Shapiro’s predictions for the upcoming year. His insights, rooted in a deep understanding of industry trends and consumer behavior, offer a glimpse into what 2024 holds for media professionals — from the rapid shift to digital in advertising and evolving measurement challenges, to potential big-name mergers and acquisitions, strategic service bundling, and Big Tech’s growing dominance. Watch the full discussion in the video at the top of the page.
The Digital and Connected TV Advertising Revolution
The advertising landscape within the Media & Entertainment industry is poised for transformative change as we venture into 2024, says Shapiro, with digital media and connected TV (CTV) at the forefront.
“From an advertising standpoint, this year was the year that digital and connected television caught up” to, and likely even surpassed, traditional television on a month-by-month basis, he says.
He notes the significant shift in advertising revenues, highlighting the growing dominance of digital platforms. “But when you look at where the money is going, it is not going back to where it was pre-lockdown,” he emphasizes. “The money that’s been sitting on the side isn’t going to the same place as it was distributed to equally in 2019. It’s going to new places, and fewer places.”
Another shift, says Shapiro, is that user-generated content is increasingly being considered on par with premium content in terms of quality and effectiveness for advertisers.
“Creator-led content is increasingly moving to the big screen, and the two ecosystems are commingling on connected televisions,” he says. “The ad buyers themselves now see creator-led content on par from a quality and environment and — crucially — efficacy standpoint, as they do professionally produced Hollywood content.”
It’s important to understand “that most creator-led content actually is professional content,” Shapiro continues. “The people who create the most successful creator-led content are professional creators, often within larger ecosystems.” This perception has led audiences to shift to connected TV platforms like YouTube, which is now the single largest channel on TV, he notes, “but so are the ad dollars because both communities now see that as the destination for their resources.”
Big Tech Continues to Dominate in 2024
The dominance of Big Tech companies in M&E is expected to deepen in 2024, Shapiro predicts. He describes these entities as “trillion-dollar Death Stars,” strategically employing mergers and acquisitions to reduce competition in the marketplace. This trend, he underscores, transcends mere power consolidation, aligning closely with the strategic service and product integration seen in the Amazon Prime model.
“This year has been pretty tumultuous,” Shapiro notes, pointing to the significant increase in job losses within media and tech at a scale not seen since the height of the lockdown. “Part of that is just a reassembling of the industry itself. And part of that is companies preparing themselves for sale.”
Looking ahead, Shapiro anticipates several substantial mergers and acquisitions in the M&E ecosystem. “I think a lot of companies that find themselves at a fraction of their former valuations are going to be open to being subsumed by major corporations. I think Big Tech will solidify its hold on the media ecosystem, through either acquisitions or through, basically, the reduction of competitive forces in the marketplace.”
As a result, he warns, “these combinations are going to create even more job losses in 2024 in the media ecosystem.”
But in an increasingly Big Tech-dominated world, Shapiro reminds us, “content is still king. Big Tech is the throne, but content will remain at the heart of the ecosystem.”
The media ecosystem “can’t survive by Big Tech alone,” he explains, a fact that Big Tech companies already understand all too well. “You have to understand the rules of the game, and you have to understand what your leverage is in your ecosystem, your ability to deliver a very specific audience to all the platforms that you’re going to perform your content across.”
The Battle of the Bundles
The streaming wars have morphed into what Shapiro calls “the battle of the bundles.” This shift towards a more integrated approach is best exemplified by Amazon Prime, which aligns its media services to serve the diverse and evolving needs of consumers and their “hierarchy of feeds.”
The supremacy of the traditional “triple play” bundle of phone, cable TV, and internet is over, Shapiro declares. “We need to replace the high-margin value system created around video as a marketing hook for the triple play. And the best way to do that is Amazon Prime,” he advises.
“The companies that are able to do this are going to win in the user-centric era where the competition is for total attention, and total mindshare.”
Alphabet serves as another model of successful consumer engagement, Shapiro points out. “They’re currently operating the only growing MVPD with YouTube TV,” he notes, highlighting the company’s leading position as the top video platform on both mobile and connected TV, in addition to being one of the world’s top three music services and possessing “a great sports strategy with NFL Sunday Ticket.”
Challenges and Opportunities for 2024
As the Media & Entertainment industry marches into 2024, it faces a landscape rife with both challenges and opportunities in the user-centric era. Video games, in particular, show potential for explosive growth.
“One of the fastest growing segments in the ad economy right now is in gaming,” Shapiro notes. “Most of that is mobile,” he continues, pointing out that a new Publishers Clearing House survey of 68,000 consumers found that more than half of people over 25 are gamers who play either daily or multiple times per week. “Most of those are on mobile, most of those are women. And most of the revenues generated in the mobile gaming ecosystem come from ad dollars. It’s a very effective ad environment and it’s growing very quickly.”
Shapiro advises media companies to look for extensions on the business they’re already doing with new forms of revenue and commerce. “Not only what your merchandise is, but how do you get rewarded for the products you’re selling for your ad partners,” he says. “So there’s a ton of different things you can do. But most of them aren’t necessarily stealing more share from your neighbor. A lot of times it is growing the revenue per user within the universe that you currently have.”
Media universe cartographer Evan Shapiro examines the pivotal shift to a user-centric era of media, supported by new consumer research.
January 5, 2024
Posted
December 18, 2023
How Erik Messerschmidt Post-Produced His Cinematography for “The Killer”
TL;DR
Cinematographer Erik Messerschmidt details his latest collaboration with David Fincher on “The Killer,” featuring Michael Fassbender as an assassin with sociopathic personality traits and an attention to detail that leaves nothing to chance.
Featuring an avocado-colored LUT, exquisite scene management and meticulous coverage, Fincher’s edict for “The Killer” was always to control the pace.
Multiple Paris interiors were constructed on sound stages in New Orleans in a series of three-walled sets that the actors were able to walk through.
The production shot practically whenever the outcome could be controlled, but lens flares and other digital effects were created during post-production.
A momentous fight sequence appears to have been captured using handheld cameras, but the footage actually had de-stabilization applied in post.
NAB Amplify caught up with cinematographer Erik Messerschmidt just as he was about to fly to the Camerimage film festival in Poland, where Ferrari, his first film with director Michael Mann, was in competition. “An extraordinary experience, once in a lifetime,” was his on-the-spot reaction to the question, “How was it?”
But we wanted to talk to him about The Killer, a Netflix movie with an appearance in selected theaters at very selective times. Most people would wait for the stream and live with the Internet compression artifacts for the treat of a Fincher film, this time about a man who kills for a living. Cue Michael Fassbender, with sociopathic personality traits and an attention to detail that leaves nothing to chance; some reviews suggested that this man was a depiction of Fincher himself.
If you have seen previous films or television shows from the Fincher/Messerschmidt duo, especially 2018’s Mindhunter, you would be in a comfortable place from the get-go of The Killer: an avocado-colored LUT, exquisite scene management, and meticulous coverage. “Is this all you?” the DP was asked.
“It’s a thing that David and I do together. I enjoy the process of camera direction; I view it as sort of my principal job, really. It’s thinking about the structure of the film and of each scene. Every director’s interaction in terms of coverage and camera direction is different. It’s the first thing David and I discuss: structure and pacing. It’s almost an editorial conversation in terms of what we’re going to provide Kirk [Baxter, the editor] and how each scene breaks down in terms of the pace,” he said.
“Then we watch the rehearsal, and I watch what he’s doing with Michael and what Michael’s doing. We look for the moments we need: the wide shots, the single setups, and what we need to address them. It’s quite scene-specific, but the edict on this movie was always to control the pace.”
When watching a Fincher movie, the joy is giving in to that control and mastery. For instance, when the killer is preparing for a hit, the pace differs from when it all goes wrong, and then that comforting LUT and algebraic camera direction deforms into something less exact.
“When we’re in the killer’s space, the camera is precise and classic. When his world falls apart and he’s no longer in control, then the camera follows suit,” says Messerschmidt.
But any comfort you may feel, especially in the first 20 minutes of the movie, is a ruse and preparation for the ride to come. Messerschmidt explains their methodology, “We’re not and never are pointing the camera at action. We’re — especially with David — quite concerned about using the frame to provide information for the audience. Those things can exist at the edge of the frame, at the center of the frame, and the relative depth sort of correlates to their importance,” he says.
“We’re quite cautious about the art direction and the composition of each shot with consideration about how we’re spoon-feeding that information to the audience. It’s like, ‘You need to see this now, so we’ll include it in the frame; you need to see this reaction, so it’ll be in a close-up; you need to see what he’s looking at, so you’ll be in a point of view,’” the DP continues.
“That all comes from a place of blocking, really. The way that David sets it all up is quite holistic, and we now have a shorthand. I can see what he’s doing with blocking, so I can see the POVs and close-ups, so we’re generally in agreement.”
Without spoiling what comes next, the film stays with the killer almost all the time, but doesn’t empathize with him at any time. He is just a part of the tension that is Fincher’s most important gift to the audience.
In production terms, The Killer is as complicated as Fincher movies get. Ideas are suggested, constructed, and reconstructed differently if they don’t work. In the opening scenes, for instance, the apartment that the killer is watching isn’t real and was constructed thousands of miles away. There was simply nothing in Paris that would work for the director’s vision.
Messerschmidt explains how they found the look they were after. “We’d gone to Paris looking for a location to do it all practically. We looked for a penthouse apartment with a vantage point across the street that could be the killer’s lair, and we didn’t find it. We didn’t because we needed windows large enough to see all of this action clearly. The decision was made to build the apartment across the street; the final scene is an assembly of three different locations. The point of view when he’s looking out of the window at the cafe and all the actions on the ground is all real, and it is a square in Paris. The exterior facades are plates shot in Paris from the same vantage point. So we did a nine-camera setup looking out of this window that captured all that action and those plates, so we had matching light and a lighting reference,” he details.
For the interior of the apartment, a set was built in New Orleans. “That’s where Michael’s action existed, and the window was real, looking out to blue screen. The penthouse apartment where the target is was a build on stage as well, on a different soundstage, and it was a series of three-walled sets built together so the actors could walk through.”
During post-production, the set was placed on top of exterior plates that had already been shot, blending in the façade in front. “The facade was entirely digital, and the only thing that was real was beyond the windows — in fact, there was no glass either in those windows; all the glass was CG,” he says.
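The layering Messerschmidt describes boils down to the standard “over” composite. As a minimal, hypothetical Python sketch (NumPy; the array names and stand-in plates are illustrative, not the production pipeline):

```python
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb):
    """Classic 'over' composite: a foreground element (say, a CG
    facade with its alpha matte) layered onto a background plate.
    All arrays are float32 in [0, 1]; fg_alpha has shape (H, W, 1).
    """
    return fg_rgb * fg_alpha + bg_rgb * (1.0 - fg_alpha)

# Hypothetical usage with random stand-in plates:
h, w = 1080, 1920
facade = np.random.rand(h, w, 3).astype(np.float32)
matte = np.random.rand(h, w, 1).astype(np.float32)
paris_plate = np.random.rand(h, w, 3).astype(np.float32)
final = over(facade, matte, paris_plate)
```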
“We had to previsualize the action as they were all shot at separate times. The target’s movements had to be worked out because Michael’s points of view were all relevant to the edit.”
The telescopic sights were practical long-lens shots, but Messerschmidt had the scope sent to post-production so the team could see firsthand how its optics behaved. “You then get that kind of warping around the edges, the drift of the crosshair and all the things that it really does.” This reference let the team recreate, over imagery captured practically on a sound stage, exactly what the killer sees through the scope.
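As a toy illustration of the kind of scope artifacts being copied, here is a hedged Python/OpenCV sketch that composites a circular vignette and a drifting crosshair over a frame (the real work also involved edge warping, which would need a lens-distortion remap on top of this; everything here is illustrative):

```python
import cv2
import numpy as np

def scope_overlay(frame, drift=(0, 0)):
    """Composite a simple rifle-scope look: a circular vignette plus
    a crosshair that can drift off-center from frame to frame."""
    h, w = frame.shape[:2]
    cx, cy = w // 2 + int(drift[0]), h // 2 + int(drift[1])

    # Black out everything outside a centered circle.
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.circle(mask, (w // 2, h // 2), min(h, w) // 2 - 10, 255, -1)
    out = cv2.bitwise_and(frame, frame, mask=mask)

    # Draw the drifting crosshair.
    cv2.line(out, (cx - 200, cy), (cx + 200, cy), (0, 0, 0), 2)
    cv2.line(out, (cx, cy - 200), (cx, cy + 200), (0, 0, 0), 2)
    return out
```

Feeding the drift a slow random walk per frame would mimic the crosshair wander Messerschmidt mentions.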
It’s no surprise that so much deconstruction was done on the movie. Fincher’s style has always been to find a way of getting the shots he wants, and this production is full of post-produced shots wherever doing it practically would leave too much to chance.
The lens flares employed for some of the street scenes came as a surprise to anyone who has watched Messerschmidt’s work with Fincher. He hadn’t done them before, and the flares looked different but beautiful. “Were they anamorphic flares?” we ventured to ask.
“I’m not a fan of anamorphic; in fact, I’ve never shot an anamorphic film; I’ve always shot spherical,” he replied. “I’ve shot some anamorphic commercials but have also been a bit disappointed, to be honest. It seems a little bit silly to shoot anamorphic with a digital camera. But I do sometimes like the qualities that anamorphic flares produce, but I never get them when I want them and get them when I don’t want them.”
There was some digital flare work in Mank, Messerschmidt said, “but it was very subtle. The Bell and Howell lenses of that era had particular flare characteristics that we wanted to copy. The CG artists got good at it, and I told David, ‘What if we play with that a little bit?’ So we would intentionally put bright things in the frame, practicals, or sun hits on roofs of cars where we would do an elaborate CG flare.”
Working in post-production, “I would mark it up and say that we needed a blue streak here, which should be very aggressive, and the guys would paint it in and make sure it was just right. It was an enjoyable experiment. It was cool to go in there and art direct them,” he said, adding, “I also rarely use diffusion on a lens, but there were moments in the Dominican Republic that I thought it would be interesting to try.”
For scenes the filmmakers wanted to appear very humid, they once again went digital, using a DaVinci Resolve plugin called Scatter. “It’s a bit of post-production cinematography. I would look to do that again. I think it’s all about control; the fear with that decision is working with a team you don’t trust to implement it the way you want, and that is not a fear I have with David. I’m pretty involved with that process,” he explains.
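For intuition about what such a haze pass does, here is a minimal Python/OpenCV approximation (not the Scatter plugin itself; the blur radius and strength are arbitrary knobs): bright areas are blurred and mixed back in so highlights bloom and blacks lift, the way humid air scatters light.

```python
import cv2
import numpy as np

def add_haze(frame, strength=0.25):
    """Cheap atmospheric-haze approximation: blur the image to mimic
    scattered light, mix it back in, and lift the blacks slightly."""
    f = frame.astype(np.float32) / 255.0
    glow = cv2.GaussianBlur(f, (0, 0), sigmaX=25)  # scattered light
    hazed = (1.0 - strength) * f + strength * glow + 0.05 * strength
    return np.clip(hazed * 255.0, 0, 255).astype(np.uint8)
```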
“There are certain effects that you have to do optically, but I’m not nostalgic about it like some people. I don’t believe you must be incredibly dogmatic about some things; it’s about the result.”
Veering back to safer ground, and to more practical territory, we inquired about the momentous fight scene toward the movie’s end, with its superb handheld camera work.
“That was quite an undertaking and a culmination of many people’s work, starting with the stunt coordinator. It was, of course, heavily choreographed and not something that was shot off the cuff. The thesis was that we wanted the audience to stay geographically centered in the space, so we never deliberately disorientated them; they always knew where they were in the house,” he said.
“Each room in the house has a distinct color palette that we key up in the beginning so you understand that you’re in the kitchen, gaming room, or bathroom. We go through the process of this fight, and we revisit all these spaces in reverse until he arrives back where he dropped the gun.”
However, Messerschmidt says, “there’s very little real handheld in that sequence. Almost all of that is post de-stabilization. It’s nice because we can go in there and art direct the level of shake by saying we can slow down a little bit here, be quicker here. Sometimes I find that aggressive handheld is very hard to judge on the set and keep consistent across the five or six days of shooting that it took.”
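The production’s actual de-stabilization tool is unspecified, but the idea of “art-directable” shake can be sketched in a few lines of Python/OpenCV: generate a smoothed random walk of offsets, so the motion wanders like an operator rather than jittering like noise, then warp each frame by its offset. The intensity and smoothing values here are arbitrary knobs.

```python
import cv2
import numpy as np

def shake_offsets(n_frames, intensity=4.0, smoothing=0.9, seed=7):
    """Generate a smoothed random walk of x/y offsets so synthetic
    shake wanders like a handheld operator instead of jittering.
    `intensity` is roughly in pixels; both knobs are adjustable."""
    rng = np.random.default_rng(seed)
    offsets, v = [], np.zeros(2)
    for _ in range(n_frames):
        v = smoothing * v + (1.0 - smoothing) * rng.normal(0.0, intensity, 2)
        offsets.append(v.copy())
    return offsets

def apply_shake(frame, offset):
    """Translate one frame by a (possibly sub-pixel) offset."""
    h, w = frame.shape[:2]
    m = np.float32([[1, 0, offset[0]], [0, 1, offset[1]]])
    return cv2.warpAffine(frame, m, (w, h), borderMode=cv2.BORDER_REPLICATE)
```

Because the offsets are just numbers, an editor can scale them up for one beat of a fight and damp them for the next, which is the kind of control Messerschmidt describes.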
For Messerschmidt and Fincher, this philosophy of only delivering the best experience for the audience is like a mantra. “It’s fun to see what we can do and get away with,” Messerschmidt concludes. “To be honest with you, anyone can shoot with a handheld shaky camera pointed at people fighting, and there’s a long history of success with that technique in action movies. There is a playbook for that. We wanted to see if we could do it differently.”
“Bring the audience to a place they are not used to being, close to this assassin,” explains DP Erik Messerschmidt.
December 16, 2023
Posted
December 15, 2023
Editor Shelly Westerman Solves the (Post Workflow) Mysteries for “Only Murders in the Building”
Editing a mystery can be a delicate business. A reaction shot held a few frames too long can be a giveaway; held too short, and the eventual payoff could feel too obvious. This is challenging enough in a single TV episode or a movie, but even more so across a ten-episode arc.
But that’s the kind of work the editors of “Only Murders in the Building” are responsible for. Season 3 is streaming now on Hulu.
The popular series, starring Steve Martin, Martin Short and Selena Gomez, is about to finish its third season, and editors Shelly Westerman, ACE; Peggy Tachdjian, ACE; and Payton Koch not only have to keep the reveals coming amid the show’s often absurdist humor and moments of pathos and drama, but they also have to tackle some major musical numbers for the show-within-the-show “Death Rattle Dazzle,” at the heart of the season’s story arc.
In October, Westerman spoke with writer and film historian Bobbie O’Steen on stage at NAB Show New York’s Insight Theater. The duo discussed the “meticulous art of film editing.” Watch their full conversation (below).
The work of editing the series involves close collaboration among executive producers John Hoffman (the showrunner), Dan Fogelman and Jess Rosenthal; the writers; the directors and actors; and the trio of editors, each of whom takes responsibility for particular episodes.
The editing process starts before cameras roll, when they receive that week’s script and virtually attend the table read in New York. Westerman explains, “Once you hear the words spoken, you hear the rhythms, you start to get an idea in your head, and you can begin visualizing an episode.”
This is followed by a concept meeting, featuring all the department heads. “We talk at a high level about the look and tone of the episode, and then we have a tone meeting specifically with the episode director and executive producers, and we go through the script scene-by-scene and talk about what’s happening.
“The director will propose all their questions and editors will chime in with questions, so there are a lot of very helpful discussions that happen early on.”
Each director helms two episodes, which are cross-boarded and shot in New York, usually with six or seven days allotted for each.
“Once they’re shooting one of your episodes,” Westerman explains, “we’ll start to get the dailies. What we see might match everything we’ve talked about to that point, or they might have discovered things on set that made the scenes go a very different way. But at least all the preparation lets us start with a grounding from which to work.”
Editors are given roughly two days to get their editor’s cut together and sent off to the director. “Then on a half-hour show like ‘Only Murders,’” she says, “the directors get about two days to work with the editor, before we need to turn that [cut] over to John and the other executive producers for their feedback.”
DRILLING DOWN TO THE WORKFLOW
Westerman, who a few years ago was adamantly opposed to the idea of remote editing (“I always said you must be in the room for creative collaboration,” she’d frequently asserted), has completely revised her feelings on the subject. She acknowledges that she wouldn’t even have been able to take this job if it weren’t for the ability to work while also spending time in Florida caring for her parents. In fact, all three editors and each of their assistants work remotely.
While they all work remotely, none of them is actually doing the work on a local computer or storing any of the media where they are. The Avid workstations and media all sit securely inside the facility Pacific Post, where they are networked together via Avid NEXIS shared storage.
Westerman, who works on a Mac Pro “trashcan” wherever she sets herself up, uses Jump Desktop to access her hardware and the network, as do the other two editors, though they happen to work on Mac minis.
When dailies are ready, Westerman’s assistant, Jamie Clarke, is the first one notified. He will also have access to camera and sound reports and script notes, and he will QC the material to ensure that it’s all in sync and there are no technical issues.
Then Clarke organizes the scenes within Westerman’s system. Anything shot with more than one camera (most scenes in the show are covered by two, and some of the musical numbers by three) goes into Group Clip projects, and he loads footage into bins to her specifications (each editor has their preferred method of organizing material).
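As a toy illustration of that organizing logic, here is a hypothetical Python sketch that buckets dailies by scene and take so every camera angle of a take lands together, loosely mirroring what an assistant does when building multicam groups (Avid’s real Group Clips sync by timecode; all names below are invented):

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Clip:
    filename: str
    scene: str   # e.g. "12A"
    take: int
    camera: str  # e.g. "A", "B", "C"

def build_groups(clips):
    """Bucket dailies by (scene, take) so every camera angle of a
    take lands in one group, ready to be watched together."""
    groups = defaultdict(list)
    for clip in clips:
        groups[(clip.scene, clip.take)].append(clip)
    return {k: sorted(v, key=lambda c: c.camera) for k, v in groups.items()}

# Invented example dailies:
dailies = [
    Clip("A001C001.mxf", "12A", 1, "A"),
    Clip("B001C001.mxf", "12A", 1, "B"),
    Clip("A001C002.mxf", "12A", 2, "A"),
]
for (scene, take), angles in build_groups(dailies).items():
    print(scene, take, [c.camera for c in angles])
```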
“I don’t get the scenes in order,” Westerman says, “but I’ll start to build sequences pretty quickly, so that I can see how it’s flowing. By the time they finish shooting the episode, I’ve got a rough sketch of the acts put together. Then, for my two-day editor’s cut, I’m really trying to polish and tighten.”
MAINTAINING A SEASON-LONG MYSTERY
Westerman received a detailed briefing from Hoffman before production began on the season. This provided her with a broad overview of all the episodes, “so we had some idea of what was going to happen as we got into the season.”
But that isn’t the only way to proceed, Westerman acknowledges. “Peggy said she didn’t want to know who the killer was,” she recalls. “She felt it helped her with the surprises because she was surprised, as well.”
Regardless, in a story propelled by constant revelations and clues, there needs to be an ongoing overview. Hoffman and the other executive producers, Westerman says, “will sometimes look at a version we present and say, ‘Hey, we need to see this in Episode Three because we’re going to refer to it in Episode Six.’ So then, we go back and fine-tune the episode.
“The moment that Ben Glenroy [Paul Rudd] falls down the elevator shaft and the [three lead characters] run out of the elevator, turn back around to see what happened and Mabel says, ‘Are you fucking kidding me?’ — that scene comes back into play in a later episode where she’s looking at a hanky Ben is holding. I didn’t use that and one of the executive producers said, ‘The hanky’s important. We have to see her looking at it at that point.’”
There is also a moment where Charles (Martin Short) gets into a fight on the fateful opening night that kicks the season off.
“They shot the fight scene for use in Episode Nine, but then it turned out I needed to use some of it in Five, and Payton needed some of it for Six. But Peggy hadn’t cut Nine yet, so we all wound up pulling from her footage, using bits and pieces from the fight scene that worked for our episodes. Later, we went back to make sure we were all in sync with one another in terms of what we were using from the scene.”
This back-and-forth happens frequently, particularly for the recaps that show important moments from previous episodes. “One of us might do a recap and another one will say, ‘You’ve got to change that. That isn’t in the show anymore.’”
POLISHING PICTURE AND SOUND
Long gone are the days when editors turned in rough cuts with “insert effect here.” The final sound editing and VFX creation will continue after picture is locked, but directors and producers expect the editors to deliver scenes that are complete, and work as is. So much of that work commences while Westerman and the other editors are still sketching out scenes.
“The schedule is so accelerated compared to a feature,” the editor notes, “so as I’m going along and stringing together and polishing scenes, we’re also doing sound work, adding score, adding VFX. We’re doing all of that together so that by the time I get to the end of my editor’s cut, I’m hopefully in pretty good shape with a polished cut to present to the director.”
Editing assistants are generally skilled at basic VFX work, such as wire removal, and the show has a VFX artist on staff from the beginning of production who can step in and handle quite a lot of the work as it comes up.
“There’s a scene where one of the characters is in a basement threatening Charles and Mabel with a blowtorch,” Westerman recalls. “Of course, they couldn’t shoot with a real flame for safety reasons, so the VFX artist handled that.”
Sound Supervisor Matt Waters gets involved early on to build a wide variety of sound effects. As the season progresses, there are more and more sounds that can be re-worked and re-used. Fairly early in the season, the editors already had access to quite a few sounds of the theater where much of Season Three takes place. SFX such as specific doors opening and closing, and hallway background sounds were accessible to the editors and sound editors.
CUTTING MUSICAL NUMBERS
While the musical numbers are meant to be dramatic and advance the plot, they need to be approached differently from regular dialogue scenes, especially as “Death Rattle Dazzle” gets on its feet and the routines become more elaborate.
Cutting musical sections uses a different set of muscles, Westerman explains. “These are big numbers, and big Broadway people like Sara Bareilles, Michael R. Jackson, Marc Shaiman and Scott Wittman were stepping in to help with the songs, so I’m not going to lie, it was intimidating at times.”
These scenes are generally shot with three cameras, and Westerman not only Group Clips all the angles from a take so she can watch them together, but also has her assistant build what she calls a “super group” comprising all the takes of a certain setup, plus all the coverage, so she can see every possible permutation of picture for each moment of the song.
When there is singing by, say, Steve Martin or Meryl Streep or one of the other performers, the song was generally pre-recorded by the artist, who would then, during the shoot, sing live while being fed the playback through an earpiece, so both the playback and the live audio are available on their own clean tracks.
This approach leaves open the possibility of using the prerecorded audio or the live audio, depending on which plays best. In fact, many of the numbers are the result of the music and sound departments cutting extensively, sometimes syllable by syllable, to come up with the very best rendition.
“The performers go in and do recordings of all the songs a while before they’re used,” the editor explains. “We get those early on so we can listen to them and get them in our heads and know the songs themselves. Then, once we get the scenes, we start assembling those right away because they take a little bit longer to craft. They’re technically more challenging. I’ll get it laid out first, and then I can go back and find these little moments that help tell the story.”
Once the musical scenes are cut, the music production team and music editor Michah Liberman go in and re-work the sound, sometimes alternating between the prerecorded and the live versions.
“Sometimes, they’re literally cutting syllable by syllable in a very exacting, precise way. Finally, our sound mixer, Lindsey Alvarez, ties it all together.”
“There is a lot of teamwork on the show,” Westerman sums up, “and it’s been rewarding and fun to work on a show this good and be part of that collaboration.”
Posted December 14, 2023
Translating “The Last of Us” From One Screen to Another
BY MICHAEL MALONE, BROADCASTING+CABLE
HBO’s adaptation of Naughty Dog’s wildly popular post-apocalyptic video game “The Last of Us” is a notable (and successful) example of seeking out IP from nontraditional sources.
(The Last of Us is set in a post-apocalyptic America, 20 years after a fungal infection has turned much of the population into zombies. Neil Druckmann created the show alongside Craig Mazin.)
Working with such detailed source material means you essentially need an army, and Mazin confirms that “The Last of Us” takes “thousands of people” to make.
Mazin was joined on the NAB Show Main Stage by several of them for a panel moderated by THR’s Carolyn Giardina. The conversation also featured cinematographer Ksenia Sereda; editors Timothy Good, ACE and Emily Mendez; VFX supervisor Alex Wang; and sound supervisor Michael J. Benavente.
Mazin said, “There’s no way for a film to be by one person. There’s hundreds of people — in our case, thousands of people.”
Mazin described The Last of Us producers, cast and crew as “a big family.”
Mazin spoke of the “luck” involved in gathering the right producers to work on the show and how listening to them in interviews and chats tells him a lot more than their credits do. “I like talking to people,” he said, “and hearing their passion for things.”
Challenges of Adapting
Season one was shot in Alberta, Canada. The producers discussed the challenges of adapting the popular video game to a series.
Alex Wang, VFX supervisor, described the game’s look as “so beautiful and so immersive. How do we use that as inspiration?”
Cinematographer Ksenia Sereda said the producers aimed for a balance between borrowing from the game and giving viewers something fresh. “We wanted to preserve the most iconic parts,” Sereda said, “but at the same time, we did not want to exactly copy the look.”
She spoke of the “massive” variety of choices for cameras and lenses, and said the ARRI ALEXA Mini gave the shots a realistic feel and helped the viewers get closer to the characters.
Mazin quipped: “I don’t understand any of that. I’m glad you do.”
Editor Timothy Good said he’d never played the video game before. Editor Emily Mendez, on the other hand, was a big fan. The two brought together their different perspectives to give the show a distinctive feel.
Key Moments
The editors spoke of the key moments in season one. Pedro Pascal’s Joel lost his teenage daughter in the pilot, and is reluctant to open himself up to another teen girl as he gets to know Bella Ramsey’s Ellie.
Ellie’s book of puns makes him smile for the first time in eons. “You can see the transformation between the two characters and how they sort of come together,” Good said.
Mendez mentioned Ellie stitching up Joel’s stomach later in the season, and the effort the producers went through to give the scene extra impact. “You’re with her in that moment,” she said.
Michael J. Benavente, sound supervisor, spoke of “a quiet world” in the show with no freeways, no kids on playgrounds, no airplanes. The viewer hears snowfall in one episode. “It really helps the story of the people,” Benavente said of the hushed vibe. “When you hear what they’re hearing, when you feel what they’re feeling.”
Season two will shoot in British Columbia. “This is what I do — I do The Last of Us,” said Mazin with a smile. “I couldn’t be happier.”
Posted December 14, 2023
Editing “All of Us Strangers:” Shifts Between Real and Imagined
TL;DR
Director Andrew Haigh and editor Jonathan Alberts delve into the making of “All of Us Strangers,” revealing how the film holds a deeply personal significance for both filmmakers.
They explain that the tone of the film was tricky, noting the challenge of blending supernatural elements into its otherwise straightforward drama.
Haigh and Alberts wanted the audience to feel dislocated and to keep questioning the story’s reality, and they found music a creative help in achieving this effect.
Loosely inspired by Taichi Yamada’s 1987 novel Strangers, Andrew Haigh’s All of Us Strangers has garnered critical acclaim as a romantic-ghost story with a deeply personal touch. The British writer-director explored the film’s themes during a panel discussion at the New York Film Festival, describing it as an exploration of the desires, fears, and traumas unique to a specific generation of gay men.
“It was the most expensive therapy I’ve ever done. And it did feel like therapy, in many ways. The story is clearly not autobiographical but it definitely does come from a personal place. I wanted to tell an experience, as I see it, a queer experience but not just my experience.”
The film is about Adam (Andrew Scott), a melancholy screenwriter living alone, who meets and begins a passionate relationship with the more extroverted Harry (Paul Mescal). At the same time, Adam begins another parallel journey to confront his troubled past and perhaps reconcile his unsettled present.
“A lot of the elements in the story are personal to me,” he revealed. These include filming in Haigh’s actual childhood home, which he last visited 42 years ago.
“But it was always about trying to tell a wider story about what it means to be a parent, what it means to be a child, what it means to be a lover and how we try and negotiate those complicated relationships that kind of come and go through our lives.”
Haigh’s script notably diverges from the original source material, where the character played by Paul Mescal was originally written as female.
“It has a different type of thing going on which works as a traditional ghost story,” he told NYFF programmer and panel moderator Florence Almozini. “It really does fit in with that traditional Japanese kind of ghost story style, which I like. But I knew that wasn’t the film I wanted to make. That wasn’t what was interesting to me about it. I wanted to find a more grounded reality of the story and then take it to somewhere different.”
In the film, Adam is preoccupied with memories of the past and finds himself drawn back to the suburban town where he grew up, and the childhood home where his parents (Claire Foy and Jamie Bell), appear to be living — just as they were on the day they died, 30 years before.
Haigh’s regular collaborator, editor Jonathan Alberts, found the script resonated personally with him too, telling Deadline’s Matt Grober that it felt like it was written with him in mind.
“We shared the experience of growing up in the eighties, growing up gay, kind of growing up with the specter of AIDS happening and trying to deal with all sorts of feelings of grief or trauma and shame and all of these things.”
While All of Us Strangers was tricky, both tonally and as a story rooted deeply in internal experience, another challenge of the project for Alberts was figuring out how to grapple with the way in which the protagonist ends up “slipping between these worlds of the 1980s and contemporary London” in the story.
“We wanted the audience to feel dislocated, but anchored, not mired in confusion, but consistently questioning, is this real? Is this not real?” says the editor. “I feel like you always want to have an audience ask those questions, and you want to keep them active, and to keep putting the puzzle together.
“But when you’re creating a film that is essentially a bit of a puzzle, it’s always a question of, is this puzzle going to fit together? Because you can create a puzzle that doesn’t quite fit together, and people are just like, ‘I don’t know what’s going on.’”
Alberts came to All of Us Strangers after collaborating with Haigh on numerous projects over the last decade, from films like Lean on Pete and 45 Years, to shows like HBO’s Looking.
“We’ve been working for about 10 years together. So when we’re busy working on a television show or film, he’s busy typing in the background, and I’m cutting. That’s when I first hear about the script. Then, typically, he’ll share with me a few months later.”
When they get to the first cut of the film, about a week after shooting, he says he and the director never sit in the same room and watch it together. “Because you’ve worked so hard, it’s like you’ve spent a lot of time, yourself and your assistants, putting it together. It’s an extremely vulnerable time for a director, seeing all the problems or seeing all the things they didn’t quite get.”
Alberts explains that the tone of the film was tricky in not being a straightforward drama but one that introduces supernatural elements.
“We never wanted to be moving to a genre, we always wanted to keep it in a very subtle space. And it’s a very delicate line. I think music helped to draw that out.”
Through screenings they experimented with a lot of different notes to find what was working and what was not before hiring a composer.
“When we were shooting this film in London I would take the tube and the train every day, and I was listening to this Italian composer Caterina Barbieri, whom we ended up using as the temp soundtrack. She’s an amazing composer; we met with her and we thought about her doing a score. But eventually we kind of went in a different direction [hiring London-based French pianist Emilie Levienaise-Farrouch]. That evolved over several months and many discussions.”
Haigh adds, “It’s obviously quite an unusual film and I was always very scared that the central conceit wouldn’t work. There are a lot of turns in the story that I was worried would not work. I wanted, even in the present day of the story, to feel slightly shifted from reality, even though that is based on an apartment block in London. It was really important to me that the tone just felt [to an audience] like ‘I’m not quite sure when and where this is set’.
“We thought really long and hard about trying to create a tone that made you feel like you were somehow separate from time. And that would allow you to understand the kind of conceit of the story and make it feel real when you suddenly go back and see parents.”
Posted December 13, 2023
“Poor Things:” Making This Crazy Fantasy a Reality
TL;DR
The Searchlight Pictures release and Venice Festival Golden Lion winner “Poor Things” is an awards season favorite on multiple counts, not least the cinematography of Robbie Ryan.
Ryan explains his use of various wide-angle and vintage lenses to shoot within large-scale “composite” sets built with virtual production techniques.
Production designer James Price built four large composite sets at Origo Studios in Hungary, which deployed painted backdrops and cutouts as well as LED walls.
Portions of the film are also shot in black and white, with the final decision to do so only agreed upon at the last minute.
Director Yorgos Lanthimos says references for the creative team included “Dracula” and the films of Fellini and Fassbinder, as well as Powell and Pressburger — all famed for their surrealist and extreme screen visuals.
In what critics are hailing as his boldest vision yet, auteur Yorgos Lanthimos (The Lobster, The Favourite) delivers Poor Things, a punkish Frankenstein update that metamorphoses into a feminist fairy tale.
In the Searchlight Pictures release and Venice Festival Golden Lion winner, Emma Stone plays a peculiar, childlike woman named Bella who lives with a mysterious scientist and surgeon (played by Willem Dafoe). The movie is set in an alternate version of the 19th century and is based on the 1992 novel by Alasdair Gray.
“I read the book around 2009 and immediately fell in love with it,” the director explained during a Q&A session following a screening at NYFF. “I hadn’t read anything like it. And it was mainly the character of Bella Baxter that I was drawn to.
“I just thought she was just this incredible, unique human being. The world of the novel itself, all the characters and the premise of it allowed you to explore the story of this woman who has a second chance in life to experience the world in her own terms.”
Lanthimos says that other references for the whole creative team included Dracula and the films of Fellini and Fassbinder, as well as Powell and Pressburger — all famed for their surrealist and extreme screen visuals.
In the same on-stage discussion, production designer James Price explained how he built four large composite sets at Origo Studios in Hungary, “which become something more like an immersive set, like a Disney theme park,” he said. “Nobody builds sets this big anymore.”
They didn’t use vast canvases of green screen. Instead they deployed painted backdrops and cutouts, “techniques that nobody ever does anymore,” Price said.
Costume designer Holly Waddington, also on the panel, said she drew inspiration for fabric color from anatomical drawings and bodily fluids, “the yucky ones and the beautiful ones and everything in between… pinks and saturated reds and lilac sort of tripe colors. I always tried to relate it to something a bit revolting.”
For The New York Times series “Anatomy of a Scene,” Lanthimos dissects a sequence that takes place in a restaurant in Lisbon, and explains how Emma Stone with choreographer Constanza Macras devised her deliberately awkward dance moves.
Arguably, the real star on the technical side is Irish cinematographer Robbie Ryan, ISC, BSC, working on his second film with Lanthimos after being nominated for an Oscar for The Favourite.
Describing the director himself as “an astral cinematographer,” Ryan says that the desire was always to shoot on 35mm.
“That sensibility is something that kind of lands with the rest of the film where you’ve kind of got a whole sort of universe that is unique,” he says in an interview with Denton Davidson for GoldDerby. “Yorgos wanted to create a world for Bella to be in that nobody else would see. [We see it] only through her eyes.”
The DP’s work started with three months of prep to try out new film stocks and various lenses. “We did one test where we had about 50 lenses that we had to look through, and we had to get through that in one day,” Ryan told John Boone at A.frame. “It was a process of evolving and discovering as far as sorting the language for what we were going to do.”
Shots with an extreme wide-angle 8mm fisheye lens were used to explore Baxter’s lab; it is a lens type he also employed on The Favourite. This time the wide angles give the impression of almost looking through a peephole or a magnifying glass.
“This is an extension of the wide-angle language that Yorgos has been developing over other films,” he told Davidson. “We wanted [to recall] the old vintage photography, where you would see a lot of vignette-type effects, because the big plate cameras that would have been used in early photography had a lens that didn’t cover the full width of the glass plate that would have been used for the camera.”
The extreme wide-angle lenses paired with a 35mm camera allow the viewer to feel like “you can almost step into the world,” he says.
Vintage Petzval lenses, originally ground in 1910 for projectors, were also deployed for period effect.
“They’ve been rehoused, which made it possible to shoot portraits with them as a camera lens,” he told Screen Rant’s Caitlin Tyrrell. “They had this beautiful way of creating a soft fall-off, a shallow focus and kind of a crazy bokeh. But they evoked a lot of early photography. It makes me feel that we are connected a bit to the old world of photography, almost painterly. I remember the production design team mentioning Hieronymus Bosch quite a bit in prep.”
VistaVision lenses from another antiquated filming technology were adapted for use in a specially constructed “Frankenstein camera,” Ryan told an audience at Camerimage, as Variety’s Will Tizard reports. This achieved the desired period look but was tricky to work with, he said.
He also said that the results at times bordered on “mystical,” citing an incident when the camera’s “crap batteries” began to run down as he was filming Bella awakening from the dead. The film’s slower transport speed resulted in a slightly sped-up Stone sparking to life in a way no one had quite anticipated.
With Lanthimos adamant that he wouldn’t do additional dialogue replacement in post-production, it also meant the VistaVision camera could only be used for scenes where capturing dialogue on set wasn’t an issue.
Augmenting these old-school techniques and kit was the use of a virtual production screen to help create the views from the cruise ship.
Ryan called the 70-meter-long by 20-meter-high wall a “moving painted backdrop” in an interview with James Mottram for British Cinematographer. “For the cruise ship, Yorgos was always very keen to try out an LED backdrop, because then we could have the waves moving and the clouds moving,” he said.
Even though the set was small relative to the wall, shooting on wide-angle lenses meant they had to mask the ceiling. There were also issues with needing to illuminate the foreground set with a lot of light because he was shooting on negative film.
“That spillage of light is really painful, because it makes the LED wall lose its punch,” he revealed. “So, you’re kind of having to balance out so much all the time… it was a technical head wreck to try to keep the light on the deck, but not on the screen. And the fact that the deck of the ship was only probably four meters away from the LED wall made it really very difficult to stop this light spill, which made my life hell!”
Even the film stock itself was pushed to the extreme. Portions of Poor Things are shot in black and white, and Lanthimos was keen to shoot other sequences using Ektachrome. Because he wanted Ektachrome in 35mm, Kodak had to manufacture it specifically for the film.
“They only ever made Ektachrome in 16mm, so Kodak cut it to 35mm and we processed it as reversal,” he explained to A.frame. “That’s something that’s never been done before. It’s actually a lot more versatile a stock than I thought it would be, but when we were filming with it, we were under the impression that if you were to underexpose, it would be irretrievable. So, I was [thinking], ‘Oh my God, if this stock comes back underexposed we’re in trouble.’ So you had to get it right. But the results were beautiful.”
Another challenge for Ryan was learning to shoot a lot of the picture on zoom lenses. Since he also operates the camera he had to perfect zoom control, as he explained to A.frame.
“For me, the wide angles are not difficult. I just put the wide angle on and everybody else — production design and sound — has a nightmare. The challenge for me, camera-wise, was the zoom, because I didn’t want to mess up any of the acting. I got the hang of it, but it was still nerve-racking and it pushed me to my limits.”
The studio-bound film was also unusual for a DP who typically shoots on location. This presented particular challenges around the lighting.
“The great thing was it kind of still felt like we were on a location because they just built the locations in this amazing detail,” he told Denton. “So everything in front of the camera is there. It was the same approach I would do normally, just I had to do a lot more lights and we had to build skies for cities like Paris and Lisbon.”
The wide-angle scope was so extreme that the fantastically detailed Victorian-style sets had to be created to all but completely wrap around the camera — which also made hiding lights and sound gear a challenge.
“They created all these composite sets, where you can walk in the front door and every little thing is shootable,” Variety reports he said at Camerimage. What’s more, Ryan added, the sets don’t fly away to make space for the camera as it passes — instead, the camera must move through real rooms, halls and up and down stairs.
Choosing to shoot half an hour of the film in black and white and on B&W film stock, rather than shooting color and converting to B&W in post, was another key creative decision.
Davidson gets Ryan to talk about this in relation to the opening shot of Emma Stone dressed in an elegant blue outfit, which then cuts to the black-and-white footage.
The reasoning, says Ryan, “and I’m probably gonna get in trouble for saying it,” is that if audiences saw a black-and-white image at the beginning of the film they might think the entire film was going to be black and white, and might tune out.
“I think we put a color shot at the start so everybody will think it’s a color film, and then it goes to black and white, [then] goes into color again. That was sort of the theory behind that,” he continued.
“What I love about the use of color and black and white in the film, is that usually when you see a film, flashbacks are in black and white. But in this film, the film is in black and white and the flashbacks are in color.”
Ryan revealed to Movieweb that the decision to shoot black and white came just nine days before principal photography. “Yorgos said he had to go ring the producers at Searchlight, and it was like touch and go whether they would let him do it.”
Posted December 11, 2023
How Martin Scorsese and Thelma Schoonmaker Reworked and Reframed “Killers of the Flower Moon”
TL;DR
In her 22nd collaboration with Martin Scorsese, Thelma Schoonmaker, ACE, talks about the process of changing the film midway into production to focus on the central love story.
The celebrated editor discusses how they test cuts both with each other and audiences.
She and Scorsese are longstanding cineaste curators — and financiers — of the legacy of the great mid-20th-century British filmmaking duo Michael Powell and Emeric Pressburger.
Veteran editor Thelma Schoonmaker, now 83, is a graceful, generous and fascinating interview subject as she discusses Martin Scorsese’s Killers of the Flower Moon.
“The love story is the basic thing that Marty decided to focus on,” she told Matt Feury of The Rough Cut podcast. “The idea about the film changed because Leo DiCaprio decided he would like to play Ernest instead of the role of the FBI man [Jesse Plemons]. That was a dramatic change, you can imagine, in the script, and they were still working on that as we were shooting. Lily Gladstone and DiCaprio were working with Marty to create scenes that would show the evolving love story.”
She describes how the film teases out the complex character of Ernest, as someone who seems both to have genuine affection for his Osage wife, and yet is capable of facilitating murder.
“The audience enter this world and learn and experience things through Ernest, but we’re not really aligned with him because we only get a true sense of who he is, the atrocities and the violence, over time.”
The opening scene, for instance, depicts Robert De Niro’s character sizing up his nephew, much as the audience is.
“The way we worked on the rhythm of that scene was to make sure that we sometimes paused for a few seconds, more than you normally would. Because you see that De Niro’s trying to make up his mind. What questions should I ask next to find out if this guy’s going to be a tool? As Ernest is. It’s obvious in the film that he doesn’t read, for example. He’s been horribly educated, whereas his uncle is much better educated.”
She and Scorsese tend to screen the movies they work on in multiple different cuts, fine-tuning in reaction to select audiences, as she explained to Craig McLean at Esquire.
“With our movies, we do rough cuts — sometimes as many as 12,” she said. Those cuts-in-progress are screened for people in her and Scorsese’s New York and Hollywood inner circles. “Then we start opening up to people we don’t know. Then we go to bigger audiences. And we learn from what we’re hearing, and then we do another cut.
“Then we screen again, and then we do another… we’re very lucky. A lot of editors aren’t given that kind of time, which I think they should be.”
Schoonmaker explains to Art of The Cut, “The fact that somebody who doesn’t know the movie is in the room with you affects you deeply. You’re very very conscious of people moving, or do they laugh? Or don’t they laugh at the right place? Or the wrong place? How are they feeling afterwards? Of course, we do talk to people at length afterwards to find out how they’re reacting.”
Sometimes there are big changes in direction — as was the case for Killers of the Flower Moon. “We usually do move things around when editing, except for Goodfellas where everything was perfect right from the start,” she told Feury. “That movie was like riding a horse. It knew where it wanted to go. We dropped only one shot. That film was just there.”
Honoring the Osage and Recognizing Powerful Scenes
Killers of the Flower Moon is dedicated to the memory of musician and composer Robbie Robertson, someone who’s had a hand in the music in various ways for many of Scorsese’s films since he recorded The Last Waltz (featuring The Band’s last concert) in 1976.
Schoonmaker says the score’s throbbing bassline was something that Robertson came up with. “This culture, as you see in the last shot, and the dances that they do, are very sacred, you have to be invited to them, they’re not tourist things. So the drums are incredibly important. The Osage actually consider the drum a person as they do the pipe.
“So I think that Robbie being half Mohawk, Marty definitely wanted an indigenous person to do the music, and felt that this would drive the movie all the way through to the end. You know, it also is probably blood running through your veins. The fact that he continuously employed it meant that in his mind, he was giving it to Marty as a way to move the film along.”
In addition to the scoring, Scorsese wanted to emphasize the indigeneity of the characters. He did so in part by including several pivotal scenes in which Osage is spoken but no subtitles are provided.
Schoonmaker tells Steve Hullfish in an episode of The Art of The Cut, “There are many times in the movie where you do hear the Osage purely, which is a very, very good decision which I resisted at first. Not hearing it by itself. You don’t need to know what he’s saying in the wedding ceremony, for example, you know, he’s marrying them, right?”
However, in a late scene in which DiCaprio and Gladstone’s characters are arguing, Scorsese again opted for no subtitles, which Schoonmaker says “was a very brave and correct decision.”
Although Schoonmaker and Scorsese have worked on many projects together over the years, her instincts don’t always mesh with his, at least initially.
Scorsese knows, she says, he “could trust me to do what was right for the movie, that we weren’t going to have ego battles in the editing room about who’s right and who’s wrong.
“So when there ever is a really major disagreement, which is rare, I am always more than happy to show him what he has asked for. And then if I want to show him options, then I show him options. And he’s very happy to look at those. And then we’ll decide which one is best. But it’s never a battle.”
But, she says, “There’s never a problem when something’s that powerful. There’s never a question” of what to do, referring to DiCaprio’s performance in the courtroom scene.
For maximum impact, Scorsese instructed Schoonmaker to “cut away only when we absolutely have to. I want to just hold on Leo for the entire duration of the testimony because he is so brilliant.
“And he is. So we only cut away when the prosecutor points to De Niro and says he is now talking about this man. And that switch pans over to De Niro because Leo has just incriminated him.”
And by the way: Schoonmaker is adamant that the film should be viewed in one sitting, with no pauses (even at home), to fully appreciate these cuts, regardless of the 206-minute runtime.
Dazed Digital’s Nick Chen reports that Schoonmaker was incensed to hear some theaters screened the film with an intermission: “That’s really horrible. There’s a build. It’s very important. There’s a long build that you have to feel. If you cut it, you’re not going to feel that! Don’t pause it!”
Powell and Pressburger’s Oeuvre
Her custodianship, with Scorsese, of the film œuvre of her late husband Michael Powell (The Red Shoes, A Matter of Life and Death and Black Narcissus — with Emeric Pressburger — and Peeping Tom) crops up time and time again. The BFI in London recently held a career retrospective including newly minted versions of films like The Red Shoes, and Schoonmaker is a more than able commentator.
She tells Feury, “Michael Powell and Emeric Pressburger used to do what they called placing little bombs in a movie: little things that you may just barely notice that explode later. That is something that Marty would have noticed in his, you know, devouring of the Powell and Pressburger films.”
“Marty says The Red Shoes is in his DNA,” notes Schoonmaker to Esquire. It’s a film that she first saw aged 12 while living on the Caribbean island of Aruba, in an “American colony” created by Standard Oil.
Returning to the U.S., aged 15, she tuned into a “wonderful TV show called Million Dollar Movie, where they ran one film nine times a week.” She later learned of another avid viewer: “Marty would [try to] watch a Powell and Pressburger movie [all] nine times unless his mother said: ‘If you don’t turn that off, I’m going to start screaming.’”
That’s because, with the rise of realism in British cinema — “kitchen sink dramas” such as Saturday Night and Sunday Morning (1960) and This Sporting Life (1963) — the films of Powell and Pressburger fell out of fashion in the UK. They were viewed as conservative, colonial, old-fashioned.
Key in that canon is Powell’s transgressive horror from 1960, Peeping Tom. In 1979 Scorsese arranged for Peeping Tom to be shown at that year’s New York Film Festival, and then paid for its redistribution in U.S. cinemas.
To mark the moment, Scorsese held a dinner in New York in Powell’s honor. He invited along the editor he’d hired to cut his latest movie, Raging Bull, partly on the advice of Powell.
“I was just so struck by Michael,” recalls Schoonmaker, who had last worked with Scorsese on his debut feature, 1967’s Who’s That Knocking at My Door.
“He was so extraordinary. He came back to talk to me — I was editing Raging Bull in a bedroom, and we had film racks in the bathtub.”
That was how Schoonmaker and Powell met. They married in 1984. He died six years later. “Marty gave me the best job in the world and the best husband in the world!”
Posted December 10, 2023
Deloitte Anticipates 2024 Will Be a “Transitional” Year for Generative AI
TL;DR
Deloitte’s annual Technology, Media and Telecoms report highlights generative AI, sustainability, and monetization in its preview of 2024.
Streamers will charge more for premium content, fight user churn with longer subscriptions, and satisfy bargain hunters with additional pricing tiers.
More companies are expected to develop their own generative AI models to drive greater productivity, optimize costs, and avoid the risk of training on public data.
More and more companies are measuring their carbon output, but are they doing enough to forestall global warming?
Generative AI, sustainability and monetization are the major themes of the year ahead, with GenAI set to dominate strategies in Media & Entertainment, according to Deloitte.
In its annual review of Technology, Media and Telecoms, the consultancy says 2024 will be a transitional year for generative AI as companies begin to incorporate it into their tech stacks, while being cognizant of pending legislation.
Expect to see generative AI integrated into enterprise software, with enterprise spending on GenAI anticipated to grow by 30%. The challenge will be in finding a pricing model that captures its value, covers its costs, and is embraced by customers. Deloitte anticipates a conflict between vendors who want to charge per user and IT departments that believe generative AI features should be free.
“Generative AI is poised for a breakthrough in 2024, as it begins to follow through on its promise of improving productivity, creativity and enhancing the way enterprises engage with their ecosystems,” Deloitte Vice Chair Paul Silverglate says.
However, Deloitte Chief Futurist Mike Bechtel warns Fast Company’s Chris Morris: “Over-focusing on any one superhero technology tends to take the eye off the bigger picture.” In this case, Bechtel says an over-emphasis on AI might mean losing sight of “human/computer interaction, compute, the business side of tech, cyber, [and] core modernization.”
Nonetheless, more companies are expected to develop their own generative AI models to drive greater productivity, optimize costs and unlock novel insights. The creation of in-house AI models is intended to avoid the risk of training on public data.
To that end, 2024 will also see the first serious attempts to regulate AI, which is either a much-needed check on the misuse of the technology’s power or a stranglehold on innovation, depending on how you see it.
Most likely, legislators will scope a middle way that permits AI to grow while attempting to rein in the rampant impact of deepfakes, misinformation and copyright infringement.
Deloitte argues that “clear regulation” enables enterprises and vendors to “proceed with certainty,” and expects to see a pragmatic balance between regulatory compliance and fostering innovation.
The European Union’s AI Act (perhaps not ratified until 2025) will likely be the global benchmark for regulation of generative AI. The AI Act and the General Data Protection Regulation (GDPR) address issues including individual consent, rectification, erasure, bias mitigation and copyright usage.
It is also likely that AI will keep lawyers busy for decades to come (until they too find their jobs automated out of the way).
Related is the note that the computer processors powering generative AI could represent half of the value of all semiconductors sold by 2027, valued at $400 billion.
Deloitte predicts that the market for specialized chips optimized for generative AI will be valued at more than $50 billion in 2024, up from close to nothing in 2022. The secure and reliable manufacture of AI chips is seen as vital for innovation, economic success, and national security.
A Streaming Price for All
The drive to profitability is already a feature of content streamers who have switched from prioritizing subscriber numbers (with the domestic market saturated anyway) to appeasing itchy Wall Street investors.
As Deloitte puts it, M&E companies seem to be realizing how hard it is to recoup the historic profits of the pay TV business model.
Its expected market correction is the widening of the streamer business model to more paid-for tiers and loyalty schemes, from an average of four options to eight. These range from cheap ad-supported offerings and gated content to premium tiers with instant access.
Deloitte predicts that the top five providers will offer a bewildering 17 SVOD tiers by the end of 2024, more than double the current number.
“Streamers are expected to shift from growth at all costs to making it easier for all their subscribers to get enough value for the price. Viewers may find it harder to wade through the options, but tiering could help them get more of what they want, and less of what they don’t.”
Related is the note that the audio entertainment market is on the cusp of “significant growth.” The global market is predicted to top $75 billion in 2024, a 7% rise across formats like podcasts, streaming music, radio and audiobooks.
Hollywood Looks to Game IPs
The success in 2023 of The Last of Us and the animated feature The Super Mario Bros. Movie, not to mention the reality show Squid Game: The Challenge on Netflix, has convinced Hollywood that games can finally be adapted in a way that speaks to both fans of the original and lean-back newbie viewers alike.
“Hollywood is looking to games for new IP that they can expand and monetize, and game companies are eyeing TV and film collaborations to help make their IP work harder and offset soaring game development costs,” notes the consultancy.
It’s not just about capitalizing on IP, though; it’s about creating a new form of entertainment that captivates audiences across multiple platforms. High-performing game IPs are expanding across media formats, reaching broader audiences and increasing their overall franchise value.
“Gaming platforms are giving users the tools to create their own games, which could lead to a boom in quality content, but could be a threat to their own business longer term,” Jana Arbanas, vice chair of Deloitte and US TMT, noted in a statement. “And fans of top franchises will see their favorite characters and stories in both games and movies. It’s a crucial time as the industry finds new and profitable ways to keep audiences engaged.”
Further, user-generated content (UGC) in games could disrupt the industry. Platforms are projected to pay out almost $1.5 billion to content developers in 2024. The number of paid independent developers on 3D UGC gaming platforms will exceed 10 million.
“As this space grows, it risks disrupting the dynamics of the entire gaming industry by making endless cheap 3D content available, with generative AI possibly accelerating the trend,” Deloitte warns.
Green Commitments Growing and Questioned
It’s likely that 2023 will be the hottest year in recorded history, yet government leaders at climate summit Cop28 barely moved the dial on change. The Cop28 president, Sultan Al Jaber, even claimed there is “no science” indicating that a phase-out of fossil fuels is needed to restrict global heating to 1.5 degrees Celsius.
With that kind of leadership, is it any wonder that businesses in all sectors are backpedaling on net-zero commitments?
Deloitte predicts that multiple regions will run short of precious metals like gallium and germanium in 2024, and may start seeing shortages of rare earth elements by 2025. If trade restrictions between China and the West escalate, the tech and chip industry could consider “bolstering supply chain resilience” by increasing investments in e-waste recycling, digital supply networks, stockpiling and sustainable semiconductor manufacturing.
According to Deloitte, new greenfield plants for making AI chips could help improve the industry scorecard, but further manufacturing transformation can help both the greenfield plants and existing brownfield plants do better on energy, water and processed gas use.
What Deloitte doesn’t appear to address is the carbon impact of generative AI processing itself. Google, Microsoft and others are shy of revealing how much power in their data centers is being used to crunch through R&D on new LLM tools. The sustainability of GenAI should receive far more scrutiny in 2024.
On the plus side, and pushed by investors, regulators and employees, many more companies will likely systematize their environmental, sustainability and governance (ESG) tracking and reporting with software tools in 2024. Deloitte expects the market for these tools to rocket beyond $1 billion in 2024, growing as much as 30% over the next five years.
Posted December 1, 2023
Taryn Southern: AI Is Your Full Stack Creative Team and You’re the Director
TL;DR
Content creator and AI artist Taryn Southern shares practical tips for how you can use AI to transform your creative process and your storytelling.
Southern thinks that in five years AI will enable storylines to shift along with our intended physiological states.
She stresses that AI is not a silver bullet for creativity and that there will be issues to manage (such as copyright) but that it’s up to us storytellers to determine what we want to use it for.
“In the time it takes for me to even finish the sentence, AI can create a 4K photograph, craft a pitch deck, produce pop songs,” says content creator and AI artist Taryn Southern. “We knew the robots would eventually come for our jobs but how do we feel about it encroaching on the last bastion of humanity, our creativity?”
Southern has released the world’s first solo pop album composed with AI and directed an award-winning film on the future of human and artificial intelligence. In a video released by Vimeo she shares how you can use AI to craft powerful stories that inspire action and impact.
Like many other creators, her message is that AI is a tool for creation, for imagination, and for saving time and money.
“The only thing that I think we need to fear at this moment is complacency,” she says. “If we don’t learn how to work with the tools, if we don’t learn how to build and synthesize our own ideas with them, then AI could be a serious threat. But if we want to push creative boundaries, if we’re willing to learn and adapt quickly and iterate, I do believe we will thrive.”
She breaks down the steps to creativity. These include “exposure” to what we’ve been taught or have educated ourselves in; an “operational” component, which is “how we actually get from point A to point B” and, for a lot of artists, a point of great friction; and our ability to “synthesize,” which means taking an insight from one domain and applying it to another.
The fourth component of creativity, for Southern, “is that elusive, magical moment that we all just crave as creatives — illumination. It’s the unexpected ‘A-ha’ moment that happens in the shower when you’re least expecting it.”
Every creator will have their own relationship to these four ideas. She then proceeds to detail how AI can be used to augment or “fill in the holes of our own creative expertise,” which is currently typically done in collaboration “with real life people.”
That’s fine if you have a budget or are part of a business, but what if you’re a DIY creator? That’s where AI really scores, she says.
“Now with AI, you have access to all of these skill sets and perspectives and tools… and you get to collaborate with them on your own time, no budget required.”
She asks us to think of AI as our “full stack creative team” in which you are the director.
“To be a great director, you need to have clarity and specificity in your direction. You also need to have meta-awareness. You’ll need to be able to filter the good ideas from the bad ideas, allowing for novel ideas to seep in, and also to separate and improve on each component part of your project before assembling it all back together. That’s really what an incredible director does.”
In order to work effectively with AI, you need to train it on the various components. That means giving some thought to the project goals (am I trying to sell a product? Am I trying to build a brand? Am I writing a musical?); the constraints (resources, time and budget) and the audience you are targeting.
All standard-issue checklist items for any serious content creator. Then it’s about selecting your “AI team” from the hundreds of off-the-shelf algorithms out there, capable of everything from generating images from text to transcribing video.
“You are going to find your own just by experimenting. And once you have your AI team members, you can actually select the tools that you feel best represent their skill sets,” she says.
“Once you’ve gone back and forth with ChatGPT, and you have a finished script, you can ‘gut check’ your work to ensure it meets your goals and that of your audience. You can also ask GPT to identify if there are any biases or critical missing pieces of information that you should be aware of,” she continues.
“Finally, it’s time to move on to production. Now, I’m not a cinematographer; I have very few skills here. So of course, I started by asking GPT about specific lenses that I can use to inform the starting images in my work. I can take that information over to Midjourney and insert it into my prompts there.”
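To make that hand-off concrete, here is a minimal Python sketch of the step Southern describes: asking a GPT model for lens language, then folding the answer into an image prompt. It assumes the OpenAI Python SDK with an API key configured; the model name, prompt wording and final prompt string are all illustrative, not anything Southern specifies, and since Midjourney has no public API, the resulting prompt would be pasted into its Discord bot by hand.

```python
# A sketch of the "ask GPT, then prompt Midjourney" hand-off; the model name
# and prompt text are illustrative assumptions, not Southern's actual setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Step 1: ask the model for cinematography language to seed an image prompt.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": "Suggest a lens and framing for a moody nighttime city "
                   "exterior, as one short phrase, e.g. "
                   "'35mm anamorphic, shallow depth of field'.",
    }],
)
lens_phrase = response.choices[0].message.content.strip()

# Step 2: fold that phrase into a Midjourney-style prompt. Midjourney has no
# public API, so this string would be pasted into its Discord bot manually.
midjourney_prompt = (
    f"rain-slicked neon alley at night, lone figure with umbrella, "
    f"{lens_phrase}, cinematic lighting --ar 16:9"
)
print(midjourney_prompt)
```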
Southern thinks that, by 2030, AI will enable us to listen to music tailored to our cognitive and emotional needs. “Storylines will shift for our intended physiological states, art will change in real time to optimize our brainwaves,” she predicts.
“So much will be happening, and it will be combined in real time with our physiological signals. But AI is not a silver bullet for creativity. The impact of this tech means we will see a lot of content, a lot of noise and information, and a lot of misinformation and copyright issues,” she says.
“On the other hand AI also will allow us to accomplish incredible feats of the human imagination and empower new ways of being and thinking across the human experience and across our storytelling. It’s really just up to us as storytellers to determine what we want to use it for.”
Jaron Lanier: We Need AI Regulation and Data Provenance (ASAP)
TL;DR
Tech guru and Microsoft scientist Jaron Lanier adds his voice to those calling for regulation of AI.
He says all data should have its provenance tracked to ensure integrity, reward where reward is due, and to curb deepfakes.
Lanier thinks more executives in Big Tech companies (like him) should speak more freely and be prepared to criticize in order to build a better business and a better society.
Tech visionary Jaron Lanier is calling for regulation in AI, arguing that it is in the best interest of society — and that of Big Tech.
As part of that regulation, Lanier, who now works at Microsoft, also argues for all data used by AI models to have its origin and ownership declared, to counter the threat from misinformation and deepfakes.
“All of us, Microsoft, OpenAI, everybody in AI of any scale, is saying we do want to be regulated. [AI] is a place where regulation makes sense,” Lanier told Bloomberg’s AI IRL videocast. “We want to be regulated because everybody can see [that AI] could be like the troubles of social media, times a thousand. We want to be regulated. We don’t want to mess up society. We depend on society for our business. You know, markets are fast and creative. And you don’t get that without a stable layer created by regulation.”
Speaking to the idea of “data dignity,” Lanier explained that this is the notion that creators should be compensated, especially if their data is being used to train algorithms.
Provenance
“In order to do it, we have to calculate and present the provenance of which human sources were the most important to give an AI output. We don’t currently do that. We can though. We can do it efficiently and effectively,” Lanier says. “It’s just that we’re not yet. And it has to be a societal decision to shift to doing that.”
He admits to being “scared” of the potential for misinformation caused by unregulated AI use interfering with politics, but feels the answer to deepfakes is provenance. “If you know where data came from, you no longer worry about deepfakes. The provenance system has to be robust.”
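As a thought experiment, the sketch below shows the bare bones of what such a provenance record could look like: hash the media bytes, bind the hash to a claimed creator, and sign the record so any later alteration fails verification. This is a toy under stated assumptions, not the C2PA standard or anything Lanier prescribes; the field names and helper functions are invented, and it relies on the third-party `cryptography` package.

```python
# A toy provenance record: content hash + claimed creator, sealed with an
# Ed25519 signature. Field names and layout are invented for illustration.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_record(media: bytes, creator: str, key: Ed25519PrivateKey) -> dict:
    record = {
        "creator": creator,                           # claimed human source
        "sha256": hashlib.sha256(media).hexdigest(),  # content fingerprint
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = key.sign(payload).hex()     # tamper-evident seal
    return record

def verify(media: bytes, record: dict, public_key) -> bool:
    payload = json.dumps(
        {"creator": record["creator"], "sha256": record["sha256"]},
        sort_keys=True,
    ).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
    except Exception:
        return False  # signature invalid: record was altered or forged
    # Signature is good; check the media itself still matches the hash.
    return hashlib.sha256(media).hexdigest() == record["sha256"]

key = Ed25519PrivateKey.generate()
rec = make_record(b"...video bytes...", "Jane Creator", key)
assert verify(b"...video bytes...", rec, key.public_key())
assert not verify(b"...edited bytes...", rec, key.public_key())  # altered media fails
```

A real system would also need trusted key distribution and capture-time signing, which is where the robustness Lanier calls for actually lives.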
Lanier’s Role
Lanier’s bizarre title at Microsoft is “Prime Unifying Scientist,” something he admitted was a humorous attempt to encompass everything he does, like an octopus (within the Office of the Chief Technology Officer, the initials spell OCTOPUS).
“I have come to resemble one, or so my students tell me, and I’m also very interested in their neurology. They have amazing nervous systems. So we thought it would be an appropriate title.”
However, this gives him something of a free-roaming role both inside and outside the company. He was at pains to point out that he was not speaking here in an official Microsoft capacity.
In fact, Lanier has become a fierce critic of the industry he helped build, but he wants to challenge it to do better from within.
“To be an optimist, you have to have the courage to be a fearsome critic. It’s the critic who believes things can be better. The critic is the true optimist, even if they don’t like to admit it. The critic is the one who says this can be better.”
Open Source Concerns
For example, he doesn’t think the open source model for AI or Web3 makes any sense. He poured scorn on the idea that open source would democratize and decentralize the internet and its reward system.
“I think the open source idea comes from a really good place and that people who believe in it, believe that it makes things more open and democratic, and honest and safe. The problem with it is this idea that opening things leads to decentralization is just mathematically false. Instead of decentralization, you end up with hyper-centralization and monopoly. And then that hub is incentivized to keep certain things very secret and proprietary, like its algorithms.”
Instead, he advocates for a market economy, in which people and businesses pay to use technology, like AI. He hints that doing so would fund data provenance and retain data integrity.
Lanier says he doesn’t agree with the founder of OpenAI, Sam Altman, on everything, including his notion of a universal cryptocurrency: “I think that some criminal organization will take that over, no matter how robust he tries to make it.”
The Benefits of Speaking Up
He says being able to criticize from within Big Tech is actually beneficial for Microsoft’s own business.
“I’ve tried to create a proof of that, where I can say things that are not official Microsoft. Look, I spend all day working on making Microsoft stuff better. And I really am proud that people want to buy our stuff and want to buy our stock. I like our customers. I like working with them. I like the idea of making something that somebody likes enough to pay you money for it. That to me is the market economy.”
Lanier wants to persuade colleagues at Meta and Google to speak their minds more, too.
“If the other tech companies had a little bit of [free] speech in them it might actually be healthy. I think it would actually improve the business performance of companies like Google and Meta. You know, they’re notoriously closed off. They don’t have people who speak, and I think they suffer for that, [even if] you might not think so because they’re big successful companies. I really think they could do more.”
He says there are four or five other execs at Microsoft with public careers outside the company who speak their mind.
“I think it’s been a successful model. Do I agree with absolutely everything that happens in Microsoft? Of course not. I mean, listen, it’s as big as a country, you know.”
Posted November 28, 2023
The Precision Editing Required for David Fincher’s Assassin in “The Killer”
TL;DR
David Fincher’s go-to editor, Kirk Baxter, ACE, helped achieve a new kind of subjective cinema with Michael Fassbender’s assassin character.
Although “The Killer” proceeds on a fairly linear trajectory, Baxter says this made it challenging to cut because there was nowhere to hide.
The rules for the visuals, the movement, and the soundscape were laid down in the film’s opening sequence set in Paris.
Critics are hailing David Fincher’s The Killer as his most experimental film since Fight Club: “a subjective, cinematic tour de force,” says Bill Desowitz at IndieWire, in which we get inside the mind of Michael Fassbender’s titular assassin after he experiences his first misfire in Paris.
The movie, now streaming on Netflix, is divided into six “chapters,” each with its own look, rhythm, and pace tied to Fassbender’s level of control and uncertainty. According to the film’s editor, Fincher regular Kirk Baxter, ACE, the editorial process necessitated the creation of a visual and aural language to convey subjective and objective points of view for tracking Fassbender.
Baxter (Zodiac, The Social Network) goes into detail about working on each chapter with IndieWire. We learn that the opening sequence set in Paris took the most time for Baxter to assemble because it was stitched together from different locations, including interiors shot on a New Orleans stage.
“I love the whole power attack, the stretching of time, the patience of what it takes to do something properly,” Baxter said. “And I love that it’s grounded in the rule of physics and how practical it is that each detail in order to do something correctly deserves the same amount of attention.”
Later, in a chapter set in New Orleans, the Killer exacts revenge on a lawyer. The setup prep is slow as he cunningly enters the lawyer’s office dressed as a maintenance worker.
“It was one of the hardest things to put together,” Baxter tells IndieWire. “It’s a little like a Swiss watch in terms of how exacting it can be in his control. David had like 25 angles in the corridor, but when you put it all together, I love how that scene unfolds by playing both sides of the glass [between the office and corridor]. Typically, he’s gonna say as little as possible and his stillness controls the pace, and when he gets fed up, these little, tiny subtle looks from him are letting you know that’s enough and where this conversation stops.”
The nighttime fight between the assassin and a character called The Brute in the latter’s Florida home is depicted as a contest between two warriors in the dark. Speaking to Dom Lenoir, host of The Editing Podcast, Baxter explains how he and Fincher choreographed this fight as well as talking more broadly about the director’s shooting style.
“David does always provide a lot of coverage [and] that gets misinterpreted as a lot of takes [but] what he’s extremely good at is making sure that I’ve got the pieces to be able to move around as needed, or to keep something exciting. It means I can edit pretty aggressively and use just the best pieces of everything. David knows these rhythms he shoots for an editor. So, if it’s a really long scene, you will find in the wide shot that they’ll often be blocking, for example, somebody coming into the room. You sort of work your way [into the scene].”
Baxter says all that matters to him once in production is the material Fincher has captured. “I will read the scene again so that I understand the blueprint of it. You know what its intention is, but then it can be thrown away because David can evolve beyond what the script was based on, whether a location or how our actors are performing. He’ll recalibrate and readjust.”
Although The Killer proceeds on a fairly linear trajectory (hey, like a bullet…) Baxter says appearances can be deceptive when it comes to cutting.
“I found it to be one of the more challenging movies to make because it’s not juggling a bunch of different character lines or going back and forth from past to present and that sort of thing,” Baxter told IndieWire. “It’s just a straight line, but the exposure of that [means there’s] nowhere to hide. It’s like everything is just under the spotlight and you’re not having dialogue and interaction to kind of dictate your pace. It’s a series of shots and everything has to be manipulated in order to give it propulsion, or how you slow it down.”
He continues this train of thought with Lenoir, “It was a challenging movie to make from my perspective because you are showing an expert on the fringes of society but he’s still a person that operates with precision. You’re trying to illustrate that by showing precision. And it is just a lot of fiddling to make things seem easy.”
He also discusses something you may not notice on a first watch: the Killer doesn’t seem to blink. Nor is this unique to this film; in other Fincher movies, too, Baxter says he has consciously selected shots of actors not blinking.
“I don’t think that it was an effort to remove them through the film,” he says. “It’s just the nature of how his performance was. But there’s been an effort to remove them in previous films when they’re all kind of landing off rhythm. It’s mostly about when you get into the meat of a scene and you’re in close ups and you want something delivered with intention and purpose.”
Audio was crucial to The Killer as well. Rather than being smoothed out in the background with the edge taken off all transitions, Fincher and sound designer Ren Klyce wanted the audio to be driven by point of view. The rules of the film’s soundscape are established in the opening sequence. Given that the protagonist is not predisposed to be chatty, we learn as much from his internal monologue as from his methodical movements.
“We crawl into his ears and sit in the back of his eye sockets instead of how it’s being presented,” Baxter describes to IndieWire. “From the moment when the target turns up, it was David’s idea to try a track that was what he plays in his headphones. And when you have his POV, we turn the track up to four, and when you’re back on him, the track drops down, and you get the perspective of it playing in his ear.”
They devised rules for how to apply his voiceover but realized they couldn’t have voiceover and music at the same time because there would be too much “sonic noise” for the audience.
“So one’s got to occupy one space and one take the other. The logic said to us what’s blaring in his ears and when he’s in a monologue is when we’re looking at him. That was the rule of what was subjective and what was objective,” says Baxter.
“We tried the notion of ‘vertical’ sound cuts,” Fincher explains. “By which I mean, you’re coming out of a very quiet shot and cutting into a street scene and – boom! — you pick up this incredibly loud siren going by. You’re continually aware of the sound.”
This makes for an unusual but effective experience. For instance, there’s a scene in a Parisian park where the sound of a fountain constantly moves around depending on the featured character’s POV.
Was matching that vertical sound cutting hard?
“I guess even when you’re creating chaos, you’re trying to affect it in your own way,” Baxter tells Andy Stout at RedShark News. “You’re always seeking your own version of the perfect way to do this.”
Jennifer Chung, ACE, was one of the assistant editors on the film — part of a 14-strong editing department. She also spoke with RedShark News about the tools they used.
“Obviously we use Premiere, and we heavily use Pix also,” she says. “We do a lot of our communication in post through Pix, especially during production during the dailies grind, where we’re uploading not only the dailies but selects that are coming out so that we can get that to David.”
Adobe After Effects is also used extensively, with the team using Dynamic Link to round-trip content out of Adobe Premiere and back in. Some of the assistants also write scripts, so Python (or, in some cases, even Excel) was deployed to help automate some of the critical processes.
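Chung doesn’t specify which processes were scripted, but as a purely hypothetical sketch of the kind of task an assistant editor might automate in Python, a few lines can file incoming dailies into per-reel folders based on clip names (the naming convention and folder layout here are assumptions for illustration, not this production’s actual pipeline):

```python
# Hypothetical dailies-sorting sketch; the folder layout and clip-naming
# convention are illustrative assumptions, not this production's pipeline.
import shutil
from pathlib import Path

INCOMING = Path("dailies/incoming")  # clips offloaded from camera cards
BY_REEL = Path("dailies/by_reel")    # destination: one folder per reel

def sort_dailies() -> None:
    """File each clip into a per-reel folder based on its filename.

    Assumes clips are named like 'A001_C003_1108XY.mov', where the first
    underscore-delimited token ('A001') identifies the camera reel.
    """
    for clip in sorted(INCOMING.glob("*.mov")):
        reel = clip.name.split("_")[0]
        dest = BY_REEL / reel
        dest.mkdir(parents=True, exist_ok=True)
        shutil.move(str(clip), str(dest / clip.name))
        print(f"{clip.name} -> {dest}")

if __name__ == "__main__":
    sort_dailies()
```

In a real pipeline the same pattern extends naturally to generating selects lists or upload manifests for a review platform like Pix; repetitive, error-prone chores of that sort are exactly what scripting is suited to during the dailies grind.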
The Killer was shot in 8K using RED V-Raptor and according to Chung proved a little tricky initially to grade in HDR.
“We definitely had some kinks we had to figure out early on,” Chung says. “We all needed HDR monitors, but we didn’t have HDR monitors at home, though we had HDR monitors at the office. We also use a lot of Dynamic Links in Premiere, and we were having some color space issues going from Premiere to After Effects back to Premiere, but because we have such a close relationship with Adobe, we were able to figure that out.”
In The Killer, the assassin experiences a gradual loss of control that is in stark contrast with his profession’s demand for precision and self-discipline. The film’s voiceover is a key storytelling tool, not only advancing the plot but also reflecting the killer’s internal conflict. This narrative choice is complemented by the film’s editing, as Baxter elaborates in his discussion with Steve Hullfish, ACE, on the Art of the Cut podcast.
“David had outlined that the movie is about process,” Baxter says, explaining how he and Fincher used this idea to help guide their creative decisions. “When it’s the killer’s process — when he is in the driver’s seat — we’re going to be extremely deliberate about everything. The camera will be steady. The pacing will be steady, considered, and exacting. Continuity will rule the day.”
On the other hand, “when things go out of his plan — when chaos erupts — we’re going to introduce camera shake. I’m going to start to jump time slightly, start to clip action and have the freedom to kind of make it more exciting and out of control.”
Baxter likens it to the thriller-horror construction of “stretch everything until you reach that point,” adding, “Most of the series of chapters, or kills, have this similar construction that is stretching process, like taking your time with it, and then exploding into action. There’s a great use of the voiceover line at the beginning where the killer says, ‘If you don’t have patience, this is not the profession for you.’”
While the Oscar-winning editor says he can’t entirely relate to a psychopath, he does relate to the concept of process. “I relate to the faith in process to see you through,” he says, “that if you follow all of your steps of survival, you will get to the other side with the result you’re after.”
Baxter’s approach to working with the voiceover always began with the visuals, he said. “I would just start with the visuals on their own and then try to fit the voice to it.”
Voiceover “is always a wriggly fish because you can keep changing it,” he says. “It’s rather pliable,” he continues, describing how they would add, drop or even entirely rearrange lines.
“There was a lot of voiceover to kick it off, up until the first killer misfire, then it started to streamline back into just a repeating of the mantra but dropping away sections,” he recounts. “So the mantra got shorter and shorter, as he was breaking his own rules, and his own disciplines were starting to erode.”
The sniper scene, occurring near the beginning of the film, helped define how the voiceover would function throughout the remainder of the film. “We’re blasting the music in his ears, The Smiths song up so high that it can’t fit voiceover,” Baxter says. “So it became this rule in the movie, from that scene that then sort of bled out in every direction.”
Baxter’s editing approach was significantly influenced by the film’s music, particularly in key scenes where the soundtrack played a pivotal role in dictating the pace and mood. In the sniper scene, for instance, the loudness of The Smiths’ song was used to such an extent that it left no room for voiceover. This choice led to a rule that was applied throughout the film: voiceover was used where the music wasn’t dominant. Baxter describes this as a balancing act, saying, “When he’s in his POVs, what’s going on inside his head with sound? No voiceover goes on his POVs in the movie. It all goes where the music’s not up at 10.”
The editing was further complicated by the need to adapt to changes in the music track. Baxter recounts how the music selection process was dynamic and challenging, involving various experiments with different genres and artists before settling on The Smiths. He notes, “It was the cherry on top. Always for me, I just loved playing with music: being part of choosing which tracks should go where to have the most black comedy in.”
Baxter also emphasizes the style developed in the film of “punching in and out of the volume of the music — from POV of headphones to blasting full.” This technique became a signature aspect of the movie’s sound design, especially effective in scenes like the sniper scene and when the killer is in the van observing his target, Dolores. The fluctuating music volume played a crucial role in setting the tone and pace, aligning with the visual cuts and the psychological state of the killer.
“Bring the audience to a place they are not used to being, close to this assassin,” explains DP Erik Messerschmidt
November 27, 2023
November 25, 2023
Social Commentary But Make it Cinematic: Production for “In a State of Change”
TL;DR
Donal Boyd and Frank Nieuwenhuis shot climate change documentary “In a State of Change” in order to help personalize the impacts of receding glaciers in Iceland.
The documentary short film was shot by Nieuwenhuis primarily using the Sony FX6, which was ideal for a production that frequently required packing light.
In addition to being a second shooter and drone operator, Boyd served as the main character anchoring the documentary, which he found uncomfortable but ultimately effective.
Many documentary filmmakers can relate to their aspirations not aligning with the story they ultimately tell. Such was the case for Donal Boyd and Frank Nieuwenhuis, who sought to help people understand the impact of climate change, on both Iceland’s glaciers and on Icelanders themselves.
Ultimately, Boyd says, “The idea was to, maybe, make people fall more in love with the glaciers through what we would show.”
Initially, Boyd was inspired by Andri Snær Magnason’s On Time and Water, which explores why people don’t understand climate change or connect with the issues surrounding it. As a photographer advocating for climate change awareness, Boyd says the themes resonated and motivated him to connect with Nieuwenhuis to partner on a documentary about Iceland’s changing glaciers. (Boyd and Nieuwenhuis previously collaborated on Volcano for the People.)
They didn’t set out to have Boyd serve as the main character in the film, but Nieuwenhuis says, “It definitely helped us to navigate through the different things and connect everything. Because it was also very interesting to see that development of yours through the film.”
Boyd admits, “I changed just as the same landscape is changing.” He adds, “I hope that people will relate to the struggles that I went through.”
Using a Lightweight “Beautiful Rig”
In a State of Change was primarily filmed on the Sony FX6, which Nieuwenhuis calls a “beautiful rig” in part two of the Q&A. He says, “It’s lightweight, and it’s also very [much a] run-and-gun type of camera. This is exactly what we needed and wanted for this project.”
After all, “Being out in the field, climbing up on mountains, you don’t want it to be too heavy. You want to be able to unpack it and quickly film,” Nieuwenhuis explains. The FX6 has “XLR inputs built in, and ND filters built in, so I can strip this whole thing down and I could still film.”
In terms of glass, Nieuwenhuis opted for cinema prime lenses, specifically DZOFilm’s Vespid kit. Keeping with the theme of packing light, he notes that they are both very small and relative featherweights. But that’s not the only reason he chose them. “They’re beautiful. I love to work with them. We have a kit with the 25, the 35, the 50 and the 75,” he says.
The 35mm is a favorite of Nieuwenhuis, and he says, “I also like to go to 25, which is really nice. When you have very open aperture lenses, especially on the full frame, it gives a lot of bokeh, even though you go wide. I prefer to go wide because you get a lot of context in your scene.”
Their secondary cameras were the Sony Alpha 1 and the Sony Alpha 7S III, which Boyd says they paired with Vespid primes for a second angle during interviews.
Also in their kit is an external HD monitor. Nieuwenhuis says, “I like to have a big monitor” because “it just gives me a lot more confidence.”
One of the heaviest items in their pack is “a big V-lock battery. That does add a lot of weight. But it allows me to power up everything from one battery,” he explains. But that heft had an advantage: “It also helped me balance this rig, because the camera is so light. Without the V-lock battery in the back it would, like, tilt forward just because… the lens is even heavier than the camera, almost.”
For audio, Nieuwenhuis says they opted for lavalier mics, including a Sennheiser wireless set that Boyd wore almost constantly. Because they filmed without a dedicated “audio guy,” there was no boom pole when Boyd was on camera, unless it was during a sit-down interview.
The boom mic did make an appearance, however. “When we were out in the field, running and gunning, we did put the boom mic on top of the camera. This allowed us to record anything or anyone that wasn’t miked up with a wireless lav mic. So for example, when we were bumping into people on the hiking trail, and we wanted to have a little, you know, spontaneous conversation with them, we could just capture it on camera,” Nieuwenhuis recalls.
For indoor interviews, Nieuwenhuis relied on a “beautiful” Aputure 300d, “available with a softbox just to make some soft light,” to correct some of what he called “terrible” lighting situations.
Because the nature of In a State of Change, Nieuwenhuis says, is “very focused on landscapes, we needed an aerial very wide perspective that would give us a proper view to give us an idea of the whole landscape. And there’s nothing better than seeing it from the air.” To capture that footage, they opted for a DJI Mavic Air 2, primarily piloted by Boyd, who Nieuwenhuis credits as an excellent drone operator.
Capturing Scenes Under Difficult Conditions
For the human elements of the documentary, Boyd and Nieuwenhuis wanted to “create scenes rather than interviews,” Nieuwenhuis explains in part three.
Boyd elaborates, “Instead of giving too much context for anyone that we spoke to, when we had them sit in our chair, or when we got them out in the field, we gave them just enough to give an idea of what our general film was about. But we wanted to capture the authenticity, and the reactions in the conversations that we’re having out in the field, or during the interview itself.”
But in creating those scenes, Boyd explains, they had to factor in “access and the weather” — the nature of filming in Iceland.
In terms of access, their Land Rover Defenders (which Boyd calls “a super Jeep”) came in handy repeatedly. Additionally, the pair rented an RV to serve as mobile studio and hotel, in order to avoid “going up and down to these glaciers. We would lose too much time. We would miss opportunities.”
Boyd says, “Having this mobile studio allowed us to not be dependent on the location where we might have booked a hotel. Or if we were staying in Reykjavik, it’s just too far to come back and forth from that.”
The RV also meant, Nieuwenhuis says, that they “started editing during the process of filming. And it helped us, in a way, to lead us through the story and react on what we captured. Sometimes, you know, we captured something and realized we needed a switch of direction, or something inspired us.”
Despite the access and weather concerns, Boyd says, “The greatest challenge was actually wrapping up the entire film together in the end, and creating a cohesive story.”
Some of that cohesion was developed in post by a sound designer and a graphic artist. They employed musicians to create a custom soundtrack, and the graphic designer was tasked with developing “the vibe and overall look,” Boyd says.
The project lasted about a year, longer than they had initially planned. “At some point, we realized we have something beautiful here. Let’s not rush this,” Boyd recalls.
What They Learned
The first scene of the documentary features a glacier-carved canyon. “We could really imagine that we’re standing in the spot where the glacier used to be,” Nieuwenhuis says during part four of the Q&A.
Equally importantly, this scene, he says, created “a moment where we started seeing more of [Boyd] as a character, rather than a presenter. We start to see your emotions come through.”
Boyd admits, “It felt weird, you know, to incorporate myself as a character into the film.” Also challenging for Boyd was the need “to overcome” his fixation “on aesthetic and on beauty” during filming.
Both, Boyd says, learned they “have to be narrow and focused, keeping things personal, but also trying to be as objective and taking the advice of all the people that we incorporated into the film together, as one, to get the message across.”
For his part, Nieuwenhuis says he discovered that “it’s much more interesting when you put a person in an environment, so it can respond to the environment; it can interact with the environment.” He says that decision “actually makes the process of… telling a visual story… much easier.” After all, “It’s not a radio show.”
In a State of Change also taught Boyd “the power of collaboration and incorporating already existing ideas and combining them to be able to communicate such a difficult issue.” He explains, “If you have an idea or a concept, you don’t have to just do it by yourself, but you can work with other people to remix it and reinvent it and get the message across in a more powerful way.”
Aidin Robbins and Eric Matt shot a documentary in the Alps and across glaciers. Learn how they managed both mountaineering and filmmaking.
November 25, 2023
How Influencer-Generated Content Has Become Core to Brand Strategies
TL;DR
In the ever-evolving landscape of digital marketing, the creator economy has emerged as a powerful force, reshaping the way brands connect with consumers.
An overwhelming majority of brands are using creator-generated content for channels beyond social media, highlighting its versatility and reach, a new survey finds.
The terms “creator” and “influencer” tend to be used interchangeably, but marketers are applying different metrics to judge the performance of each.
Influencer-generated content is now core to brand strategies, with marketers increasingly savvy about the differences between creators and influencers and how to measure their performance.
A recent study conducted by creator marketing platform LTK underlines the profound impact of creator marketing, an industry now estimated at $21 billion globally.
Next year, worldwide, marketers are expected to spend more than $32 billion on influencer marketing. Influencer spend is now outpacing traditional ad investment, with 80% of brands saying they increased creator budgets in 2023, per the report.
Some 92% of brands plan to increase their spending on creators in 2024, and 36% plan to spend at least half of their entire digital marketing budget on creators.
Because of what LTK calls the “significant trust” creators have built with their communities, the majority of brands surveyed said consumers now turn to creators more than to social media ads or celebrities.
An overwhelming majority of brands (98%) are using creator content for channels beyond just social media, highlighting its versatility and reach.
Indeed, when asked where their marketing dollars are shifting, creator marketing and connected TV shared the top position overall for investment growth, beating out channels like paid search and paid social.
The study also found that dollars are being moved from digital ads to creator marketing because creator campaigns have proven more efficient in side-by-side, all-cost comparisons.
Marketers, however, are becoming more discerning about the difference between influencers and creators.
“As marketers have got more comfortable with the creator economy, influencers have become the go-to for performance marketing, while creators are considered more for branding purposes,” says Krystal Scanlon, writing at Digiday.
Marketers are feeling the pressure to be super transparent and efficient about their purchases and the reasons behind them. This means they’re getting specific about when it’s better to collaborate with an influencer versus a creator.
Lindsey Bott, senior content manager at Ruckus Marketing, told Scanlon, “Previously, influencer involvement might have organically emerged in ongoing discussions. Now, we’re seeing brands come to us more frequently with well-defined briefs or specific suggestions right from the outset.”
The days of pay-for-reach deals are long gone, it seems. In fact, influencers increasingly have specific metrics, such as engagement rate, CPM, CPE, clicks, click-through rate and conversions, tied to them.
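For readers less familiar with the acronyms, these performance metrics reduce to simple arithmetic. A minimal Python sketch, using invented campaign numbers purely for illustration (none of these figures come from the LTK or Digiday reports):

```python
# Toy campaign numbers for illustration only; not drawn from any report.
impressions = 250_000  # times the sponsored post was served
engagements = 7_500    # likes, comments, shares, saves
clicks = 3_000
conversions = 150
spend = 5_000.00       # total paid for the placement, in dollars

cpm = spend / impressions * 1_000        # cost per thousand impressions
cpe = spend / engagements                # cost per engagement
ctr = clicks / impressions * 100         # click-through rate, as a percent
engagement_rate = engagements / impressions * 100
conversion_rate = conversions / clicks * 100

print(f"CPM ${cpm:.2f} | CPE ${cpe:.2f} | CTR {ctr:.2f}%")
print(f"Engagement {engagement_rate:.2f}% | Conversion {conversion_rate:.2f}%")
```

With these numbers the campaign works out to a $20 CPM, a $0.67 CPE, a 1.2% click-through rate and a 5% conversion rate; it is against benchmarks like these that pay-for-reach deals have given way to performance-tied ones.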
For example, Bott’s team has observed clients gravitating toward influencers due to their established reach and engagement metrics, emphasizing performance-driven results.
Conversely, there’s a growing interest in creators who prioritize crafting genuine, narrative-based content that closely aligns with a brand’s values and campaign themes.
“They’re unbelievable storytellers who can really shape perception,” Keith Bendes, VP of strategy at Linqia, reports at Digiday.
Unlike influencers, creators usually don’t have the same set of metrics tied to them.
“Over time, as marketers understand how a specific creator’s content performs when repurposed on their social channels or paid media, they may start to [set] specific benchmarks for that creator’s assets,” said Lindsey Gamble, associate director at influencer marketing platform Mavrck.
According to Scanlon, this shift underscores how brands are distinguishing between utilizing audience influence and cultivating content that profoundly connects with their intended audience.
“Creators have evolved into valuable assets for brands, capable of driving substantial business impact,” writes Rodney Mason, VP and head of marketing at LTK, at Adweek. “As we move into 2024, creator marketing is fundamentally shifting how brands engage with consumers. Those marketers who embrace the rise of creators will find themselves at the forefront of this transformative wave. The time to invest in creators and their unique ability to influence, engage and build trust with consumers is now.”
In a recent webinar, “The Next Wave of Creator Marketing: 2024 Forecast,” LTK’s director of strategy and insights for brand partnerships, Ally Anderson, shares more detail about how “creator guided shopping” is becoming the foundation for marketing efforts and is now influencing consumers through all aspects of their discovery journey. Watch the full presentation in the video below: